Recently, at my full time job, my team and I introduced a new system to replace Sonarqube for static code analysis. Two years prior, we started a POC to see what developers at ING Germany thought about either buying a license for Sonarqube or if we should introduce a brand new tool, Teamscale.
The POC ended last year in early April and we decided to start the process of replacing Sonarqube with Teamscale. At banks, it’s usually not a simple task to introduce new tools, since they have to go through a complex vetting process to ensure that they comply with regulations put in place by the German government.
Just two months ago, we reached the point in the approval process where our team could start replacing the existing Sonarqube integration in one of our internally developed applications with Teamscale.
The integration with Sonarqube worked as follows:
- A Jenkins build built the application and executed its tests.
- Once the tests had been executed, JUnit reports, Surefire reports and a JaCoCo report were generated.
- A Sonarqube scan was started.
- Once the scan was finished, the maven plugin (also developed by us) in the build sent a request to Sonarqube to retrieve metrics about the previously uploaded test reports.
- The retrieved metrics were sent to our central application for storage in an Oracle DB and the generated reports were sent to an S3 bucket to be archived.
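To make the metric-retrieval step concrete, here is a minimal sketch of how a request against Sonarqube’s web API (the `api/measures/component` endpoint) could be assembled. The host, component key, and metric keys below are illustrative placeholders, not the actual values our plugin used.

```java
import java.net.URI;

public class SonarMetricsUrl {

    // Builds the URL for Sonarqube's measures endpoint, which returns
    // metrics (e.g. coverage, test counts) for a given component key.
    static URI measuresUri(String baseUrl, String componentKey, String... metricKeys) {
        return URI.create(baseUrl + "/api/measures/component"
                + "?component=" + componentKey
                + "&metricKeys=" + String.join(",", metricKeys));
    }

    public static void main(String[] args) {
        // Example with placeholder values; the plugin would send a GET
        // request to this URL and parse the JSON response.
        System.out.println(measuresUri("https://sonar.example.org", "my-app",
                "coverage", "tests", "test_failures"));
    }
}
```

In the old setup, the plugin blocked the build while making calls like this; the rest of the post describes how we moved that work out of the build.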
Since we are replacing Sonarqube with Teamscale, we had to replace the section of code in the maven plugin that retrieved metrics from Sonarqube and uploaded them to our central application.
In the past, teams using our maven plugin had issues with it causing their builds to take an unexpectedly long time, and we wanted to change this with the Teamscale integration.
We noticed that the bottleneck for the builds was usually our maven plugin and decided that it’s not the responsibility of the plugin to retrieve the metrics from the static code analysis tool, but rather it’s the responsibility of our central application to retrieve the metrics, since it needs to store the metrics in a DB.
By planning the implementation differently, we realized that we could easily cut out the wait time in the build and allow the central application to retrieve the required metrics by publishing an event when the reports from the plugin are uploaded. This meant that we had to introduce some kind of event processing into the application.
Getting Started with Events
We had already integrated some event processing into our application when we developed a feature for email notifications when a FitNesse test failed. We enabled the user to subscribe to a certain domain so that they were notified via email when a test failed.
We started off by using Spring events. If you are unfamiliar with Spring events, I’ll give a short insight into them.
Spring offers developers a mechanism to create their own event object and publish it when they want something to happen in the application. The mechanism consists of three main components:
- The event class itself. The class extends the Spring-provided ApplicationEvent.
- The event publisher. The publisher publishes a previously created instance of your event. It has to be annotated with the @Component annotation and autowires the ApplicationEventPublisher bean.
- The event listener. The listener listens for a certain event to be published and does some further processing once it receives the event. The listener implements ApplicationListener, typed with the class of the custom event.
An implementation of this would look like this:
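A minimal sketch of the three components could look like the following. The event, service, and listener names (ReportsUploadedEvent, ReportService, ReportsUploadedListener) are hypothetical stand-ins for our internal classes.

```java
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// 1. The event class extends Spring's ApplicationEvent.
class ReportsUploadedEvent extends ApplicationEvent {
    private final String buildId;

    ReportsUploadedEvent(Object source, String buildId) {
        super(source);
        this.buildId = buildId;
    }

    String getBuildId() {
        return buildId;
    }
}

// 2. The publisher: a service that autowires the
//    ApplicationEventPublisher bean and publishes the event.
@Component
class ReportService {
    private final ApplicationEventPublisher publisher;

    ReportService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void storeReports(String buildId) {
        // ... persist the uploaded reports ...
        publisher.publishEvent(new ReportsUploadedEvent(this, buildId));
    }
}

// 3. The listener implements ApplicationListener typed to the event
//    and reacts once the event is published.
@Component
class ReportsUploadedListener implements ApplicationListener<ReportsUploadedEvent> {
    @Override
    public void onApplicationEvent(ReportsUploadedEvent event) {
        // e.g. fetch metrics for event.getBuildId()
    }
}
```

Note that this is the classic interface-based style; Spring also supports annotating a plain method with @EventListener, which avoids implementing the interface.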
We had implemented it like this as well and adjusted all of our tests to reflect the changes we made. However, we were unhappy with adding another autowired field to our service class just to publish the event and thought that there had to be a different way of doing this. After all, we were saving an entity before sending the event.
That’s when we stumbled upon JPA Events.
JPA events allow developers to create listeners that react to events sent during the JPA lifecycle. The available events are:
- @PrePersist and @PostPersist — before and after an entity is inserted
- @PreUpdate and @PostUpdate — before and after an entity is updated
- @PreRemove and @PostRemove — before and after an entity is deleted
- @PostLoad — after an entity is loaded from the database
Since we needed to update previously saved data, we created a JPA listener that waited for the @PostPersist event. Once the event was received, we were able to continue with further processing, in our case retrieving metrics from Teamscale.
Other use cases could, for example, be sending a confirmation email when a user object is persisted in the database or sending a two-factor token to the user’s smartphone. The applications of JPA events are endless, since at some point your application needs to save data anyway.
The implementation for a JPA event listener is quite simple:
The user entity is annotated with the @EntityListeners annotation, which is passed a class that we reserved for JPA events affecting the user. The listener has two methods, one annotated with @PrePersist and the other with @PostPersist. Each method receives the user as an argument, with which we can do some further processing.
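A sketch of such a listener could look like this. The User entity and UserJpaListener names are illustrative; the imports use javax.persistence, which becomes jakarta.persistence in newer JPA versions.

```java
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PostPersist;
import javax.persistence.PrePersist;

// The entity registers its listener class via @EntityListeners.
@Entity
@EntityListeners(UserJpaListener.class)
class User {
    @Id
    @GeneratedValue
    Long id;

    String email;
}

// The listener's callback methods receive the affected entity.
class UserJpaListener {

    @PrePersist
    void beforeSave(User user) {
        // runs before the INSERT, e.g. validation or setting defaults
    }

    @PostPersist
    void afterSave(User user) {
        // runs after the INSERT, e.g. sending a confirmation email
        // or, in our case, triggering metric retrieval
    }
}
```

The JPA provider invokes these methods automatically during the persist operation, so no publisher bean needs to be wired into the service class.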
Any of the lifecycle annotations listed above can be used to hook into the corresponding JPA operation.
Personally, I much prefer this solution, since it is very clean and doesn’t require many extra dependencies to work. If it’s already offered by JPA, then why not use it?
By using JPA events, we were able to eliminate the need to publish the metrics-fetching event ourselves. This simplified our code and reduced the number of dependencies we needed to wire into our service class, giving us cleaner code in the end.
If you want to experiment with the code, check out the project on Github: https://github.com/Felix-Seip/spring-events