Persistent EJB Timers in Payara Micro

by Ondrej Mihályi

Payara Micro is packed with a lot of the APIs that come with Payara Server Full Profile, and even more features targeted at clustered deployments. Now, since version 163, it is also possible to use persistent EJB Timers, which are stored across your micro instances inside the distributed Hazelcast cache.







The persisted timer information is replicated across the whole cluster in multiple copies. That means even more resilience and flexibility in production compared to a store backed by a relational database. On the other hand, it also means that the timers are persisted only while at least some members of the cluster are up and running. This requirement is easy to satisfy with multiple cluster members, which is highly desirable in modern resilient systems anyway. Payara Micro makes it easy to provision a large cluster and scale it dynamically, and we encourage you to take advantage of that as much as possible to make your architecture more robust and resilient to failures.


In order to make use of a persistent timer in Payara Micro, an application must declare the timer as persistent, as defined by the EJB specification. This is actually the default, unless a timer is explicitly marked as non-persistent. The only other thing to do is to give your Payara Micro instances a name, and you're ready to go!
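For declaratively scheduled timers, persistence is controlled by the persistent attribute of the @Schedule annotation, which defaults to true. A minimal sketch (the bean and method names here are illustrative, not from the example application):

```java
import java.util.Date;
import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class HeartbeatBean {

    // Calendar-based timers are persistent by default; persistent = true
    // is shown only for emphasis, and persistent = false would opt out
    @Schedule(second = "*/10", minute = "*", hour = "*", persistent = true)
    public void heartbeat() {
        System.out.println("Heartbeat at " + new Date());
    }
}
```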


How to define persistent timers

Let’s demonstrate with a simple example. This code snippet injects the TimerService and defines the timeout callback method that is invoked whenever the timer fires:

@Resource
private TimerService timerService;

@Timeout
public void run(Timer timer) {
    logger.info("Timer triggered at " + new Date());
}



In the same managed bean, we will create a persistent interval timer programmatically with the following code:

timerService.createTimer(10000, 10000, "triggers every 10 seconds");
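The three-argument createTimer variant used above always creates a persistent timer. If you want to state persistence explicitly, the TimerService also offers createIntervalTimer, which accepts a TimerConfig. A sketch, reusing the injected timerService from the snippet above:

```java
// Equivalent persistent interval timer with an explicit TimerConfig;
// calling config.setPersistent(false) would create a non-persistent timer instead
TimerConfig config = new TimerConfig("triggers every 10 seconds", true);
timerService.createIntervalTimer(10000, 10000, config);
```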

Let’s put the pieces together into a REST resource to make it possible to start the timer externally:

@Path("timers")
@Stateless
public class TimerResource {

    private static final Logger logger = Logger.getLogger(TimerResource.class.getName());

    @Resource
    private TimerService timerService;

    @Timeout
    public void run(Timer timer) {
        logger.info("Timer triggered at " + new Date());
    }

    @POST
    public String startTimer() {
        timerService.createTimer(10000, 10000, "triggers every 10 seconds");
        logger.info("Timer scheduled to fire every 10 seconds");
        return "Timer scheduled to fire every 10 seconds";
    }
}
We also need to turn our REST resource into a managed bean so that the TimerService is injected automatically. In the example above, we use @Stateless to turn the object into an EJB, but we could have made it a CDI bean as well.


And don’t forget to define a JAX-RS application configuration class to expose the REST resource. We’ll go with the following, which exposes the resource at http://localhost:8080/persistentTimers/timers:

@ApplicationPath("/")
public class TimersApplication extends Application {
}

Finally, we’ll build the web application to get persistentTimers.war and we’re ready to run it.
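Assuming a standard Maven project with war packaging and the final name persistentTimers (the project layout is an assumption, not something prescribed by Payara Micro), the build is a single command:

```shell
# produces target/persistentTimers.war, given <packaging>war</packaging>
# and <finalName>persistentTimers</finalName> in the pom.xml
mvn clean package
```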


How to run an application with persistent timers

Our persistentTimers.war is just an ordinary web application and can be executed with Payara Micro as is. However, in order to restore the timers from a previous execution, we have to give our Payara Micro instance a name, so that the old timers are paired with the new application instance upon startup. This is because, as of version 163, each timer is executed on a single instance only, to avoid firing it multiple times within the cluster.

Therefore, we will run our application as follows, specifying "payara1" as the instance name:


java -jar payara-micro.jar --name payara1 --deploy persistentTimers.war



If all went well, the REST resource should be available at http://localhost:8080/persistentTimers/timers using the POST method. Once we access it, the request will trigger a timer scheduled to fire every 10 seconds. After a while, you should be able to see messages like these in the console output:

… Timer scheduled to fire every 10 seconds
… Timer triggered at Tue Aug 23 10:49:52 CEST 2016
… Timer triggered at Tue Aug 23 10:50:01 CEST 2016
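The POST request that starts the timer can be issued from the command line, for example with curl (the URL matches the example above):

```shell
curl -X POST http://localhost:8080/persistentTimers/timers
# responds with: Timer scheduled to fire every 10 seconds
```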

At this stage, there is one step missing to make the timers persistent. Can you guess what it is?

Yes, we need to run at least one more Payara Micro instance to form a cluster with the first one. Without other members in the cluster, any persistent timer would be forgotten after a restart.

Therefore, we need to execute the following command to bring up one more Payara Micro instance and bind it automatically to a free HTTP port:


java -jar payara-micro.jar --name payara2 --autoBindHttp --deploy persistentTimers.war



In fact, it is not necessary to deploy our application on these additional instances, nor to give them a name. Therefore, the following command is enough to provide the persistent storage for the Hazelcast-based persistent timers:

java -jar payara-micro.jar --autoBindHttp

Now, we are ready to demonstrate that the timers are really being persisted. When we shut down and restart our first Payara Micro instance, named payara1, we should see in the console output that the previously scheduled timer is restored and rescheduled automatically:

… ==> Restoring Timers ...
… EJB Timers owned by this server will be restored when timeout beans are loaded
… <== ... Timers Restored
… Timer triggered at Tue Aug 23 11:21:41 CEST 2016
… Timer triggered at Tue Aug 23 11:21:51 CEST 2016



Once again, the inclusion of Hazelcast in Payara Micro provided the means to implement another useful feature easily and reliably. Besides providing distributed cache services via the JCache API, Hazelcast is also at the core of the Payara Micro CDI event bus. And starting with Payara Micro 163, it provides the persistent storage for the newly added persistent EJB timers, since Hazelcast also works as a replicated NoSQL database.


This doesn't yet mean that persistent timers are coordinated across the Payara Micro cluster, but it does mean that no additional dependency is required to provide the persistent storage. This takes us one step closer to being able to drop the embedded Derby database from Payara Micro and reduce its footprint even more!


