10 Strategies for Developing Reliable Jakarta EE Applications for the Cloud


What happens when an application designed for a small user base needs to be scaled up and moved to the cloud?

It needs to live in a distributed environment: handling the required number of concurrent user requests per second and ensuring users find the application reliable.

Though Jakarta EE and Eclipse MicroProfile can help with reliable clustering, there is currently no standard API in Jakarta EE that defines how clustering should work. This might change in the future, but in the meantime, the gap must be filled by DevOps engineers.

In this blog, we will cover 10 technical strategies for dealing with clustering challenges when developing Jakarta EE and MicroProfile applications for cloud environments.

1. Statelessness 

When moving to the cloud, application state should not be handled in memory, as this is complicated to maintain in the long term. It is best to focus on stateless components, where state is handled either by the client or by the data source, such as a relational database. This uses less memory, makes the application easier to coordinate, and requires less code overall.

Here's an example of a stateless calculator REST resource that uses standard CDI constructs:

@Path("/calculate")
@RequestScoped
public class CalculatorResource {
  
    @Inject Instance<CalculatorService> calculatorService;
    @Inject Principal currentPrincipal;

    @POST
    @Path("/")
    public void executeCalculation(CalculationRequestData data){
        if(isCalculationPossible(currentPrincipal.getName())){
            calculatorService.get().execute(data, currentPrincipal.getName());
        }
    }
}

2. Singletons 

Avoiding in-memory application state does not mean that singletons can't be used. They are useful for coordinating access to single resources. You should use a singleton only when concurrent modifications aren't needed or when only a single aspect of an application requires coordination. For example, if you need to access a cache holding sensitive information, you can create a singleton that acts as a wrapper for the cache and grants access only when the user is entitled to it.

Here's a concrete example of a CDI application-scoped bean that generates JWT access tokens:

@ApplicationScoped
public class TokenGenerator {

    @Inject
    @ConfigProperty(name = "mp.jwt.verify.issuer")
    private String issuer;

    public String generateFor(Attendee attendee){
        try{
            // createSignedJWT and readPrivateKey are helper methods of this class
            SignedJWT signedJWT = createSignedJWT(issuer);
            signedJWT.sign(new RSASSASigner(readPrivateKey("/META-INF/keys/privateKey.pem")));
            return signedJWT.serialize();
        } catch(Exception ex){
            throw new RuntimeException("Failed generating JWT", ex);
        }
    }
}

3. "True" Singletons 

However, the problem with singletons is that the class is restricted to one "single" instance per JVM. In a distributed arrangement you will therefore end up with as many instances as you have nodes, so they are not truly representative of the singleton pattern.

If you need to update data and the singleton data is stored across multiple JVMs, inconsistencies will arise. There is currently no Jakarta EE standard to combat this, and this is where vendors must step in. For example, the Payara Platform has a proprietary feature that allows coordinating singleton classes on a cluster-wide level - you define the singleton, and the container does the rest. In other words, it becomes a true singleton in that it really does provide one global access point to an instance.

Here's an example of how to use this feature to convert an existing CDI application-scoped bean into a true singleton on the Payara Platform:

@ApplicationScoped
@Clustered(callPostConstructOnAttach = false)
public class SessionRatingService implements Serializable {

    @PersistenceContext(unitName = "Vote")
    EntityManager em;

    @PostConstruct
    public void longInitializationProcess(){
        …
    }
}

4. Caching

Statelessness is good, but there are still cases where state needs to be handled in memory, such as preventing data reprocessing. A user may want to repeat a process; a cache keeps that user's data in memory so it can be presented again, sparing you the cost of reprocessing it and sparing the user the delay of retrieving it again. This also lets you optimise resource management, since the data is now in memory and can be reused for multiple users.

Unfortunately, there are no standard Jakarta EE APIs to unlock these benefits, but some excellent third-party solutions, such as Hazelcast In Memory Cache, EhCache, and Spring Cache, are available.  The Payara Platform also supports the JCache API (which is sadly not part of Jakarta EE yet), which allows both programmatic and declarative definition of caches on existing components. Here's an example that injects a Cache object and uses it to store data in a singleton component:

@ApplicationScoped
public class SessionRatingService {

      @Inject     
      Cache<Integer, SessionRating> cachedRatings;

      public SessionRating getRating(Integer id) {
          // Only hit the database when the rating is not already cached
          if (!cachedRatings.containsKey(id)) {
              cachedRatings.put(id, retrieveSessionFromDB(id));
          }
          return cachedRatings.get(id);
      }
}

5. CDI over EJB 

For application developers starting new Jakarta EE projects, the usual question is: which component model should I use? There are two options: EJB and CDI.

EJB is the older component model: resilient and robust but, due to the lack of recent releases, now less relevant. CDI is a more modern, flexible, and extensible model that is leaner and more powerful, giving you the ability to write your own extensions. CDI is also a cornerstone of MicroProfile, and EJB components cannot interact with the MicroProfile APIs. For these reasons, it is better to stick with CDI on modern projects, unless there's a strict need to depend on EJB's feature set.
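
As a rough sketch of what this looks like in practice (the service and persistence unit names here are illustrative assumptions), a plain CDI bean combined with @Transactional covers the classic EJB use case of a transactional service, without any EJB dependency:

@ApplicationScoped
public class AttendeeService {

    @PersistenceContext(unitName = "Conference")
    EntityManager em;

    // jakarta.transaction.Transactional gives this CDI bean container-managed
    // transactions, the main reason @Stateless EJBs were traditionally used
    @Transactional
    public Attendee create(Attendee attendee) {
        em.persist(attendee);
        return attendee;
    }
}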

6. JPA Caching 

Data stored in a relational database also needs to be cached, avoiding repeated queries against the database, which can be expensive when done repetitively, both for the user and for your infrastructure. This is relevant in the Jakarta EE world if you rely on the Java Persistence API (JPA) to handle data persistence.

JPA has defined a standard caching mechanism that implementers can use, consisting of two levels:

  • Level 1 (L1), which consists of in-memory caching courtesy of the PersistenceContext
  • Level 2 (L2), which is proprietary to the JPA implementation and relies on coordinated cache mechanisms.

While Level 1 is standardised, Level 2 - which offers fast data retrieval - has no standard mechanism and is handled differently by every runtime vendor. The Payara Platform's JPA implementation library, EclipseLink, has its own L2 cache implementation, which is useful for non-distributed arrangements. For distributed applications, the Payara Platform introduces a proprietary feature, EclipseLink cache coordination over Hazelcast, to coordinate caches across multiple nodes in an existing cluster.
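
As a minimal sketch (the entity below is illustrative), the standard JPA way to opt an entity into the shared L2 cache is the @Cacheable annotation, with the persistence unit's shared-cache-mode set accordingly in persistence.xml; the vendor-specific coordination described above is then configured on top of this:

// Opts this entity into the shared (L2) cache; persistence.xml must also set
// <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode> for this to take effect
@Entity
@Cacheable(true)
public class SessionRating {

    @Id
    private Integer id;

    private int stars;

    // getters and setters omitted
}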

7. Configuration 

One of the common questions developers ask themselves when starting a new project is "where will the application configuration be located?" - knowing they need to store it in a way that is easy and intuitive to retrieve for all application components.

In the absence of a Jakarta EE standard, the Eclipse MicroProfile framework introduced the MicroProfile Config API to complement the entire body of specs. It provides a standardised configuration mechanism that works in most environments but is especially suited to the cloud. You can easily inject configuration values into existing code like this:

@Inject
@ConfigProperty(name="demo.conference.speaker.venues", defaultValue = "Ocarina")
private List<String> venues;

It relies on centralised data sources - accessible from all nodes in an application - and sensible default values, allowing applications to keep working during development without external configuration. Lastly, the API is engineered towards retrieving configuration values from environment variables, making it extremely useful in cloud-native environments.
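
As an illustration of that hierarchy, the same property injected above can also be looked up programmatically (a minimal sketch using the MicroProfile Config API), with an environment variable overriding whatever default is packaged with the application:

// Programmatic lookup through the same chain of configuration sources;
// with a recent MicroProfile Config version, an environment variable such as
// DEMO_CONFERENCE_SPEAKER_VENUES takes precedence over the value packaged in
// META-INF/microprofile-config.properties
Config config = ConfigProvider.getConfig();
String venues = config.getOptionalValue("demo.conference.speaker.venues", String.class)
                      .orElse("Ocarina");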

8. Fault Tolerance 

A specific challenge that many people don't think about is what to do in the case of faults: for example, when a cluster node in your distributed arrangement fails, a database is not reachable, an external service is not working, or the system is in a critical state.

You need to adapt your application code to accommodate these failures, and the MicroProfile Fault Tolerance API allows you to do this. It is a set of standard patterns that guide business logic flow: specifying what happens in the case of a failure and how the application should behave in response, separating the fault tolerance policy from the business logic itself. A simple use case of this API is to define a retry policy for a business method in the case of runtime exceptions:

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Retry(maxRetries = 5, delay = 30, delayUnit = ChronoUnit.SECONDS)
public Response register(Attendee attendee){
    Attendee result = attendeeService.create(attendee);
    return Response.created(URI.create("/attendee/" + result.getId()))
                   .entity(result).build();
}
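
The same API can also declare what the caller receives once the retry policy gives up. Here's a hedged sketch that adds a fallback to the method above (the registrationQueue collaborator is hypothetical):

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Retry(maxRetries = 5, delay = 30, delayUnit = ChronoUnit.SECONDS)
@Fallback(fallbackMethod = "registerLater")
public Response register(Attendee attendee){
    Attendee result = attendeeService.create(attendee);
    return Response.created(URI.create("/attendee/" + result.getId()))
                   .entity(result).build();
}

// Called only after all retries are exhausted; defers the registration for later processing
public Response registerLater(Attendee attendee){
    registrationQueue.defer(attendee);
    return Response.accepted().entity(attendee).build();
}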

9. Stateless Security  

When using stateless services, as recommended above, you must secure access to their resource endpoints appropriately (authentication) and make them available only to the right people (authorisation). Modern applications and microservices also span multiple services and data sources, so you need to consider what happens when a service authenticated for a specific user calls another service. Do you need to re-authenticate the user? Can you reuse the information you already verified?

A solution needs to make sure that each request is validated separately, in isolation. Validation must be idempotent - meaning every time you validate the same credentials, you get the same result - and portable, with each node in the cluster having what it needs to verify the information. JSON Web Tokens (JWT) are the best fit: signed, encoded tokens sent with each request to identify the user. Luckily, the Eclipse MicroProfile JWT specification brings seamless integration to any Jakarta EE application that wishes to work with a JWT provider service (Okta, Auth0, Keycloak).
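
As a minimal sketch of what this enables (the application class, resource, role name, and sessionService collaborator are assumptions for illustration), the container verifies the incoming token on every request and the claims are simply injected, with no server-side session:

// Activates MicroProfile JWT authentication for the whole application
@LoginConfig(authMethod = "MP-JWT")
@ApplicationPath("/api")
public class ConferenceApplication extends Application { }

@Path("/sessions")
@RequestScoped
public class MySessionsResource {

    @Inject SessionService sessionService;

    // The verified token of the current request; it is validated again on every call
    @Inject
    JsonWebToken callerToken;

    @GET
    @RolesAllowed("attendee")
    public List<String> mySessions() {
        return sessionService.findSessionsFor(callerToken.getName());
    }
}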

10. Metrics

Developers need to be asking themselves three questions daily: can I see how good or bad the state of my system is; can I optimise my environment based on real-time data; and can I analyse the data generated by my applications?

MicroProfile Metrics allows you to do this. Based on Prometheus' OpenMetrics format, a cloud-ready metrics standard, it enables you to aggregate data from multiple sources and requires little set-up, integrating automatically with the container and needing no bootstrapping code. Your metrics can then be collected at runtime and used in decision making to fine-tune the state of the system when needed.
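
A brief sketch of the annotation-driven side (the resource, metric names, and save method are illustrative assumptions); the runtime exposes these values automatically on its /metrics endpoint in the Prometheus/OpenMetrics format:

@Path("/ratings")
@RequestScoped
public class RatingResource {

    @Inject
    SessionRatingService ratingService;

    // Counts every invocation and tracks its duration; both appear under /metrics
    // without any bootstrapping code
    @Counted(name = "ratingsSubmitted", description = "Number of session ratings submitted")
    @Timed(name = "ratingSubmissionTimer")
    @POST
    public void submitRating(SessionRating rating) {
        ratingService.save(rating);
    }
}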

Wrap-up 

This blog is based on an original presentation where I detail how to implement each strategy in your codebase. You can watch it in full on the Jakarta EE YouTube channel.

 

If you are interested in how Payara Server enables reliable Enterprise Java on the Cloud, download our guide "How to Manage and Operate the Payara Platform in the Cloud":

Download guide

 

 
