Building Your Next Microservice With Eclipse MicroProfile

By Ondro Mihályi

This quick tutorial will show you how to build your next microservice with the latest version of Eclipse MicroProfile APIs.

 


 

Eclipse MicroProfile aims to deliver a growing set of APIs for Java applications composed of multiple microservices. The project has been gaining a lot of attention recently, with a growing list of corporate supporters that includes Oracle and IBM. Many servers and frameworks provide the API, which means you can choose the best tool to run your microservices while keeping the same familiar API and behavior. This article is a quick tutorial on using the MicroProfile API to build your next microservice.

 

MicroProfile is built from core Java EE (now called Jakarta EE) technologies such as CDI, JAX-RS, JSON-P, and Common Annotations, while adding to them a set of specifications that make your microservices ready for the cloud: Config, Fault Tolerance, JWT Authentication, Health Check, Metrics, Open API, Open Tracing, and Rest Client.

These specifications together make up Eclipse MicroProfile 1.3.

 

Initial project setup

So how do you use all of this? This is a quick guide to writing your first application. MicroProfile only specifies the API and the behavior; it doesn't provide the implementation. It's up to an implementation such as Payara Micro to supply the functionality. With Payara Micro, you can run a WAR file from the command line, but it's also possible to assemble a single executable JAR file. There are many other implementations, and you can find them in the list of MicroProfile implementations.

If you choose to run your microservice with Payara Micro, first create a web project that produces a WAR file. If you use Maven or Gradle for your projects, set up a standard web application project (with war packaging or the war plugin). Once you build the WAR file, you can download Payara Micro from https://www.payara.fish/downloads and run your application from the command line with:

 

java -jar payara-micro.jar application.war
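
If you're starting the Maven project from scratch, a minimal pom.xml for such a WAR project might look like the following sketch (the groupId and artifactId are placeholders for your own values):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>books-service</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <properties>
    <!-- no web.xml is needed for this application -->
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>
</project>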

 

Then, add the MicroProfile dependency to your project.

 

Maven:

<dependency>
  <groupId>org.eclipse.microprofile</groupId>
  <artifactId>microprofile</artifactId>
  <version>1.3</version>
  <type>pom</type>
  <scope>provided</scope>
</dependency>

 

Gradle:
dependencies {
  providedCompile 'org.eclipse.microprofile:microprofile:1.3'
}

This one dependency brings in all of the needed APIs to build your application. So what would a typical microservice look like?

  1. A JAX-RS Controller. Since we're exposing a REST API, we want a controller to handle the API calls.
  2. A service of some kind. You need some backing component to generate or consume data. We're going to be using some mock data, for now, just to explain the paradigm.
  3. Configurability. We don't want the client specifying the data volume; we want to control it declaratively.
  4. Security. We need both declarative and business-logic-driven security to decide how to respond to requests.
  5. Fault Tolerance. We care about any services we consume and want to fail fast or recover from their failures.
  6. Monitoring. We want to know how often this service is invoked, and how long each request takes.

 

A REST controller and service

First, we have our REST controller, which should look very familiar to Java EE developers:

 

@Path("/api/books") // just a basic JAX-RS resource
@Counted // track the number of times this endpoint is invoked
@RequestScoped
public class BooksController {

 @Inject // use CDI to inject a service
 private BookService bookService;

 @GET
 @RolesAllowed("read-books")
 // uses common annotations to declare a required role
 public Books findAll() {
  return bookService.getAll();
 }
}

 

For small services, the controller can also contain the service logic. However, it would usually delegate handling of the business logic to another service bean like bookService in our example.

 

If we dive further into the book service, we can start to see how configurability works.

@ApplicationScoped
public class BookService {

 @Inject
 // JPA is not provided out of the box, but most providers support it at
 // some level. Worst case, create your own producer for the field
 private EntityManager entityManager;

 @Inject
 // use configuration to control how much data you want to supply at
 // a given time
 @ConfigProperty(name = "max.books.per.page", defaultValue = "20")
 private int maxBooks;

 public Books getAll() {
  List<Book> bookList = entityManager
   .createQuery("select b from Book b", Book.class)
   .setMaxResults(maxBooks) // use that configuration to do a paginated lookup
   .getResultList();
  return new Books(bookList);
 }
}

 

Configurability

 

Configuration values can be simply injected into the service using the @ConfigProperty annotation on the injection point. The configuration is supplied based on the configuration name, which is used as a key to retrieve the configuration value from the container. Other optional attributes can be supplied, such as the defaultValue, which is used if there’s no configuration for the given name. Even the name attribute is optional. If not provided, it will be generated based on the class and field names so that the configuration value can still be provided later.

 

So the configuration can also be injected simply like this:

 

@Inject
@ConfigProperty
private int maxBooks;

 

If the default value isn’t provided, a configuration for the name generated according to the specified algorithm has to be available when the application starts.
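
To illustrate, assuming the BookService class lives in the ws.ament.microprofile.gettingstarted package used later in this article, the generated name is the fully qualified class name followed by the field name, so the container would look up a property such as:

ws.ament.microprofile.gettingstarted.BookService.maxBooks=20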

 

The configuration is decoupled from bookService and can be supplied inside the application, or even later from external sources such as system properties when the application is started.
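
For example, the value can be packaged with the application in META-INF/microprofile-config.properties:

max.books.per.page=50

or overridden at startup with a system property, which takes precedence over the packaged file among the default MicroProfile Config sources:

java -Dmax.books.per.page=50 -jar payara-micro.jar application.war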

 

Security

Next, let's suppose we also want to handle the creation of books, the publication process. And we want to secure the service so that this process is allowed only for callers with a certain role.

 

MicroProfile offers a solution based on JSON Web Tokens (JWT). We can inject a JsonWebToken object into our service and easily find out whether the caller has a required role by calling its getClaim method:

 

 @Inject
 private JsonWebToken jsonWebToken;

 

And then in a method:
   boolean createAny = jsonWebToken.getClaim("create.books.for.other.authors");
   if (!createAny) {
    throw new NotAuthorizedException("Cannot create book, wrong author");
   }

 

 

The caller is then required to pass a valid JWT with the required claim in a header of the REST call.
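
According to the MicroProfile JWT specification, the token is passed as a bearer token in the Authorization header. A call to the books endpoint shown earlier might look like this (the token value is just a placeholder for a real signed JWT):

curl -H 'Authorization: Bearer <serialized JWT>' http://localhost:8080/api/books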

 

A complete publication service to support that may look like this:

 

@RequestScoped
public class PublishBookService {

 @Inject
 // we can inject a JsonWebToken, a Principal specific to the JWT specification
 private JsonWebToken jsonWebToken;
 // we could also inject individual ClaimValue objects.

 @Inject
 private AuthorService authorService;

 @Inject
 private EntityManager entityManager;

 @Timeout(500)
 // we want to limit how long it takes to publish and if it
 // exceeds, return an exception to the caller.
 public BookId publish(PublishBook publishBook) {
  // we check the standard claim of subject
  if (!publishBook.getAuthor().equals(jsonWebToken.getSubject())) {
   // as well as a custom claim as a boolean
   boolean createAny = jsonWebToken.getClaim("create.books.for.other.authors");
   if (!createAny) {
    throw new NotAuthorizedException("Cannot create book, wrong author");
   }
  }
  Author author = authorService.findAuthor(publishBook.getAuthor());
  if (author == null) {
   throw new NotAuthorizedException("The list author is not an author");
  }
  Book book = entityManager.merge(new Book(publishBook.getIsbn(),
                                           publishBook.getAuthor()));
  return new BookId(book.getIsbn(), book.getAuthor());
 }
}

 

 

For all the above to work, it's also necessary to enable JWT security on the JAX-RS application class with the LoginConfig annotation. It's also important to turn that class into a CDI bean, e.g. by adding the ApplicationScoped annotation, because JAX-RS classes aren't automatically CDI-enabled.

 

This is how it may look in the code:

 

@LoginConfig(authMethod = "MP-JWT", realmName = "admin-realm")
@ApplicationScoped
@ApplicationPath("/")
public class BookServiceConfig extends javax.ws.rs.Application {
}

 

Adding Fault Tolerance

If we consider that managing authors is a separate bounded context, we want it to be represented as a discrete service. Therefore, we'll implement it as a separate REST service in the same manner as the book service. As a result, we want the book service to check that an author exists by connecting to the new author REST service. Below is the complete code for the connector to the external author service:

 

@ApplicationScoped
public class AuthorService {

 @Inject
 @RestClient
 // inject a REST client proxy for a URL given by a generated config property
 AuthorConnector authorConnector;

 private ConcurrentMap<String, Author> authorCache = new ConcurrentHashMap<>();

 @Retry
 // Retry indicates that the method call should be retried several times
 // if the remote server call results in an exception
 @CircuitBreaker
 // CircuitBreaker wraps the call in a circuit breaker, which opens after
 // several failures and closes again after some time
 @Fallback(fallbackMethod = "getCachedAuthor")
 // Fallback indicates that we should fall back to the local cache
 // if the method fails even after several retries
 // or the circuit is open
 public Author findAuthor(String id) {
  // call to an external Author service
  Author author = authorConnector.get(id);

  // Ideally we want to read from the remote server. However, we can build
  // a cache as a fallback for when the server is down
  authorCache.put(id, author);
  return author;
 }

 public Author getCachedAuthor(String id) {
  return authorCache.get(id);
 }
}

 

The Retry, CircuitBreaker, Timeout, and other annotations trigger interceptors that implement the respective fault tolerance patterns in case of a failure of the underlying action. They can be applied to an individual method, or to a class to apply them to all of its methods. The Fallback annotation specifies which method should be called if the interceptors cannot recover from failures. This method can provide an alternative result or notify about the error.

 

Configurability is also fully supported by the fault tolerance annotations. The attributes of the annotations can be overridden via the same configuration mechanism that we used earlier. When any of the interceptors is enabled for a method, it reads its configuration from configuration names generated from the class and method names. For example, to specify the number of retries for the method findAuthor, we can specify a configuration property with the name ws.ament.microprofile.gettingstarted.AuthorService/findAuthor/Retry/maxRetries.

That also means that you can use the annotations without any attributes in the code and configure them later, with different values for each environment.
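
For instance, a single entry in microprofile-config.properties is enough to allow more retries in a particular environment (the value of 5 is just an example):

ws.ament.microprofile.gettingstarted.AuthorService/findAuthor/Retry/maxRetries=5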

 

In the code, we also see a REST client proxy provided by the MicroProfile container. The URL is specified by external configuration under a generated configuration name, similar to the fault tolerance annotations. The rest is just calling a method on the proxy, which does all the work of making the remote call and returning an Author instance.
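
The article doesn't show the AuthorConnector interface itself, but with the MicroProfile Rest Client API it might look roughly like this sketch (the /authors path and the shape of the remote endpoint are assumptions):

@RegisterRestClient // marks the interface as a type-safe REST client that can be injected with @RestClient
@Path("/authors")
public interface AuthorConnector {

 @GET
 @Path("/{id}")
 @Produces(MediaType.APPLICATION_JSON)
 Author get(@PathParam("id") String id); // matches the authorConnector.get(id) call above
}

The base URL of the remote author service is then supplied via configuration under the fully qualified interface name, for example (the host and port are placeholders):

ws.ament.microprofile.gettingstarted.AuthorConnector/mp-rest/url=http://localhost:8081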

Monitor what’s going on

So there you have it! A couple of REST controllers and services, and you have a microservice built with Eclipse MicroProfile to manage books.

 

The last thing is to find out what’s going on inside your application. Metrics and Health Check functionality in MicroProfile containers provide a lot of information out of the box. It’s available via REST endpoints.

 

Various metrics collected during the lifetime of the application are automatically exposed via REST over HTTP under the /metrics base path, in either JSON or Prometheus format. Common metrics about the JVM, threads, loaded classes, and the operating system are provided out of the box. Other custom metrics can be provided by the implementation. The application can also collect its own metrics very easily using method interceptors or producer methods.
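
The method interceptor approach is the same one we already used with @Counted on the controller. As a sketch, a @Timed annotation could be added to the findAll method to also measure how long each request takes (the metric name and description are arbitrary):

 @GET
 @Timed(name = "findAllBooksTimer", description = "Tracks how long it takes to list all books")
 @RolesAllowed("read-books")
 public Books findAll() {
  return bookService.getAll();
 }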

 

For example, if a service is running on localhost and port 8080, you can simply access http://localhost:8080/metrics with the HTTP header Accept: application/json and you'll get something like this:

 

{
    "base": {
        "classloader.totalLoadedClass.count": 16987,
        "cpu.systemLoadAverage": 1.22,
        "thread.count": 141,
        "classloader.currentLoadedClass.count": 16986,
        "jvm.uptime": 52955,
        "memory.committedNonHeap": 131727360,
        "gc.PS MarkSweep.count": 3,
        "memory.committedHeap": 503316480,
        "thread.max.count": 143,
        "gc.PS Scavenge.count": 20,
        "cpu.availableProcessors": 8,
        "thread.daemon.count": 123,
        "classloader.totalUnloadedClass.count": 2,
        "memory.usedNonHeap": 117340624,
        "memory.maxHeap": 503316480,
        "memory.usedHeap": 139449848,
        "gc.PS MarkSweep.time": 428,
        "memory.maxNonHeap": -1,
        "gc.PS Scavenge.time": 220
    }
}

 

You can also access http://localhost:8080/health to find out whether the service is running OK or has some errors. This is a simple yes/no check that returns an HTTP 200 status code if all is OK. It's suitable for systems that can automatically detect and restart failing services, such as Kubernetes.
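
Beyond the built-in checks, the application can contribute its own health checks by implementing the HealthCheck interface in a CDI bean annotated with @Health. A sketch of a hypothetical database check might look like this (the DatabaseHealthCheck class and its test query are assumptions, not part of the sample application):

@Health
@ApplicationScoped
public class DatabaseHealthCheck implements HealthCheck {

 @Inject
 private EntityManager entityManager;

 @Override
 public HealthCheckResponse call() {
  try {
   // a trivial query just to verify that the database responds
   entityManager.createNativeQuery("SELECT 1").getSingleResult();
   return HealthCheckResponse.named("database").up().build();
  } catch (Exception e) {
   return HealthCheckResponse.named("database").down().build();
  }
 }
}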

 

There are still a few more components of MicroProfile 1.3, such as Open API and Open Tracing. We won't cover them here and leave it up to you to explore the API and the documentation, which you can find at microprofile.io. You can find more documentation about the MicroProfile API, including additional enhancements added by Payara Micro, in the Payara MicroProfile Documentation.

 

You can also download the full sample code used in this article on GitHub.

 

Co-authored with John D Ament

 
