Taking Payara To The Cloud
Originally published on 29 Mar 2018
Last updated on 21 Aug 2019
It may be hard to believe in 2018, but there was once a time before Amazon Web Services. In 2006, Amazon launched what was to become the most dominant platform in cloud computing - the Elastic Compute Cloud (EC2). While there were a lot of early adopters who could see the benefits of "Infrastructure as a Service" (IaaS) style cloud computing - a notable example being Dropbox - there were many who were sceptical of the hype around the "cloud", a scepticism summed up by stickers along the lines of "there is no cloud, it's just someone else's computer".
It's always easy for cynicism to reign in our industry, and for good reason. There have been many new technologies announced as the "next big thing" which have turned out to be either an abject failure or a re-branding of something that already exists. Most of the critics of cloud computing (particularly IaaS) saw the technology as nothing more than a remote data centre with some extra marketing.
I'm confident in saying all this because I (initially, at least) thought the same. The key change in thinking that unlocks the benefits of cloud computing in 2018 (whether that's IaaS, containers or serverless) is treating server resources as cattle, not pets. Martin Fowler has written about this in the past, referring to "Snowflake Servers"; since the word "snowflake" has since taken on some political baggage, it is probably more helpful to think of the difference between a pet and cattle. A "Pet" server needs to be looked after and maintained, since its lifecycle is of critical importance. In contrast, "Cattle" servers likely have a shorter lifecycle and should be expected to fail at some point, with those failures managed automatically by the infrastructure. This concept was taken to extremes by Netflix's famous Chaos Monkey.
While this principle emerged (or rather, grew in popularity) in public cloud environments where the only limit is the size of your budget, it can be applied in many other environments including traditional data centres or on-premise servers when using the right tooling.
The fact is, there really is a cloud, but it's not the hardware that runs your applications, it's the tooling that enables servers as cattle.
Where Does Payara Fit In?
The key question, then, for us as vendors is how best to make life easy for users who have embraced this development approach. Our goal, in the words of Payara's director, Steve, is to be "aggressively compatible": we want to slot in beside other platform technologies without straying outside our own problem domain. As he puts it:
"Payara Server and Payara Micro are not software islands. They exist in a heterogeneous ecosystem of cloud infrastructure, security infrastructure and well deployed open source and proprietary technologies...We want to enable you to use Payara Server and Payara Micro wherever and with whatever makes sense to you."
In short, it is no longer acceptable for vendors such as Payara to be ignorant of ecosystems outside of their own, as was the case in the recent past. In the "bad old days" of Enterprise SOA solutions, it was often advisable to throw all your money at a single vendor to make sure that you had a suite which integrated well and would at least be compatible with other products from the same vendor. Trying to "roll your own" suite by picking the best product for each problem simply wasn't an option. Today's cloud-native ecosystems are built with this idea of being highly compatible right from the beginning. Google's success with Kubernetes is a great example; it provides extension points like the Container Network Interface (CNI) or Container Runtime Interface (CRI) to allow other technologies to replace existing solutions. A good example is CoreDNS, which can fully replace kube-dns in a Kubernetes cluster. With this kind of interoperability, Google and other vendors can focus on their core goals and let others bring their own specialisations to the table, meaning a richer experience for all users.
The Enterprise Java world has not slowed down either, despite a 4-year delay between releases of Java EE. Vendors of enterprise Java application servers have been the source of much of the innovation during that time, a point I made in a recent talk at JavaOne.
In Payara's case, we began looking to a "cloud-native" future almost from the beginning with our introduction of Hazelcast as an optional clustering technology and Payara Micro as a new, future-focused way of packaging Java EE APIs in a way that felt more natural to developers who had grown used to the "Cattle" way of thinking.
We didn't do all this work in a vacuum either; back in mid-2016 we helped launch the Eclipse MicroProfile along with Red Hat, IBM, Tomitribe, the LJC and SouJava. Since then, we have collaborated in an unprecedented way to rapidly bring new APIs to developers.
Growing List of Features
Payara Server and Payara Micro have both been developed and adapted for cloud environments for a while now and the list of features available to make life easier is growing longer with each release:
- Environment variable replacement
- Payara can read environment variables set through, for example, a Dockerfile or Kubernetes YAML descriptor. These can be used anywhere in XML configuration files or even in annotations like @ActivationConfigProperty. They are especially useful when the same application must be deployed to multiple environments, where things like database URLs differ between development, test and production. By keeping the configuration separate from the deployment, there is no need to rebuild an artifact between environments or to unjar and rejar it. (A short sketch of this follows the list below.)
- Preboot, postboot and postdeploy command files
- What started as a feature for Payara Micro to enable running long configuration scripts was included in Payara Server from the 172 release. The big advantage is that those running in Docker can deploy applications on startup without the risk of enabling the autodeploy scanner in production. As a working example, our official Dockerfile for Payara Server uses this technique to deploy, on startup only, any archive copied to the $DEPLOYMENT_DIR. (An example command file follows the list below.)
- Payara Cloud Connectors
- With the 172 release of Payara Micro, we enabled JCA, meaning JMS RAR files could be deployed to allow Payara Micro to be used as a JMS client. While JMS is still widely used, there are many other competing messaging frameworks used in cloud environments, such as Amazon SQS or Microsoft's Azure Service Bus. The Payara Cloud Connectors repository allows developers to publish and subscribe to messages on SQS or Azure Service Bus using a standard Message Driven Bean (MDB) and all the configuration options that entails. (See the MDB sketch after this list.)
- Clustered CDI Event Bus
- Both Payara Server and Payara Micro can leverage the power of a Hazelcast data grid for effortless, highly resilient clustering. New annotations (@Outbound and @Inbound) have been added to allow events to be fired across a cluster and handled by a separate instance of Payara Server or Payara Micro. When elastically scaling up in a cloud environment, this allows work to be distributed easily without any extra configuration. (A sketch follows this list.)
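To illustrate the environment variable replacement described above, here is a minimal sketch of a JMS message-driven bean whose queue lookup is resolved from an environment variable. The ${ENV=...} form is, as I understand it from the Payara documentation, the substitution syntax for environment variables; the ORDER_QUEUE variable name is invented for this example.

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// The queue JNDI name is resolved from the ORDER_QUEUE environment variable
// at deployment time, so the same artifact can move between environments.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "${ENV=ORDER_QUEUE}")
})
public class OrderListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            System.out.println("Received: " + ((TextMessage) message).getText());
        } catch (JMSException e) {
            throw new IllegalStateException(e);
        }
    }
}
```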
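For the preboot, postboot and postdeploy command files, a post-boot file is simply a list of asadmin subcommands executed once the server is up. A minimal sketch (the resource name and archive path are invented for the example) might look like this:

```
create-system-properties payara.environment=staging
deploy /opt/payara/deployments/myapp.war
```

The file is passed at startup, for example with asadmin start-domain --postbootcommandfile post-boot-commands.txt on Payara Server, or java -jar payara-micro.jar --postbootcommandfile post-boot-commands.txt on Payara Micro.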
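The Payara Cloud Connectors follow the same MDB pattern. The sketch below is only indicative: the listener interface, annotation and activation property names are quoted from memory of the project's README and should be checked against the repository before use.

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;

import com.amazonaws.services.sqs.model.Message;

import fish.payara.cloud.connectors.amazonsqs.api.AmazonSQSListener;
import fish.payara.cloud.connectors.amazonsqs.api.OnSQSMessage;

// Assumed API: the interface, annotation and property names here are from
// memory of the Payara Cloud Connectors project and may differ in the release.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "awsAccessKeyId",
                              propertyValue = "${ENV=AWS_ACCESS_KEY_ID}"),
    @ActivationConfigProperty(propertyName = "awsSecretKey",
                              propertyValue = "${ENV=AWS_SECRET_KEY}"),
    @ActivationConfigProperty(propertyName = "queueURL",
                              propertyValue = "${ENV=ORDERS_QUEUE_URL}")
})
public class SQSOrderListener implements AmazonSQSListener {

    @OnSQSMessage
    public void onMessage(Message message) {
        System.out.println("SQS message body: " + message.getBody());
    }
}
```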
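And for the clustered CDI event bus, a sketch of the @Outbound/@Inbound pair (the annotations live in the fish.payara.micro.cdi package, if memory serves):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

import fish.payara.micro.cdi.Inbound;
import fish.payara.micro.cdi.Outbound;

@ApplicationScoped
public class ClusterNotifier {

    // Fires the event across the data grid rather than only locally
    @Inject
    @Outbound
    private Event<String> clusterEvent;

    public void publish(String payload) {
        clusterEvent.fire(payload);
    }

    // Receives events fired with @Outbound on any other instance
    public void receive(@Observes @Inbound String payload) {
        System.out.println("Received from cluster: " + payload);
    }
}
```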
What's New in Payara Server 5 and Payara Micro?
On Monday, March 19th, Payara Server and Payara Micro 5 were released along with a host of ground-breaking new features:
MicroProfile 1.2 Support
Payara has been a part of MicroProfile since the beginning. Eclipse MicroProfile declares itself to be primarily concerned with providing tools for microservice architectures to Java EE developers; however, many of the specifications apply equally to a monolith in a cloud-native environment. The specifications can be used, for example, in a monolith right at the beginning of a transition to a new environment or architecture. There is no need to wait for a redesign to take advantage of the new specifications and transition in a more gradual, iterative way.
Our support for MicroProfile 1.2 in Payara Server and Payara Micro 5 includes:
- Config
- The MicroProfile Config API allows configuration for both the container and the application to be pulled from outside of the container at runtime. In Payara's case, this configuration can be held in the Hazelcast data grid, meaning any newly created instance of an existing application can inherit environment-specific configuration values. (A usage sketch follows this list.)
- Metrics
- The implementation of MicroProfile Metrics is activated by default, so Payara Server or Payara Micro can be monitored out-of-the-box by any product which can consume Prometheus-format data, which is fast becoming a de facto standard for metrics collection. Indeed, since Prometheus is the default monitoring product for Kubernetes, this makes both Payara Server and Payara Micro ready for use in a modern container-orchestrated environment. (An example follows this list.)
- Fault Tolerance
- Fault Tolerance collects multiple patterns for dealing with potentially unreliable distributed services into a single API. This is particularly useful when, for example, a dependent service goes down. The Fault Tolerance API provides the tools to fall back to an alternative method, or to wait for a given timeout to give the service a chance to recover or be restarted by the cloud platform. (See the annotated example after this list.)
- Service Healthchecks
- Healthchecks are important for giving orchestrators a view into whether a service is healthy or not. A healthcheck deliberately reports only two states, so that a tool like a service mesh or dynamic proxy can react to a service which reports itself as unhealthy and choose an alternative to handle the request. (A sample healthcheck follows the list.)
- JWT-auth
- JSON Web Tokens (JWT) have emerged as a popular, language- and framework-agnostic method of communicating claims between applications. By supporting JWT, MicroProfile (and therefore Payara Server and Payara Micro) can better interoperate with services developed in other languages. (A JAX-RS sketch follows this list.)
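A minimal sketch of the Config API in use (the property name and default value are invented for the example):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class PaymentGatewayClient {

    // Resolved at runtime from the configured sources (environment variables,
    // system properties, microprofile-config.properties or, in Payara's case,
    // values held in the Hazelcast data grid), with a fallback default.
    @Inject
    @ConfigProperty(name = "payment.gateway.url",
                    defaultValue = "http://localhost:8080/payments")
    private String gatewayUrl;

    public String gatewayUrl() {
        return gatewayUrl;
    }
}
```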
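For Metrics, annotating a business method is enough for the data to appear on the /metrics endpoint in Prometheus format; a sketch (the metric names are arbitrary):

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@ApplicationScoped
public class OrderService {

    // Both metrics are registered automatically and exposed on /metrics
    @Counted(name = "ordersPlaced")
    @Timed(name = "orderProcessingTime")
    public void placeOrder(String orderId) {
        // business logic goes here
    }
}
```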
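A sketch of the Fault Tolerance annotations guarding a call to an unreliable downstream service (the class, method and values are made up for illustration):

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class InventoryClient {

    // Retry a couple of times, give each attempt 500ms, and fall back to a
    // safe default if the dependent service stays down.
    @Retry(maxRetries = 2)
    @Timeout(500)
    @Fallback(fallbackMethod = "stockLevelFallback")
    public int stockLevel(String productId) {
        // call the (potentially unreliable) remote inventory service here;
        // the exception simulates that service being unavailable
        throw new IllegalStateException("inventory service unavailable");
    }

    // The fallback must have the same signature as the guarded method
    public int stockLevelFallback(String productId) {
        return 0;
    }
}
```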
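A sample healthcheck, as referenced above. Implementing the HealthCheck interface and annotating the bean with @Health is enough for the result to be aggregated into the /health endpoint:

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

// Discovered automatically; the endpoint reports only UP or DOWN overall
@Health
@ApplicationScoped
public class DatabaseHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        boolean databaseReachable = true; // replace with a real connectivity check
        return HealthCheckResponse.named("database")
                                  .state(databaseReachable)
                                  .build();
    }
}
```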
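Finally, a JAX-RS sketch of JWT-auth: once MP-JWT is enabled on the JAX-RS Application class with @LoginConfig(authMethod = "MP-JWT"), the verified token can be injected and standard role checks applied (the path and role name here are invented):

```java
import javax.annotation.security.RolesAllowed;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.jwt.JsonWebToken;

@Path("/orders")
@RequestScoped
public class OrdersResource {

    // The container verifies the incoming bearer token and exposes it here
    @Inject
    private JsonWebToken callerToken;

    @GET
    @RolesAllowed("orders-viewer")
    public String listOrders() {
        // Groups claimed in the token are mapped to roles by the container
        return "orders for " + callerToken.getName();
    }
}
```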
Rethinking the Domain Concept with the Domain Data Grid
As Steve wrote in his recent blog post, Payara Server 5 is all about rethinking the classic Domain concept. As he states in that post, the old model, which isolated shared data by cluster, is replaced by one which shares all data across all instances. This has huge implications for the way applications can be designed for resiliency - a major step towards moving application servers from being treated as "pets" to being treated as "cattle".
Instances can now join Deployment Groups to replicate some of the behaviour of clusters. Every member of a deployment group will run all the apps targeted to that deployment group, so a load balancer can route requests to any member.
This also means that servers can be added or removed from a deployment group at runtime to increase or reduce capacity from a pool of active servers, rather than starting up new instances. A big feature to help enable this is full data sharing between all members of the Domain Data Grid (Payara Server and Payara Micro), so by adding an existing server to a deployment group, all necessary session data is already available.
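As a rough sketch of how this looks from the asadmin command line (the group, node, instance and application names are invented, and the exact command options should be checked against the Payara 5 documentation):

```
asadmin create-deployment-group web-cluster
asadmin create-instance --node node1 --deploymentgroup web-cluster web1
asadmin create-instance --node node2 --deploymentgroup web-cluster web2
asadmin deploy --target web-cluster myapp.war
```

An existing, running instance can likewise be added to or removed from a deployment group at runtime, at which point the data grid already holds the session data it needs.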
For times when total capacity still needs to increase, there is transparent clustering on IaaS cloud platforms using the well-known address of the DAS, for both Payara Server and Payara Micro.
Where Are We Heading?
At the beginning of the year, we published our roadmap for 2018. As a whole, the roadmap covers lots of areas, but there is an underlying focus on ecosystems and technologies typically associated with modern application development. This includes a wider choice of security providers, including OAuth2, which has seen wide adoption thanks to implementations from OpenID, Google, and Facebook, among others. All of this is currently targeted at our security-themed Q2 release - 5.182.
Beyond that, we are looking to focus on tools for asynchronous programming styles, particularly Reactive programming. While not explicitly designed for cloud-native environments, the principles of asynchronous message passing and responsiveness lend themselves very well to environments where dependent services are liable to scale elastically in response to load. There are already many ways to write applications using Java EE APIs according to reactive principles, but there are areas where improvements can be made, such as enhancing CDI annotation support and making sure APIs can be used from lambdas and CompletableFutures.
A Micro Future
Perhaps the most significant factor in the development of both Payara Server and Payara Micro with regard to cloud environments - whether on an IaaS, PaaS or containerised platform - will be the Eclipse MicroProfile effort. The 1.2 specification implemented by Payara Micro already brings features to integrate with cloud ecosystems and the new 1.3 specification will take this further with the addition of OpenAPI - the open standard derived from Swagger - and OpenTracing. The OpenTracing specification is implemented by many popular products and will allow users to monitor requests which propagate through a distributed system.
Looking beyond the immediate developments in MicroProfile, there are already hints as to what the future may hold. One particular category of technology which has emerged in response to some of the changes discussed in this blog is the Service Mesh. At its most basic, a service mesh can be thought of as a more dynamic evolution of the venerable web proxy. To handle a service which may consist of "cattle" servers appearing and disappearing in response to load, there needs to be a dynamic proxy which can both avoid forwarding requests to servers that are not healthy and discover new servers to include in its load balancing.
With this kind of behaviour, the Service Mesh can be seen to move more into the world of orchestration, augmenting technologies like Kubernetes or Mesos to provide more intelligent routing rules or fault tolerance which can be configured without the underlying application being aware. The finer points of how MicroProfile will work with a service mesh are yet to be seen, but discussions have already started.