Both Payara Server and Payara Micro can cluster together and share data using Hazelcast. Out of the box, no configuration is needed, since Hazelcast uses multicast to discover and join other cluster members. However, in cloud environments such as AWS, discovery is not so straightforward. The key problem is that multicast is not available, so another discovery strategy is needed; the most common generic alternative is TCP, but that assumes you know at least the intended subnet of your cluster members ahead of time.
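For comparison, a TCP-based join requires hard-coding candidate addresses in hazelcast.xml ahead of time. A minimal sketch (the subnet range is illustrative):

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <network>
    <join>
      <!-- Multicast is unavailable in AWS, so disable it explicitly -->
      <multicast enabled="false"/>
      <!-- TCP join only works if the member addresses are known in advance -->
      <tcp-ip enabled="true">
        <member>10.0.1.0-255</member>
      </tcp-ip>
    </join>
  </network>
</hazelcast>
```

This is exactly the limitation the AWS plugin removes: with auto scaling, you cannot know the member addresses ahead of time.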
The solution to this problem is Hazelcast's native AWS discovery plugin. The plugin uses the AWS APIs to retrieve a list of available EC2 instances, then contacts each instance's IP address to detect running Hazelcast members. Once the other members have been discovered, the cluster shares data as normal.
The best way to explain is with a demonstration. For this, I've used the rest-jcache example from the Payara Examples repository. The example uses JCache (implemented by Hazelcast) to store and retrieve simple JSON strings. It's a useful example here because we can add a value to the cache on one node and retrieve the same value immediately from the other node.
To keep things simple, I have already built the example and compiled it into an Uber JAR with Payara Micro 173, so we have a single deployable artefact.
To try this example out yourself, you will need to set up a few things:
- At least one EC2 server
- Java 8 or above
Note that the hazelcast-aws plugin does not work outside the AWS network (or a network with a direct VPN connection to AWS), since it fetches AWS metadata from a fixed IP address which is only accessible within AWS.
Run the Example
- Clone the repository: https://github.com/mikecroft/payara-hazelcast-aws
- Edit the hazelcast.xml file to fill in a valid IAM access key and secret key pair, and the region where your instances are located
- Start the Payara Micro Uber JAR on each instance as follows, using --addjars to add the Hazelcast-AWS plugin and --hzconfigfile to supply the custom hazelcast.xml:
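The AWS section of the hazelcast.xml might look like the following sketch. The key values and region are placeholders, and the hz-port range should match whatever ports your instances actually use:

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <network>
    <join>
      <!-- Disable the default multicast join; it does not work in AWS -->
      <multicast enabled="false"/>
      <tcp-ip enabled="false"/>
      <!-- Discover members via the AWS APIs instead -->
      <aws enabled="true">
        <access-key>YOUR-ACCESS-KEY</access-key>
        <secret-key>YOUR-SECRET-KEY</secret-key>
        <region>eu-west-2</region>
        <!-- Port range to scan for Hazelcast members on discovered instances -->
        <hz-port>5900-5920</hz-port>
      </aws>
    </join>
  </network>
</hazelcast>
```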
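A typical launch command would look something like this (the jar file names are illustrative; use the actual names of your Uber JAR and the hazelcast-aws plugin jar):

```shell
# Start Payara Micro with the AWS discovery plugin on the classpath
# and the custom Hazelcast configuration
java -jar rest-jcache-microbundle.jar \
     --addjars hazelcast-aws.jar \
     --hzconfigfile hazelcast.xml
```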
When Payara Micro starts, Hazelcast will query AWS for the IP addresses of all running instances available to the owner of the access key in the specified region. It will then use the configured starting port and port range to scan the discovered EC2 instances for other Hazelcast members and form a cluster. Once the cluster is established, you should see output similar to the following:
In the above, we can see the two cluster members and messages from Payara that various services have started.
Test Cache Replication
Now that the Payara Micro instances have discovered each other, we can test them by using cURL to add a value for a key (test) on Server2, then retrieving the same value from Server1.
1. First, use a GET request to Server1 to get the default response for a cache miss (no value found for the key):
2. Next, use an HTTP PUT on Server2 to add a value for the key test. In this example, we are using a simple string value:
3. Finally, use another GET on Server1 to show the added data is available on both nodes:
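The three steps above can be sketched with cURL as follows. The hostnames and the REST path are assumptions based on the rest-jcache example; substitute your instances' addresses and check the example's actual endpoint:

```shell
# 1. Cache miss on Server1: no value yet for key "test"
curl "http://server1:8080/rest-jcache/webresources/cache?key=test"

# 2. PUT a value for key "test" on Server2
curl -X PUT -H "Content-Type: application/json" \
     -d '"hello from server2"' \
     "http://server2:8080/rest-jcache/webresources/cache?key=test"

# 3. GET on Server1 now returns the value that was stored via Server2,
#    proving the cache is shared across the cluster
curl "http://server1:8080/rest-jcache/webresources/cache?key=test"
```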
In The Real World
Filter Discovered Instances
When the time comes to use this plugin in your production AWS environment, you may want to limit which instances are included in discovery for the cluster. There are different ways to approach this, but perhaps the easiest to manage is adding custom tags to the relevant EC2 instances. To be really dynamic, these tags can be configured in auto scaling groups and then used in the Hazelcast configuration as shown:
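A sketch of the tag filter in the aws section of hazelcast.xml (the tag key and value are examples; use whatever tags you apply to your instances or auto scaling group):

```xml
<aws enabled="true">
  <access-key>YOUR-ACCESS-KEY</access-key>
  <secret-key>YOUR-SECRET-KEY</secret-key>
  <region>eu-west-2</region>
  <!-- Only instances carrying this tag are considered cluster candidates -->
  <tag-key>cluster</tag-key>
  <tag-value>payara-demo</tag-value>
</aws>
```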
Another way would be to configure different groups and passwords within the hazelcast.xml. While it is a good idea in general to make use of groups and passwords (this prevents cross-talk in environments where networks are shared), it is better to use instance tags in AWS, since tags reduce the list of candidate instances for Hazelcast to scan and therefore generate far less unnecessary network traffic.
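For reference, a group is configured in hazelcast.xml like this (the name and password are placeholders); members only join a cluster whose group credentials match:

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <!-- Members with different group credentials will not cluster together -->
  <group>
    <name>production-cluster</name>
    <password>change-this-password</password>
  </group>
</hazelcast>
```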
Use an IAM Role Instead of Access Keys
Access key pairs are extremely sensitive pieces of information and should be protected carefully. The best way to protect them is not to use them at all; the plugin supports IAM roles, so after creating a new role you can remove the <access-key> and <secret-key> tags and replace them as shown:
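A sketch of the IAM role variant (the role name is a placeholder; the role must grant permission to describe EC2 instances):

```xml
<aws enabled="true">
  <!-- No access-key/secret-key: credentials come from the instance's IAM role -->
  <iam-role>my-hazelcast-discovery-role</iam-role>
  <region>eu-west-2</region>
</aws>
```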
Further configuration details can be found in the README for the plugin on GitHub: