AWS Native Discovery with Payara Micro

by Mike Croft

Both Payara Server and Payara Micro can cluster together and share data using Hazelcast. Out of the box, no configuration is needed, since Hazelcast uses multicast to discover and join other cluster members. However, in cloud environments such as AWS, discovery is not so straightforward. The key obstacle is that multicast is not available, so another discovery strategy is needed. The most common generic alternative is TCP, but that assumes you know at least the intended subnet of your cluster members ahead of time.
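For reference, a plain TCP join looks something like the following in hazelcast.xml; the member addresses here are placeholders, which is exactly the problem: they must be known in advance.

```xml
<hazelcast>
    <network>
        <join>
            <!-- Multicast must be disabled explicitly when joining over TCP -->
            <multicast enabled="false"/>
            <tcp-ip enabled="true">
                <!-- Placeholder addresses: each member (or its subnet) must be known ahead of time -->
                <member>10.0.1.10</member>
                <member>10.0.1.11</member>
            </tcp-ip>
        </join>
    </network>
</hazelcast>
```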


The solution to this problem is Hazelcast's AWS native discovery plugin (hazelcast-aws). The plugin calls the AWS APIs to retrieve a list of available instances, then contacts each instance's IP address to detect any running Hazelcast members. Once other members have been discovered, the cluster shares data as normal.



The best way to explain is with a demonstration. For this, I've used the rest-jcache example from the Payara Examples repository. The example uses JCache (implemented by Hazelcast) to store and retrieve simple JSON strings. It's a useful example here because we can add a value to the cache on one node and retrieve the same value immediately from the other node.
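The contract of that endpoint can be illustrated with a plain in-memory map. This is only a simplified sketch of the behaviour, not the example's actual JCache-backed code; the class name and the default "miss" value are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for the cache-backed REST resource: a GET for a
// missing key returns a default value, a PUT stores the value (in the real
// example, cluster-wide via JCache; here just in a local map), and a
// subsequent GET returns it.
public class CacheSketch {
    private static final Map<String, String> cache = new ConcurrentHashMap<>();
    // Placeholder default, invented for illustration
    private static final String MISS = "helloworld";

    public static String get(String key) {
        return cache.getOrDefault(key, MISS);
    }

    public static void put(String key, String value) {
        cache.put(key, value);
    }

    public static void main(String[] args) {
        System.out.println(get("test"));   // cache miss -> default value
        put("test", "{data}");
        System.out.println(get("test"));   // -> {data}
    }
}
```

In the clustered example, the second GET succeeds even when it is sent to a different node than the PUT, because the cache is shared across the cluster.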


To keep things simple, I have already built the example and compiled it into an Uber JAR with Payara Micro 173, so we have a single deployable artefact.


Environment Setup

To try this example out yourself, you will need to set up a few things:

  • At least one EC2 instance (the demonstration below uses two, named server1 and server2)
  • Git
  • Java 8 or above

The hazelcast-aws plugin does not work outside the AWS network (or a network with a direct VPN connection to AWS), since it fetches instance metadata from a fixed, link-local IP address (the well-known 169.254.169.254) which is only reachable from within AWS:


public final class MetadataUtil {

    /**
     * This IP is only accessible within an EC2 instance and is used to fetch metadata of the running instance.
     * See details at
     */
    public static final String INSTANCE_METADATA_URI = "";

    // ...
}


Run the Example

  1. Clone the Payara Examples repository from GitHub (github.com/payara/Payara-Examples).

  2. Modify the hazelcast.xml file to fill in a valid IAM access key and secret key pair, and the region where your instances are located (mine were in eu-west-1):


<hazelcast>
    <properties>
        <property name="hazelcast.discovery.enabled">true</property>
    </properties>
    <network>
        <port auto-increment="true" port-count="3">5701</port>
        <join>
            <!-- Multicast must be disabled when using a discovery strategy -->
            <multicast enabled="false"/>
            <discovery-strategies>
                <discovery-strategy enabled="true" class="">
                    <properties>
                        <property name="access-key">********************</property>
                        <property name="secret-key">****************************************</property>
                        <property name="region">eu-west-1</property>
                        <property name="host-header"></property>
                        <property name="hz-port">5701</property>
                    </properties>
                </discovery-strategy>
            </discovery-strategies>
        </join>
    </network>
</hazelcast>
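As a side note, the port and port-count settings above determine which endpoints Hazelcast will probe on each discovered instance. The helper below is invented for illustration (it is not Hazelcast code) and simply shows the candidate list those two settings produce:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the candidate list Hazelcast builds from a starting port and a
// port count: for each discovered address, the starting port plus
// (port-count - 1) increments are probed.
public class PortScanSketch {

    static List<String> candidates(List<String> addresses, int startPort, int portCount) {
        List<String> result = new ArrayList<>();
        for (String address : addresses) {
            for (int port = startPort; port < startPort + portCount; port++) {
                result.add(address + ":" + port);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Port 5701 with port-count 3 yields ports 5701, 5702 and 5703 per address
        System.out.println(candidates(Arrays.asList("10.0.1.10", "10.0.1.11"), 5701, 3));
    }
}
```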


Start the Payara Micro Uber JAR on each instance as follows, using --addjars to add the Hazelcast-AWS plugin and --hzconfigfile to add the hazelcast.xml file:

java -jar payara-micro-rest-jcache.jar --addjars hazelcast-aws-2.1.0.jar --hzconfigfile hazelcast.xml


When Payara Micro starts, Hazelcast will query AWS for the IP addresses of all running instances available to the owner of the access key in the specified region. It then uses the configured starting port and port range to scan those instances for other Hazelcast members and form a cluster. Once the cluster is established, you should see output similar to the following:


[2017-10-30T15:19:36.494+0000] [] [INFO] [] [com.hazelcast.nio.tcp.TcpIpConnectionManager] [tid: _ThreadID=55 _ThreadName=hz._hzInstance_1_dev.cached.thread-1] [timeMillis: 1509376776494] [levelValue: 800] []:5701 [dev] [3.8] Established socket connection between / and /
[2017-10-30T15:19:43.515+0000] [] [INFO] [] [com.hazelcast.system] [tid: _ThreadID=42 _ThreadName=hz._hzInstance_1_dev.priority-generic-operation.thread-0] [timeMillis: 1509376783515] [levelValue: 800] []:5701 [dev] [3.8] Cluster version set to 3.8
[2017-10-30T15:19:43.518+0000] [] [INFO] [] [com.hazelcast.internal.cluster.ClusterService] [tid: _ThreadID=42 _ThreadName=hz._hzInstance_1_dev.priority-generic-operation.thread-0] [timeMillis: 1509376783518] [levelValue: 800] [[
  []:5701 [dev] [3.8]
Members [2] {
    Member []:5701 - 8749141f-8b98-4a69-9892-c66ab6cf8de8
    Member []:5701 - 33ddf44e-71a8-4895-9aa0-39b68f7bd33c this
}
[2017-10-30T15:19:45.543+0000] [] [INFO] [] [com.hazelcast.core.LifecycleService] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1509376785543] [levelValue: 800] []:5701 [dev] [3.8] []:5701 is STARTED
[2017-10-30T15:19:45.552+0000] [] [INFO] [] [] [tid: _ThreadID=36 _ThreadName=Executor-Service-3] [timeMillis: 1509376785552] [levelValue: 800] Payara Clustered Store Service Enabled
[2017-10-30T15:19:45.553+0000] [] [INFO] [] [fish.payara.nucleus.exec.ClusterExecutionService] [tid: _ThreadID=36 _ThreadName=Executor-Service-3] [timeMillis: 1509376785553] [levelValue: 800] Payara Clustered Executor Service Enabled
[2017-10-30T15:19:45.554+0000] [] [INFO] [] [fish.payara.nucleus.eventbus.EventBus] [tid: _ThreadID=36 _ThreadName=Executor-Service-3] [timeMillis: 1509376785554] [levelValue: 800] Payara Clustered Event Bus Enabled


In the output above, we can see the two cluster members, followed by messages confirming that Payara's clustered services have started.


Test Cache Replication

Now that the Payara Micro instances have discovered each other, we can test them by using cURL to add a value for a key (test) on server2, then retrieve the same value from server1.


1. First, use a GET request to Server1 to get the default response for a cache miss (no value found for the key):


  ~ curl http://server1:8080/rest-jcache-1.0-SNAPSHOT/webresources/cache\?key\=test


2. Next, use an HTTP PUT on Server2 to add a value for the key test. In this example, we are using the string "{data}":

  ~ curl -H "Accept: application/json" -H "Content-Type: application/json" -X PUT -d "{data}" http://server2:8080/rest-jcache-1.0-SNAPSHOT/webresources/cache\?key\=test

3. Finally, use another GET on Server1 to show the added data is available on both nodes:

  ~ curl http://server1:8080/rest-jcache-1.0-SNAPSHOT/webresources/cache\?key\=test



In The Real World

Filter Discovered Instances

When the time comes to use this plugin in a production AWS environment, you may want to limit which instances are included in discovery for the cluster. There are different ways to approach this, but perhaps the easiest to manage is adding custom tags to the relevant EC2 instances. To stay fully dynamic, the tags can be applied by auto scaling groups and then referenced in the Hazelcast configuration as shown:


<aws enabled="true">
    <!-- Only instances carrying this tag are considered for discovery.
         The key and value below are illustrative placeholders. -->
    <tag-key>payara-cluster</tag-key>
    <tag-value>production</tag-value>
</aws>


Another option is to configure different group names and passwords within hazelcast.xml. While using groups and passwords is good practice in general (it prevents cross-talk in environments where networks are shared), instance tags are the better choice in AWS: they reduce the list of candidate instances Hazelcast has to scan, and therefore generate far less unnecessary network traffic.
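For completeness, a group name and password are configured like this in hazelcast.xml; the name and password here are placeholders:

```xml
<hazelcast>
    <group>
        <!-- Placeholder values: members only join a cluster whose group matches -->
        <name>payara-demo</name>
        <password>change-me</password>
    </group>
</hazelcast>
```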


Use an IAM Role Instead of Access Keys

Access key pairs are extremely sensitive pieces of information and should be protected carefully. The best protection is not to use them at all: the plugin supports IAM roles, so after creating a suitable role you can remove the <access-key> and <secret-key> tags and replace them as shown:


<aws enabled="true">
    <!-- Placeholder role name: the instances must be launched with this IAM role attached -->
    <iam-role>payara-cluster-role</iam-role>
    <region>eu-west-1</region>
</aws>


Further configuration details can be found in the README for the hazelcast-aws plugin on GitHub.