Kubernetes Native Discovery with Payara Micro
Published on 20 Dec 2017
by Susan Rai

Payara Micro supports Hazelcast out of the box, which can be used for clustering. This allows members of the cluster to distribute data between themselves, among other things. By default, Hazelcast comes with multiple ways to discover other members on the same network. A multicast discovery strategy is commonly used for this purpose: a multicast request is sent to all members on a network and the members respond with their IP addresses. Another strategy must be employed if a member cannot, or does not wish to, provide its IP address this way.
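For reference, multicast discovery corresponds to a default Hazelcast join configuration along these lines. This is only a minimal sketch (224.2.2.3 and 54327 are the Hazelcast defaults), and it is exactly what the Kubernetes setup later in this post disables:
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
    <network>
        <join>
            <!-- default strategy: members find each other via multicast -->
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
        </join>
    </network>
</hazelcast>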
Hazelcast provides a way for cloud and service discovery vendors to implement their own discovery strategies to fit their needs. One such solution is the Hazelcast Kubernetes discovery plugin. It provides a way to look up the IP addresses of other members by resolving requests against a Kubernetes Service Discovery system. More information on the plugin can be found here.
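The plugin ships as a separate artifact. Assuming a Maven build, it can be pulled in with a dependency along these lines (the version shown is illustrative; pick one that matches your Hazelcast release):
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-kubernetes</artifactId>
    <version>1.1.0</version>
</dependency>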
Kubernetes
Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for the deployment, maintenance and scaling of applications. Before starting the demonstration, I would like to go through some basic concepts of Kubernetes.
Pod
A pod is a group of one or more containers with shared storage and network, and a specification for how to run the containers. Pods are also the smallest deployable units of computing that can be created and managed in Kubernetes. Their contents are always co-located and co-scheduled, and run in a shared context.
Containers within a pod share an IP address and port space, and can find each other through localhost. They also communicate with each other using standard inter-process communications. Containers in different pods have distinct IP addresses and cannot communicate by inter-process communication.
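To illustrate that shared network namespace, here is a minimal, hypothetical pod description with two containers; the sidecar reaches the web container simply via localhost:
apiVersion: v1
kind: Pod
metadata:
  name: shared-network-example
spec:
  containers:
  # both containers share the pod's IP address and port space
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # polls the web container over the shared loopback interface
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]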
Pods don’t survive scheduling failures, node failures, or evictions (such as due to a lack of resources). Users should use controllers instead of creating pods directly, as controllers provide self-healing with a cluster scope, as well as replication and roll-out management.
Services
A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. The set of pods targeted by a service is determined by a label selector. A service can also expose pods on an external IP address, which is useful when parts of your application (such as front-ends) need to be reachable from outside the cluster.
Every node in a Kubernetes cluster runs a kube-proxy. Kube-proxy is responsible for implementing a form of virtual IP for Services. You can specify your own cluster IP address as part of a Service creation request.
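As a sketch, a Service that requests a specific cluster IP could look like the following; the name and address are illustrative, and the address must fall within the cluster's service IP range:
apiVersion: v1
kind: Service
metadata:
  name: fixed-ip-example
spec:
  # user-chosen virtual IP; must lie inside the cluster's service CIDR
  clusterIP: 10.0.0.50
  selector:
    name: payara
  ports:
  - port: 8080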
Deployment
A Deployment controller provides declarative updates for Pods and Replica Sets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. Deployments can be defined to create new Replica Sets, or to remove existing Deployments and adopt all their resources with new Deployments.
The following are typical use cases for a Deployment:
- Create a Deployment to roll out a Replica Set.
- Declare the new state of the Pods.
- Roll back to an earlier Deployment revision.
- Scale up the Deployment to facilitate more load.
- Pause the Deployment.
- Use the status of the Deployment.
- Clean up older Replica Sets.
A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new Replica Set, it can be complete, or it can fail to progress.
- Progressing: Kubernetes marks a Deployment as progressing when one of the following tasks is being performed: the Deployment is creating a new Replica Set, the Deployment is scaling up its newest Replica Set, the Deployment is scaling down its older Replica Set(s), or new Pods become ready or available.
- Complete: Kubernetes marks a Deployment as complete when it has the following characteristics: all the replicas associated with the Deployment have been updated to the latest version you’ve specified (meaning any updates you’ve requested have been completed), all the replicas associated with the Deployment are available, and no old replicas for the Deployment are running.
- Failed: Kubernetes marks a Deployment as failed when it gets stuck trying to deploy its newest Replica Set without ever completing. This can occur due to factors such as insufficient quota, readiness probe failures, image pull errors, insufficient permissions, limit ranges, or application runtime misconfiguration.
A Deployment can be rolled back if it’s not stable, for example if it is crash-looping. By default, all of the Deployment’s roll-out history is kept in the system so that you can roll back at any time.
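In practice, roll-out state and rollbacks are driven through kubectl. For example, assuming a Deployment named payara-micro, as in the demonstration below:
$ kubectl rollout status deployment/payara-micro
$ kubectl rollout history deployment/payara-micro
$ kubectl rollout undo deployment/payara-micro --to-revision=1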
Demonstration
For this demonstration I will be using the rest-jcache example from the Payara Examples repository, which can be found here. The example consists of a REST service that uses JCache annotations to retrieve and store JSON data. This example is perfect for this demonstration, as values can be added to the cache and shared across all of the Payara Micro instances.
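At its core, the example is a JAX-RS resource whose methods carry JCache annotations. The sketch below illustrates the idea; it is a simplified version, not the exact code from the repository:
import javax.cache.annotation.CacheKey;
import javax.cache.annotation.CachePut;
import javax.cache.annotation.CacheResult;
import javax.cache.annotation.CacheValue;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("cache")
public class CacheResource {

    // On a cache hit the method body is skipped and the cached value is
    // returned; the cache itself is distributed across the Hazelcast cluster.
    @GET
    @CacheResult(cacheName = "cache")
    public String get(@CacheKey @QueryParam("key") String key) {
        // runs only on a cache miss; the returned default is then cached
        return "nothing cached yet";
    }

    // Stores the request body in the cache under the given key.
    @PUT
    @CachePut(cacheName = "cache")
    public void put(@CacheKey @QueryParam("key") String key, @CacheValue String value) {
    }
}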
Prerequisites
This demonstration assumes:
- Minikube is installed, along with kubectl, the command-line tool for Kubernetes.
- Docker is installed.
- You have basic knowledge of Kubernetes, Docker and Hazelcast.
Running the demonstration
All the components required for this demonstration can be found in this repository: https://github.com/MeroRai/payara-hazelcast-kubernetes.
Modify the hazelcast.xml file to match your service name, label name and label value. This configures the discovery plugin inside your Hazelcast configuration.
Hazelcast configuration
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.8.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <properties>
        <!-- only necessary prior to Hazelcast 3.8 -->
        <property name="hazelcast.discovery.enabled">true</property>
    </properties>
    <network>
        <join>
            <!-- deactivate normal discovery -->
            <multicast enabled="false"/>
            <tcp-ip enabled="false"/>
            <!-- activate the Kubernetes plugin -->
            <discovery-strategies>
                <discovery-strategy enabled="true"
                    class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
                    <properties>
                        <!-- configure discovery service API lookup -->
                        <property name="service-name">payara-micro</property>
                        <property name="service-label-name">name</property>
                        <property name="service-label-value">payara</property>
                        <property name="namespace">default</property>
                    </properties>
                </discovery-strategy>
            </discovery-strategies>
        </join>
    </network>
</hazelcast>
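The Docker image used later bakes this file in. If you build your own image, Payara Micro can be pointed at a custom Hazelcast configuration on startup, roughly as below; verify the exact option name against the --help output of your Payara Micro version:
$ java -jar payara-micro.jar --hzConfigFile hazelcast.xml --deploy rest-jcache.war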
Adding a Payara Micro service
As I mentioned above, a Kubernetes service describes a set of pods that perform the same task: for example, the set of Payara Micro instances in a cluster. Below is the service description for our Payara Micro service:
apiVersion: v1
kind: Service
metadata:
  name: payara-micro
  labels:
    name: payara
spec:
  type: NodePort
  ports:
  # the port that this service should serve on
  - port: 8080
    # the node port on which the application will be exposed externally
    nodePort: 30001
  selector:
    name: payara
The label selector is a query over labels that identifies the set of pods contained within the service. You can see in the above description that the label selector is name: payara. If you look at the deployment description below, you can see that the pod template carries the corresponding label, so its pods will be selected for membership in this service.
To create the Payara Micro service, execute the command below:
$ kubectl create -f payaraMicroService.yaml
To view the service information, execute the command below:
$ kubectl get service payara-micro
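The output should confirm the NodePort mapping and will look something like the following; columns and addresses vary with the kubectl version and cluster:
NAME           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
payara-micro   NodePort   10.0.0.123   <none>        8080:30001/TCP   1m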
Adding a Payara Micro deployment
In Kubernetes, a deployment is responsible for replicating sets of identical pods. Unlike a service, it also has a desired number of replicas, and it will create or delete pods to ensure that the number of running pods matches that desired state. Below is the deployment description for the Payara Micro deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: payara-micro
  labels:
    name: payara
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: payara
    spec:
      containers:
      - name: payara-micro
        image: merorai/payara-micro-kubernetes
        imagePullPolicy: Always
        ports:
        - name: payara-micro
          containerPort: 8080
The deployment's template specification indicates that each Pod will run one container, named payara-micro. Kubernetes will pull the merorai/payara-micro-kubernetes image from Docker Hub and open port 8080 for use by the Pods.
To create the deployment, execute the command below:
$ kubectl create -f payaraMicroDeployment.yaml
To view the deployment, execute the command below:
$ kubectl get deployment payara-micro
To view the pods, execute the command below:
$ kubectl get pods
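Since membership in the service is driven by the name: payara label, you can also list just the matching pods with a label selector query:
$ kubectl get pods -l name=payara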
Once the service and deployment descriptions have been applied successfully, one Payara Micro instance should start. To start a second Payara Micro instance, execute both the payaraMicro2Service.yaml and payaraMicro2Deployment.yaml files in the same way.
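Alternatively, because the Deployment owns the replica count, you could scale the existing Deployment with the command below. Note, though, that the separate second Service/Deployment pair used here gives the new instance its own NodePort (30002), which the replication test at the end relies on:
$ kubectl scale deployment payara-micro --replicas=2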
Once both Payara Micro instances start, the Hazelcast Kubernetes discovery plugin will provide the IP addresses of all running members by resolving requests against the Kubernetes Service Discovery system. Once the cluster is formed, output like the following should be displayed:
[2017-12-08T14:59:54.988+0000] [] [INFO] [] [com.hazelcast.nio.tcp.TcpIpConnectionManager] [tid: _ThreadID=76 _ThreadName=hz._hzInstance_1_dev.cached.thread-2] [timeMillis: 1512745194988] [levelValue: 800] [172.17.0.5]:5701 [dev] [3.8] Established socket connection between /172.17.0.5:56763 and /172.17.0.3:5701
[2017-12-08T15:00:02.017+0000] [] [INFO] [] [com.hazelcast.system] [tid: _ThreadID=64 _ThreadName=hz._hzInstance_1_dev.generic-operation.thread-1] [timeMillis: 1512745202017] [levelValue: 800] [172.17.0.5]:5701 [dev] [3.8] Cluster version set to 3.8
[2017-12-08T15:00:02.020+0000] [] [INFO] [] [com.hazelcast.internal.cluster.ClusterService] [tid: _ThreadID=64 _ThreadName=hz._hzInstance_1_dev.generic-operation.thread-1] [timeMillis: 1512745202020] [levelValue: 800] [[
[172.17.0.5]:5701 [dev] [3.8]
Members [2] {
Member [172.17.0.3]:5701 - 7ec08a64-25de-42f8-9ddb-e5067753c06b
Member [172.17.0.5]:5701 - 1e4dc9fc-ce0b-4731-bd13-c3cd2f71699f this
}
]]
[2017-12-08T15:00:04.057+0000] [] [INFO] [] [com.hazelcast.core.LifecycleService] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1512745204057] [levelValue: 800] [172.17.0.5]:5701 [dev] [3.8] [172.17.0.5]:5701 is STARTED
To view the log, execute the command below:
$ kubectl logs <name-of-the-pod>
Test Cache Replication
Once the Payara Micro instances have clustered together and the rest-jcache example application has been deployed to them, we can test that everything is working by using cURL to add a value to a key on one instance and retrieving the same value from another instance.
1. Insert the string "{data}" into one of the instances using:
$ curl -H "Accept: application/json" -H "Content-Type: application/json" -X PUT -d "{data}" http://<NODE-IP-ADDRESS>:30001/rest-jcache/webresources/cache\?key\=test
2. Use the other instance to retrieve the added value using:
$ curl http://<NODE-IP-ADDRESS>:30002/rest-jcache/webresources/cache\?key\=test
This should return {data}, the value stored through the first instance, confirming that the cache is shared across the cluster.
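On Minikube, the <NODE-IP-ADDRESS> placeholder in both commands can be resolved with:
$ minikube ip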