WildFly Bootable JAR cluster application with JKube openshift-maven-plugin
Introduction
This post is a step-by-step guide describing how to build and deploy on OpenShift an example WildFly Bootable JAR application that caches HTTP session state. We will explore how the Bootable JAR uses the KUBE_PING protocol as its cluster discovery mechanism and how you can use the JKube openshift-maven-plugin to deploy the application on OpenShift.
Getting started
The demo application is a minimalistic shopping cart that stores items in the HTTP session. The key points are:
- We want our session data to be replicated across all cluster members. The Jakarta Servlet specification supports distributable web applications; to share your session data, you need to mark the application as distributable in the web.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd"
         version="4.0">
    <distributable />
</web-app>
- Our Bootable JAR application is built using the wildfly-jar-maven-plugin. This is the default configuration in our pom.xml:
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-jar-maven-plugin</artifactId>
    <configuration>
        <feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)#22.0.0.Final</feature-pack-location>
        <layers>
            <layer>jaxrs-server</layer>
            <layer>web-clustering</layer>
        </layers>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>package</goal>
            </goals>
        </execution>
    </executions>
</plugin>
The feature-pack-location is an expression used to resolve feature-pack artifacts from the remote repository. The feature packs determine the WildFly version in use. Using a feature-pack location is convenient when you want automatic version resolution.
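For instance, a Galleon feature-pack location can also be written without the version suffix, in which case the plugin resolves the latest final feature pack published in the universe (a hedged example; pin the version, as above, when you need reproducible builds):

<feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)</feature-pack-location>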
Note: If you are more comfortable using Maven GAVs, the above feature-pack-location can be replaced by the equivalent feature-pack configuration:
<feature-packs>
    <feature-pack>
        <groupId>org.wildfly</groupId>
        <artifactId>wildfly-galleon-pack</artifactId>
        <version>22.0.0.Final</version>
    </feature-pack>
</feature-packs>
- The layers section specifies the Galleon layers we want for our application:
  - jaxrs-server: Adds support for JAX-RS, CDI, and JPA. The application stores the items in the session using a JAX-RS resource.
  - web-clustering: Adds support for distributable web applications. This layer supplies the Infinispan web container cache and the JGroups subsystem.
- The default Maven profile builds and deploys the application locally. When deploying to OpenShift, we activate the openshift Maven profile, which adds the configuration required to run the application on OpenShift on top of the default Maven configuration.
To get a better understanding, we will first build and verify the application locally, and then deploy and verify it on OpenShift.
Building and testing the application locally
- Clone the application and build it:

$ git clone https://github.com/yersan/wildfly-clustering-demo.git
$ cd wildfly-clustering-demo
wildfly-clustering-demo$ mvn package
Now you can launch two server instances to verify the cluster works as expected. When running a Bootable JAR cluster application locally, you have to specify a node name for each server instance and offset the port numbers to avoid conflicts between the launched instances.
- Launch the first instance. In a terminal window, execute the following:
wildfly-clustering-demo$ java -jar ./target/wildfly-clustering-demo-bootable.jar -Djboss.node.name=node1
- Launch the second instance in a different terminal, applying a port offset to avoid conflicts:
wildfly-clustering-demo$ java -jar ./target/wildfly-clustering-demo-bootable.jar -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=10
You should see in the logs that a cluster has been formed, along with the names of the nodes that have joined it:
16:15:06,908 INFO [org.infinispan.CLUSTER] (ServerService Thread Pool -- 44) ISPN000094: Received new cluster view for channel ejb: [node1|1] (2) [node1, node2]
Note: When you build the Bootable JAR application locally, without any further configuration, the JGroups subsystem is configured by default to use the UDP protocol and to send cluster member discovery messages to the 230.0.0.4 multicast address. If you do not see the above log trace, verify that your operating system can send and receive multicast datagrams and route them to the 230.0.0.4 IP address through your network interface.
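One quick, hedged sanity check on Linux is to ask the routing table which interface would carry traffic to that multicast address:

$ ip route get 230.0.0.4   # shows the interface the kernel would use for this multicast address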
We are now ready to add some items to our HTTP session. We do not have a load balancer locally, so we will add items by using the first instance. Later, we will stop that instance and verify the session is replicated correctly.
- Add some items to the application cart:

$ curl --cookie-jar /tmp/session.txt -XPOST http://localhost:8080/api/jeans/2
$ curl --cookie /tmp/session.txt -XPOST http://localhost:8080/api/shorts/4
- Check the cart items by accessing the first instance:

$ curl --cookie /tmp/session.txt -XGET http://localhost:8080/api/cart
{"host":"localhost","sessionId":"vcVIqIU80USV7W11Qoh3QyJU1PPEFFxOPwI-HcEZ","cart":[{"item":"shorts","quantity":4},{"item":"jeans","quantity":2}]}
- Stop the first instance and check the cart again by accessing the second node; notice the port is now 8090:

$ curl --cookie /tmp/session.txt -XGET http://localhost:8090/api/cart
{"host":"localhost","sessionId":"vcVIqIU80USV7W11Qoh3QyJU1PPEFFxOPwI-HcEZ","cart":[{"item":"shorts","quantity":4},{"item":"jeans","quantity":2}]}
We see the same session ID and items on the second node. The session data is correctly replicated, and our cluster and application are working locally.
Moving to OpenShift
The demo application uses a dedicated Maven profile to configure what is needed to build and deploy on OpenShift.
Adapting the wildfly-jar-maven-plugin for cloud execution
The wildfly-jar-maven-plugin has to know that we intend to build the WildFly Bootable JAR application for cloud execution. In the openshift Maven profile, we extend the default configuration by adding the cloud configuration item as follows:
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-jar-maven-plugin</artifactId>
    <configuration>
        <cloud/>
    </configuration>
</plugin>
This setting adds specific server configuration to run the Bootable JAR in a cloud context: the JGroups subsystem is configured to use the KUBE_PING protocol for both the tcp (default) and udp stacks, the microprofile-health Galleon layer is automatically provisioned, and jboss.node.name is automatically set to the pod hostname. You can check the Configuring the server for cloud execution section of the Bootable JAR documentation for more details about this setting.
With the KUBE_PING protocol enabled, cluster member discovery is done by asking Kubernetes for the list of IP addresses of the running pods. To make this work, we need the following:
- Our pods must have the KUBERNETES_NAMESPACE environment variable set. This variable defines the namespace JGroups will query to discover other cluster members from this pod. The JKube openshift-maven-plugin sets this environment variable for us (see the snippet after this list).
- We need to grant authorization to the service account the pod runs under so that it can query the Kubernetes REST API for the addresses of all cluster nodes. We must complete this step manually before deploying the Bootable JAR application.
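For reference, JKube injects KUBERNETES_NAMESPACE through the Kubernetes downward API; in the generated manifest, the container environment entry looks roughly like this abbreviated sketch:

env:
- name: KUBERNETES_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace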
Using openshift-maven-plugin to deploy on OpenShift
To deploy the application on OpenShift we will use the openshift-maven-plugin. This Maven plugin is integrated with the Bootable JAR and lets us rely on sensible defaults, keeping the configuration simple and tidy. It also automatically adds readiness and liveness probes to the Bootable JAR application. These probes are simple HTTP GET requests against the following endpoints:
- Readiness: http://localhost:9990/health/ready
- Liveness: http://localhost:9990/health/live
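Once the application is running on OpenShift (later in this post), you can hit these endpoints yourself; a hedged example, assuming curl is available in the container image and using a placeholder pod name:

$ oc exec <pod-name> -- curl -s http://localhost:9990/health/ready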
If you add readiness or liveness checks in your application code, they are also taken into account when deploying with the JKube plugin, since they are exposed through the built-in MicroProfile Health support that the Bootable JAR Maven plugin provisions as an additional Galleon layer.
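As an illustration, a minimal application-level readiness check written against the MicroProfile Health API could look like the following sketch (the class and check names are illustrative and not part of the demo application):

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

// Contributes to the /health/ready endpoint used by the readiness probe.
@Readiness
@ApplicationScoped
public class CartReadinessCheck implements HealthCheck {
    @Override
    public HealthCheckResponse call() {
        // A real check would verify the resources the application depends on.
        return HealthCheckResponse.named("cart-ready").up().build();
    }
}

Let us now take a look at the JKube plugin configuration: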
<profiles>
    <profile>
        <id>openshift</id>
        <properties>
            <jkube.generator.from>registry.redhat.io/ubi8/openjdk-11:latest</jkube.generator.from>
        </properties>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.eclipse.jkube</groupId>
                    <artifactId>openshift-maven-plugin</artifactId>
                    <version>1.0.2</version>
                    <configuration>
                        <resources>
                            <env>
                                <GC_MAX_METASPACE_SIZE>256</GC_MAX_METASPACE_SIZE>
                                <GC_METASPACE_SIZE>96</GC_METASPACE_SIZE>
                            </env>
                        </resources>
                    </configuration>
                    <executions>
                        <execution>
                            <goals>
                                <goal>resource</goal>
                                <goal>build</goal>
                                <goal>apply</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>
The jkube.generator.from property specifies the base image our application is going to use. The Zero-Config capability of the JKube Maven plugin would pick a default base image if you did not specify this configuration; however, for our demo, we have chosen registry.redhat.io/ubi8/openjdk-11 as the base image.
Since we are using the ubi8/openjdk-11 base image, we also configure the GC metaspace sizes. Environment variables like these can be added in the resources/env section of the plugin configuration.
We have also configured the oc:resource, oc:build, and oc:apply Maven goals in the JKube plugin executions. With the above configuration, executing mvn install -Popenshift kicks off the whole process of building and deploying on OpenShift. In the following sections we will go step by step to explain what happens behind the scenes in each phase.
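For orientation, the step-by-step walkthrough below runs roughly the equivalent goals one at a time:

wildfly-clustering-demo$ mvn clean package -Popenshift   # the build also runs oc:resource
wildfly-clustering-demo$ mvn oc:build -Popenshift
wildfly-clustering-demo$ mvn oc:apply -Popenshift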
Building and verifying the Bootable JAR application on OpenShift
We will use Red Hat CodeReady Containers (CRC) as a local OpenShift cluster. It brings a minimal OpenShift 4 cluster with one node to our local computer.
- Start CRC and create the new project we are going to work in:

$ crc start -p crc_license.txt
$ oc login -u kubeadmin -p dpDFV-xamBW-kKAk3-Fi6Lg https://api.crc.testing:6443
$ oc new-project wildfly-cluster-demo
Now using project "wildfly-cluster-demo" on server "https://api.crc.testing:6443".
- Our application uses the KUBE_PING protocol, so we need to grant authorization to the service account the pod is running under:

$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
clusterrole.rbac.authorization.k8s.io/view added: "system:serviceaccount:wildfly-cluster-demo:default"
- Build the application by using the openshift Maven profile:
wildfly-clustering-demo$ mvn clean package -Popenshift
Let us take a look at some points at this stage:
- The JKube oc:resource goal runs as part of the Maven build:

[INFO] --- openshift-maven-plugin:1.0.2:resource (default) @ wildfly-clustering-demo ---
[INFO] oc: Using docker image name of namespace: wildfly-cluster-demo
[INFO] oc: Running generator wildfly-jar
[INFO] oc: wildfly-jar: Using Docker image registry.redhat.io/ubi8/openjdk-11:latest as base / builder
[INFO] oc: Using resource templates from /home/yborgess/dev/projects/wildfly-clustering-demo/src/main/jkube
[INFO] oc: jkube-controller: Adding a default DeploymentConfig
[INFO] oc: jkube-service: Adding a default service 'wildfly-clustering-demo' with ports [8080]
[WARNING] oc: jkube-image: Environment variable GC_MAX_METASPACE_SIZE will not be overridden: trying to set the value 256, but its actual value is 256
[WARNING] oc: jkube-image: Environment variable GC_METASPACE_SIZE will not be overridden: trying to set the value 96, but its actual value is 96
[INFO] oc: jkube-healthcheck-wildfly-jar: Adding readiness probe on port 9990, path='/health/ready', scheme='HTTP', with initial delay 10 seconds
[INFO] oc: jkube-healthcheck-wildfly-jar: Adding liveness probe on port 9990, path='/health/live', scheme='HTTP', with initial delay 60 seconds
[INFO] oc: jkube-revision-history: Adding revision history limit to 2
When the resource goal runs, JKube prepares all the OpenShift resources needed to deploy the application. You can inspect the resources that are going to be deployed by looking at the target/classes/META-INF/jkube/openshift.yml file. You will find the following (an abbreviated sketch follows the list):
- A Service exposing port 8080.
- A Route exposing this Service.
- A DeploymentConfig that defines and starts our pods. In this file you can see the probes and the required environment variables: GC_MAX_METASPACE_SIZE and GC_METASPACE_SIZE, added manually by us in the plugin configuration, and KUBERNETES_NAMESPACE, added automatically.
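As an orientation, here is an abbreviated, illustrative sketch of what the Service and Route entries in that file can look like (the real manifest carries more metadata, labels, and fields):

apiVersion: v1
kind: Service
metadata:
  name: wildfly-clustering-demo
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: wildfly-clustering-demo
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: wildfly-clustering-demo
spec:
  to:
    kind: Service
    name: wildfly-clustering-demo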
- Create the OpenShift-specific builds:
$ mvn oc:build -Popenshift
At this step, JKube has created for us:
- Our containerized application. You can check the generated Dockerfile at target/docker/wildfly-clustering-demo/1.0/build/Dockerfile.
- An OpenShift BuildConfig object that builds our application image on top of the base image, using the application binaries as the build input:

$ oc describe buildconfig/wildfly-clustering-demo-s2i
Name:             wildfly-clustering-demo-s2i
Namespace:        wildfly-cluster-demo
Created:          47 minutes ago
Labels:           app=wildfly-clustering-demo
                  group=org.wildfly.s2i
                  provider=jkube
                  version=1.0
Annotations:      <none>
Latest Version:   1

Strategy:         Source
From Image:       DockerImage registry.redhat.io/ubi8/openjdk-11:latest
Pull Secret Name: pullsecret-jkube
Output to:        ImageStreamTag wildfly-clustering-demo:1.0
Binary:           provided on build
This BuildConfig is built automatically, and the result is an available ImageStreamTag. You can verify the build by issuing:
$ oc logs pods/wildfly-clustering-demo-s2i-1-build
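You can also confirm that the resulting ImageStreamTag is available (a quick, optional check):

$ oc get imagestreamtag wildfly-clustering-demo:1.0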
- We now have an ImageStreamTag named wildfly-clustering-demo:1.0. We can deploy the application by using the oc:apply Maven goal:
$ mvn oc:apply -Popenshift
[INFO] Scanning for projects...
[INFO]
[INFO] --------------< org.wildfly.s2i:wildfly-clustering-demo >---------------
[INFO] Building maven-web 1.0
[INFO] --------------------------------[ war ]---------------------------------
[INFO]
[INFO] --- openshift-maven-plugin:1.0.2:apply (default-cli) @ wildfly-clustering-demo ---
[INFO] oc: Using OpenShift at https://api.crc.testing:6443/ in namespace wildfly-cluster-demo with manifest /home/yborgess/dev/projects/wildfly-clustering-demo/target/classes/META-INF/jkube/openshift.yml
[INFO] oc: OpenShift platform detected
[INFO] oc: Using project: wildfly-cluster-demo
[INFO] oc: Creating a Service from openshift.yml namespace wildfly-cluster-demo name wildfly-clustering-demo
[INFO] oc: Created Service: target/jkube/applyJson/wildfly-cluster-demo/service-wildfly-clustering-demo.json
[INFO] oc: Creating a DeploymentConfig from openshift.yml namespace wildfly-cluster-demo name wildfly-clustering-demo
[INFO] oc: Created DeploymentConfig: target/jkube/applyJson/wildfly-cluster-demo/deploymentconfig-wildfly-clustering-demo.json
[INFO] oc: Creating Route wildfly-cluster-demo:wildfly-clustering-demo host: null
[INFO] oc: HINT: Use the command `oc get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
Our local OpenShift cluster should now contain the following DeploymentConfig object, created by JKube:
$ oc describe dc/wildfly-clustering-demo
Name:           wildfly-clustering-demo
Namespace:      wildfly-cluster-demo
Created:        57 seconds ago
Labels:         app=wildfly-clustering-demo
                group=org.wildfly.s2i
                provider=jkube
                version=1.0
Annotations:    app.openshift.io/vcs-ref=master
                app.openshift.io/vcs-uri=https://github.com/yersan/wildfly-clustering-demo.git
                jkube.io/git-branch=master
                jkube.io/git-commit=b5cfa009b7724065260c3a5c9d45733978626797
                jkube.io/git-url=https://github.com/yersan/wildfly-clustering-demo.git
Latest Version: 1
Selector:       app=wildfly-clustering-demo,group=org.wildfly.s2i,provider=jkube
Replicas:       1
Triggers:       Config, Image(wildfly-clustering-demo@1.0, auto=true)
Strategy:       Rolling
Template:
  Pod Template:
    Labels:       app=wildfly-clustering-demo
                  group=org.wildfly.s2i
                  provider=jkube
                  version=1.0
    Annotations:  app.openshift.io/vcs-ref: master
                  app.openshift.io/vcs-uri: https://github.com/yersan/wildfly-clustering-demo.git
                  jkube.io/git-branch: master
                  jkube.io/git-commit: b5cfa009b7724065260c3a5c9d45733978626797
                  jkube.io/git-url: https://github.com/yersan/wildfly-clustering-demo.git
    Containers:
     wildfly-jar:
      Image:      image-registry.openshift-image-registry.svc:5000/wildfly-cluster-demo/wildfly-clustering-demo@sha256:e8274e7de4c7b9d280ff20cb595a627754a80052b4c1e5e54738c490ac7e86e7
      Ports:      8080/TCP, 9779/TCP, 8778/TCP
      Host Ports: 0/TCP, 0/TCP, 0/TCP
      Liveness:   http-get http://:9990/health/live delay=60s timeout=1s period=10s #success=1 #failure=3
      Readiness:  http-get http://:9990/health/ready delay=10s timeout=1s period=10s #success=1 #failure=3
      Environment:
        GC_MAX_METASPACE_SIZE:  256
        GC_METASPACE_SIZE:      96
        KUBERNETES_NAMESPACE:   (v1:metadata.namespace)
...
Notice the environment variables and the probes in the pod template section. The deployment is also started automatically. You can monitor the progress by checking the pods running in the current OpenShift project:
$ oc get pods -w
- Once your deployment finishes, scale the application up to two replicas:

$ oc scale dc wildfly-clustering-demo --replicas=2
deploymentconfig.apps.openshift.io/wildfly-clustering-demo scaled
If you check the logs of your pods, you should notice a cluster has been created, for example:
17:15:23,842 INFO [org.infinispan.CLUSTER] (ServerService Thread Pool -- 49) ISPN000094: Received new cluster view for channel ee: [clustering-demo-1-vrt7h|1] (2) [clustering-demo-1-vrt7h, clustering-demo-1-cmmzn]
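One way to check this from the command line is to grep the pod logs for that Infinispan message; the pod name below is a placeholder, list yours first:

$ oc get pods -l app=wildfly-clustering-demo -o name
$ oc logs <pod-name> | grep ISPN000094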
Now we can verify that the cluster is working as expected and that the session data is replicated across all the cluster members.
- Create session data and retrieve it to see on which pod it was created:

$ curl --cookie-jar /tmp/session.txt -XPOST $(oc get route wildfly-clustering-demo -o=jsonpath='{.spec.host}')/api/jeans/2
$ curl --cookie /tmp/session.txt -XPOST $(oc get route wildfly-clustering-demo -o=jsonpath='{.spec.host}')/api/shorts/4
$ curl --cookie /tmp/session.txt -XGET $(oc get route wildfly-clustering-demo -o=jsonpath='{.spec.host}')/api/cart
{"host":"wildfly-clustering-demo-1-zs8fg","sessionId":"rLHbOGXWUBUmAoySM-1HpxNwFULzbyuhHTdcHUtv","cart":[{"item":"shorts","quantity":4},{"item":"jeans","quantity":2}]}
- Delete the pod that served the latest response and get the cart again through the same route. OpenShift will balance the load to the other available pod, and we should get the same session data, verifying that the replication works as expected:

$ oc delete pod wildfly-clustering-demo-1-zs8fg
pod "wildfly-clustering-demo-1-zs8fg" deleted
$ curl --cookie /tmp/session.txt -XGET $(oc get route wildfly-clustering-demo -o=jsonpath='{.spec.host}')/api/cart
{"host":"wildfly-clustering-demo-1-cdv27","sessionId":"rLHbOGXWUBUmAoySM-1HpxNwFULzbyuhHTdcHUtv","cart":[{"item":"shorts","quantity":4},{"item":"jeans","quantity":2}]}
Conclusion
Combining the Bootable JAR with the JKube Maven plugin is one option to simplify the workflow for developing applications on OpenShift. We first saw how you can work with your application locally and then, with minimal effort, adapt it to be deployed on OpenShift. In this example we explored the default discovery mechanism available in the Bootable JAR, which requires granting permissions on your cluster so that JGroups can discover the other cluster members.
Note: If you are interested in learning how to configure the DNS_PING protocol instead of KUBE_PING, this follow-up blog post describes how to do it.
You can find more examples of how to use and work with the Bootable JAR here. If you have any related questions, feel free to contact us by joining the WildFly community forums or the Zulip chat.