Onward to WildFly 17 and Beyond!

Following the release of WildFly 16, I thought it would be a good time to give the WildFly community a sense of what I see coming in the project over the next few releases.

WildFly will continue with the quarterly delivery model we began last year with WildFly 12. These releases are essentially time-boxed; i.e. we typically won’t significantly delay a release in order to get a feature in. So when I discuss general feature roadmaps I’ll be talking in terms of the next two or three releases rather than just WildFly 17.

WildFly on the Cloud

A major focus will be on making WildFly as easy and productive as possible to use on the cloud, particularly on Kubernetes and OpenShift.

Jakarta EE

Of course, the EE standards are very important to WildFly. We’re very focused on Jakarta EE, with a number of members of the WildFly community involved in the various specs. We’re keeping a close eye on the finalization of Jakarta EE 8, with certification a high priority. As work on Jakarta EE 9 ramps up we’ll be active participants, although I don’t expect significant change in WildFly related to EE 9 until the fall at the earliest.

Security

Darran Lofthouse and Farah Juma do an excellent job of maintaining a roadmap for security-related work in WildFly. I encourage you to read Darran’s recent blog post to learn more about what’s coming in WildFly 17.

Other Items

Besides the broader topics I’ve touched on above, there are always individual items that are in progress. Here are a few noteworthy ones:

  • Support for messaging clusters behind http load balancers by disabling automatic topology updates on clients. (This allows the client to continue to address the load balancer rather than trying to communicate with the servers behind the load balancer.)

  • WFLY-6143 — Ability to configure server-side EJB interceptors that should apply to all deployments. Client-side interceptors are also being considered.

  • WFCORE-1295 — Support for expression resolution for deployment descriptors parsed by WildFly Core, e.g. jboss-deployment-structure.xml and permissions.xml.

  • WFCORE-4227 — Ability for the CLI SSL security commands to be able to obtain a server certificate from Let’s Encrypt.

  • In the clustering area:

    • WFLY-5550 — A separate subsystem for configuring distributed web session managers. This will help users avoid common configuration mistakes, and is also a prerequisite for a planned HotRod-based distributed session manager and for…

    • WFLY-6944 — Support for encoding web session affinity using multiple routes, if supported by the load balancer.

    • WFLY-11098 — Support for Singleton Service election listeners.

Future Work

I regularly hear from community members asking about MicroProfile. Last year we added subsystems to bring support for MicroProfile Config, Health, Metrics and OpenTracing. The overall focus there was on "observability" of WildFly, particularly in the cloud. These subsystems were oriented toward allowing monitoring and management tooling to observe the behavior of WildFly servers. The MicroProfile specs were a good choice because observers want to work in a standardized way.

As this year continues we’ll think about adding support for some other MicroProfile specifications, perhaps as new subsystems within the main WildFly codebase, or perhaps via new projects in the WildFly Extras organization along with a Galleon feature pack and a Galleon layer to allow easy integration into a WildFly installation.

I suspect anything on this would be in WildFly 18 or later.

WildFly Feature Delivery Process / Following Development

I’d love to have input both into our roadmap and into the requirements for the implementations of features. If you’re interested in following WildFly feature development one thing to do is to monitor the WFLY and WFCORE projects in JIRA. Beyond that I encourage you to subscribe to the wildfly-dev mailing list. It’s relatively low traffic, and I’ve been encouraging folks to post a brief note to the list when work on a new feature kicks off. So that’s a good way to hear early on about work to which you may have something to add.

When we went to the quarterly time-boxed release model, we formalized our feature development process quite a bit. In order to reliably release on time, we needed to be sure that features were truly ready for release before they ever got merged. No more merging things that were 90% done with the expectation of further improvements before the final release. To help facilitate this we started requiring the creation of an asciidoc analysis document at the start of feature work. This document is meant to cover:

  • Who is going to work on the feature, both in terms of development and of testing.

  • What the requirements for the feature are. (This IMHO is the most important part.)

  • How the feature will be tested.

  • How the feature will be documented. (Some form of documentation is required, either in the WildFly docs or, for simple things, in the software itself, e.g. in help messages.)

The analysis documents are all submitted as GitHub pull requests to a GitHub repo we created for them. Discussion of the document is done via comments on and updates to the PR. The document remains unmerged until the feature code itself is merged. The analysis is meant to be a living document, revised as necessary as new things are learned while the feature is developed.

One of the goals we had with all this is to encourage community input to the feature requirements. So I very much encourage you to have a look at and comment on the PRs for any features in which you’re interested. The announcement posts to wildfly-dev that I mentioned are meant to inform you when new PRs are submitted.

Key WildFly 17 Schedule Dates

Following are the expected key dates associated with the WildFly 17 release:

  • Fri, May 10 — Completion date for features coming in via WildFly Core

  • Tue, May 14 — All features ready

  • Wed, May 15 — WildFly 17 Beta. No new features after this date.

  • Fri, May 24 — All changes for WildFly Core ready

  • Tue, May 28 — All changes for WildFly ready

  • Thu, May 30 — WildFly 17 Final released

Finally, thanks, as always, for your interest in and support of WildFly!

WildFly 16 and Galleon, towards a cloud native EE application server

This post has been co-authored with Jorge Morales and Josh Wood from the OpenShift Developer Advocacy Team. Jorge is passionate about Developer experience, Java programming, and, most importantly, improving the integration of Red Hat’s Middleware into the OpenShift platform. Josh is committed to constructing the future of utility computing with open source technologies like Kubernetes.

Problem space

Containers are becoming the default deployment strategy for applications in the enterprise. We’ve seen the software packaged in those containers adapt to this new deployment paradigm. The WildFly team was an early adopter of container technology, driven by running our software on Red Hat’s OpenShift Container Platform. However, only recently have we started adapting WildFly to take advantage of the “cloud-native” features of containers and platforms like Kubernetes and OpenShift, such as elasticity, scalability, and lifecycle automation.

We maintain a pair of WildFly container images. One is a classic container for Docker and other Open Container Image (OCI) compatible runtimes. The second is a variant incorporating OpenShift’s Source-to-Image (s2i) mechanism to work with the platform’s build support. Both have been updated with each WildFly version since WildFly 8.

In that time, we’ve learned a lot about what’s needed to make WildFly and the WildFly container images for OpenShift and Kubernetes more cloud-native — more able to take advantage of the facilities of the environments where they run today. We’ve gathered feedback from many sources, including upstream developers as well as enterprise end-users and customers, and we’ve tried to apply their insight to our own experience.

One recurring theme we’ve heard about is image sizes. The size of a WildFly container image is driven by these three factors:

  • The size of the base layer, or FROM image, that typically provides the essential Operating System user space including the runtimes needed for a Java application.

  • The size of the WildFly runtime added to the image.

  • The size of the application itself.

We can only control the second factor, the size of the WildFly runtime added to the image. In this post, we introduce some experiments we’ve been working on, with the aim of producing a more “cloud-native” WildFly image for OpenShift or any other Kubernetes-based container platform.

Intro to Galleon

Galleon is a provisioning tool for working with Maven repositories. Galleon automatically retrieves released WildFly Maven artifacts to compose a software distribution of a WildFly-based application server according to a user’s configuration. With no configuration, Galleon installs a complete WildFly server. Users can express which configuration, such as standalone only, or which set of features, such as web-server, jpa, jaxrs, cdi, etc., they want to install.

WildFly Galleon Layers

Starting with WildFly 16, we can use Galleon layers to control the set of features present in a WildFly server. A Galleon layer identifies one or more server features that can be installed on its own or in combination with other layers. For example, if your application, some-microservice, makes use of only the jaxrs and cdi server features, you can choose to install just the jaxrs and cdi layers. The configuration in standalone.xml would then contain only the required subsystems and their dependencies.

If you want to follow along with the examples, download the latest Galleon command line tool.

Using the Galleon CLI tool, creating such a jaxrs- and cdi-only server distribution would look like:

galleon.sh install wildfly:current --layers=jaxrs,cdi --dir=my-wildfly-server

This command installs the jaxrs and cdi layers of the latest released version of WildFly (wildfly:current argument) into the my-wildfly-server directory specified in the --dir argument. The my-wildfly-server directory will contain only the artifacts needed to run your application.

Here’s a list of commonly used layers. You can find a complete list of WildFly layers in the WildFly Admin Guide.

  • web-server: Servlet container

  • cloud-profile: Aggregates layers often required for cloud applications: jaxrs, cdi, jpa (Hibernate), and jms (external broker connections)

  • core-server: Aggregates management features (management, elytron, jmx, logging, and others)

  • core-tools: Contains management tools (jboss-cli, add-user, and others)

To provision a lightweight microservice with the management features, run a command like:

galleon.sh install wildfly:current --layers=cloud-profile,core-server,core-tools --dir=my-wildfly-server

Galleon also defines an XML file to describe an installation in a fine-grained way. The following provisioning.xml file provisions a WildFly server with support for jaxrs:

<installation xmlns="urn:jboss:galleon:provisioning:3.0">
    <feature-pack location="wildfly@maven(org.jboss.universe:community-universe):current">
        <default-configs inherit="false"/>
        <packages inherit="false"/>
    </feature-pack>
    <config model="standalone" name="standalone.xml">
        <layers>
            <include name="jaxrs"/>
        </layers>
    </config>
    <options>
        <option name="optional-packages" value="passive+"/>
    </options>
</installation>

In a nutshell, this file captures the following installation customizations:

  • Do not include default configurations.

  • Do not include all packages (JBoss Module modules and other content).

  • Generate a standalone.xml configuration that includes only the jaxrs layer.

  • Include only packages related to the jaxrs layer (option passive+).

Using the Galleon CLI tool’s provision subcommand, we can install from an XML provisioning file like the example above:

galleon.sh provision <path to XML file> --dir=my-wildfly-server

This asciinema recording shows these CLI commands in action, as well as the generated server content and image sizes.
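If you’d rather reproduce this locally, a rough equivalent is to provision a trimmed server and a complete server and compare their on-disk footprints (the directory names below are arbitrary, and sizes vary by WildFly version):

galleon.sh install wildfly:current --layers=jaxrs,cdi --dir=trimmed-server
galleon.sh install wildfly:current --dir=full-server
du -sh trimmed-server full-server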

Creating a WildFly server with OpenShift builds

By coupling OpenShift build features with Galleon, we can create customized images according to application requirements.

S2I image for Galleon

For this demonstration, we built an S2I image that adds Galleon tools to the WildFly S2I image. When building your source code into this image, both the application and server are built. The S2I build process looks for the presence of a provisioning.xml file at the root of the application project. If it finds one, it is used as input to Galleon to provision the server it defines. The S2I image has been deployed on quay.io.

You must add this image stream in OpenShift to continue following the example:

oc create -f https://raw.githubusercontent.com/jorgemoralespou/s2i-wildfly-galleon/master/ose3/galleon-s2i-imagestream.yml

Two Build Stages Optimize Production Image Size

In this OpenShift template that automates the build and deployment, we’ve split the build to create 2 separate images:

  1. A “development” image built from the Galleon S2I image. This is a “fat” image containing all of the tooling to build the application (JDK, Maven, Galleon, …). This image is runnable, but it consumes a larger amount of resources. We build it first to produce the artifacts we need for an optimized image intended for production.

  2. A “production” image, built from JRE-8, into which the WildFly server and .war files are copied. This image has a smaller footprint. It contains only the dependencies needed to run the WildFly server and the application.

The template creates a deployment for each image. The “development” deployment is the primary one, scaled to 1 instance; the “production” deployment is scaled to 0 instances. To use the “production” image, scale its deployment up to 1 and rebalance the route to the “production” deployment. To conserve resources, you can then scale the “development” deployment down to 0.
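With the oc client, flipping traffic could look something like the following sketch (the DeploymentConfig, service, and route names are illustrative; the actual names come from the template):

oc scale dc/my-app-prod --replicas=1
oc set route-backends my-app my-app-prod=100 my-app-dev=0
oc scale dc/my-app-dev --replicas=0

Note that route backends are services, so the names passed to oc set route-backends must be the services fronting each deployment.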

You can add the template to your OpenShift project by running:

oc create -f https://raw.githubusercontent.com/jorgemoralespou/s2i-wildfly-galleon/master/ose3/galleon-s2i-template.yml

Building the development image

We use OpenShift’s s2i support to build the application. Note the s2i-wildfly-galleon:16.0.0.Final image stream specified in this BuildConfig excerpt:

    source:
      git:
        ref: master
        uri: https://github.com/jorgemoralespou/s2i-wildfly-galleon
      contextDir: test/test-app-jaxrs
      type: Git
    strategy:
      sourceStrategy:
        from:
          kind: ImageStreamTag
          name: s2i-wildfly-galleon:16.0.0.Final
      type: Source

Once this build is complete, the server is installed in /output/wildfly and the compiled application is written to /output/deployments/ROOT.war.

Building the production image

This build stage only needs to copy the /output/wildfly directory and /output/deployments/ROOT.war file into a new image. The copy operations comprise most of our production image Dockerfile. It also sets the CMD to start the server when the container image runs:

FROM openjdk:8-jre
COPY /wildfly /wildfly
COPY /deployments /wildfly/standalone/deployments
EXPOSE 8080
CMD ["/wildfly/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
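As a sanity check outside OpenShift, this production image can also be built and run with plain Docker, assuming the wildfly and deployments directories produced by the development build sit next to the Dockerfile (the image tag is arbitrary):

docker build -t my-wildfly-prod .
docker run -p 8080:8080 my-wildfly-prod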

OpenShift BuildConfig excerpt:

images:
  - from:
      kind: ImageStreamTag
      name: dev-image:latest
    paths:
    - sourcePath: /output/wildfly
      destinationDir: "."
  - from:
      kind: ImageStreamTag
      name: dev-image:latest
    paths:
    - sourcePath: /output/deployments
      destinationDir: "."

Sample Applications

We have developed 3 sample applications to exercise our experimental Galleon S2I image:

  • A simple web server app that serves an HTML and JSP page (derived from the OpenShift sample app). Its provisioning.xml file tells Galleon to provision a WildFly server configured with the web-server layer.

  • A toy JSON endpoint app that depends on jaxrs to expose a simple service that returns some JSON. Its provisioning.xml file tells Galleon to provision a WildFly server configured with the jaxrs layer. Some JBoss Module modules, such as the datatype providers, are useless in this image and can be excluded by Galleon. This makes the server’s footprint even smaller.

  • A persistent state demonstration app that depends on jaxrs, cdi, and jpa to persist user-created tasks (derived from the tasks-rs WildFly quickstart). Postgresql is used as the storage backend. This sample app’s provisioning.xml file tells Galleon to provision a WildFly server configured with the cdi, jaxrs, and jpa layers.

Running the jaxrs JSON endpoint sample application

You must have added both the image stream and template to your OpenShift project.

  1. Click on “Add to Project/Select From Project” then select the template “App built with Galleon S2I image and optionally connect to DB”.

  2. Choose an Image name.

  3. The GIT repository is https://github.com/jorgemoralespou/s2i-wildfly-galleon, sub directory is test/test-app-jaxrs.

  4. By default we are using the S2I Image Version 16.0.0.Final. This image has all WildFly artifacts present in the local Maven repository, making provisioning of the WildFly server faster. When using the latest image tag, the artifacts of the latest released WildFly server are retrieved from remote repositories.

  5. You can ignore the Postgresql JDBC URL and credentials; they are not used by this sample.

  6. Click on Create.

  7. The development image starts to build. When it is complete, the build of the production image starts. Once both are built, the 2 deployments are created on the OpenShift cluster and a route is created through which external clients can access the JSON service.

Only the development image will have an active instance. The production image is scaled to 0 to save on resources, and the route is balanced to send all traffic to the development image. If you want to use/test the production image, you’ll need to change the scaling of both deployments and the weights used in the route.

Adding Features to WildFly

Developers frequently need to customize server configurations to match their applications. For example, we often need to add a JDBC driver and datasource. In the following example, we extend the server configuration with a PostgreSQL driver and datasource. Problems we need to solve:

  1. Add a JBoss Module module for the PostgreSQL driver to the WildFly installation.

  2. Add the driver to the standalone.xml configuration file.

  3. Add a datasource to the standalone.xml configuration file. Datasources must be configured with contextual information. The JDBC url, user, and password are specific to a deployment and can’t be statically set in the server configuration. We need to adapt the configuration to the container execution context.

Galleon can help us solve these problems.

Using the Galleon API to package a JDBC driver as a Galleon feature-pack

The creation of custom Galleon feature-packs is an advanced topic. The API and overall technique may change in the future.

Galleon has a concept called the feature-pack. The WildFly feature-pack is retrieved when installation occurs. A feature-pack (a zip file) contains features, configurations, layers, and content such as modules and scripts. Features are used to assemble a WildFly configuration. We have been using the Galleon FeaturePack Creator API to build a PostgreSQL feature-pack that extends the standalone.xml configuration with a driver and contains the postgresql driver jar file packaged as a JBoss Module module.

This feature-pack can then be installed on top of an existing WildFly installation to provision the PostgreSQL driver configuration and module. Once the feature-pack is installed, the WildFly server has the plumbing it needs to connect to a PostgreSQL server. We’ve solved problems 1) and 2), above.
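Installing it with the Galleon CLI on top of an existing server could look something like this sketch, assuming the feature-pack has been installed in the local Maven repository under the org.jboss.galleon.demo:postgresql:1.0 coordinates used later in this post:

galleon.sh install org.jboss.galleon.demo:postgresql:1.0 --dir=my-wildfly-server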

Evolving provisioning.xml with the PostgreSQL feature-pack and datasource

As we saw earlier, Galleon allows you to describe the content of an installation in an XML file, called provisioning.xml by convention. We are going to evolve this file to describe both the server and the driver to install. In addition, we extend the standalone configuration with a datasource. The resulting provisioning.xml file contains a complete description of the server installation. We use environment variables to represent the JDBC URL, user, and password so they can be resolved for each running instance of the container.
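A sketch of what the evolved provisioning.xml could look like follows. The PostgreSQL feature-pack coordinates match the Maven path used later in this post; the datasource feature spec, its parameter names, and the environment variable names are illustrative rather than definitive:

<installation xmlns="urn:jboss:galleon:provisioning:3.0">
    <feature-pack location="wildfly@maven(org.jboss.universe:community-universe):current">
        <default-configs inherit="false"/>
        <packages inherit="false"/>
    </feature-pack>
    <!-- the demo PostgreSQL feature-pack, providing the driver module and its configuration -->
    <feature-pack location="org.jboss.galleon.demo:postgresql:1.0"/>
    <config model="standalone" name="standalone.xml">
        <layers>
            <include name="cdi"/>
            <include name="jaxrs"/>
            <include name="jpa"/>
        </layers>
        <!-- an illustrative datasource whose connection settings resolve from environment
             variables, so each running container instance can supply its own values -->
        <feature spec="subsystem.datasources.data-source">
            <param name="data-source" value="tasksDs"/>
            <param name="jndi-name" value="java:jboss/datasources/tasksDs"/>
            <param name="connection-url" value="${env.POSTGRESQL_URL}"/>
            <param name="user-name" value="${env.POSTGRESQL_USER}"/>
            <param name="password" value="${env.POSTGRESQL_PASSWORD}"/>
            <param name="driver-name" value="postgresql"/>
        </feature>
    </config>
    <options>
        <option name="optional-packages" value="passive+"/>
    </options>
</installation>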

Postgresql feature-pack installation inside S2I image

The Postgresql feature-pack was built for the purposes of this demonstration. It is not present in public Maven repositories. You can fetch it from the URL shown below, then install it in a local Maven repository. In order to inform S2I assembly that some feature-packs must be downloaded and installed locally, the file local-galleon-feature-packs.txt must be present at the root of your project.

Each desired feature-pack is specified with two lines in this file, a line for the feature-pack URL followed by a line naming the path inside the local Maven repository:

https://github.com/jfdenise/galleon-openshift/releases/download/1.0/postgresql-1.0.zip
org/jboss/galleon/demo/postgresql/1.0/

Running the postgresql sample application

Before these steps, you must deploy a PostgreSQL server in your project and create a database on it.

  1. Click on “Add to Project/Select From Project” then select the template “App built with Galleon S2I image and optionally connect to DB”.

  2. Choose an Image name.

  3. The GIT repository is https://github.com/jorgemoralespou/s2i-wildfly-galleon, sub directory is test/test-app-postgres.

  4. By default we are using the S2I Image Version 16.0.0.Final.

  5. If needed, replace the host, port and database of the JDBC URL.

  6. Set the Postgres user name and password.

  7. Click on Create.

  8. The build of the development image starts. When completed, the build of the production image starts. Once the two images are built, the deployments are created and a route added through which you can access the service.

  9. To add a new task, open a terminal and run

curl -i  -H "Content-Length: 0" -X POST http://<your route hostname>/tasks/title/task1

Reduced server footprint

When using Galleon layers to provision a WildFly server, the image size as well as runtime memory consumption varies according to the set of installed features. Here are the total file sizes for the servers we have provisioned in this post. As a reference, a complete WildFly server is around 216 MB.

Table 1. WildFly server sizes

  Feature                                       Size
  cdi, jaxrs, jpa                               122 MB
  jaxrs                                         57 MB
  jaxrs with JSON data binding provider only    49 MB
  web-server                                    43 MB
  Full server                                   216 MB

Table 2. Sample memory sizes used by the WildFly server process

  App                     Features installed (layers)   Actual mem used   Full server mem used
  PostgreSQL sample app   cdi, jaxrs, jpa                30 MB             35 MB
  jaxrs sample app        jaxrs                          19 MB             28 MB
  jsp sample app          web-server                     16 MB             27 MB

Conclusions

One of the beauties of cloud platforms is that (ideally) you don’t need to care that much about the infrastructure that runs your application. As a developer, you focus on creating your application logic, and then rely on the platform, OpenShift, to keep it available at all times, providing scalability and failover. Your application may run on any worker node in the cluster. These worker nodes must download the container images before running the application.

The time it takes to download these images is reduced by reducing the image sizes, although it’s not the only factor. Intelligent use of the filesystem layering inside the container image is also key. Nevertheless, a simple rule still holds: Take only what you need. Removing inessential components not only speeds things up by making images smaller, it also helps reduce the vulnerability surface of the image. A bug can’t be exploited if it is not installed.

Producing smaller, more focused container images is a step toward a more cloud-ready WildFly application server, but it’s not the only thing we’re working on. Integrating with more of the cloud platform’s capabilities will be a topic for a later post.

One last remark: everything described here is experimental; it is not part of the WildFly project and hence not supported.

Using Git for configuration history

Until now, the configuration history in WildFly used a folder + filename pattern. We have now moved to a proper SCM, integrating Git to manage history.

You can now take advantage of full Git support for your configuration history:

  • every change in your configuration is now a commit.

  • you can use branches to develop in parallel.

  • you can create tags for stable points in your configuration.

  • you can pull configuration from a remote repository.

  • you can push your configuration history to a remote repository.

  • you can use the git-bisect tool when things go wrong.

Now if we execute a management operation that modifies the model, for example adding a new system property using the CLI:

[standalone@localhost:9990 /] /system-property=test:add(value="test123")
{"outcome" => "success"}

What happens is:

  • The change is applied to the configuration file.

  • The configuration file is added to a new commit.
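You can verify this with a plain git client. Assuming the repository root is the server’s standalone directory, a session could look like this (the hash and commit message shown are illustrative):

$ cd WILDFLY_HOME/standalone
$ git log --oneline -1
0a1b2c3 Storing configuration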

The notion of configuration has been updated with the Git support: it covers more than 'just' the standalone.xml history, and also includes the content files (aka managed deployments).

Thus even your deployments are in history, which makes sense in a way since those deployments appear in the configuration file.

Starting with a local Git repository

To start using Git you don’t have to create the repository; WildFly can do that for you. Just start your server with the following command line:

$ WILDFLY_HOME/bin/standalone.sh --git-repo=local --git-branch=my_branch

If a --git-branch parameter is added then the repository will be checked out from the supplied branch. Please note that the branch will not be automatically created and must already exist in the repository. By default, if no parameter is specified, the branch master will be used.
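Since the branch is not created automatically, one way to prepare it is with a plain git client once the repository has been created on a first startup (assuming, again, that the repository root is the standalone directory):

$ cd WILDFLY_HOME/standalone
$ git branch my_branch

Then restart the server with --git-branch=my_branch.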

Starting with a remote Git repository

Starting WildFly with a configuration from a remote Git repository is simple too; just use the following command line:

$ WILDFLY_HOME/bin/standalone.sh --git-repo=https://github.com/USER_NAME/wildfly-config.git --git-branch=master

Be careful with this, as the first step is to delete the local configuration files to avoid conflicts when pulling for the first time.

Note that you can use remote aliases if you have added them to your .gitconfig.
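For example, a URL alias along these lines in your ~/.gitconfig would let you pass the short name to --git-repo (the alias name is arbitrary, and this is only a sketch; it depends on the Git client configuration being honored by the server):

[url "https://github.com/USER_NAME/wildfly-config.git"]
    insteadOf = wildfly-config

$ WILDFLY_HOME/bin/standalone.sh --git-repo=wildfly-config --git-branch=master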

Snapshots

In addition to the commits taken by the server as described above, you can manually take snapshots which will be stored as tags in the Git repository.

The ability to take a snapshot has been enhanced to allow you to add a comment to it. This comment will be used when creating the Git tag.

This is how you can take a snapshot from the JBoss CLI tool:

[standalone@localhost:9990 /] :take-snapshot(name="snapshot", comment="1st snapshot")
{
    "outcome" => "success",
    "result" => "1st snapshot"
}

You can also use the CLI to list all the snapshots:

[standalone@localhost:9990 /] :list-snapshots
{
    "outcome" => "success",
    "result" => {
        "directory" => "",
        "names" => [
            "snapshot : 1st snapshot",
            "refs/tags/snapshot",
            "snapshot2 : 2nd snapshot",
            "refs/tags/snapshot2"
        ]
    }
}

To delete a particular snapshot:

[standalone@localhost:9990 /] :delete-snapshot(name="snapshot2")
{"outcome" => "success"}

Note that this is a real Git repository, thus using the git client of your choice you can list those tags, or browse the history.
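For example, assuming the repository root is the standalone directory:

$ cd WILDFLY_HOME/standalone
$ git tag
snapshot
snapshot2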

Publishing

You may 'publish' your changes on a remote repository (provided you have write access to it) so you can share them. For example, if you want to publish on GitHub, you need to create a token and allow for full control of the repository. Then use that token in an Elytron configuration file like this:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <authentication-client xmlns="urn:elytron:1.1">
        <authentication-rules>
            <rule use-configuration="test-login">
            </rule>
        </authentication-rules>
        <authentication-configurations>
            <configuration name="test-login">
                <sasl-mechanism-selector selector="BASIC" />
                <set-user-name name="$GITHUB_USERNAME" />
                <credentials>
                    <clear-password password="$GITHUB_TOKEN" />
                </credentials>
                <set-mechanism-realm name="testRealm" />
            </configuration>
        </authentication-configurations>
    </authentication-client>
</configuration>
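The server is pointed at this file with the --git-auth parameter when starting (the file path here is illustrative):

$ WILDFLY_HOME/bin/standalone.sh --git-repo=https://github.com/USER_NAME/wildfly-config.git --git-auth=file:///home/user/github-wildfly-config.xml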

Then, to publish your changes:

[standalone@localhost:9990 /] :publish-configuration(location="origin")
{"outcome" => "success"}

References

For the official documentation regarding Git history, see the official WildFly documentation.
