
Onward to WildFly 17 and Beyond!

Following the release of WildFly 16, I thought it would be a good time to give the WildFly community a sense of what I see coming in the project over the next few releases.

WildFly will continue with the quarterly delivery model we began last year with WildFly 12. These releases are essentially time-boxed; i.e. we typically won’t significantly delay a release in order to get a feature in. So when I discuss general feature roadmaps I’ll be talking in terms of the next two or three releases rather than just WildFly 17.

WildFly on the Cloud

A major focus will be on making WildFly as easy and productive as possible to use on the cloud, particularly on Kubernetes and OpenShift.

Jakarta EE

Of course, the EE standards are very important to WildFly. We’re very focused on Jakarta EE, with a number of members of the WildFly community involved in the various specs. We’re keeping a close eye on the finalization of Jakarta EE 8 with certification a high priority. As work on Jakarta EE 9 ramps up we’ll be active participants, although I don’t expect significant change in WildFly related to EE 9 until the fall at earliest.


Security

Darran Lofthouse and Farah Juma do an excellent job of maintaining a roadmap for security-related work in WildFly. I encourage you to read Darran’s recent blog post to learn more about what’s coming in WildFly 17.

Other Items

Besides the broader topics I’ve touched on above, there are always individual items that are in progress. Here are a few noteworthy ones:

  • Support for messaging clusters behind http load balancers by disabling automatic topology updates on clients. (This allows the client to continue to address the load balancer rather than trying to communicate with the servers behind the load balancer.)

  • WFLY-6143 — Ability to configure server-side EJB interceptors that should apply to all deployments. Client-side interceptors are also being considered.

  • WFCORE-1295 — Support for expression resolution for deployment descriptors parsed by WildFly Core, e.g. jboss-deployment-structure.xml and permissions.xml.

  • WFCORE-4227 — Ability for the CLI SSL security commands to be able to obtain a server certificate from Let’s Encrypt.

  • In the clustering area:

    • WFLY-5550 — A separate subsystem for configuring distributed web session managers. This will help users avoid common configuration mistakes, and is also a prerequisite for the planned HotRod-based distributed session manager and for…

    • WFLY-6944 — Support for encoding web session affinity using multiple routes, if supported by the load balancer.

    • WFLY-11098 — Support for Singleton Service election listeners.
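To illustrate the expression-resolution item above (WFCORE-1295): once implemented, a descriptor such as META-INF/permissions.xml could use an expression in place of a hard-coded value. This is only a sketch; the property name and default below are hypothetical examples, not values from the actual feature work:

```xml
<permissions xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="7">
    <permission>
        <class-name>java.io.FilePermission</class-name>
        <!-- ${property:default} is WildFly expression syntax; "app.data.dir"
             and the "/tmp/data" default are made-up illustrations -->
        <name>${app.data.dir:/tmp/data}</name>
        <actions>read,write</actions>
    </permission>
</permissions>
```

The point of the feature is that such values could then vary per environment without editing the deployment archive.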

Future Work

I regularly hear from community members asking about MicroProfile. Last year we added subsystems to bring support for MicroProfile Config, Health, Metrics and OpenTracing. The overall focus there was on "observability" of WildFly, particularly in the cloud. These subsystems were oriented toward allowing monitoring and management tooling to observe the behavior of WildFly servers. The MicroProfile specs were a good choice because observers want to work in a standardized way.

As this year continues we’ll think about adding support for some other MicroProfile specifications, perhaps as new subsystems within the main WildFly codebase, or perhaps via new projects in the WildFly Extras organization along with a Galleon feature pack and a Galleon layer to allow easy integration into a WildFly installation.

I suspect anything on this would be in WildFly 18 or later.

WildFly Feature Delivery Process / Following Development

I’d love to have input both into our roadmap and into the requirements for the implementations of features. If you’re interested in following WildFly feature development one thing to do is to monitor the WFLY and WFCORE projects in JIRA. Beyond that I encourage you to subscribe to the wildfly-dev mailing list. It’s relatively low traffic, and I’ve been encouraging folks to post a brief note to the list when work on a new feature kicks off. So that’s a good way to hear early on about work to which you may have something to add.

When we went to the quarterly time-boxed release model, we formalized our feature development process quite a bit. In order to reliably release on time, we needed to be sure that features were truly ready for release before they ever got merged. No more merging things that were 90% done with the expectation of further improvements before the final release. To help facilitate this we started requiring the creation of an asciidoc analysis document at the start of feature work. This document is meant to cover:

  • Who is going to work on the feature, both in terms of development and of testing.

  • What the requirements for the feature are. (This IMHO is the most important part.)

  • How the feature will be tested.

  • How the feature will be documented. (Some form of documentation is required, either in the WildFly docs or, for simple things, in the software itself, e.g. in help messages.)

The analysis documents are all submitted as pull requests to a GitHub repo we created for them. Discussion of the document happens via comments on and updates to the PR. The document remains unmerged until the feature code itself is merged. The analysis is meant to be a living document, revised as necessary as new things are learned while the feature is developed.
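As a rough illustration, an analysis document covering the points above might be structured like the following skeleton (the section names here are my own paraphrase, not the official template):

```asciidoc
= Analysis: <feature name>

== Personnel
Who will develop the feature, and who will test it.

== Requirements
What the feature must (and must not) do; the most important section.

== Test Plan
How the feature will be tested.

== Documentation Plan
Where the feature will be documented (WildFly docs, help messages, etc.).
```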

One of the goals we had with all this is to encourage community input into the feature requirements. So I very much encourage you to have a look at, and comment on, the PRs for any features in which you’re interested. The announcement posts to wildfly-dev that I mentioned are meant to inform you when new PRs are submitted.

Key WildFly 17 Schedule Dates

Following are the expected key dates associated with the WildFly 17 release:

  • Fri, May 10 — Completion date for features coming in via WildFly Core

  • Tue, May 14 — All features ready

  • Wed, May 15 — WildFly 17 Beta. No new features after this date.

  • Fri, May 24 — All changes for WildFly Core ready

  • Tue, May 28 — All changes for WildFly ready

  • Thu, May 30 — WildFly 17 Final released

Finally, thanks, as always, for your interest in and support of WildFly!

WildFly 16 and Galleon, towards a cloud native EE application server

This post has been co-authored with Jorge Morales and Josh Wood from the OpenShift Developer Advocacy Team. Jorge is passionate about Developer experience, Java programming, and, most importantly, improving the integration of Red Hat’s Middleware into the OpenShift platform. Josh is committed to constructing the future of utility computing with open source technologies like Kubernetes.

Problem space

Containers are becoming the default deployment strategy for applications in the enterprise. We’ve seen the software packaged in those containers adapt to this new deployment paradigm. The WildFly team was an early adopter of container technology, driven by running our software on Red Hat’s OpenShift Container Platform. However, only recently have we started adapting WildFly to take advantage of the “cloud-native” features of containers and platforms like Kubernetes and OpenShift, such as elasticity, scalability, and lifecycle automation.

We maintain a pair of WildFly container images. One is a classic container image for Docker and other Open Container Initiative (OCI) compatible runtimes. The second is a variant incorporating OpenShift’s Source-to-Image (s2i) mechanism to work with the platform’s build support. Both have been updated with each WildFly version since WildFly 8.

In that time, we’ve learned a lot about what’s needed to make WildFly and the WildFly container images for OpenShift and Kubernetes more cloud-native — more able to take advantage of the facilities of the environments where they run today. We’ve gathered feedback from many sources, including upstream developers as well as enterprise end-users and customers, and we’ve tried to apply their insight to our own experience.

One recurring theme we’ve heard about is image sizes. The size of a WildFly container image is driven by these three factors:

  • The size of the base layer, or FROM image, that typically provides the essential Operating System user space including the runtimes needed for a Java application.

  • The size of the WildFly runtime added to the image.

  • The size of the application itself.

We can only control the second factor, the size of the WildFly runtime added to the image. In this post, we introduce some experiments we’ve been working on, with the aim of producing a more “cloud-native” WildFly image for OpenShift or any other Kubernetes-based container platform.

Intro to Galleon

Galleon is a provisioning tool for working with Maven repositories. Galleon automatically retrieves released WildFly Maven artifacts to compose a software distribution of a WildFly-based application server according to a user’s configuration. With no configuration, Galleon installs a complete WildFly server. Users can express which configuration, such as standalone only, or which set of features, such as web-server, jpa, jaxrs, cdi, etc., they want to install.

WildFly Galleon Layers

Starting with WildFly 16, we can use Galleon layers to control the set of features present in a WildFly server. A Galleon layer identifies one or more server features that can be installed on its own or in combination with other layers. For example, if your application, some-microservice, makes use of only the jaxrs and cdi server features, you can choose to install just the jaxrs and cdi layers. The configuration in standalone.xml would then contain only the required subsystems and their dependencies.

If you want to follow along with the examples, download the latest Galleon command line tool.

Using the Galleon cli tool, creating such a jaxrs and cdi-only server distribution would look like: install wildfly:current --layers=jaxrs,cdi --dir=my-wildfly-server

This command installs the jaxrs and cdi layers of the latest released version of WildFly (wildfly:current argument) into the my-wildfly-server directory specified in the --dir argument. The my-wildfly-server directory will contain only the artifacts needed to run your application.

Here’s a list of commonly used layers. You can find a complete list of WildFly layers in the WildFly Admin Guide.

  • web-server: Servlet container

  • cloud-profile: Aggregates layers often required for cloud applications. jaxrs, cdi, jpa (hibernate), and jms (external broker connections)

  • core-server: Aggregates management features (management, elytron, jmx, logging, and others)

  • core-tools: Contains management tools (jboss-cli, add-user, and others)

To provision a lightweight microservice with the management features, run a command like: install wildfly:current --layers=cloud-profile,core-server,core-tools --dir=my-wildfly-server

Galleon also defines an XML file to describe an installation in a fine-grained way. The following provisioning.xml file provisions a WildFly server with support for jaxrs:

<installation xmlns="urn:jboss:galleon:provisioning:3.0">
    <feature-pack location="wildfly@maven(org.jboss.universe:community-universe):current">
        <default-configs inherit="false"/>
        <packages inherit="false"/>
    </feature-pack>
    <config model="standalone" name="standalone.xml">
        <layers>
            <include name="jaxrs"/>
        </layers>
    </config>
    <options>
        <option name="optional-packages" value="passive+"/>
    </options>
</installation>

In a nutshell, this file captures the following installation customizations:

  • Do not include default configurations.

  • Do not include all packages (JBoss Module modules and other content).

  • Generate a standalone.xml configuration that includes only the jaxrs layer.

  • Include only packages related to the jaxrs layer (option passive+).

Using the Galleon CLI tool’s provision subcommand, we can install from an XML provisioning file like the example above: provision <path to XML file> --dir=my-wildfly-server

This asciinema recording shows these CLI commands in action, as well as the generated server content and image sizes.

Creating a WildFly server with OpenShift builds

By coupling OpenShift build features with Galleon, we can create customized images according to application requirements.

S2I image for Galleon

For this demonstration, we built an S2I image that adds Galleon tools to the WildFly S2I image. When building your source code into this image, both the application and server are built. The S2I build process looks for the presence of a provisioning.xml file at the root of the application project. If it finds one, it is used as input to Galleon to provision the server it defines. The S2I image has been deployed on

You must add this image stream in OpenShift to continue following the example:

oc create -f

Two Build Stages Optimize Production Image Size

In this OpenShift template that automates the build and deployment, we’ve split the build to create two separate images:

  1. A “development” image built from the Galleon S2I image. This is a “fat” image containing all of the tooling to build the application (JDK, Maven, Galleon, …). This image is runnable, but it consumes a larger amount of resources. We build it first to produce the artifacts we need for an optimized image intended for production.

  2. A “production” image, built from JRE-8, into which the WildFly server and .war files are copied. This image has a smaller footprint. It contains only the dependencies needed to run the WildFly server and the application.

The template creates a deployment for each image. The “development” deployment is the primary one, scaled to 1 instance, while the “production” deployment is scaled to 0 instances. To use the production image, scale its deployment up to 1 and rebalance the route so traffic goes to the “production” deployment. To conserve resources, the “development” deployment can then be scaled down to 0.
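For example, assuming the two deployments are named dev-image and prod-image and the route is named my-route (the actual names will depend on your template parameters), the switch-over could be done from the oc CLI along these lines:

```shell
# Bring up the production deployment
oc scale dc/prod-image --replicas=1

# Send all route traffic to the production service
oc set route-backends my-route prod-image=100 dev-image=0

# Optionally free the resources used by the development deployment
oc scale dc/dev-image --replicas=0
```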

You can add the template to your OpenShift project by running:

oc create -f

Building the development image

We use OpenShift’s s2i support to build the application. Note the s2i-wildfly-galleon:16.0.0.Final image stream specified in this BuildConfig excerpt:

  source:
    git:
      ref: master
    contextDir: test/test-app-jaxrs
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: s2i-wildfly-galleon:16.0.0.Final
    type: Source

Once this build is complete, the server is installed in /output/wildfly and the compiled application is written to /output/deployments/ROOT.war.

Building the production image

This build stage only needs to copy the /output/wildfly directory and /output/deployments/ROOT.war file into a new image. The copy operations comprise most of our production image Dockerfile. It also sets the CMD to start the server when the container image runs:

FROM openjdk:8-jre
COPY /wildfly /wildfly
COPY /deployments /wildfly/standalone/deployments
CMD ["/wildfly/wildfly/bin/", "-b", ""]

OpenShift BuildConfig excerpt:

  source:
    images:
    - from:
        kind: ImageStreamTag
        name: dev-image:latest
      paths:
      - sourcePath: /output/wildfly
        destinationDir: "."
    - from:
        kind: ImageStreamTag
        name: dev-image:latest
      paths:
      - sourcePath: /output/deployments
        destinationDir: "."

Sample Applications

We have developed three sample applications to exercise our experimental Galleon S2I image:

  • A simple web server app that serves an HTML and JSP page (derived from the OpenShift sample app). Its provisioning.xml file tells Galleon to provision a WildFly server configured with the web-server layer.

  • A toy JSON endpoint app that depends on jaxrs to expose a simple service that returns some JSON. Its provisioning.xml file tells Galleon to provision a WildFly server configured with the jaxrs layer. Some JBoss Module modules, such as the datatype providers, are useless in this image and can be excluded by Galleon. This makes the server’s footprint even smaller.

  • A persistent state demonstration app that depends on jaxrs, cdi, and jpa to persist user-created tasks (derived from the tasks-rs WildFly quickstart). PostgreSQL is used as the storage backend. This sample app’s provisioning.xml file tells Galleon to provision a WildFly server configured with the cdi, jaxrs, and jpa layers.

Running the jaxrs JSON endpoint sample application

You must have added both the image stream and template to your OpenShift project.

  1. Click on “Add to Project/Select From Project” then select the template “App built with Galleon S2I image and optionally connect to DB”.

  2. Choose an Image name.

  3. Set the Git repository; the subdirectory is test/test-app-jaxrs.

  4. By default we are using the S2I Image Version 16.0.0.Final. This image has all WildFly artifacts present in the local Maven repository, making provisioning of the WildFly server faster. When using the latest image tag, the artifacts of the latest released WildFly server are retrieved from remote repositories.

  5. You can ignore the PostgreSQL JDBC URL and credentials; they are not used by this sample.

  6. Click on Create

  7. The development image starts to build. When it is complete, the build of the production image starts. Once both are built, the 2 deployments are created on the OpenShift cluster and a route is created through which external clients can access the JSON service.

Only the development image will have an active instance. The production image is scaled to 0 to save on resources, and the route is balanced to send all traffic to the development image. If you want to use/test the production image, you’ll need to change the scaling of both deployments and the weights used in the route.

Adding Features to WildFly

Developers frequently need to customize server configurations to match their applications. For example, we often need to add a JDBC driver and datasource. In the following example, we extend the server configuration with a PostgreSQL driver and datasource. Problems we need to solve:

  1. Add a JBoss Module module for the PostgreSQL driver to the WildFly installation.

  2. Add the driver to the standalone.xml configuration file.

  3. Add a datasource to the standalone.xml configuration file. Datasources must be configured with contextual information. The JDBC url, user, and password are specific to a deployment and can’t be statically set in the server configuration. We need to adapt the configuration to the container execution context.

Galleon can help us solve these problems.

Using the Galleon API to package a JDBC driver as a Galleon feature-pack

The creation of custom Galleon feature-packs is an advanced topic. The API and overall technique may change in the future.

Galleon has a concept called the feature-pack. The WildFly feature-pack is retrieved when installation occurs. A feature-pack (a zip file) contains features, configurations, layers, and content such as modules and scripts. Features are used to assemble a WildFly configuration. We have been using the Galleon FeaturePack Creator API to build a PostgreSQL feature-pack that extends the standalone.xml configuration with a driver and contains the postgresql driver jar file packaged as a JBoss Module module.

This feature-pack can then be installed on top of an existing WildFly installation to provision the PostgreSQL driver configuration and module. Once the feature-pack is installed, the WildFly server has the plumbing it needs to connect to a PostgreSQL server. We’ve solved problems 1) and 2), above.

Evolving provisioning.xml with the PostgreSQL feature-pack and datasource

As we saw earlier, Galleon allows you to describe the content of an installation in an XML file, called provisioning.xml by convention. We are going to evolve this file to describe both the server and the driver to install. In addition, we extend the standalone configuration with a datasource. The resulting provisioning.xml file contains a complete description of the server installation. We use environment variables to represent the JDBC URL, user, and password so they can be resolved for each running instance of the container.
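As a sketch of what that looks like in the generated standalone configuration (the JNDI name, pool name, and environment-variable names below are illustrative placeholders, not necessarily the ones the demo uses), the datasource resolves its settings from the container environment via WildFly expressions:

```xml
<datasource jndi-name="java:jboss/datasources/tasksDS" pool-name="tasksDS">
    <!-- ${env.X} resolves the environment variable X when the server starts;
         the variable names here are hypothetical -->
    <connection-url>${env.POSTGRESQL_JDBC_URL}</connection-url>
    <driver>postgresql</driver>
    <security>
        <user-name>${env.POSTGRESQL_USER}</user-name>
        <password>${env.POSTGRESQL_PASSWORD}</password>
    </security>
</datasource>
```

Because the values are resolved at start time, the same image can be pointed at different databases in each environment.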

Postgresql feature-pack installation inside S2I image

The PostgreSQL feature-pack was built for the purposes of this demonstration. It is not present in public Maven repositories. You can fetch it from this location, then install it in a local Maven repository. To inform the S2I assembly process that some feature-packs must be downloaded and installed locally, the file local-galleon-feature-packs.txt must be present at the root of your project.

Each desired feature-pack is specified with two lines in this file, a line for the feature-pack URL followed by a line naming the path inside the local Maven repository:
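For instance, a local-galleon-feature-packs.txt referencing a single feature-pack could look like this (the URL and Maven repository path are hypothetical placeholders):

```
https://example.com/downloads/postgresql-feature-pack.zip
org/example/postgresql/postgresql-feature-pack/1.0/postgresql-feature-pack-1.0.zip
```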

Running the postgresql sample application

Before these steps, you must deploy a PostgreSQL server in your project and create a database on it.

  1. Click on “Add to Project/Select From Project” then select the template “App built with Galleon S2I image and optionally connect to DB”.

  2. Choose an Image name.

  3. Set the Git repository; the subdirectory is test/test-app-postgres.

  4. By default we are using the S2I Image Version 16.0.0.Final.

  5. If needed, replace the host, port and database of the JDBC URL.

  6. Set the Postgres user name and password.

  7. Click on Create

  8. The build of the development image starts. When completed, the build of the production image starts. Once the two images are built, the deployments are created and a route added through which you can access the service.

  9. To add a new task, open a terminal and run

curl -i -H "Content-Length: 0" -X POST http://<your route hostname>/tasks/title/task1

Reduced server footprint

When using Galleon layers to provision a WildFly server, the image size as well as runtime memory consumption varies according to the set of installed features. Here are the total file sizes for the servers we have provisioned in this post. As a reference, a complete WildFly server is around 216 MB.

Table 1. WildFly server sizes

  Features installed (layers)                    Size
  cdi, jaxrs, jpa                                122 MB
                                                 57 MB
  jaxrs with JSON data binding provider only     49 MB
                                                 43 MB
  Full server                                    216 MB

Table 2. Sample memory sizes used by the WildFly server process

                          Features installed (layers)   Actual mem used   Full server mem used
  PostgreSQL sample app   cdi, jaxrs, jpa               30 MB             35 MB
  jaxrs sample app                                      19 MB             28 MB
  jsp sample app                                        16 MB             27 MB


One of the beauties of cloud platforms is that (ideally) you don’t need to care that much about the infrastructure that runs your application. As a developer, you focus on creating your application logic, and then rely on the platform, OpenShift, to keep it available at all times, providing scalability and failover. Your application may run on any worker node in the cluster. These worker nodes must download the container images before running the application. The time it takes to download these images is reduced by reducing the image sizes, although it’s not the only factor. Intelligent use of the filesystem layering inside the container image is also key. Nevertheless, a simple rule still holds: Take only what you need. Removing inessential components not only speeds things up by making images smaller, it also helps reduce the vulnerability surface of the image. A bug can’t be exploited if it is not installed.

Producing smaller, more focused container images is a step toward a more cloud-ready WildFly application server, but it’s not the only thing we’re working on. Integrating with more of the cloud platform’s capabilities will be a topic for a later post.

One last remark: everything described here is experimental; it is not part of the WildFly project and hence is not supported.

WildFly 16 is released!

WildFly 16 Final is now available for download!

Provisioning WildFly with Galleon

As we continue with our quarterly delivery model, a major focus over the next few quarters will be on making WildFly as easy and productive as possible to use on the cloud, particularly on Kubernetes and OpenShift.

An important requirement for the cloud is to be able to reduce the footprint of your server to what you need to run your application, eliminating unneeded runtime memory overhead, cutting down image size and reducing the possibility for security vulnerabilities. So, I’m very excited to announce Tech Preview support for use of the Galleon provisioning tool to allow you to easily provision a slimmed down server tailored toward REST applications. By easily, I mean a simple command that provisions a server that provides the technologies you want, with a correct configuration, and with unneeded libraries not present on disk. Being able to do this is an important piece of foundational technology that we’ll be building upon over the course of 2019, particularly with tooling and best practices aimed at taking advantage of Galleon when creating cloud images.

Galleon provisioning isn’t just useful in cloud; users running on bare metal or virtualized environments can get the same benefits. Easy server slimming has been a goal for as long as I’ve been involved with JBoss AS!

To install the latest final version of WildFly into the directory my-wildfly-server call: install wildfly:current --dir=my-wildfly-server

That’s not so interesting as the result is equivalent to unzipping the standard download zip.

WildFly still provides the usual zip / tar.gz. Using Galleon is not required to use WildFly.

The real power comes when using the Galleon layers that WildFly provides to limit your installation to just the technologies you need. For example, if all you want is jaxrs and cdi: install wildfly:current --dir=my-wildfly-server --layers=cdi,jaxrs

The result is an installation that doesn’t include unnecessary modules, has a correct configuration and has less than a third of the disk footprint of the standard WildFly distribution. And you don’t have to worry about knowing and specifying technologies required by the ones you know you want (e.g. the servlet support that jaxrs needs). Galleon handles that for you.

If you’re ok with a slightly bigger footprint in order to have common WildFly Core management functionality, add the core-server and core-tools layers: install wildfly:current --dir=my-wildfly-server --layers=cdi,jaxrs,core-server,core-tools

WildFly 16 provides a rich set of layers oriented toward letting you optimize your server for running HTTP applications. For further details, see the WildFly Admin Guide and the Galleon documentation.

Please give Galleon provisioning a try and give us feedback! We’d love to hear about your use cases and how Galleon can be improved to meet them. We’ll be doing more articles and blog posts explaining how to take advantage of this technology.

JDK 12

While the GA version of JDK 12 has not been released yet (it is in the Release Candidate phase), we are pleased to report that WildFly 16 should run well on JDK 12 once it is GA. I’d like to especially thank Richard Opalka and Matej Novotny for their efforts in making this happen.

Our goal with WildFly is to have our releases run well for most use cases on the most recent GA JDK version available on the WildFly final release date. If practical we’ll try and run well on release candidates for upcoming JDK versions as well, which we’ve achieved with WildFly 16. By run well, I mean our main testsuite runs with no more than a few failures in areas not expected to be commonly used. (In the JDK 12 case we have no failures.) We want developers who are trying to evaluate what the latest JVM means for their applications to be able to look to WildFly as their development platform. It may not always be possible to attain this goal, but it’s one we take seriously.

While we do want to run well on the most recent JDK, our recommendation is that you run WildFly on the most recent long-term support release, i.e. on JDK 11 for WildFly 16. We do considerably more testing on the LTS JDKs.

WildFly 16 also is heavily tested and runs well on Java 8. We plan to continue to support Java 8 at least through WildFly 18.

Please note that WildFly runs on Java 11 and 12 in classpath mode.

Messaging Improvements

  • MDBs can be configured to belong to multiple delivery groups, with delivery enabled only when all of the delivery groups are active.

  • Users can use standard Java EE 8 resource definitions (annotations or XML) to define JMS resources that connect to a remote Artemis-based broker (including AMQ 7 instances).

  • Users can configure the maximum amount of memory that the embedded messaging broker can use to store messages for its addresses before they are considered "full" and their address-full-policy starts to apply (e.g. dropping messages or blocking producers).

Clustering Improvements

  • When WildFly servers behind a mod_cluster load balancer start, they will instruct the load balancer to gracefully ramp up their load over the first minute or so of operation, instead of having the balancer send the maximum possible amount of traffic and possibly overwhelming the server.

  • Users running a cluster with HA Singleton deployments or services can connect with the CLI to any cluster member and determine which node is the primary provider of a given deployment or service.

Other Notable Items

  • You can use the CLI to list which modules are visible to a deployment. This is helpful in analyzing classloading issues.

  • In a WildFly managed domain, you can suspend and resume all of the servers managed by a particular Host Controller. Previously suspending or resuming multiple servers was limited to all servers in the domain or those in a particular server group.

  • When using Elytron, the HTTP Basic authentication mechanism can be configured to operate in 'silent mode', only sending a challenge if the request contained an authorization header.

Jira Release Notes

The full list of issues resolved is available here. Issues resolved in the WildFly Core 8 release included with WildFly 16 are available here.
