Deploying Red Hat JBoss Fuse using Azure Container Service and Kubernetes

Kublr Team · 5 min read · Jul 10, 2017

Red Hat JBoss Fuse has been the de-facto standard for building Java Web/RESTful services for over a decade. But how do you run it effectively in today’s Cloud-centric world? As you’ll see, an infrastructure-as-code approach and a scalable, fault-tolerant deployment are both critical for success.

In this tutorial, we’ll show you how to:

1. Build an environment in a Kubernetes (K8s) cluster in Azure.

2. Package your Red Hat JBoss services into a Docker Container.

3. Run your services in a scalable, highly-available cluster.

Building an Environment on a Kubernetes Cluster in Azure

To start, you’ll need an operational Kubernetes cluster. Read our blog to learn how to install a Kubernetes cluster in Azure.
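
As a rough sketch, one way to provision such a cluster with the Azure CLI of that era looks like this (the resource group name, cluster name, and region below are placeholders):

# Create a resource group and an ACS cluster with Kubernetes as the orchestrator
az group create --name fuse-demo --location westeurope
az acs create --orchestrator-type kubernetes --resource-group fuse-demo --name fuse-k8s --generate-ssh-keys

# Download credentials so kubectl can talk to the new cluster
az acs kubernetes get-credentials --resource-group fuse-demo --name fuse-k8s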

After installation, start a local proxy to the Kubernetes Dashboard (kubectl proxy) and ensure the Kubernetes Dashboard UI (http://127.0.0.1:8001/ui) is working.
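
Assuming kubectl is already pointed at the new cluster, a quick sanity check might look like this:

# Confirm the cluster is reachable and all nodes report Ready
kubectl cluster-info
kubectl get nodes

# Start a local proxy; the Dashboard UI then answers at http://127.0.0.1:8001/ui
kubectl proxy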

Packaging Your Red Hat JBoss Services Into a Docker Container

The typical Red Hat JBoss deployment process requires you to install Red Hat JBoss Fuse, configure Karaf features, and deploy your services (i.e., the *.jar files you developed). You can automate the installation with a Dockerfile and get a delivery unit that is ready for testing and deployment to production.

Dockerfile:

# Use the latest jboss/base-jdk:8 image as the base
FROM jboss/base-jdk:8
MAINTAINER Evgeny Pishnyuk <maintainer-email@gmail.com>

ENV DEPLOY_LOCAL_STORAGE=install
ENV DEPLOY_CLOUD_STORAGE=https://your-cloud-storage-with-prepared-artifacts
ENV FUSE_VERSION 6.3.0.redhat-262

# Download and unpack JBoss Fuse
RUN curl $DEPLOY_CLOUD_STORAGE/jboss-fuse-karaf-$FUSE_VERSION.zip > /opt/jboss/jboss-fuse-karaf.zip
WORKDIR /opt/jboss
RUN unzip jboss-fuse-karaf.zip -d /opt/jboss && rm *.zip
RUN ln -s "jboss-fuse-$FUSE_VERSION" jboss-fuse
WORKDIR /opt/jboss/jboss-fuse

# Enable the default admin user. Consider changing the password for production.
RUN sed -i 's/#admin/admin/' etc/users.properties

# Install the Karaf features and bundles we need
RUN bin/fuse server & \
    sleep 30 && \
    bin/client log:clear && \
    bin/client 'osgi:install -s mvn:xom/xom/1.2.5' && \
    bin/client features:install camel-jetty && \
    bin/client features:install camel-xmljson && \
    sleep 10 && \
    bin/client log:display && \
    bin/client 'shutdown -f' && \
    sleep 5

# Usually it is cheaper to split the image at this point and inherit from the layers above,
# since they rarely change while the service artifacts below change often
COPY $DEPLOY_LOCAL_STORAGE/*.jar /opt/deploy/

# Deploy our service in a separate step to avoid rebuilding the layers above
RUN bin/fuse server & \
    sleep 30 && \
    bin/client log:clear && \
    bin/client 'osgi:install -s file:/opt/deploy/some-service.jar' && \
    sleep 10 && \
    bin/client log:display && \
    bin/client 'shutdown -f' && \
    sleep 5

# Expose the ports of your services
EXPOSE 8181 8101 1099 44444 61616 1883 5672 61613 61617 8883 5671 61614
CMD /opt/jboss/jboss-fuse/bin/fuse server

Set up your Docker Image Registry (or use Docker Hub), and configure Docker to access it.
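
As a minimal sketch, if you use Docker Hub the one-time setup is just a login; for a private registry, pass its address explicitly (the registry host below is a placeholder):

# Authenticate the local Docker client against the registry
docker login
# Or, for a private registry:
docker login your-registry.example.com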

We use the “bin/client log:display” command in the Dockerfile to help ensure that the Fuse reconfiguration and deployments were successful.

After this, the typical developer’s flow will be to build a Docker Container Image, tag the Image with a version, and push the Image to the Docker Registry:

docker build -t rhesb .
docker tag rhesb pishnuke/rhesb:latest
docker push pishnuke/rhesb:latest

Want a stress-free K8S cluster management experience? Kublr can help. Take our demo for a spin.

Running Your Services in a Scalable, Highly Available Cluster

You have now successfully configured Kubernetes on Azure Container Service, and you have a Docker Image in a Docker Registry. Next, you’re ready to proceed with Kubernetes!

Basically, you will need to create one Deployment (for Red Hat nodes) and one Service (for a load balancer and publicly accessible IP) in Kubernetes.

To create the Deployment, go to the dashboard, and select “Deployment” in the left menu.

Click “+Create” in the upper right, and select the “Upload a YAML or JSON file” option.

Here is the Kubernetes deployment definition:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rhesb
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: rhesb
    spec:
      containers:
      - name: rhesb
        imagePullPolicy: Always
        image: pishnuke/rhesb:latest
        ports:
        - containerPort: 8040
        - containerPort: 9001
        - containerPort: 1099
        - containerPort: 44444
        - containerPort: 8000
        - containerPort: 8001
        - containerPort: 8101
        - containerPort: 8888
        - containerPort: 61616
        - containerPort: 2181
        - containerPort: 1527
        - containerPort: 8082
        - containerPort: 8088
        - containerPort: 8090

Select Rh-deployment.yaml, and click “Upload”. Next, select “Pods” in the left menu and wait until the “rhesb-…” pods are ready. This should take approximately 5 minutes because the image is about 2 GB.
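
If you prefer the command line to the dashboard, the same Deployment can be created with kubectl (assuming the file is saved as Rh-deployment.yaml):

kubectl create -f Rh-deployment.yaml
# Watch the pods until they reach the Running state
kubectl get pods -l name=rhesb -w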

Then go to “Services”, click “+Create”, and upload the Kubernetes service definition (rh-service.yaml). Ensure the “selector” in the Service definition matches the Pod labels set in the Deployment’s template (“name: rhesb” in this example).

Here is the Kubernetes service definition:

apiVersion: v1
kind: Service
metadata:
  name: rhesb
spec:
  type: LoadBalancer
  ports:
  - name: service1
    port: your_service1_port
    targetPort: your_service1_port
  selector:
    name: rhesb

Go to “Services” and wait until the new service displays an IP address. This will take a few minutes as the load balancer and rules are created.
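
You can also watch for the address from the command line:

# The EXTERNAL-IP column changes from <pending> to a real address
# once Azure has provisioned the load balancer
kubectl get service rhesb -w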

You are now ready to test your service using SoapUI or a similar tool.
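
If you do not have SoapUI handy, a plain HTTP call works just as well; the IP, port, and path below are placeholders for your actual service:

curl -v http://<EXTERNAL-IP>:your_service1_port/your-service-path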

While you can go into production with this Docker Image and a couple of Kubernetes YAML files, you should also:

  • Choose an approach for managing environment-specific properties (for example, URLs and ports of services).
  • Set up log shipping using Stash or the Azure Monitoring Agent.
  • Add a readinessProbe and a livenessProbe for each container to the Kubernetes Deployment definition to ensure you are not the owner of a cluster of all-dead nodes (see the sketch after this list).
  • Consider using Kublr to get more auto-scaling/highly available features out-of-the-box!
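
As a rough sketch of the probe point above, the container spec in the Deployment definition could be extended roughly like this; the port and timings are assumptions (8181 is the Fuse/Karaf HTTP port exposed in the Dockerfile), so adjust them to your service:

      containers:
      - name: rhesb
        image: pishnuke/rhesb:latest
        # Readiness: do not route traffic until Fuse accepts connections
        readinessProbe:
          tcpSocket:
            port: 8181
          initialDelaySeconds: 60
          periodSeconds: 10
        # Liveness: restart the container if Fuse stops responding
        livenessProbe:
          tcpSocket:
            port: 8181
          initialDelaySeconds: 120
          periodSeconds: 20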

Share your thoughts and questions in the comments section below.

Need a user-friendly tool to set up and manage your K8S cluster? Check out our demo, Kublr-in-a-Box. To learn more, visit kublr.com.
