How should we deploy Java applications today?
I have been discussing this with some colleagues: what are the options for deploying Java applications, and what are the main architectural drivers behind the question?
Java is one of the most used programming languages in the enterprise world. It covers pretty much every single business need through a wide range of libraries and frameworks. For instance, Apache Camel (a Swiss Army knife) for integrations, Spring Boot (and Quarkus) for developing web services, gRPC (and EJB) for remote procedure calls, Hibernate for object-relational mapping, and so on. It is a fact that Java remains a vital part of the enterprise world, with new greenfield developments starting every single day.
Recently I have been helping customers move their Java applications to modern solutions. This is not an easy task. Migrating applications, stacks or systems requires changing the platform, the deployment pipelines or even the frameworks used. If we translate this to business language, it means that money will need to be spent on engineering.
Image 1 - An architectural blueprint of something I am honestly not sure about, but it was the only image I could find that “fit” the article. Software engineers draw similar blueprints to develop and deploy their apps.
My latest work made me wonder what modern options we have to deploy Java applications. That is an easy question for Cloud or Kubernetes aficionados, but I have to say it is quite hard to answer; it really depends on the architectural drivers we want to consider.
Architectural Drivers
Architectural drivers are the key forces behind a system architecture. They derive from business needs and are the main requirements shaping the design. I have selected the most important drivers: Hosting, Platform, Availability, Scalability and Cost.
I am aware that there are many more, but as I said, it really depends on the business needs; in this article I am trying to cover only the deployment of a Java application, which does not require much. If the business were a bank, and the Java application were developed by a software team, managed by a DevOps team and monitored by an SRE team, there would be additional architectural drivers such as Maintainability, Observability and so on.
Hosting
Nowadays the most common hosting options are: on-premise, using virtual machines, and cloud computing, using either virtual machines (e.g. AWS EC2) or cloud services (platform-as-a-service offerings like AWS Elastic Beanstalk).
Platform
Application Platforms, or Enterprise Application Platforms, were quite famous back in the day. Developers used them to deploy their .jar, .ear and .war files. WildFly (or JBoss EAP), WebLogic or even Apache Tomcat made developers’ lives easier.
These servers (with the exception of Tomcat) provide features like XA transaction support, monitoring, cron jobs, a security layer, load balancing and so on. Such features were much appreciated by the community, and application servers shined before the cloud era, when everything was deployed on a virtual machine, in small clusters or even as standalone processes.
After the cloud era came the container era: ask any software engineer nowadays and they will probably suggest containerising the application and running it on top of Kubernetes (or OpenShift), Docker or Podman.
Availability
I believe this is the most important architectural driver, at least in the enterprise world, where the business must be available twenty-four-seven. It is also one of the most difficult drivers to implement in a distributed system, because there are so many variables, components, software and hardware that can trigger outages and affect the entire system.
To minimise such risk, the following topics must be covered:
Orchestration: the applications should be orchestrated by a platform; if an application instance fails, the orchestrator should spin up a new one.
Monitoring / Alerting: export metrics to a monitoring system and set alerts for memory, CPU and disk usage, as well as network traffic. It is important to have a view of the current status of the JVM and its environment.
Redundancy: run multiple instances of the application so the system survives if one instance fails unexpectedly.
Load Balancing: distribute incoming requests across the available instances of the application.
Failover Mechanisms: the application and the system components should be able to detect failures and react, adapting to prevent a complete outage.
Disaster Recovery: if the entire system fails, the load balancer can point to a backup system.
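To make the orchestration and load-balancing points concrete, here is a minimal sketch of the kind of health endpoints an orchestrator probes to decide whether to restart an instance or route traffic to it. It uses only the JDK's built-in HTTP server; the port and the /healthz and /ready paths are my own assumptions, not a standard the article prescribes.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthServer {
    // Flipped to true once start-up work is done (assumption: a real
    // application would set this after warming caches, opening pools, etc.).
    static volatile boolean ready = false;

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Liveness: the process is up and able to answer requests;
        // an orchestrator restarts the instance if this stops responding.
        server.createContext("/healthz", exchange -> respond(exchange, 200, "OK"));

        // Readiness: the instance is prepared to receive traffic;
        // a load balancer should skip it until this returns 200.
        server.createContext("/ready", exchange ->
                respond(exchange, ready ? 200 : 503, ready ? "READY" : "NOT_READY"));

        server.start();
        ready = true; // pretend the start-up work finished
    }

    private static void respond(HttpExchange ex, int code, String body) throws IOException {
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
        ex.sendResponseHeaders(code, bytes.length);
        try (OutputStream os = ex.getResponseBody()) {
            os.write(bytes);
        }
    }
}
```

In Kubernetes these two endpoints would typically be wired to a container's liveness and readiness probes, which is how the platform automates the restart and traffic-routing decisions described above.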
Nowadays, with the popularity of Kubernetes, these topics are widely covered. Kubernetes ships all of that through multiple resources such as pods, operators, services, ingresses, deployments and so on.
Why did the industry massively move to Kubernetes? Kubernetes was built to orchestrate workloads and provides a robust way to build, deploy, manage and monitor containers at scale. It gained huge traction within the cloud-native community and the backing of big players such as Google, Microsoft and Red Hat. Soon it was packed with enterprise features such as horizontal and vertical scalability, self-healing, virtual networking, RBAC and rollback mechanisms, while remaining open-source and vendor-neutral (able to run on any hyperscaler, virtual machine or bare metal).
Scalability
Scalability is tightly coupled with Availability and Cost. The business often requires more application instances to handle unusual load spikes, but at the same time demands that the engineering team minimise costs.
As mentioned previously, this can be achieved easily with Kubernetes (OpenShift), and it can also be achieved with a standalone deployment on a virtual machine or with an application server. The difference is that with Kubernetes the horizontal and vertical scaling rules are defined declaratively and applied automatically whenever the trigger conditions occur. With a standalone deployment or an application server, scaling is a manual task, which makes it prone to human error.
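To illustrate what "declaratively defined" scaling means in practice, here is a sketch of the replica-count rule that Kubernetes' Horizontal Pod Autoscaler documents (desired = ceil(current × currentMetric / targetMetric), clamped to declared bounds). The class and method names are my own; only the formula comes from the Kubernetes documentation.

```java
public class ReplicaCalculator {
    /**
     * Sketch of the Horizontal Pod Autoscaler scaling rule:
     * desired = ceil(current * currentMetric / targetMetric),
     * clamped between the min and max replicas an HPA resource declares.
     */
    public static int desiredReplicas(int current, double currentMetric,
                                      double targetMetric, int min, int max) {
        int desired = (int) Math.ceil(current * currentMetric / targetMetric);
        return Math.max(min, Math.min(max, desired));
    }

    public static void main(String[] args) {
        // 4 pods averaging 90% CPU against a 60% target: scale out to 6.
        System.out.println(desiredReplicas(4, 90.0, 60.0, 2, 10)); // prints 6
        // Load drops to 20%: scale in, but never below the declared minimum of 2.
        System.out.println(desiredReplicas(4, 20.0, 60.0, 2, 10)); // prints 2
    }
}
```

The point of the declarative approach is that the operator writes down the target and the bounds once, and the platform re-evaluates this rule continuously, instead of a human deciding when to add or remove instances.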
Cost
It is the last architectural driver, but probably the most important one for businesses. Many enterprises started with their own servers and moved to the cloud. We are now observing that some of them find running everything on a hyperscaler too expensive and are moving back to their old on-premise servers. This scenario contributes to the discussions around Hybrid Cloud strategies. As previously mentioned, hyperscalers tend to be expensive, but they provide IaaS, PaaS and SaaS offerings that are much appreciated by the enterprise sector. Such offerings save enterprises time, because assembling the right engineering team takes time and many companies cannot attract quality engineers to their organization.
On the other hand, on-premise environments tend to be cheaper initially, but the responsibility of keeping them available, maintained, secured and monitored increases substantially, and some organizations are not prepared for this undertaking.