A Linux shell is a Linux shell is a Linux shell. If you take that attitude, it opens up the possibility of running monolithic applications on Kubernetes. As more and more greenfield development shifts to microservices and cloud-native architectures hosted on top of container clusters, every IT Ops leader should ask, “Is maintaining dual environments just to run my legacy apps on bare metal or virtual machines worth the overhead?”
This is hardly a new problem. Today’s innovative application quickly becomes tomorrow’s legacy system that must be sunset or kept on life support. What’s different about many of today’s legacy applications is that they assume they are running in a Linux environment. As more and more development teams shift their ever-accelerating iterations to container clustering technologies like Kubernetes, that assumption may still serve those legacy applications well: Kubernetes provides a Linux shell that many of them would be happy to run within.
Client-Server Architectures & How We Got Here
For those of us old enough to have written socket code for connecting either end of a client-server connection, we remember the old architectural arguments: was that approach performant enough, given typical LAN speeds in the early 1990s? Back then, a “monolithic” application was one that contained both front-end and back-end code in the same memory space of the same physical compute.
By separating the two, the argument went, it was easier to run the heavier back end on a more powerful compute platform. Front-end client components could then request back-end data strategically, giving the user the illusion that everything was happening locally.
We all know how this story ends. Client-server won out, and we standardized on the protocols and port numbers that the various server tiers exposed. Back-end components that first ran on custom operating systems eventually gravitated towards some variant of Linux.
Port Numbers and IP Addresses: All Client-Server Applications Really Need
The modern irony is that we now refer to these client-server applications as monoliths, when that was also the term for the applications they replaced. Terminology glitches aside, all most client-server applications need is an IP address and a port number to connect the components together. Think of the classic 3-tier web application:
- A web server sits at the top of the stack, handling the threading of inbound traffic, and connects to an application server over an IP address and a port number.
- That application server runs the business logic as a set of Java processes.
- The application server tier then connects to a database server over an IP address and port number to run SQL queries.
Programmers wrote thousands of applications this way. Some programmers even simplified things by merging the web and application tiers. More sophisticated variants put load balancers on top of the stack or in between the tiers. When the virtual machine world emerged in the late 1990s, applications could better take advantage of horizontal scaling or blue-green deployments.
Fundamentally, all these components need is an IP address and a port number to connect to one another. Some applications care about specific Linux versions or patch levels, but even their components ignore large swaths of those specifics. All they really care about is having a Linux shell of some sort to execute within.
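As a minimal sketch of that contract, the snippet below stands in for an application-tier component. The environment variable names (DB_HOST, DB_PORT) and their default values are hypothetical; the point is that nothing in the code cares whether the address resolves to a bare-metal server, a VM, or a Kubernetes Service.

```python
import os
import socket

# Hypothetical configuration: the only thing this tier knows about its
# downstream database is a host and a port, supplied by the environment.
DB_HOST = os.environ.get("DB_HOST", "10.0.0.12")
DB_PORT = int(os.environ.get("DB_PORT", "5432"))


def backend_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    ok = backend_reachable(DB_HOST, DB_PORT)
    print(f"database tier at {DB_HOST}:{DB_PORT} reachable: {ok}")
```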
Kubernetes: Providing the Linux Shell Your Legacy Apps Can Still Use
With that as background, there is little preventing a legacy application from running within a Kubernetes environment, where the fundamental application building blocks still consist of IP-addressable Linux shells. That does not mean a legacy application will automatically inherit all the Kubernetes goodness that cloud-native applications do. Largely, it won’t.
However, instead of an IT Ops team maintaining two separate environments as deployment targets, Kubernetes offers a degree of backward compatibility with what many legacy applications expect an environment to provide them. That enables the same IT Ops team to serve multiple masters with the same toolset, thereby reducing costs.
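To make that concrete, here is a minimal sketch using the official Kubernetes Python client. The image name, labels and port are hypothetical stand-ins for a container that simply wraps an existing legacy binary; the result is a pod (an IP-addressable Linux environment) fronted by a Service (a stable name and port), which is the same contract the application had on a VM or bare metal.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

labels = {"app": "legacy-app"}  # hypothetical labels

# Run the (hypothetical) containerized legacy component as a single replica.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="legacy-app"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="legacy-app",
                        image="registry.example.com/legacy-app:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# Give the component a stable, addressable name and port inside the cluster.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="legacy-app"),
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=8080, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

Other components, legacy or not, can then reach it at legacy-app:8080 inside the cluster, exactly the IP-address-and-port contract described above.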
This approach won’t work for every legacy app: some expect certain kernel parameters to be present, and a container cluster may not be able to accommodate them. An application that cannot operate under those constraints may be a candidate for sunsetting.
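Where the dependency is on namespaced kernel parameters, Kubernetes can sometimes still accommodate it: a pod may request so-called safe sysctls through its security context. A hedged fragment, again assuming the official Python client:

```python
from kubernetes import client

# Hypothetical fragment: request a namespaced ("safe") kernel parameter for a
# legacy component via the pod's security context. Unsafe sysctls must first
# be allowlisted on the kubelet, and kernel settings that cannot be namespaced
# at all are exactly the cases this approach cannot cover.
security_context = client.V1PodSecurityContext(
    sysctls=[client.V1Sysctl(name="net.ipv4.tcp_syncookies", value="1")]
)
```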
Compared to other disciplines, software engineering is somewhere between infancy and adolescence, and it changes at a rate that can be difficult for IT Ops to keep up with. Yet IT Ops must provide the infrastructure on which every era of software runs.
Microservices and cloud-native approaches to software development provide much faster iterations and more opportunities to innovate, which is why they are gaining popularity. But because container cluster technologies like Kubernetes are still based on Linux shells and familiar network addressing schemes, IT Ops has the ability to serve the dual masters of legacy applications and new development in a single environment.