Kubernetes: the great equalizer
The problem with “the cloud” is that there are so many of them. While AWS gets credit for making it fashionable, the basic concepts behind cloud services date back to the early days of timesharing in the 1960s. The problem with timesharing was the uniqueness of each implementation and the effort involved in providing services to consumers. Today, nobody uses the term timesharing anymore, but the uniqueness of each cloud remains. On-prem clouds tend to be specific to the business needs of the premises. AWS, Alibaba, Deutsche Telekom, Google, Azure, Oracle, and the other clouds are similarly unique, each with its own specialties and offerings evolved from prior products.
The beauty of Kubernetes is its ability to run almost anywhere, consuming resources and specialized services while maintaining a single view of the whole solution. If a program requires access to a special service, such as a GPU, then the requirement is specified and the program gets scheduled where it can be met. This equalizes on-prem and Internet-based cloud providers and simplifies the operational interfaces. It is a powerful concept because it treats all computing resources as potentially useful while respecting workloads that need special services.
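As a minimal sketch of declaring such a requirement, here is a Pod manifest that asks for one GPU through an extended resource name. It assumes the cluster exposes GPUs via the NVIDIA device plugin (the `nvidia.com/gpu` resource); the image name is a placeholder.

```yaml
# Pod that declares a GPU requirement. The scheduler will only place it on a
# node advertising the nvidia.com/gpu resource; until one exists, the Pod
# simply stays Pending. Image name is hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  containers:
  - name: trainer
    image: example.com/trainer:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1               # the special-service requirement
```

The same manifest works unchanged whether the GPU node lives on-prem or in a managed cloud, which is the equalizing effect described above.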
Having several years of Kubernetes experience and many more years with earlier orchestration systems, I am constantly amazed at the thoughtful design of Kubernetes. Clusters and distributed computing are hard, but when they are wrapped in a well-designed, extensible API they become a joy to use.
When onboarding new team members, I often start with pragmatism: Kubernetes has a steep learning curve. It can do so much beneficial work on your behalf that you don’t yet know what you don’t know you need. Start by viewing it as an API. If you want to run a program, consider its requirements and build a list of specifications. It will likely need some amount of CPU cycles, some storage, and some network ports. These specifications declare your intent to Kubernetes. Then map those specifications onto the resources in the API. If you cannot do this, you will waste time flailing about. Once you have a mental picture of your program’s needs and the API resources they map onto, the rest is a matter of using the tools.
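The mapping exercise above can be sketched as a single manifest: CPU, storage, and a network port each become a field in the Pod spec. All names, sizes, and the PersistentVolumeClaim are illustrative placeholders, not a prescribed layout.

```yaml
# Declaring intent: CPU, memory, storage, and a network port mapped onto
# Kubernetes API resources (all names and sizes are hypothetical).
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
  - name: app
    image: example.com/app:1.0          # placeholder image
    ports:
    - containerPort: 8080               # network port the program listens on
    resources:
      requests:
        cpu: "250m"                     # a quarter of a CPU core
        memory: "256Mi"                 # working-set memory
    volumeMounts:
    - name: data
      mountPath: /var/lib/app           # where the storage appears in-container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data               # storage requested through a PVC
```

Each line is a specification of need rather than an instruction about machines, which is why the exercise of listing requirements first makes the tools feel straightforward afterward.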
Later, optimizations can be implemented to exploit the special services and pricing models across the wide range of on-prem and cloud vendors. Since Kubernetes can run on all of those systems, the playing field is level and you can call your own plays.