Kubernetes and DevOps — Compute Behavior Changed

j3ffyang
3 min read · Nov 11, 2021

This is my idea flow, from the view of a CTO/CIO at an enterprise: if you're a CTO, where would you start thinking about architecture design, DevOps definition, and team skillset building? The core starts from Kubernetes.

Re-define Compute Behavior and Workload Orchestration

  • Kubernetes defines fine-grained compute units and reshapes how workloads are orchestrated onto the appropriate resources (by compute I mean, in general, CPU, memory, network, and disk)
  • As compute consumption becomes containerized the Kubernetes way (not the Docker way), the DevOps process changes as well

Compute like a Single Linux Computer

Break compute down into pieces: a Pod handles processes, a StatefulSet handles persistent data, a Deployment handles stateless workloads such as nginx-ingress, a Secret holds certificates (SSL/TLS), and a Service provides the software-defined network (SDN). They all work separately, across hardware boundaries, yet behave as if they were inside a single Linux computer.

They don't depend on each other. If the Service (network) breaks, the Pod (the process) keeps running. That's unlike a VM, where if the VM goes down, every process and network on it is gone.
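A minimal sketch of this decoupling (all names here are illustrative): the Deployment owns the processes, while a separate Service object owns the network endpoint. Deleting the Service does not stop the Pods the Deployment manages.

```yaml
# The Deployment (processes) and the Service (network) are
# independent objects: removing one leaves the other intact.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web                # routes traffic to Pods labeled app=web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```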

X_as_a_Service Made Easy

I remember when I worked on OpenStack development at IBM Software Lab years ago, there was a feature called LoadBalancer_as_a_Service (https://docs.openstack.org/dragonflow/queens/specs/lbaas2.html). It was easy to understand, but it involved VLAN trunks, Neutron ports, and VMs. And TLS was yet another object.

Look at how this can be done in Kubernetes: use nginx-ingress (the official project is called ingress-nginx, at https://kubernetes.github.io/ingress-nginx/) to manage the routing table, with load balancing and high availability enabled within a single containerized component, orchestrated by Kubernetes. The TLS/SSL certificate lives in a separate Secret, completely decoupled.
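As a sketch of that decoupling (host name and Secret name are illustrative): the Ingress object holds only routing rules and points at a separately created Secret for TLS.

```yaml
# Routing rules live in the Ingress; the certificate lives in a
# Secret (web-tls) that is created and rotated independently.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx          # handled by ingress-nginx
  tls:
  - hosts:
    - example.com
    secretName: web-tls            # TLS cert/key stored separately
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```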

DevOps Process is Different from VM Era

Since compute consumption is containerized,

  • APIs need to interact with Kubernetes to orchestrate compute resources and distribute workloads
  • Unit tests
  • Performance becomes robust: no more waiting minutes to launch an expensive VM
  • A rollback policy for when failures occur
  • Logging and debugging
  • And DevSecOps as well
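On the rollback point above: Kubernetes Deployments keep a revision history, so a failed release can be reverted declaratively. A sketch of the relevant spec fields (names and values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  revisionHistoryLimit: 5      # keep 5 old ReplicaSets for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most 1 Pod down during an update
      maxSurge: 1              # at most 1 extra Pod during an update
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/api:1.2.3   # hypothetical image
```

A failed rollout can then be undone with `kubectl rollout undo deployment/api`.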

Resource Lifecycle Management

One thing I almost forgot to mention here is lifecycle management. Most cloud service providers claim how fast they can deploy virtualized compute resources on their platform. But fewer make this clear: how deployed resources that are no longer used get released when they are removed (destroyed).

Several years ago, when I worked on OpenStack development, I noticed that lots of iptables chains were created when a software-defined network (SDN) was created, and they all remained (now useless) in the Linux kernel after that network was removed. Imagine this in a production environment: would you dare clean the unused iptables chains out of the Linux kernel?

In Kubernetes, when a namespace is deleted, the namespaced resources in it (everything is called a resource in K8s, just as everything is a file in Linux, isn't it?) are deleted together, including the SDN (Services), Pods, Ingresses, namespaced custom resources, etc. Others, such as the PersistentVolumes backing a StatefulSet's mounted disks, remain unchanged.
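Whether a mounted disk survives comes down to the PersistentVolume's reclaim policy; PersistentVolumes are cluster-scoped, so they sit outside any namespace. A sketch (the backing storage here is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv              # cluster-scoped: not deleted with any namespace
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the volume and its data after release
  hostPath:
    path: /mnt/data          # illustrative backing store
```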

Simplified Deployment through Helm

Helm charts (https://helm.sh/) help you define, install, and upgrade even the most complex Kubernetes application. Helm fully utilizes declarative YAML, and if you read a rendered deployment YAML, you can see the original container image it will run.
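As a sketch of Helm's declarative layering (chart layout and value names are illustrative): values.yaml pins the image, the chart's deployment template references it, and the rendered YAML therefore shows exactly which container image will run.

```yaml
# values.yaml (illustrative)
image:
  repository: nginx
  tag: "1.21"

# templates/deployment.yaml (fragment) — Helm substitutes the
# values above when rendering:
#   containers:
#   - name: {{ .Chart.Name }}
#     image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm template` prints the fully rendered manifests, so the final image reference can be inspected before anything is installed.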

Skillset and Culture in Team

  • Linux and open source (I use Debian anyway)
  • Tools: YAML, Git, etc. (I love vi/vim)
  • Containerization and orchestration by Kubernetes (not dockerd)
  • Security in containerization: Pods, Services, Secrets, container images; auditing in containers: logging and debugging
  • Cheap hardware (no need to trust it); high performance through software; availability and reliability through operations

Trade-offs

  • An orchestration layer deployed on top of the container layer (dockerd or containerd)
  • More layers of technology, and more logs to sift through when debugging

