Kubernetes 2 is coming soon. And, as expected, we'll be saying goodbye to YAML. Finally. It will also be an opportunity to admit, not too subtly, that Kubernetes was, and still is, an unmanageable mess. Not an innovation, but a technical tangle elevated to a standard, without anyone ever having the courage to stop and ask: “Wait, does all this really make sense?”
The truth is that Kubernetes was never meant for ordinary people, for ordinary developers, for those who manage real projects with budgets, deadlines, and production environments to keep alive 24/7. Kubernetes was conceived by a technical elite, mostly former Google engineers, who tried to replicate outside Google some concepts taken from Borg, the company's internal orchestration system. Too bad that Borg is a system designed for a scale most companies will never even remotely see. And replicating it badly outside that context did not lead to any revolution, only to further complexity.
Docker and the myth of the DIY mini-cloud
To understand where the confusion comes from, let's go back a moment. Docker made sense when it was born: it was a shortcut around the cost of VMs, a lighter way to manage pseudo-isolated environments. Fine. But then came the great illusion: that you could build a mini-cloud "like the big guys" with just some YAML and a little scripting.
And that's where the house of cards collapsed. Because creating and managing a containerized environment orchestrated by Kubernetes is not simple at all, nor is it automatic. It requires skills that not everyone has. In fact, it requires an impressive amount of cross-disciplinary knowledge:
- Kubernetes semantics, which are incredibly complex. The Kubernetes object model is one of the most intricate ecosystems in modern IT. It's not just about understanding pods, services, and deployments, but about mastering an entire universe of interconnected primitives: ReplicaSet, StatefulSet, DaemonSet, Job, CronJob, ConfigMap, Secret, PersistentVolume, StorageClass, Ingress, NetworkPolicy, RBAC, and CustomResourceDefinition. Each object has its own lifecycle, states, conditions, and finalizers. The complexity is amplified by the interactions between these elements: how a HorizontalPodAutoscaler monitors custom metrics through the metrics server, how admission controllers validate and mutate incoming resources, how the garbage collector follows owner references. The declarative model requires a completely different mindset from traditional imperative management: you describe the desired state rather than the steps to achieve it (a rough sketch of what even one trivial service requires follows this list).
- TCP/IP networking at a deep level, with overlays, plugins, service meshes, and internal DNS. Networking in Kubernetes operates on multiple layers of abstraction that require a deep understanding of the TCP/IP stack. Overlay networks like Flannel, Calico, and Weave create virtual networks that span physical nodes, using technologies like VXLAN, BGP, and IP-in-IP tunneling. Container Network Interface (CNI) plugins manage IP assignment to pods, configuration of routing tables, iptables rules, and virtual bridges. Service meshes like Istio, Linkerd, and Consul Connect add a further layer of complexity with sidecar proxies that intercept all traffic, implementing advanced load balancing, circuit breaking, retry policies, and mutual TLS. The internal DNS (CoreDNS) must resolve dynamic service discovery, manage zone splitting, and synchronize with external service registries. Troubleshooting requires expertise in packet capture, analysis of iptables chains, understanding of netfilter hooks, and debugging of connection-tracking tables.
- Advanced Linux systems administration: debugging inside containers, tracing zombie processes, and troubleshooting pods that fail to start for no apparent reason. On AlmaLinux, Kubernetes system management requires kernel-level Linux expertise applied to containerized environments. Debugging inside containers involves namespace isolation (PID, NET, MNT, IPC, UTS, USER), cgroups v1/v2 for resource limiting, and capabilities for security boundaries. Zombie processes in containers require an understanding of init systems (often tini or dumb-init), signal handling, and process reaping. When pods fail to start for no apparent reason, you need to analyze kubelet logs, container runtime (containerd/CRI-O) status, seccomp profiles, AppArmor/SELinux policies on AlmaLinux, resource quotas, and node affinity/anti-affinity rules. Essential tools include nsenter for entering namespaces, crictl for CRI debugging, systemd-analyze for performance, strace for system call tracing, and perf for profiling. On AlmaLinux specifically, managing SELinux contexts for containers, firewalld rules, and systemd service dependencies adds further complexity.
- Distributed storage, volume replication, and data persistence in environments designed to be ephemeral. Distributed storage in Kubernetes addresses the fundamental challenge of maintaining persistence in an environment built around ephemerality. PersistentVolumes and PersistentVolumeClaims create an abstraction between physical storage and application consumption, supporting dynamic provisioning through StorageClasses. Distributed storage systems such as Ceph (via Rook), GlusterFS, and Longhorn implement multi-node replication for high availability, with consensus algorithms for consistency. Management includes snapshots and incremental backups, dynamic volume resizing, and live data migration between nodes. Specific challenges include split-brain scenarios under network partitions, performance tuning for IOPS and throughput, management of failure domains for replica placement, and encryption at rest and in transit. Container Storage Interface (CSI) drivers must handle attach/detach operations, mount propagation, and filesystem specifics. Troubleshooting requires understanding block devices, filesystem internals, the impact of network latency on distributed consensus, and storage metrics for capacity planning and performance optimization.
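To give a sense of scale: the sketch below is roughly the smallest Deployment-plus-Service pair that runs a single, trivial HTTP service. The image name, port, and health-check path are invented placeholders, not taken from any real project; note how the same label has to agree in three separate places before any traffic flows.

```yaml
# Minimal sketch for one trivial service; image, port, and paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-api
  labels:
    app: hello-api
spec:
  replicas: 2                    # desired state; a ReplicaSet enforces it
  selector:
    matchLabels:
      app: hello-api             # must match the pod template labels below
  template:
    metadata:
      labels:
        app: hello-api
    spec:
      containers:
        - name: hello-api
          image: registry.example.com/hello-api:1.0
          ports:
            - containerPort: 8080
          readinessProbe:        # without this, traffic reaches pods that aren't ready
            httpGet:
              path: /healthz
              port: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: hello-api
spec:
  selector:
    app: hello-api               # the third place the same label must match
  ports:
    - port: 80
      targetPort: 8080
```

And this is still the happy path: no Ingress, no ConfigMap, no Secret, no NetworkPolicy, no RBAC, no autoscaling.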
Not to mention all the mental load of update management, security issues, secrets management, automated deployments, private container registries, and whatever else you pile on top.
YAML, that non-language language
The manifesto of Kubernetes madness is embodied in its extreme use of the YAML format. The idea was to "describe the infrastructure," not codify it. But then, as often happens, theory collided with reality: YAML files became long, convoluted, full of mandatory properties and implicit semantics. Changing a comma can mean bringing down an entire cluster.
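A small, invented illustration of those implicit semantics: two ConfigMaps that differ only in quoting. The first applies cleanly; the second is rejected by the API server, because YAML silently types the unquoted values and ConfigMap data values must be strings.

```yaml
# Keys and values are made up purely for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-ok
data:
  FEATURE_FLAG: "true"   # quoted: stays a string
  PORT: "8080"
  COUNTRY: "NO"          # quoted: stays the string "NO"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-broken
data:
  FEATURE_FLAG: true     # YAML boolean, not a string: rejected
  PORT: 8080             # YAML integer, not a string: rejected
  COUNTRY: NO            # YAML 1.1 reads this as the boolean false (the "Norway problem")
```

Multiply that by hundreds of lines per manifest and dozens of manifests per project, and "changing a comma" really can take an environment down.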
And most importantly, each YAML is strictly dependent on the behavior of the underlying Docker image. And that image, in turn, is built on one or more filesystem layers derived from who knows what Linux distribution, with often opaque configurations. The result? A delirium of abstractions, where it is very difficult to predict exactly what will happen when you launch your deployment.
One microservice, a thousand problems
Anyone who thought Kubernetes was the panacea for managing microservices architectures should stop and think. Because each new microservice is a new container, with its own lifecycle, its own configuration, its own set of environment variables, its own YAML file, its own network policy.
Each container is potentially a new time bomb. And who arms it? The developer.
Because it's the developers who prepare the images. Not the system administrators. And in most cases, developers do not have the system skills needed to make that work safe, stable, and maintainable. They don't know how to properly isolate processes, how to handle sensitive variables, or how to implement persistent logging and monitoring.
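The classic example is credential handling. In the hypothetical manifest below (all names, images, and values are placeholders), the first environment variable is exactly the kind of time bomb described above, pasted straight into the Deployment and therefore into Git; the second at least references a Secret object, which keeps the value under RBAC, though it still needs encryption at rest and rotation to be genuinely safe.

```yaml
# Placeholder names and values throughout; this is an illustration, not a recipe.
apiVersion: v1
kind: Secret
metadata:
  name: api-db
type: Opaque
stringData:
  password: "change-me"                   # in real life this should come from a vault, not a file in Git
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0
          env:
            - name: DB_PASSWORD_HARDCODED   # the anti-pattern: visible in the manifest,
              value: "s3cr3t-in-plain-text" # in version control, and in kubectl describe
            - name: DB_PASSWORD             # the reference that should be there instead
              valueFrom:
                secretKeyRef:
                  name: api-db
                  key: password
```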
And so, slowly, the entire architecture becomes filled with weak points. It's not a question of if, but when something will explode.
The illusion of cloud-native
Added to all this is another misunderstanding: the rhetoric of “cloud-native”.
Everyone wants to be cloud-native. But few know what it really means. Being cloud-native doesn't mean putting Laravel in a container and calling it a microservice. It means completely rethinking the way you build applications:
- Event-driven, not synchronous.
- Stateless by design.
- API-centric.
- Fault-tolerant.
- Dynamically scalable.
- With native logging and observability, not bolted on afterward.
None of this comes from putting a monolith in a container and feeding it to Kubernetes. Yet that's exactly what many do, because the illusion is that Kubernetes will "take care of everything." But Kubernetes doesn't think. Kubernetes executes. Poorly, if configured poorly. And it is often configured poorly, because it is too complex for non-specialists to configure properly.
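To be clear about what "dynamically scalable" actually demands: the sketch below is a plain HorizontalPodAutoscaler scaling a hypothetical stateless deployment on CPU. It only works if a metrics server is installed and if the workload genuinely tolerates replicas being created and killed at any moment; none of that comes for free by containerizing a monolith.

```yaml
# Hypothetical names; assumes a Deployment called "api" and a metrics server in the cluster.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2                 # every replica must be disposable and stateless
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # scale out when average CPU crosses 70%
```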
Debugging: Hell on Earth
One example above all: debugging.
Have you ever tried to troubleshoot a real Kubernetes cluster in production, with ten pods running, logs scattered across a thousand nodes, containers restarting without logs, metrics disappearing, and your only tools are kubectl describe and kubectl logs, which often give you nothing back?
Have you ever tried to get into a crashed container, maybe Alpine-based, with no bash, no vi, no analysis tools, nothing at all?
If you haven't already, I envy you. If you did, you know exactly what I'm talking about.
Kubernetes is not an environment designed for troubleshooting. It's a system that assumes everything will be fine, that everything is idempotent, that pods can be torn down and rebuilt without state. But reality is different. Reality is made of exceptions, errors, and unforeseen conditions. And in that moment, Kubernetes doesn't help you: it gets in your way.
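For what it's worth, the usual workaround when the application image ships with no shell and no tools is a throwaway "toolbox" pod pinned to the same node, sketched below. The node name and namespace are placeholders, and the image is just one commonly used debugging image; on recent clusters, kubectl debug and ephemeral containers cover similar ground. Either way, it is a workaround bolted onto a system that wasn't designed for this.

```yaml
# Placeholder node name and namespace; privileged access should obviously be temporary.
apiVersion: v1
kind: Pod
metadata:
  name: debug-toolbox
  namespace: default
spec:
  nodeName: worker-3             # pin to the node where the broken pod lives
  hostPID: true                  # see the node's processes, not just this pod's
  hostNetwork: true              # share the node's network namespace
  restartPolicy: Never
  containers:
    - name: toolbox
      image: nicolaka/netshoot   # a common toolbox image: tcpdump, dig, curl and friends
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true         # required for most low-level inspection
```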
And now comes AI
While it was once possible to pretend nothing was happening—build two containers, use Kubernetes as a fake microservices orchestrator, and spend the rest of the time patching the problems—that's no longer possible.
Artificial intelligence has changed the rules of the game.
Modern AI applications are cloud-native for real. They are made up of dozens of APIs that talk to each other, often streaming, with GPUs to manage, models to load into memory, dynamic orchestration, intensive logging, auditing, observability, and constant updates.
You can't improvise. You can't just throw a couple of containers onto a cluster and cross your fingers anymore. You need a solid infrastructure, designed to handle real-world complexity, but with tools that won't kill you every time you need to change something.
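Just as a taste of what "GPU management" means at the manifest level: the hypothetical pod below requests one GPU through the extended resource exposed by the NVIDIA device plugin. The image, model volume, and claim name are placeholders; without the device plugin and matching drivers on the node, the pod simply never gets scheduled.

```yaml
# Placeholder image and claim name; assumes the NVIDIA device plugin is installed.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference
spec:
  restartPolicy: Never
  containers:
    - name: model-server
      image: registry.example.com/llm-server:1.0
      resources:
        limits:
          nvidia.com/gpu: 1      # whole GPUs only; no fractions without extra tooling
      volumeMounts:
        - name: model-cache
          mountPath: /models     # large models need persistent storage of their own
  volumes:
    - name: model-cache
      persistentVolumeClaim:
        claimName: model-cache   # assumes a PVC with this name already exists
```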
And that's why more and more companies are looking for alternatives. They want to simplify. They want to return to a model that makes sense, one that human teams can manage without having to become Kubernetes ninjas. They want an environment that is stable, predictable, and maintainable.
The future? Less orchestration, more standards
The signal is clear: the right direction is not to add further layers on top of Kubernetes (PaaS platforms, service meshes, YAML management tools, graphical interfaces… all smoke and mirrors), but to reduce complexity, starting from a simpler, standardized foundation, serverless where possible.
There's no need to orchestrate everything. Orchestrate well what is really needed, repeatably and automatically, and let most workloads behave as decoupled, stateless services that are resilient by design.
Conclusion: It's time to wake up
Kubernetes was useful for understanding how hard orchestration at scale can be. But it's not a solution for everyone. And above all, it's not the only possible solution. It's a system born from the specific needs of huge companies, with specialized teams and extremely rigid internal processes.
Most companies don't need Kubernetes. They need simpler, more human, more standardized environments, where developers and systems engineers can collaborate without the constant risk of creating unmanageable architectures.
That's why, today, when you tell a team they can build private, cloud-native AI without going through the hell of Kubernetes, they look at you in amazement. And then they pop the champagne.
Because finally, after years of hype, complications, hellish YAMLs, and ghost pods, someone had the courage to tell the truth: Kubernetes isn't the future. It was a hiccup.