Why Do We Need Kubernetes CRDs?

Where is the Deployment Controller?

The Deployment Controller is a built-in controller that runs inside the Kubernetes kube-controller-manager.


Location of the Deployment Controller

✔️ It is not in the API server

The API server only stores objects.

✔️ It is not in the Deployment object

The Deployment object carries only the desired state.

✔️ It is inside the kube-controller-manager binary

Specifically in the controller named:

deployment-controller

kube-controller-manager runs as a Pod or process in most distributions:

  • kubeadm (static Pod)

  • K3s (embedded in the k3s binary)

  • AKS / EKS / GKE (hidden inside the managed control plane)

  • Rancher

  • RKE2

  • OpenShift (packaged differently, but the same controllers)

Check it in a kubeadm cluster, for example:
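A minimal check, assuming a default kubeadm setup where control-plane components run as static Pods in kube-system:

```shell
# List the controller-manager Pod on a kubeadm cluster
kubectl get pods -n kube-system -l component=kube-controller-manager
```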

Inside that pod are multiple controllers.


What lives inside kube-controller-manager?

The controller-manager contains many controllers, not just Deployment:

  • deployment-controller: handles Deployments, manages ReplicaSets

  • replicaset-controller: ensures the desired number of Pod replicas

  • statefulset-controller: manages StatefulSets

  • job-controller: manages Jobs

  • cronjob-controller: manages CronJobs

  • node-controller: handles node health

  • service-controller: syncs cloud load balancers

  • namespace-controller: cleans up deleted namespaces

  • pv-controller: manages PersistentVolumes

  • endpoint-controller: builds Endpoints objects

And many more.


Where is it in Kubernetes source code?

If you browse the Kubernetes source on GitHub (the kubernetes/kubernetes repository), the Deployment controller lives under pkg/controller/deployment/.

Inside you will find:

  • deployment_controller.go

  • rolling.go and rollback.go

  • sync.go

  • progress.go

These files implement:

  • Rolling updates

  • Rollbacks

  • ReplicaSet management

  • Scale behaviors

  • Status updates


How It Actually Works

  1. Deployment object is stored in etcd via API server

  2. Deployment controller watches /apis/apps/v1/deployments

  3. It detects new or changed Deployment

  4. It creates or updates a ReplicaSet

  5. ReplicaSet controller notices ReplicaSet changes

  6. ReplicaSet controller creates or deletes Pods

  7. Scheduler assigns Pods to nodes

  8. Kubelet runs containers

Everything is loosely coupled via the API server. Controllers do not talk directly to each other — only through API objects.
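The chain above can be observed directly. A minimal sketch (the Deployment name `web` is arbitrary):

```shell
# Create a Deployment and watch the object chain it produces
kubectl create deployment web --image=nginx --replicas=2

# The Deployment controller made a ReplicaSet; the ReplicaSet
# controller made Pods; the scheduler placed them on nodes
kubectl get deployment,replicaset,pod -l app=web -o wide
```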


Where does Deployment Controller run in managed Kubernetes?

On EKS

The controller runs inside the AWS-managed control plane (you cannot see its Pod), but it is still an ordinary kube-controller-manager process.

On GKE

Runs as part of Google’s internal master image

On AKS

Runs in Azure-managed controller-manager VM

On K3s

It is inside the single k3s server process, because K3s compiles all controllers into one binary.

On kubeadm

Visible as a Pod:
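For example:

```shell
# The static Pod is named after the control-plane node
kubectl get pods -n kube-system | grep controller-manager
```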


Fast Recap

  • Deployment controller logic: inside kube-controller-manager

  • Deployment object: stored in etcd via the API server

  • Runs as: a process or Pod, depending on the Kubernetes distribution

  • Communicates via: shared API objects, not direct calls


Here is a full, end-to-end workflow to demonstrate a simple "Greeting" CRD plus a controller that prints a message.

This is the smallest, cleanest CRD demo possible.


FULL STEPS: Simple Greeting CRD + Controller Demo

STEP 1 — Create the CRD (Greeting)

Save this as greetings-crd.yaml:
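The original manifest is not reproduced here, so the following is a minimal sketch; the API group `example.com` and the `printed` status field are assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: greetings.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: greetings
    singular: greeting
    kind: Greeting
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                message:
                  type: string
            status:
              type: object
              properties:
                printed:
                  type: boolean
      subresources:
        status: {}   # lets the controller patch .status separately
```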

Apply:
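Assuming the filename used above:

```shell
kubectl apply -f greetings-crd.yaml
```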

Check:
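Assuming the API group `example.com` from the sketch above, the new type should now be registered:

```shell
kubectl get crd | grep greetings
kubectl api-resources | grep -i greeting
```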


STEP 2 — Create a Custom Greeting Resource

Save as greeting-sample.yaml:
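A minimal sample object, assuming the `example.com/v1` group from Step 1 (name and message are arbitrary):

```yaml
apiVersion: example.com/v1
kind: Greeting
metadata:
  name: hello-world
spec:
  message: "Hello from my first CRD!"
```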

Apply:
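Assuming the filename used above:

```shell
kubectl apply -f greeting-sample.yaml
```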

This creates your custom object:
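Verify it, assuming the plural name from the CRD sketch:

```shell
kubectl get greetings
```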

So far:

  • CRD exists

  • Custom object exists

But nothing "happens" yet, because we still need a controller.


STEP 3 — Create a Tiny Controller (runs in a Pod)

This controller simply:

  • Watches Greeting objects

  • Prints the message

  • Updates .status.printed = true

👉 Minimal controller code (Python version for simplicity)

Save as controller.py:
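The script itself is not reproduced here; below is a minimal sketch using the official `kubernetes` Python client, assuming the `example.com/v1` Greeting CRD from Step 1 (with a status subresource):

```python
# Minimal "Greeting" controller: watches Greeting objects, prints the
# message, and marks each one as handled via the status subresource.
from kubernetes import client, config, watch

GROUP, VERSION, PLURAL = "example.com", "v1", "greetings"

def main():
    config.load_incluster_config()  # use the Pod's service account
    api = client.CustomObjectsApi()
    w = watch.Watch()
    # Stream watch events for Greeting objects in all namespaces
    for event in w.stream(api.list_cluster_custom_object, GROUP, VERSION, PLURAL):
        obj = event["object"]
        if event["type"] not in ("ADDED", "MODIFIED"):
            continue
        if obj.get("status", {}).get("printed"):
            continue  # already reconciled, nothing to do
        name = obj["metadata"]["name"]
        ns = obj["metadata"]["namespace"]
        print(f"Greeting {ns}/{name} says: {obj['spec']['message']}", flush=True)
        # Record that we handled it, so we do not print it again
        api.patch_namespaced_custom_object_status(
            GROUP, VERSION, ns, PLURAL, name,
            {"status": {"printed": True}},
        )

if __name__ == "__main__":
    main()
```

The status check at the top is what makes the loop idempotent: re-seeing an already-printed Greeting is a no-op, which is the heart of reconciliation.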


STEP 4 — Create a Deployment for the Controller

We need a container that contains:

  • the Python script

  • kubernetes python client

Use a simple container definition:

Save as controller-deploy.yaml:
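A minimal sketch; the names `greeting-controller` (Deployment and ServiceAccount) and `greeting-controller-code` (ConfigMap) are assumptions that must match the later steps:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeting-controller
  template:
    metadata:
      labels:
        app: greeting-controller
    spec:
      serviceAccountName: greeting-controller
      containers:
        - name: controller
          image: python:3.11-slim
          # Install the client at startup (fine for a demo, not production)
          command: ["sh", "-c", "pip install kubernetes && python /app/controller.py"]
          volumeMounts:
            - name: code
              mountPath: /app
      volumes:
        - name: code
          configMap:
            name: greeting-controller-code
```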


STEP 5 — Put Controller Code in a ConfigMap

Create ConfigMap:
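One simple option, assuming the ConfigMap name matches the Deployment's volume:

```shell
kubectl create configmap greeting-controller-code \
  --from-file=controller.py \
  --dry-run=client -o yaml > controller-configmap.yaml
```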

Apply:
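Assuming the filename generated above:

```shell
kubectl apply -f controller-configmap.yaml
```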


STEP 6 — Give Controller Permission (RBAC)

Create service account + permission:
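A minimal sketch, scoped to just what the demo controller needs (names are assumptions matching the Deployment above):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: greeting-controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: greeting-controller
rules:
  - apiGroups: ["example.com"]
    resources: ["greetings", "greetings/status"]
    verbs: ["get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: greeting-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: greeting-controller
subjects:
  - kind: ServiceAccount
    name: greeting-controller
    namespace: default
```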

Apply:
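Assuming the manifest above was saved as rbac.yaml:

```shell
kubectl apply -f rbac.yaml
```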


STEP 7 — Deploy the Controller
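Assuming the Deployment manifest from Step 4:

```shell
kubectl apply -f controller-deploy.yaml
```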

View logs:
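Assuming the Deployment name from the sketch above:

```shell
kubectl logs deploy/greeting-controller -f
```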

You should see the greeting message printed in the logs.

Check status update:
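Assuming the sample object name `hello-world`:

```shell
kubectl get greeting hello-world -o jsonpath='{.status.printed}'
```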

You will see printed: true in the status field.


Finished — You demonstrated:

  • Custom API (CRD)

  • Custom object

  • Reconciliation controller

  • Status updates

  • Watch loop

  • Real CRD lifecycle

This is a complete, minimal operator-style CRD demo, perfect for:

  • training

  • presentations

  • interviews

  • workshops

  • showing Kubernetes API-first design


Here are the simplest and most useful CRDs you can install with Helm and immediately demonstrate in under 5 minutes — with zero coding, real behavior, and clear understanding.

These CRDs are easy, visual, and perfect for demos.


1. Metrics Server (metrics APIs via API aggregation)

Helm Chart:
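Using the official metrics-server chart:

```shell
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server -n kube-system
```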

Creates CRDs:

  • none, but adds metrics APIs:

    • /apis/metrics.k8s.io/v1beta1/nodes

    • /apis/metrics.k8s.io/v1beta1/pods

Amazing 2-minute demo:
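Once the metrics API is up:

```shell
kubectl top nodes
kubectl top pods -A
```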

Shows real CPU & memory usage.

🟢 Simple 🟢 Fast 🟢 Very useful demo


2. cert-manager (Certificate CRDs + automatic TLS)

The MOST POPULAR CRD used in real clusters.

Install:
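Using the official jetstack chart:

```shell
helm repo add jetstack https://charts.jetstack.io
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```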

Installs CRDs:

  • ClusterIssuer

  • Issuer

  • Certificate

  • CertificateRequest

  • Order

  • Challenge

Super simple demo:

Apply:
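The original demo manifest is not preserved; a minimal self-signed sketch (names and DNS name are arbitrary):

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-cert
spec:
  secretName: demo-cert-tls   # the controller creates this Secret
  dnsNames:
    - demo.example.com
  issuerRef:
    name: selfsigned
    kind: Issuer
```

Then `kubectl get certificate` shows the Ready condition and `kubectl get secret demo-cert-tls` shows the generated TLS Secret.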

⭐ Shows:

  • CRD

  • Controller

  • Status updates

  • Secret creation


3. ArgoCD (Application CRD)

One of the BEST CRD demos for beginners.

Install:

Installs CRDs:

  • Application

  • AppProject

Quick demo:

Apply:
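A minimal sketch pointing at the public argocd-example-apps repository (the app name and target namespace are arbitrary):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}   # sync without manual approval
```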

Then watch ArgoCD sync and deploy Pods automatically.

Great for “GitOps powered by CRDs” demo.


4. Prometheus Operator (ServiceMonitor / PodMonitor CRDs)

Install:
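Using the community kube-prometheus-stack chart:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm upgrade --install kube-prometheus prometheus-community/kube-prometheus-stack
```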

Installs CRDs:

  • ServiceMonitor

  • PodMonitor

  • PrometheusRule

Simple demo:

Monitor nginx using ServiceMonitor.
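A minimal sketch, assuming an nginx Service labeled `app: nginx` that exposes a `metrics` port, and that the `release` label matches your Prometheus instance's serviceMonitorSelector:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx
  labels:
    release: kube-prometheus   # must match the Prometheus selector
spec:
  selector:
    matchLabels:
      app: nginx
  endpoints:
    - port: metrics
```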

Shows:

  • CRD creates auto-monitoring

  • Prometheus reacts instantly


5. KEDA (ScaledObject CRD → Autoscaling on external metrics)

KEDA is simple but shows a powerful effect.

Install:
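Using the official KEDA chart:

```shell
helm repo add kedacore https://kedacore.github.io/charts
helm upgrade --install keda kedacore/keda -n keda --create-namespace
```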

CRDs created:

  • ScaledObject

  • TriggerAuthentication

  • ScaledJob

2-minute demo:
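A minimal sketch using the CPU trigger, assuming an existing Deployment named `web`:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: web-scaler
spec:
  scaleTargetRef:
    name: web            # an existing Deployment
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "50"      # target 50% CPU utilization
```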

Then create load → autoscaling happens automatically.


6. ExternalDNS (DNSRecord CRD)

Perfect demo of cloud DNS automation.

Install:
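Using the official chart (a working setup also needs provider credentials and flags for your DNS provider, omitted here):

```shell
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm upgrade --install external-dns external-dns/external-dns
```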

CRDs (optional depending on provider):

  • DNSEndpoint

Demo:
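A minimal sketch; the hostname and selector are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    external-dns.alpha.kubernetes.io/hostname: web.example.com
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
```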

DNS record gets auto-created.


7. Sealed Secrets (SealedSecret CRD)

Encrypted secrets, safe in Git.

Install:
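Using the official bitnami-labs chart:

```shell
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets -n kube-system
```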

CRDs:

  • SealedSecret

Super simple demo:
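A minimal sketch; `kubeseal` must be able to reach the controller to fetch the public key, and the secret name and value are arbitrary:

```shell
kubectl create secret generic demo --from-literal=password=hunter2 \
  --dry-run=client -o yaml | kubeseal -o yaml > sealed-demo.yaml
```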

Then apply:
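Assuming the filename generated above:

```shell
kubectl apply -f sealed-demo.yaml
kubectl get secret demo   # the controller decrypted it into a real Secret
```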

Shows:

  • Encrypted CRD → real secret generated


8. Simple Demo CRD: "Hello" CRD (From Kubebuilder Test)

Install the sample operator:

Since the focus here is existing Helm charts, this option is lower priority.


Best 3 For Quick Demo (Beginner-Friendly)

  • cert-manager ⭐⭐⭐⭐ (2–3 min): automatically creates a TLS certificate

  • ArgoCD ⭐⭐⭐⭐ (3 min): deploys an app via a CRD

  • KEDA ⭐⭐⭐⭐ (3 min): autoscaling happens automatically


How cert-manager controller is installed

When you install cert-manager with Helm:

Helm installs two types of components:


1️⃣ CRDs (API layer)

These define the custom API types:

  • Certificate

  • Issuer

  • ClusterIssuer

  • CertificateRequest

  • Order

  • Challenge

Each is stored as an object of:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

These tell Kubernetes:

“Hey, there is a new resource type called Certificate. Users can create it.”

But CRDs alone do nothing. They are just schemas stored in the API server.


2️⃣ Controller Deployments (Code layer)

After the CRDs are installed, Helm installs the cert-manager controllers.

You will see 3 Pods:

Example output:
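Typically this shows three Pods, whose names start with cert-manager, cert-manager-cainjector, and cert-manager-webhook:

```shell
kubectl get pods -n cert-manager
```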

These are the controllers (actual code):

cert-manager

  • Watches Certificate, Issuer, ClusterIssuer

  • Issues certificates

  • Updates status fields

  • Creates Secrets with TLS keys

  • Talks to ACME / CA endpoints

cert-manager-cainjector

  • Injects CA bundles into CRDs and webhook configs

cert-manager-webhook

  • Validates and mutates Certificate API objects

  • Ensures CRD fields are correct before storing

  • Adds defaults

All these are Go binaries packaged inside containers.

Each pod runs Go programs from cert-manager source code:

GitHub: https://github.com/cert-manager/cert-manager


So what actually happens when you apply a Certificate CRD?

Example:

Controller flow:

  1. Certificate object stored in etcd

  2. cert-manager controller loop sees it

  3. Controller understands:

    • Need to generate keypair

    • Need sign certificate

  4. Controller creates:

    • Secret (key + certificate)

  5. Updates CRD status:


Why "a controller must be code" is exactly right

CRDs = API types. Controllers = code implementing behavior.

You always need both for custom APIs:

  • CRD: defines the schema of the resource

  • Controller: implements the logic (create certs, update status, etc.)

cert-manager's Helm chart installs both.


Where does the actual controller code run?

Inside Kubernetes:

  • As Deployments

  • As Pods

  • Using service accounts

  • Using RBAC

  • Watching the CRDs

  • Taking actions

Example Deployment:
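One way to inspect it, assuming the default cert-manager namespace and Deployment name:

```shell
kubectl -n cert-manager get deployment cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```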

You will see an image like quay.io/jetstack/cert-manager-controller:&lt;version&gt;.

This image contains the Go controller code.


Summary (Simple)

All of these are installed by the Helm chart ✔️:

  • CRDs: API definitions (schemas)

  • Controllers: Go code running in Pods

  • Webhook: validates CRD objects

  • CA injector: injects CA bundles

Helm installs everything needed to make cert-manager work.


This is the core of Kubernetes architecture, explained clearly and simply.


1. Does a Kubernetes API resource exist only if a controller exists?

✔️ Yes and No — depends on what you mean by “exist.”

🔹 YES — if you want behavior

A Kubernetes resource definition alone (CRD or built-in resource) does nothing unless there is a controller watching it.

Example:

Deployment has meaning only because Kubernetes has:

  • deployment controller

  • replicaset controller

  • pod controller

  • kubelet

Without controllers, a Deployment YAML would just sit in etcd doing nothing.

🔹 NO — the object itself can exist without a controller

You can create a CRD without writing a controller.

It will exist in the API server:
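Assuming the Greeting CRD from the earlier demo has been created but its controller has not been deployed:

```shell
# The API responds and stores objects, even with no controller running
kubectl get greetings
kubectl get greeting hello-world -o yaml
```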

But:

  • No Pods will be created

  • No logic will run

  • No reconciliation happens

  • No status updates

It is like creating a database table with no application using it.


2. Why Kubernetes created CRDs? (Why resort to CRDs?)

This is the most important design decision in Kubernetes history.

💡 Reason #1 — Kubernetes needed to be EXTENSIBLE

Originally Kubernetes had only built-in types:

  • Pods

  • Services

  • Deployments

  • Jobs

But companies wanted:

  • Kafka clusters

  • Redis clusters

  • Ingress controllers

  • Cert management

  • GitOps

  • Security policies

  • Cloud resource management (RDS, S3, etc.)

It was impossible to include all these as built-in Kubernetes objects.

So Kubernetes added CRDs so ANYONE can create new APIs inside Kubernetes.


Reason #2 — To implement “Kubernetes as a Platform”

CRDs allow Kubernetes to become a platform for building platforms.

Example:

  • ArgoCD brings Application resource → GitOps platform

  • cert-manager brings Certificate resource → TLS automation platform

  • Crossplane brings RDSInstance/Bucket → Cloud provisioning platform

  • Istio brings VirtualService → Service mesh platform

Kubernetes becomes the foundation layer.


Reason #3 — Unified API for everything

Instead of exposing multiple systems with different APIs:

  • AWS API

  • Terraform

  • Jenkins pipelines

  • Database provisioning scripts

CRDs allow everything to be controlled via one consistent interface:
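Concretely, assuming the operators from the earlier sections are installed, everything is queried the same way:

```shell
kubectl get certificates            # TLS automation (cert-manager)
kubectl get applications -n argocd  # GitOps (ArgoCD)
kubectl get scaledobjects           # autoscaling (KEDA)
```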

This is extremely powerful.


Reason #4 — Operators (Automation on top of CRDs)

CRD + Controller = Operator pattern.

Operators can automate:

  • Day-2 operations

  • Upgrades

  • Failovers

  • Scaling

  • Backups

  • Validation

  • Drift correction

This allows companies to encode human operational knowledge into software.

Instead of running scripts manually.


Reason #5 — Decoupling API Server from Controllers

The API server does NOT need to know what the controller does.

Flow:

  1. CRD registers new API

  2. Controller watches that new API

  3. Controller manages Kubernetes or external resources

  4. Kubernetes itself stays minimal and clean

This avoids coupling business logic into Kubernetes core.


Reason #6 — CRDs allow Kubernetes to evolve without changing the core code

No need to:

  • fork Kubernetes

  • recompile the API server

  • modify built-in resources

Just define new APIs dynamically.

Example:

If you want your own object:
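A purely hypothetical example (the group `mycompany.com` and kind `DatabaseBackup` are invented for illustration):

```yaml
apiVersion: mycompany.com/v1
kind: DatabaseBackup
metadata:
  name: nightly
spec:
  schedule: "0 2 * * *"   # run at 02:00 every night
  database: orders
```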

You can add it without touching Kubernetes source.


Summary — Why CRDs?

  • Extensibility: add new APIs without modifying Kubernetes itself

  • Platform on top of platform: build GitOps, service mesh, and database operators

  • Consistency: everything managed via kubectl and YAML

  • Automation: Operators implement real logic

  • Decoupling: the API server stays generic; controllers do the work

  • Ecosystem growth: thousands of operators exist today


3. Final Answer (Simple Version)

✔️ Kubernetes resources need controllers to “do” anything

Objects are only data. Controllers give them life.
