BLOG01: Usage and difference between Kubeadm, Kubectl, Kubelet
Here’s a clear and concise explanation of kubeadm, kubectl, and kubelet — how they differ, how they work, and whether Kubernetes components run as containers.
Are all Kubernetes services containers?
Yes — in a kubeadm-based cluster, most control-plane components run as containers, such as:
kube-apiserver
kube-controller-manager
kube-scheduler
etcd (if local)
These run as static pods managed by kubelet. You can see them in:
kubectl get pods -n kube-system
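On a kubeadm control plane the list includes entries like these (the node name master1 is illustrative; ages and restart counts will differ):
NAME                              READY   STATUS    RESTARTS   AGE
etcd-master1                      1/1     Running   0          10m
kube-apiserver-master1            1/1     Running   0          10m
kube-controller-manager-master1   1/1     Running   0          10m
kube-scheduler-master1            1/1     Running   0          10m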
Node-level components such as the kubelet and the container runtime (containerd or CRI-O) run as systemd services, not as containers.
The difference between kubeadm, kubectl, and kubelet
kubeadm
“The Kubernetes cluster installer”
A bootstrap tool to create and join Kubernetes clusters.
It does NOT run the cluster.
It only prepares and sets up the control plane + worker nodes.
kubeadm does things like:
kubeadm init – initialize the cluster
kubeadm join – join worker nodes
Configure certificates
Create static pod manifests for API server, scheduler, controller-manager
Configure kubelet bootstrap
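For example, a typical bootstrap might look like this (the endpoint, token, and hash are illustrative placeholders):
kubeadm init --pod-network-cidr=10.244.0.0/16
kubeadm join 10.0.0.10:6443 --token abc123.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>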
Think of kubeadm as:
installer
initializer
not a long-running service
not responsible for cluster management
kubectl
“The Kubernetes command-line client”
Used to interact with the Kubernetes API server.
Examples:
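(pod and node names below are illustrative)
kubectl get pods -A
kubectl apply -f pod.yaml
kubectl describe node worker-1
kubectl logs my-app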
kubectl does not manage nodes or start Kubernetes. It only sends requests to the API server.
kubelet
“The node agent — runs on every node”
kubelet is a long-running systemd service that:
Talks to the Kubernetes API
Ensures containers/pods are running
Runs static pod manifests
Manages health checks (liveness/readiness)
Talks to container runtime (containerd/cri-o)
Without kubelet, a node is NOT part of the cluster.
kubelet responsibilities:
Start/stop pods
Report node health
Run control plane components (kubeadm static pods)
Manage CRI runtime
kubelet ≠ kubeadm
kubeadm installs and kicks things off; kubelet actually runs and maintains everything.
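Because kubelet is an ordinary systemd unit, you can inspect it like any other service:
systemctl status kubelet
journalctl -u kubelet -f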
Quick Comparison Table
Component            Type                    Purpose                    Runs as
kubeadm              Bootstrap tool          Create/join cluster        CLI command
kubectl              Client tool             Talk to API server         CLI command
kubelet              Node agent              Runs pods, manages node    systemd service
kube-apiserver       Control-plane service   Accepts API requests       Container (static pod)
scheduler            Control-plane service   Assign pods to nodes       Container (static pod)
controller-manager   Control-plane service   Reconciliation loops       Container (static pod)
etcd                 Database                Cluster state              Container (static pod)
Summary
kubeadm → installs and bootstraps Kubernetes
kubectl → interacts with the Kubernetes API
kubelet → runs on every node and manages pods
Control-plane services run as containers, managed by kubelet
Question: How do kubeadm, kubelet, kubectl, and all Kubernetes components interact in the cluster?
Here is a simple, visual explanation of how kubeadm, kubelet, kubectl, and the other Kubernetes components work together.
How kubeadm, kubelet, kubectl, API server connect (Simple Diagram)
How kubeadm fits into this
kubeadm is NOT a service. It simply creates everything needed.
When you run:
kubeadm init
kubeadm does these things:
Generates control-plane certificates
apiserver certs
etcd certs
kubelet client certs
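These certificates land under /etc/kubernetes/pki on the control-plane node (listing abbreviated):
ls /etc/kubernetes/pki
apiserver.crt  apiserver.key  ca.crt  ca.key  front-proxy-ca.crt  sa.key  etcd/  ...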
Writes static pod manifests
kubeadm creates YAML files in /etc/kubernetes/manifests/.
These YAML files define:
kube-apiserver
kube-scheduler
kube-controller-manager
etcd (if local)
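On a kubeadm control-plane node that directory typically contains:
ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml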
kubelet automatically sees these files
kubelet detects static pod manifests and runs them as containers.
So kubeadm → writes manifests; kubelet → runs the containers.
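kubelet knows where to look because the kubeadm-provisioned kubelet configuration (normally /var/lib/kubelet/config.yaml) points at that directory, roughly:
staticPodPath: /etc/kubernetes/manifests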
kubeadm vs kubelet vs kubectl — Simple Summary
kubeadm = Installer / Bootstrapper
creates cluster
generates certificates
installs control plane
creates static pod configs
kubelet = Node Agent (runs 24/7)
runs pods
pulls images via containerd
ensures health
keeps node in sync with API server
kubectl = Client tool
sends commands to API server
never interacts with kubelet directly
How kubeadm-created nodes run the control plane
Control-plane node example:
You will see kubelet running as a systemd service, and the API server, scheduler, and controller-manager running as containers, e.g.:
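(illustrative; run on the control-plane node)
systemctl is-active kubelet   → active (a systemd service)
crictl ps                     → lists kube-apiserver, kube-scheduler, kube-controller-manager containers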
They show up as containers because they are static pods launched by kubelet from the manifest directory.
Difference Between K8s Distributions
Here is how K8s installers differ:
Tool / Distro   Control-plane runs as             Purpose
kubeadm         containers (static pods)          Production cluster bootstrap
minikube        VM or Docker environment          Local testing
k3s             single binary                     Lightweight, CNCF-certified
EKS/GKE/AKS     hidden (cloud provider manages)   Managed control plane
RKE/Rancher     containers                        Cluster provisioning
Question: What actually happens inside Kubernetes when you run kubectl apply -f pod.yaml?
Let's step through and visualize how kubectl apply -f pod.yaml flows through Kubernetes, including how kubeadm, kubelet, and container runtimes fit into the picture. This breakdown will make the full lifecycle clear.
Step-by-Step Flow: kubectl apply -f pod.yaml
User runs kubectl
kubectl reads the YAML file (defines a Pod or Deployment).
Sends a REST API request to the kube-apiserver.
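The pod.yaml being applied could be as small as this (the image is an arbitrary example):
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: nginx:1.27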
API server receives request
Authenticates the caller, checks authorization (RBAC), and validates the object schema.
Stores the desired state in etcd (the cluster database).
The record says, in effect, "Pod X should exist"; the scheduler decides which node will run it.
Once the scheduler assigns the Pod to a node and the kubelet there starts it, kubectl get pods reports:
NAME      READY   STATUS    RESTARTS   AGE
my-app    1/1     Running   0          10s
User
|
v
kubectl (CLI) --------------------> kube-apiserver (Control-plane)
|
v
etcd (cluster state)
|
v
scheduler & controller-manager
|
v
kubelet on Node
|
v
container runtime (containerd)
|
v
Containers
How a worker node joins the cluster (kubeadm join)
Worker Node
+-----------------+
| kubeadm join | <-- run CLI
+-----------------+
|
v
+-----------------+
| kubelet config | <-- bootstrap kubelet.conf
+-----------------+
|
v
+-----------------+
| kubelet service | <-- registers with API server
+-----------------+
|
v
API Server (Control-plane)
|
v
Node Registered
|
v
Node becomes Ready (after CNI)
|
v
Pods can now run
On the control-plane node, the bootstrap token used by kubeadm join can be listed with:
kubeadm token list
TOKEN                      TTL   EXPIRES                USAGES                   DESCRIPTION   EXTRA GROUPS
abc123.0123456789abcdef    23h   2025-11-21T12:30:00Z   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
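If the token has expired, a fresh token and the full join command can be generated on the control plane (the endpoint and hash printed below are placeholders):
kubeadm token create --print-join-command
kubeadm join 10.0.0.10:6443 --token <generated-token> --discovery-token-ca-cert-hash sha256:<hash>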