LAB10a: Pod Resource Quota

LAB30: Limiting Resource to Workloads

Python app – cpu_hog.py
import multiprocessing as mp
import os
import signal
import sys
import time

# Read config from env (optional tuning)
RAMP_SECONDS = int(os.getenv("RAMP_SECONDS", "60"))   # How long to keep adding workers
MAX_WORKERS = int(os.getenv("MAX_WORKERS", str(RAMP_SECONDS)))  # Cap workers if needed


def burn_cpu(worker_id: int) -> None:
    """
    Busy loop that just eats CPU.
    Runs indefinitely until the container/pod is killed.
    """
    print(f"[worker-{worker_id}] starting CPU burn", flush=True)
    x = 0
    while True:
        # Simple nonsense math to keep CPU busy
        x = (x * x + 1) % 1_000_000_007


def handle_signal(signum, frame):
    print(f"[main] Received signal {signum}, exiting...", flush=True)
    sys.exit(0)


def main():
    # Handle graceful termination from Kubernetes
    signal.signal(signal.SIGTERM, handle_signal)
    signal.signal(signal.SIGINT, handle_signal)

    workers = []
    print(f"[main] Starting CPU ramp: up to {MAX_WORKERS} workers over {RAMP_SECONDS} seconds", flush=True)

    for i in range(RAMP_SECONDS):
        if len(workers) >= MAX_WORKERS:
            print("[main] Reached MAX_WORKERS cap, no more workers will be started", flush=True)
            break

        worker_id = len(workers) + 1
        p = mp.Process(target=burn_cpu, args=(worker_id,), daemon=True)
        p.start()
        workers.append(p)
        print(f"[main] Started worker {worker_id}, total workers: {len(workers)}", flush=True)

        # Sleep 1 second between each worker → gradual ramp
        time.sleep(1)

    print("[main] Ramp finished, workers will continue burning CPU until pod is killed", flush=True)

    # Keep main alive so the processes stay alive
    for p in workers:
        p.join()


if __name__ == "__main__":
    main()

---

Dockerfile
# Simple, small-ish Python base image
FROM python:3.12-slim

WORKDIR /app

# Copy the CPU hog script
COPY cpu_hog.py .

# No extra deps, pure stdlib
ENTRYPOINT ["python", "cpu_hog.py"]

1. Dockerfile (image name: openlabfree/arajak-pod)

The image is built from the Dockerfile shown above.

Build & push:
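For example, assuming you are logged in to your registry (the latest tag is only illustrative):

docker build -t openlabfree/arajak-pod:latest .
docker push openlabfree/arajak-pod:latest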


2. (Optional) Quick K8s pod to demo limits

Without resource limits (to show node getting hammered):
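A minimal sketch of such a Pod, assuming the openlabfree/arajak-pod image built above (the pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog-no-limits        # illustrative name
spec:
  containers:
    - name: cpu-hog
      image: openlabfree/arajak-pod
      # No resources block: the workers can eat as much CPU as the node has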

With resource limits (to show how it’s contained):
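A possible limited variant; the request/limit values are examples to tune for your cluster:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog-limited          # illustrative name
spec:
  containers:
    - name: cpu-hog
      image: openlabfree/arajak-pod
      resources:
        requests:
          cpu: "250m"
          memory: "64Mi"
        limits:
          cpu: "500m"            # the CPU burn is throttled at half a core
          memory: "128Mi"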


3. Deployment WITHOUT resource limits

This one will happily chew through the node until it gasps.

hungry-no-limits.yaml
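A sketch of what hungry-no-limits.yaml could contain, assuming the openlabfree/arajak-pod image (the name and label are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hungry-no-limits
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hungry-no-limits
  template:
    metadata:
      labels:
        app: hungry-no-limits
    spec:
      containers:
        - name: cpu-hog
          image: openlabfree/arajak-pod
          # No resources section, so nothing stops the ramping workers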


4. Deployment WITH resource limits

This one is leashed: Kubernetes will throttle the CPU burners.

hungry-limited.yaml
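A sketch under the same assumptions; the request/limit values are examples:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hungry-limited
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hungry-limited
  template:
    metadata:
      labels:
        app: hungry-limited
    spec:
      containers:
        - name: cpu-hog
          image: openlabfree/arajak-pod
          resources:
            requests:
              cpu: "250m"
              memory: "64Mi"
            limits:
              cpu: "500m"        # CFS throttling caps the burners here
              memory: "128Mi"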


5. Optional: Namespace with ResourceQuota

Perfect for your demo — show how Kubernetes stops unlimited consumption.
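A sketch of the namespace plus quota; the names and the hard values are illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: quota-demo
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: quota-demo
spec:
  hard:
    requests.cpu: "1"            # total CPU requests allowed in the namespace
    limits.cpu: "2"              # total CPU limits allowed in the namespace
    requests.memory: 1Gi
    limits.memory: 2Gi
    pods: "5"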

If someone tries to deploy *hungry-no-limits* here, Kubernetes will refuse to create its pods: once a ResourceQuota tracks limits.cpu and limits.memory, every pod in the namespace must declare those limits, and the namespace total is capped, so the request is denied.


Testing your demo

1) Create the no-limits deployment:
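Assuming the manifest file names used above:

kubectl apply -f hungry-no-limits.yaml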

Watch the node scream (but slowly):
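For example, assuming metrics-server is installed and the illustrative app label from the sketch:

kubectl top nodes
kubectl top pods -l app=hungry-no-limits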

2) Create the limited deployment:
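Again with the illustrative file name:

kubectl apply -f hungry-limited.yaml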

Now watch CPU throttling in action:
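Assuming metrics-server and the label from the limited sketch:

kubectl top pods -l app=hungry-limited
# CPU should plateau around the configured limit (e.g. ~500m) instead of climbing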


Memory Limit


memory_hog.py
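A minimal sketch of a gradual memory hog, mirroring the CPU ramp above; the CHUNK_MB and MAX_MB environment variables and the one-second pacing are assumptions:

import os
import time

# Assumed tunables: MiB allocated per step and an optional self-imposed cap
CHUNK_MB = int(os.getenv("CHUNK_MB", "50"))
MAX_MB = int(os.getenv("MAX_MB", "0"))   # 0 = keep allocating until killed


def main():
    hog = []
    allocated = 0
    print(f"[main] Starting memory ramp: {CHUNK_MB} MiB per second", flush=True)
    while MAX_MB == 0 or allocated < MAX_MB:
        chunk = bytearray(CHUNK_MB * 1024 * 1024)
        # Touch every page so the memory is actually resident, not just reserved
        for i in range(0, len(chunk), 4096):
            chunk[i] = 1
        hog.append(chunk)
        allocated += CHUNK_MB
        print(f"[main] Holding ~{allocated} MiB", flush=True)
        time.sleep(1)
    print("[main] Reached MAX_MB cap, holding memory until the pod is killed", flush=True)
    while True:
        time.sleep(60)


if __name__ == "__main__":
    main()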
Dockerfile
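Presumably the same pattern as the CPU image:

FROM python:3.12-slim

WORKDIR /app

COPY memory_hog.py .

ENTRYPOINT ["python", "memory_hog.py"]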

Build & push:
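For example; the image name openlabfree/arajak-mem-pod is hypothetical, so substitute your own repository:

docker build -t openlabfree/arajak-mem-pod:latest .
docker push openlabfree/arajak-mem-pod:latest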


3. Deployment WITHOUT memory limits

This version will happily devour RAM until the node squeals.
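A sketch using the kuber-pod-no-limits name referenced later in this lab; the image and label are the hypothetical ones from the build step:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuber-pod-no-limits
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuber-pod-no-limits
  template:
    metadata:
      labels:
        app: kuber-pod-no-limits
    spec:
      containers:
        - name: memory-hog
          image: openlabfree/arajak-mem-pod   # hypothetical image name
          # No resources section: the allocation loop grows until the node suffers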


4. Deployment WITH memory limits

This one is placed on a diet → it will hit the memory ceiling → get OOMKilled (and restarted by the Deployment).
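A sketch of the limited variant; the kuber-pod-limited name and the 256Mi ceiling are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuber-pod-limited
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuber-pod-limited
  template:
    metadata:
      labels:
        app: kuber-pod-limited
    spec:
      containers:
        - name: memory-hog
          image: openlabfree/arajak-mem-pod   # hypothetical image name
          resources:
            requests:
              memory: "128Mi"
            limits:
              memory: "256Mi"                 # the hog is OOMKilled once it crosses this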

Observe OOM:
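For example, assuming the illustrative label above:

kubectl get pods -w                                    # STATUS shows OOMKilled, then the pod restarts
kubectl describe pod -l app=kuber-pod-limited | grep -A 3 "Last State"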


Optional: Namespace ResourceQuota (memory-focused)
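A sketch of a memory-focused quota; the names and values are illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: mem-quota-demo
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: memory-quota
  namespace: mem-quota-demo
spec:
  hard:
    requests.memory: 1Gi         # total memory requests allowed in the namespace
    limits.memory: 2Gi           # total memory limits allowed in the namespace
    pods: "5"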

Deploying kuber-pod-no-limits inside this namespace will be rejected, because its pods declare no memory limit.

Check scheduling to a specific node

Let’s schedule your Deployment onto worker node w1 using one of:

  • nodeSelector (clean + simple)

  • nodeAffinity (more flexible)

  • kubectl patch (quick one-liner)

Choose whichever fits your demo mood.


Step 1 — Label the target node (w1)
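Using the node name and label that the rest of this lab refers to (k8s-cluster-w1, kuber-pod=yes):

kubectl label node k8s-cluster-w1 kuber-pod=yes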

Verify:
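For example:

kubectl get nodes -l kuber-pod=yes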

You should see:
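Roughly this, with only the labelled worker listed (AGE and VERSION will differ on your cluster):

NAME             STATUS   ROLES    AGE   VERSION
k8s-cluster-w1   Ready    <none>   ...   ...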


Step 2 — Patch your Deployment to force scheduling on w1

One-liner patch with nodeSelector

(Perfect for quick demo)
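A sketch, assuming the kuber-pod-no-limits Deployment from the memory demo (kubectl patch defaults to a strategic merge patch):

kubectl patch deployment kuber-pod-no-limits \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kuber-pod":"yes"}}}}}'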

Or for the limited one:
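Same idea, with the illustrative kuber-pod-limited name:

kubectl patch deployment kuber-pod-limited \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kuber-pod":"yes"}}}}}'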


What Kubernetes does next

  • It updates the Deployment’s pod template with nodeSelector: kuber-pod=yes

  • Old pods terminate

  • New pods are created specifically on k8s-cluster-w1

  • w1’s memory graph begins its upward yoga stretch 📈


Verify scheduling on w1
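For example (the NODE column comes from -o wide):

kubectl get pods -o wide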

You should see:
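Something along these lines, with every pod landing on k8s-cluster-w1 (names and ages will differ):

NAME                                   READY   STATUS    ...   NODE
kuber-pod-no-limits-xxxxxxxxxx-xxxxx   1/1     Running   ...   k8s-cluster-w1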


Want stronger pinning? (Optional)

Node Affinity (preferred, more expressive)
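A sketch of the pod-template patch, reusing the kuber-pod=yes label; save it as, say, affinity-patch.yaml (the file name is illustrative, see the patch command below):

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kuber-pod
                    operator: In
                    values: ["yes"]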

Patch it:
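Assuming the fragment above is saved as affinity-patch.yaml and the kuber-pod-no-limits Deployment (the --patch-file flag needs a reasonably recent kubectl):

kubectl patch deployment kuber-pod-no-limits --patch-file affinity-patch.yaml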

