BLOG09a: Tiny Toy Application—A Pocket-Sized Traffic Booth
In this post, we'll build a tiny toy application (a pocket-sized traffic booth) that:

- Accepts incoming "requests" and drops them into an in-memory queue
- Exposes a metric (`myapp_queue_length`) at `/metrics`
- Processes 1 request every 2 seconds (like a sleepy clerk sipping tea between tasks)
**Dummy App (Node.js)**
`app.js`:

```js
const express = require("express");
const client = require("prom-client");

const app = express();
app.use(express.json());

// Create a registry
const register = new client.Registry();

// Custom metric – current queue length
const queueLengthGauge = new client.Gauge({
  name: "myapp_queue_length",
  help: "Current number of items in the processing queue"
});
register.registerMetric(queueLengthGauge);

// Our in-memory queue
let queue = [];

// Endpoint to add an item to the queue
app.post("/enqueue", (req, res) => {
  const item = req.body.item || `job-${Date.now()}`;
  queue.push(item);
  queueLengthGauge.set(queue.length);
  console.log(`Enqueued: ${item}, queue length is now ${queue.length}`);
  res.json({ message: "Item added", queue_length: queue.length });
});

// Worker that processes 1 item every 2 seconds
setInterval(() => {
  if (queue.length > 0) {
    const item = queue.shift();
    console.log(`Processed: ${item}`);
    queueLengthGauge.set(queue.length);
  }
}, 2000);

// Expose metrics to Prometheus
app.get("/metrics", async (req, res) => {
  res.set("Content-Type", register.contentType);
  res.end(await register.metrics());
});

// Simple health endpoint
app.get("/", (req, res) => {
  res.send("Queue processor is running");
});

const port = 8080;
app.listen(port, () => {
  console.log(`Dummy queue app running on port ${port}`);
});
```

`package.json`:
```json
{
  "name": "dummy-queue-app",
  "version": "1.0.0",
  "main": "app.js",
  "license": "MIT",
  "dependencies": {
    "express": "^4.18.2",
    "prom-client": "^14.1.0"
  }
}
```

`Dockerfile`:
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "app.js"]
```

**[Optional] Test the app locally**
Start the server:
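Assuming Node 18+ is installed and `app.js` plus `package.json` sit in the same directory, something like this should do it:

```bash
npm install
node app.js

# Or, using the Dockerfile above:
# docker build -t dummy-queue-app .
# docker run -p 8080:8080 dummy-queue-app
```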
Add items to the queue:
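The `/enqueue` endpoint accepts an optional `item` field in the JSON body; the exact payload below is just an example:

```bash
curl -X POST http://localhost:8080/enqueue \
  -H "Content-Type: application/json" \
  -d '{"item": "job-1"}'

# Or send an empty body and let the app generate a job ID:
curl -X POST http://localhost:8080/enqueue -H "Content-Type: application/json" -d '{}'
```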
Check Prometheus metrics:
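The metrics endpoint is plain HTTP, so `curl` is enough:

```bash
curl http://localhost:8080/metrics
```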
You'll see:
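Output along these lines (the exact value depends on how many items are still queued):

```
# HELP myapp_queue_length Current number of items in the processing queue
# TYPE myapp_queue_length gauge
myapp_queue_length 3
```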
The worker will eat one item every 2 seconds, so if you enqueue faster than that, the queue keeps growing.

This app is perfect for HPA testing because:
- Incoming traffic → queue grows → metric spikes
- Prometheus scrapes `myapp_queue_length`
- HPA scales out when the queue length goes above 10
- More pods = more queue consumers → queue clears faster
It's like hiring extra staff when the line gets too long.
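To make that last step concrete: assuming Prometheus already scrapes the app and something like prometheus-adapter exposes `myapp_queue_length` as a Pods custom metric, an HPA for it might look roughly like this sketch (the deployment name, replica counts, and threshold are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: dummy-queue-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dummy-queue-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: myapp_queue_length
        target:
          type: AverageValue
          averageValue: "10"   # scale out when the average queue length per pod exceeds 10
```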