So let's apply a NetworkPolicy so that only backend pods from the same namespace are allowed to reach MongoDB.
mongo-backend-only-networkpolicy.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-mongo-only-from-backend
  namespace: globomantics-hotel-dev
spec:
  podSelector:
    matchLabels:
      app: guestbook-database        # ✅ Targets ONLY MongoDB pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: guestbook-backend # ✅ Only backend allowed
      ports:
        - protocol: TCP
          port: 27017                # ✅ MongoDB port
```
Test a few cases (commands to reproduce them follow the list):
- A busybox pod in a different namespace but with the same label as the backend pod
- A busybox pod in the same namespace but with no label matching the backend pod
- A busybox pod in the same namespace with the same label as the backend pod
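A minimal way to run these checks, assuming the MongoDB Service is named `guestbook-database` (adjust to your actual Service name) and that your busybox image's `nc` supports `-zv`:

```sh
# Case 1: different namespace, same label (expected: blocked)
kubectl run probe -n default --labels app=guestbook-backend \
  --image=busybox --restart=Never -it --rm -- \
  nc -zv -w 3 guestbook-database.globomantics-hotel-dev.svc.cluster.local 27017

# Case 2: same namespace, no matching label (expected: blocked)
kubectl run probe -n globomantics-hotel-dev \
  --image=busybox --restart=Never -it --rm -- \
  nc -zv -w 3 guestbook-database 27017

# Case 3: same namespace, same label (expected: ALLOWED; this is the gap)
kubectl run probe -n globomantics-hotel-dev --labels app=guestbook-backend \
  --image=busybox --restart=Never -it --rm -- \
  nc -zv -w 3 guestbook-database 27017
```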
Summary of the tests
Yes — if someone creates a busybox Pod with the SAME label and in the SAME namespace as the backend, your current NetworkPolicy will ALLOW it to access MongoDB.
Because:
- NetworkPolicy trusts labels
- Labels are not a security boundary
- Anyone with kubectl run permission in that namespace can spoof it:
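A hypothetical spoof (same label, same namespace; the Service name `guestbook-database` is assumed as above):

```sh
kubectl run fake-backend -n globomantics-hotel-dev --labels app=guestbook-backend \
  --image=busybox --restart=Never -it --rm -- \
  nc -zv -w 3 guestbook-database 27017
```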
✅ This pod would be allowed to reach MongoDB under your current policy.
Important Limitation of Kubernetes NetworkPolicy
NetworkPolicy CANNOT:
- Match on ServiceAccount
- Match on user identity
- Prevent label spoofing
- Prevent access by someone with pod-creation rights

It only sees:
- Namespace
- Pod labels
- IP blocks
So your current security is:

| Threat | Blocked? |
| --- | --- |
| Busybox in another namespace | ✅ Yes |
| Frontend pod | ✅ Yes |
| Random pod without label | ✅ Yes |
| Busybox with same label in same namespace | ❌ NO |
REAL PRODUCTION-GRADE SOLUTION (NO SPOOFING POSSIBLE)
You must combine THREE layers:
| Layer | Purpose | Stops Busybox Spoofing? |
| --- | --- | --- |
| NetworkPolicy | Network isolation | ❌ Not alone |
| ServiceAccount | Workload identity | ✅ Yes |
| RBAC | Who can use that identity | ✅ Yes |
STEP 1 — Create a Dedicated Backend ServiceAccount
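A minimal sketch (the name matches the `guestbook-backend-sa` referenced below):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: guestbook-backend-sa
  namespace: globomantics-hotel-dev
```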
STEP 2 — Force Backend Deployment to Use It
In your backend Deployment:
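A sketch of the relevant fragment of the pod template:

```yaml
# backend Deployment (fragment)
spec:
  template:
    spec:
      serviceAccountName: guestbook-backend-sa   # dedicated identity from Step 1
```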
Now your real backend has a unique workload identity.
STEP 3 — Prevent Others from Using That ServiceAccount (RBAC)
This ensures only your backend deployment can use that SA:
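RBAC has no deny rules, so "only the backend can use this SA" in practice means "only trusted automation can create pods in this namespace". A sketch granting deploy rights solely to the GitOps controller (the `deployer` Role name and the ArgoCD ServiceAccount/namespace are assumptions; adjust to your install):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: globomantics-hotel-dev
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets"]
    verbs: ["create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: globomantics-hotel-dev
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller   # assumed ArgoCD controller SA
    namespace: argocd
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```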
👉 Now:
✅ Real backend works
❌ Busybox cannot attach this ServiceAccount
❌ Even if it spoofs the label, it still won’t get secrets or identity
STEP 4 — Your NetworkPolicy (Keep It)
You keep the allow-mongo-only-from-backend policy applied earlier; no changes needed.
FINAL SECURITY TRUTH TABLE

| Scenario | Mongo Access |
| --- | --- |
| Real backend pod | ✅ Allowed |
| Frontend pod | ❌ Blocked |
| Busybox in another namespace | ❌ Blocked |
| Busybox in same namespace with fake label | ❌ Blocked (by RBAC + SA) |
| Mongo replication traffic | ✅ Allowed |
Hence RBAC is important.

If a human user has full privileges (cluster-admin/edit on the namespace), they can absolutely:
- Create a Pod/Deployment/Job
- Attach any ServiceAccount (including your locked-down guestbook-backend-sa)
- Bypass your NetworkPolicy / SA design by impersonating the backend
There is no technical way inside Kubernetes to stop a real cluster-admin from doing this. The fix is:
1️⃣ Minimize & segment Kubernetes access
Rule of thumb: only CI/CD, GitOps (ArgoCD), and a very small SRE group should have write access.

For humans:
- Most engineers → read-only + maybe kubectl exec into specific namespaces
- A few platform owners → limited edit in certain namespaces
- Almost nobody → cluster-admin
Example: give devs read-only in globomantics-hotel-dev:
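A sketch using the built-in `view` ClusterRole (the group name `devs` is an assumption; use whatever group your SSO/OIDC provider emits):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devs-read-only
  namespace: globomantics-hotel-dev
subjects:
  - kind: Group
    name: devs
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                # built-in read-only role (excludes Secrets)
  apiGroup: rbac.authorization.k8s.io
```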
Now they can’t create pods, deployments, or attach SAs at all.
2️⃣ Only GitOps/CI should deploy workloads
Instead of humans applying YAML:
- Use ArgoCD or Flux (you already have ArgoCD annotations 😉)
- CI/CD pushes to Git → Argo applies to cluster
- Human users get no direct kubectl apply in app namespaces

Then:
- The backend ServiceAccount is referenced only in Git
- Only ArgoCD (with its own SA) can create pods using it
- A human with just view rights cannot start a spoofed pod
3️⃣ Lock service accounts to automation, not humans
You already suspected this:
“User with full privilege can start the pod by attaching that service account too?”
So:
- Don't give humans full privilege in that namespace
- Ensure only ArgoCD / CI service accounts have:
  - create/patch on deployments, statefulsets, pods
  - the ability to use the guestbook-backend-sa
- Everyone else → view-only
4️⃣ Admission controls (extra safety net)
Even if someone has more permissions than they should, tools like Kyverno / OPA Gatekeeper can deny pods that:
- Use serviceAccountName: guestbook-backend-sa and are not created by ArgoCD
- Run images not from your trusted registry
- Don't have required labels/annotations
Example idea (not a full policy, just the concept; a Kyverno sketch follows the list):
- Deny any Pod using guestbook-backend-sa unless:
  - metadata.labels.app == guestbook-backend, and
  - metadata.annotations.app-owner == "platform-team"
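A Kyverno sketch of that idea (field values mirror the conditions above; the policy name is illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-backend-sa          # illustrative name
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-backend-identity
      match:
        any:
          - resources:
              kinds:
                - Pod
      preconditions:
        all:
          # Only evaluate Pods that try to use the backend SA
          - key: "{{ request.object.spec.serviceAccountName || '' }}"
            operator: Equals
            value: guestbook-backend-sa
      validate:
        message: "Only the real backend may use guestbook-backend-sa."
        pattern:
          metadata:
            labels:
              app: guestbook-backend
            annotations:
              app-owner: platform-team
```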
That way, even if someone tries to run busybox with that SA, admission webhook blocks it.
5️⃣ Summary: what protects you at each layer
- NetworkPolicy → stops random pods from talking to Mongo by IP/label
- ServiceAccount + RBAC → stops pods from talking to the Kubernetes API with power
- Limiting human RBAC → stops engineers from creating bad workloads
- Admission control (Kyverno/OPA) → stops even overprivileged users from running non-compliant pods
- GitOps (ArgoCD) → only automation deploys; humans don't touch cluster state directly
TL;DR in your words
“We might need to restrict access to cluster because user with full privilege can start pod with that service account.”
✅ Correct. Solution is: don’t give them full privilege in the first place, and back it up with admission policies + GitOps.
Next, a concrete RBAC layout for platform-admins, devs, and read-only users.

Here is a clean, production-grade RBAC layout for the three roles you want:
✅ platform-admins → full cluster control
✅ devs → limited write access (apps only, no infra/security)
✅ read-only → view access only
This works with:
- OIDC / SSO users
- Groups from Keycloak, Azure AD, Google, etc.
- Or plain Kubernetes users
1️⃣ PLATFORM ADMINS (Full Cluster Control)
⚠️ Only 1–2 trusted SRE / platform owners should ever be here.
🔹 ClusterRoleBinding
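A sketch binding the group to the built-in cluster-admin ClusterRole (the group name `platform-admins` is an assumption from your IdP):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-admins
subjects:
  - kind: Group
    name: platform-admins
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin       # built-in full-control role
  apiGroup: rbac.authorization.k8s.io
```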
✅ Can:
- Create/delete namespaces
- Modify CRDs
- Modify NetworkPolicies
- Modify RBAC
- Access all secrets
- Attach any ServiceAccount
2️⃣ DEVS (Application-Level Control Only)
✅ Devs can:
- Deploy apps
- Restart pods
- View logs
- Exec into pods

❌ Devs cannot:
- Modify RBAC
- Modify NetworkPolicies
- Read Secrets
- Create ServiceAccounts
- Touch CRDs
This is namespace-scoped (recommended).
🔹 Step 1 — Namespace Role
Apply per namespace (example: globomantics-hotel-dev):
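A sketch matching the can/cannot lists above (the Role name `dev-apps` is an assumption):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-apps
  namespace: globomantics-hotel-dev
rules:
  # App workloads and their day-to-day operations
  - apiGroups: ["", "apps"]
    resources:
      - pods
      - pods/log
      - pods/exec
      - deployments
      - replicasets
      - services
      - configmaps
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Deliberately omitted: secrets, serviceaccounts, networkpolicies, RBAC, CRDs
```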
🔹 Step 2 — Bind Devs Group
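The binding for the group (group name `devs` assumed, as before):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devs-apps-binding
  namespace: globomantics-hotel-dev
subjects:
  - kind: Group
    name: devs
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-apps
  apiGroup: rbac.authorization.k8s.io
```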
3️⃣ READ-ONLY USERS (Zero Risk)
✅ Can:
- View pods
- View deployments
- View services
- View logs

❌ Cannot:
- Exec
- Delete anything
- Read secrets
- Modify anything
🔹 Cluster-Wide Read-Only
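A sketch binding a `read-only` group (name assumed) to the built-in view ClusterRole cluster-wide:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-users
subjects:
  - kind: Group
    name: read-only
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                # built-in: read access, no Secrets, no exec
  apiGroup: rbac.authorization.k8s.io
```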
FINAL PERMISSION MATRIX

| Capability | platform-admins | devs | read-only |
| --- | --- | --- | --- |
| View workloads | ✅ | ✅ | ✅ |
| Deploy apps | ✅ | ✅ | ❌ |
| Delete pods | ✅ | ✅ | ❌ |
| Exec into pods | ✅ | ✅ | ❌ |
| View logs | ✅ | ✅ | ✅ |
| Read secrets | ✅ | ❌ | ❌ |
| Modify RBAC | ✅ | ❌ | ❌ |
| Modify NetworkPolicy | ✅ | ❌ | ❌ |
| Create ServiceAccounts | ✅ | ❌ | ❌ |
| Cluster settings | ✅ | ❌ | ❌ |
EXTRA HARDENING (HIGHLY RECOMMENDED)
To fully protect against:
- ✅ Backend ServiceAccount abuse
- ✅ NetworkPolicy tampering
- ✅ Secret theft
- ✅ MongoDB bypass

You should also:
- ✅ Disable the SA token mount for Mongo & frontend (sketch below)
- ✅ Lock the backend with its dedicated ServiceAccount
- ✅ Use GitOps (ArgoCD) for all writes
- ✅ Add Kyverno/OPA to block pods with serviceAccountName: guestbook-backend-sa unless created by ArgoCD
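For the first item, a fragment to put in the Mongo and frontend pod templates (it can also be set on the ServiceAccount itself):

```yaml
# Mongo / frontend Deployment (fragment)
spec:
  template:
    spec:
      automountServiceAccountToken: false   # these pods never call the K8s API
```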