LAB11c: Dynamic Storage Provisioning

Setup NFS

  1. Create the shared directory:

sudo mkdir -p /srv/nfs_share
  2. Set permissions:

sudo chown nobody:nogroup /srv/nfs_share
sudo chmod 777 /srv/nfs_share
  3. Start the NFS server:


# Use --network=host and omit the -p port mappings
docker run -d \
  --name nfs-server \
  --privileged \
  --restart unless-stopped \
  --network=host \
  -v /srv/nfs_share:/data:rw \
  -e SHARED_DIRECTORY=/data \
  -e PERMITTED_CLIENTS="*" \
  itsthenetwork/nfs-server-alpine:12

Check the logs to confirm the NFS server started:
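
For example (the container name nfs-server comes from the run command above):

docker logs nfs-server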

  4. Install NFS client packages on every node:
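
On Debian/Ubuntu nodes the client package is nfs-common (package names differ on other distributions):

sudo apt-get update
sudo apt-get install -y nfs-common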

To test mounting the share:
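
A quick sketch of a test mount from one of the nodes. The IP 192.168.1.100 is a placeholder for your NFS server, and the path / assumes the nfs-server-alpine image's behaviour of exporting the shared directory as the NFSv4 root:

sudo mkdir -p /mnt/nfs_test
sudo mount -t nfs 192.168.1.100:/ /mnt/nfs_test
ls /mnt/nfs_test
sudo umount /mnt/nfs_test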


1. Add the official Helm repo
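
The provisioner used in this lab is nfs-subdir-external-provisioner, whose chart repo is:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update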


2. Create namespace
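
Using the nfs-provisioner namespace that the flags below refer to:

kubectl create namespace nfs-provisioner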


3. Install using Helm (correct full command)
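
A sketch of the full command. The release name nfs-provisioner and the server IP 192.168.1.100 are placeholders, and nfs.path=/ assumes the NFSv4 root export used by the nfs-server-alpine image; adjust both for your environment:

helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace nfs-provisioner \
  --set nfs.server=192.168.1.100 \
  --set nfs.path=/ \
  --set storageClass.create=true \
  --set storageClass.name=nfs-client \
  --set storageClass.onDelete=delete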

What each flag does:

Flag                                   Meaning
--set nfs.server=<ip>                  NFS server IP address
--set nfs.path=<path>                  Path exported by your NFS server
--set storageClass.create=true         Automatically create a StorageClass
--set storageClass.name=nfs-client     Name the StorageClass nfs-client
--set storageClass.onDelete=delete     Delete the provisioned subdirectory when the PVC is deleted
--namespace nfs-provisioner            Install into the nfs-provisioner namespace for clean isolation


4. Verify installation

Pod:
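
Check the provisioner Pod in the namespace used above:

kubectl get pods -n nfs-provisioner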

Should show:
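
Something along these lines, with the Pod in Running state (the name suffix and age will differ):

NAME                                                               READY   STATUS    RESTARTS   AGE
nfs-provisioner-nfs-subdir-external-provisioner-xxxxxxxxxx-xxxxx   1/1     Running   0          30s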

StorageClass:
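
And the StorageClass created by the chart:

kubectl get storageclass nfs-client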

Should show:
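
Roughly the following (columns trimmed; the provisioner name depends on the chart defaults):

NAME         PROVISIONER                                      RECLAIMPOLICY   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner    Delete          1m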


5. Test with a PVC

Create pvc.yaml:
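
A minimal sketch. The claim name nfs-test-pvc is an arbitrary example, and ReadWriteMany is used because NFS supports it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi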

Apply:
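
Assuming the file above is saved as pvc.yaml:

kubectl apply -f pvc.yaml
kubectl get pvc nfs-test-pvc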

Expected:
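
A Bound claim, roughly like this (the volume name and age will differ):

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-test-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWX            nfs-client     5s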


6. Verify NFS directory creation

On your NFS server:
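
List the export directory created during setup:

ls -l /srv/nfs_share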

You will see:
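
One subdirectory per provisioned volume. The provisioner names it <namespace>-<pvc-name>-<pv-name>, so with the example claim above it looks something like:

default-nfs-test-pvc-pvc-9f2c7a3e-...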

📌 That means dynamic provisioning works.


Bonus: Make it the default StorageClass

If you want this to be the default SC:
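
Mark nfs-client as the default with the standard annotation:

kubectl patch storageclass nfs-client \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'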

Test - Filling up Storage Quota


storage-filler.yaml
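
A sketch of a Pod that keeps writing with dd until the volume is full. The Pod name storage-filler and the claim name nfs-test-pvc are examples; the claim name matches the PVC created earlier and should be adjusted as noted below:

apiVersion: v1
kind: Pod
metadata:
  name: storage-filler
spec:
  restartPolicy: Never
  containers:
    - name: filler
      image: busybox
      command: ["/bin/sh", "-c"]
      args:
        - |
          i=0
          # Keep appending 10 MiB files until a write fails (volume full)
          while dd if=/dev/zero of=/data/file_$i bs=1M count=10; do
            i=$((i + 1))
            sleep 1
          done
          echo "Write failed - the volume appears to be full"
          sleep 3600
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-test-pvc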

Replace the claimName value with your real PVC name (the one that is 1 GB).


How to Run This

Apply the PVC (example: 1 GB)
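
Using the pvc.yaml from the earlier step (1Gi request):

kubectl apply -f pvc.yaml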

Apply the storage filler:
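
Then follow the filler Pod's logs:

kubectl apply -f storage-filler.yaml
kubectl logs -f storage-filler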


What You Will Observe When the PVC Becomes Full

Your cluster will start reporting errors in several places:

1. dd write error

Inside container logs:
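
The exact wording depends on the dd implementation in the image, but expect the classic ENOSPC error, something like (N is whichever file hit the limit):

dd: error writing '/data/file_N': No space left on device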

2. Events on the Pod

You may see:
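
Check with kubectl describe; the exact events depend on your cluster and storage backend:

kubectl describe pod storage-filler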
