- Use kubectl get pvc -A and kubectl describe pvc <name> to inspect binding status, capacity, and events.
- A PVC stuck in Pending -- 90% of the time the cause is a missing or misconfigured StorageClass, an access-mode mismatch, or WaitForFirstConsumer binding with no Pod scheduled yet.
- Volume expansion requires allowVolumeExpansion: true on the StorageClass -- attempting to resize a PVC whose class lacks this flag produces an error. [src4]
- The ReadWriteOncePod (RWOP) access mode requires Kubernetes 1.29+ and a CSI driver that supports it -- in-tree plugins do not support RWOP. [src1]
- The Recycle reclaim policy is deprecated since Kubernetes 1.22 -- never use it in new deployments; use Delete or Retain with dynamic provisioning. [src5]
- Never run kubectl delete pvc --force while a Pod is writing -- the kubernetes.io/pvc-protection finalizer exists to prevent data loss. [src1]

| Resource | API Version | Key Fields | Purpose |
|---|---|---|---|
| PersistentVolume | v1 | spec.capacity.storage, spec.accessModes, spec.persistentVolumeReclaimPolicy, spec.storageClassName | Cluster-wide storage resource |
| PersistentVolumeClaim | v1 | spec.accessModes, spec.resources.requests.storage, spec.storageClassName, spec.selector | Namespace-scoped storage request |
| StorageClass | storage.k8s.io/v1 | provisioner, parameters, reclaimPolicy, volumeBindingMode, allowVolumeExpansion | Template for dynamic provisioning |
| VolumeSnapshot | snapshot.storage.k8s.io/v1 | spec.source.persistentVolumeClaimName, spec.volumeSnapshotClassName | Point-in-time volume copy |
| VolumeSnapshotClass | snapshot.storage.k8s.io/v1 | driver, deletionPolicy, parameters | Snapshot provisioning template |
| VolumeSnapshotContent | snapshot.storage.k8s.io/v1 | spec.source.volumeHandle, spec.driver, spec.deletionPolicy | Actual snapshot on storage backend |
| Access Mode | Short | Nodes | Pods | Typical Use Case |
|---|---|---|---|---|
| ReadWriteOnce | RWO | 1 node | Any pods on that node | Databases, single-writer apps |
| ReadOnlyMany | ROX | Many nodes | Many pods (read-only) | Shared config, static assets |
| ReadWriteMany | RWX | Many nodes | Many pods (read-write) | Shared filesystem (NFS, CephFS, EFS) |
| ReadWriteOncePod | RWOP | 1 node | Exactly 1 pod | Exclusive writer (etcd, single-leader) |
| Reclaim Policy | Behavior After PVC Deletion | Default For | Recommended Use |
|---|---|---|---|
| Delete | PV and underlying storage asset deleted | Dynamic provisioning | Dev/test, ephemeral workloads |
| Retain | PV marked Released, data preserved | Static provisioning | Production data, compliance |
| Recycle (deprecated) | rm -rf /thevolume/*, PV reused | Legacy clusters | Never -- use dynamic provisioning |
| Volume Binding Mode | When PV is Bound | Best For |
|---|---|---|
| Immediate | As soon as PVC is created | Single-zone clusters, pre-provisioned PVs |
| WaitForFirstConsumer | When Pod using PVC is scheduled | Multi-zone clusters (zone-aware provisioning) |
START: Do you need persistent storage in Kubernetes?
├── Is the data ephemeral (survives Pod restart but not Pod deletion)?
│ ├── YES → Use emptyDir volume (not a PVC)
│ └── NO ↓
├── Do you need storage that survives Pod deletion and rescheduling?
│ ├── YES ↓
│ └── NO → Use emptyDir or configMap/secret volumes
├── Do multiple Pods on different nodes need read-write access?
│ ├── YES → Use RWX with NFS/CephFS/EFS StorageClass
│ └── NO ↓
├── Does exactly one Pod need exclusive write access?
│ ├── YES + K8s 1.29+ → Use RWOP access mode
│ ├── YES + K8s <1.29 → Use RWO access mode
│ └── NO ↓
├── Are you running a StatefulSet with per-replica storage?
│ ├── YES → Use volumeClaimTemplates (see StatefulSet section)
│ └── NO → Create a standalone PVC and reference in Pod spec
├── Is your cluster multi-zone?
│ ├── YES → Set volumeBindingMode: WaitForFirstConsumer
│ └── NO → Immediate binding is fine
└── Do you need snapshots or cloning?
├── YES → Verify CSI driver supports snapshots + install snapshot-controller
└── NO → Standard PVC workflow
Define a StorageClass that tells Kubernetes which CSI provisioner to use and how volumes should behave. Most managed Kubernetes clusters come with a default StorageClass, but creating explicit ones gives you control over performance tiers and reclaim behavior. [src2]
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com  # AWS EBS CSI driver
parameters:
  type: gp3    # General Purpose SSD
  fsType: ext4
reclaimPolicy: Retain                    # Keep data after PVC deletion
volumeBindingMode: WaitForFirstConsumer  # Zone-aware provisioning
allowVolumeExpansion: true               # Allow PVC resize
Verify: kubectl get sc fast-ssd → shows the StorageClass with PROVISIONER and RECLAIMPOLICY columns.
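If you omit storageClassName from a PVC, the cluster's default StorageClass is used. You can check which class is marked default and switch it via the storageclass.kubernetes.io/is-default-class annotation -- a sketch; the class name gp2 below stands in for whatever existing default your cluster ships with:

```shell
# The default class is flagged "(default)" in the NAME column
kubectl get sc

# Mark fast-ssd (created above) as the cluster default
kubectl patch storageclass fast-ssd \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Unmark the previous default first to avoid two defaults (gp2 is illustrative)
kubectl patch storageclass gp2 \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```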
A PVC requests storage from the cluster. With dynamic provisioning, Kubernetes automatically creates a matching PV. [src1]
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
Verify: kubectl get pvc app-data → STATUS should show Bound (or Pending if using WaitForFirstConsumer until a Pod is scheduled).
Reference the PVC in the Pod's volumes section and mount it into a container. [src1]
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.27
      volumeMounts:
        - mountPath: /data
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: app-data
Verify: kubectl exec app -- df -h /data → shows the mounted filesystem with the requested capacity.
To increase storage, patch the PVC's spec.resources.requests.storage field. Most CSI drivers support online expansion without Pod restart. [src4]
kubectl patch pvc app-data -p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'
Verify: kubectl get pvc app-data → CAPACITY shows 50Gi. Check events: kubectl describe pvc app-data | grep -i resize → shows FileSystemResizeSuccessful.
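Before patching, you can confirm the PVC's StorageClass actually allows expansion -- for the fast-ssd class defined earlier this prints true:

```shell
# Print the allowVolumeExpansion flag of the backing StorageClass
kubectl get sc fast-ssd -o jsonpath='{.allowVolumeExpansion}'
```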
Take a point-in-time snapshot of an existing PVC. Requires snapshot-controller and CSI driver snapshot support. [src3]
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap-20260227
spec:
  volumeSnapshotClassName: csi-aws-snapclass
  source:
    persistentVolumeClaimName: app-data
Verify: kubectl get volumesnapshot app-data-snap-20260227 → READYTOUSE should show true.
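Rather than polling kubectl get, you can block until the snapshot is ready -- kubectl wait with a --for=jsonpath condition requires kubectl 1.23+:

```shell
# Block until the CSI driver reports the snapshot ready, up to 2 minutes
kubectl wait volumesnapshot/app-data-snap-20260227 \
  --for=jsonpath='{.status.readyToUse}'=true --timeout=120s
```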
Create a new PVC with the dataSource field pointing to an existing VolumeSnapshot. The new volume is pre-populated with the snapshot data. [src3]
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-restored
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
  dataSource:
    name: app-data-snap-20260227
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
Verify: kubectl get pvc app-data-restored → STATUS shows Bound and the data matches the snapshot contents.
For StatefulSets, each replica automatically gets its own PVC via volumeClaimTemplates. PVCs are named {template-name}-{pod-name} (e.g., data-myapp-0). [src8]
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          volumeMounts:
            - mountPath: /data
              name: data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 10Gi
Verify: kubectl get pvc -l app=myapp → shows data-myapp-0, data-myapp-1, data-myapp-2, all Bound.
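Note that volumeClaimTemplates is immutable: to grow storage for a running StatefulSet you expand each per-replica PVC directly. A sketch, assuming the three replicas above and a StorageClass with expansion enabled:

```shell
# Expand each per-replica PVC; the template itself cannot be edited in place
for i in 0 1 2; do
  kubectl patch pvc "data-myapp-$i" \
    -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
done

# To record the new size in the template for future replicas, delete the
# StatefulSet without touching its Pods or PVCs, then re-apply the manifest
kubectl delete statefulset myapp --cascade=orphan
```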
# Input: NFS server at 10.0.0.100, export path /exports/data
# Output: PV available for PVC binding with RWX access
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: 10.0.0.100
    path: /exports/data
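To consume this statically provisioned PV, a claim must match its access mode and storageClassName and request no more than its capacity -- a sketch, where the claim name nfs-data is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data             # illustrative name
spec:
  accessModes:
    - ReadWriteMany          # must match the PV's access mode
  storageClassName: nfs      # must match the PV's storageClassName
  resources:
    requests:
      storage: 100Gi         # must not exceed the PV's capacity
```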
# Input: Pre-created PV with label tier=fast
# Output: PVC bound to a specific PV via selector
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: ""
  selector:
    matchLabels:
      tier: fast
# Input: AWS EBS CSI driver installed
# Output: Snapshot class for creating EBS snapshots
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-snapclass
driver: ebs.csi.aws.com
deletionPolicy: Retain
# Input: Existing PVC "source-pvc" in namespace "default"
# Output: New PVC "clone-pvc" with identical data
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
  dataSource:
    name: source-pvc
    kind: PersistentVolumeClaim
EOF
# BAD -- storageClassName: "" disables dynamic provisioning entirely
# The PVC will never bind unless a matching PV exists with storageClassName: ""
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
# GOOD -- uses the cluster's default StorageClass for dynamic provisioning
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
# BAD -- PV may be provisioned in zone-a but Pod scheduled in zone-b
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bad-multizone
provisioner: ebs.csi.aws.com
volumeBindingMode: Immediate
# GOOD -- PV is provisioned in the same zone as the scheduled Pod
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: good-multizone
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
# BAD -- deleting a StatefulSet does NOT delete its PVCs
# Orphaned PVCs accumulate storage cost silently
kubectl delete statefulset myapp
# PVCs data-myapp-0, data-myapp-1, data-myapp-2 still exist
# GOOD -- delete PVCs explicitly after confirming data is backed up
kubectl delete statefulset myapp
kubectl delete pvc -l app=myapp # Only after verifying backups
# BAD -- a PV in Released phase still has a claimRef to the old PVC
# New PVCs will NOT automatically bind to a Released PV
# GOOD -- remove the old claimRef so the PV becomes Available
kubectl patch pv my-pv -p '{"spec":{"claimRef": null}}'
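After clearing the claimRef, confirm the PV has moved back from Released to Available:

```shell
# Phase should now read Available; new PVCs can bind to the PV again
kubectl get pv my-pv -o jsonpath='{.status.phase}'
```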
- PVC stuck in Pending: the claim references a storageClassName that does not exist or no default StorageClass is set. Fix: kubectl get sc to list available classes. [src1]
- PVC Pending with the WaitForFirstConsumer binding mode: the PVC stays Pending until a Pod that uses it is scheduled. This is expected behavior. Fix: Create a Pod that references the PVC. [src2]
- Pod unschedulable because its PV was provisioned in a different zone: Fix: recreate the StorageClass with volumeBindingMode: WaitForFirstConsumer. [src7]
- PVC resize rejected: the StorageClass lacks allowVolumeExpansion: true. Fix: Create a new StorageClass with expansion enabled or ask the cluster admin to patch the existing one. [src4]
- Released PV will not rebind: a PV with the Retain policy has an old claimRef. Fix: kubectl patch pv <name> -p '{"spec":{"claimRef": null}}'. [src5]
- VolumeSnapshot never reaches READYTOUSE: the snapshot-controller is not running. Fix: install it from the external-snapshotter repository. [src3]

# List all PVCs across namespaces with capacity and status
kubectl get pvc -A -o wide
# Describe a specific PVC to see events, conditions, and binding
kubectl describe pvc app-data -n default
# List all PVs with reclaim policy and status
kubectl get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,ACCESS:.spec.accessModes[0],RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase,CLAIM:.spec.claimRef.name,STORAGECLASS:.spec.storageClassName
# Check StorageClasses and their provisioners
kubectl get sc -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,RECLAIM:.reclaimPolicy,BINDING:.volumeBindingMode,EXPANSION:.allowVolumeExpansion
# Check which CSI drivers are installed
kubectl get csidrivers
# Find PVCs in Pending state
kubectl get pvc -A --field-selector=status.phase=Pending
# Check volume expansion progress
kubectl describe pvc app-data | grep -A5 "Conditions"
# List VolumeSnapshots and their readiness
kubectl get volumesnapshot -A
# Check snapshot controller is running
kubectl get pods -n kube-system -l app=snapshot-controller
# Verify volume is mounted inside a Pod
kubectl exec app -- df -h /data
# Check PV finalizers (deletion protection)
kubectl get pv <name> -o jsonpath='{.metadata.finalizers}'
| Version | Status | Key Changes | Migration Notes |
|---|---|---|---|
| Kubernetes 1.31+ | Current | PV deletion protection finalizers GA; VolumeAttributesClass beta | No migration needed from 1.29+ |
| Kubernetes 1.29 | Supported | ReadWriteOncePod (RWOP) GA; SELinuxMount alpha | RWOP requires CSI driver support |
| Kubernetes 1.27 | Supported | In-tree to CSI migration complete for major plugins (VolumeSnapshot API went GA earlier, in 1.20) | Migrate in-tree PVs to CSI |
| Kubernetes 1.24 | EOL | Volume expansion GA; non-graceful node shutdown alpha | All expansion features stable |
| Kubernetes 1.22 | EOL | Recycle reclaim policy deprecated; ephemeral volumes beta | Replace Recycle with Delete + dynamic provisioning |
| Kubernetes 1.13 | EOL | CSI 1.0 GA; dynamic provisioning GA | Upgrade from FlexVolume to CSI drivers |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Application needs data that survives Pod restarts and rescheduling | Data is temporary and can be regenerated | emptyDir volume |
| Running databases (PostgreSQL, MySQL, MongoDB) on Kubernetes | Using managed database services (RDS, Cloud SQL) | Cloud provider's managed DB |
| Need consistent snapshots and cloning for CI/CD test data | Backing up application-level data (SQL dumps preferred) | pg_dump, mysqldump, mongodump |
| Deploying a StatefulSet with per-replica storage | Running stateless services (APIs, web servers) | Deployment without PVCs |
| Multi-zone cluster needs zone-aware storage provisioning | Single-node development clusters (minikube, kind) | Default StorageClass with Immediate binding |
| Need shared read-write filesystem across Pods (RWX) | Block storage is sufficient for a single writer | RWO with block-backed StorageClass |