Kubernetes Reference: Persistent Volume Claims

Type: Software Reference | Confidence: 0.94 | Sources: 8 | Verified: 2026-02-27 | Freshness: 2026-02-27

TL;DR

Constraints

Quick Reference

Resource | API Version | Key Fields | Purpose
PersistentVolume | v1 | spec.capacity.storage, spec.accessModes, spec.persistentVolumeReclaimPolicy, spec.storageClassName | Cluster-wide storage resource
PersistentVolumeClaim | v1 | spec.accessModes, spec.resources.requests.storage, spec.storageClassName, spec.selector | Namespace-scoped storage request
StorageClass | storage.k8s.io/v1 | provisioner, parameters, reclaimPolicy, volumeBindingMode, allowVolumeExpansion | Template for dynamic provisioning
VolumeSnapshot | snapshot.storage.k8s.io/v1 | spec.source.persistentVolumeClaimName, spec.volumeSnapshotClassName | Point-in-time volume copy
VolumeSnapshotClass | snapshot.storage.k8s.io/v1 | driver, deletionPolicy, parameters | Snapshot provisioning template
VolumeSnapshotContent | snapshot.storage.k8s.io/v1 | spec.source.volumeHandle, spec.driver, spec.deletionPolicy | Actual snapshot on storage backend
Access Mode | Short | Nodes | Pods | Typical Use Case
ReadWriteOnce | RWO | 1 node | Any pods on that node | Databases, single-writer apps
ReadOnlyMany | ROX | Many nodes | Many pods (read-only) | Shared config, static assets
ReadWriteMany | RWX | Many nodes | Many pods (read-write) | Shared filesystem (NFS, CephFS, EFS)
ReadWriteOncePod | RWOP | 1 node | Exactly 1 pod | Exclusive writer (etcd, single-leader)
Reclaim Policy | Behavior After PVC Deletion | Default For | Recommended Use
Delete | PV and underlying storage asset deleted | Dynamic provisioning | Dev/test, ephemeral workloads
Retain | PV marked Released, data preserved | Static provisioning | Production data, compliance
Recycle (deprecated) | rm -rf /thevolume/*, PV reused | Legacy clusters | Never -- use dynamic provisioning
Volume Binding Mode | When PV Is Bound | Best For
Immediate | As soon as PVC is created | Single-zone clusters, pre-provisioned PVs
WaitForFirstConsumer | When a Pod using the PVC is scheduled | Multi-zone clusters (zone-aware provisioning)

Decision Tree

START: Do you need persistent storage in Kubernetes?
├── Is the data ephemeral (must survive container restarts but not Pod deletion)?
│   ├── YES → Use emptyDir volume (not a PVC)
│   └── NO ↓
├── Do you need storage that survives Pod deletion and rescheduling?
│   ├── YES ↓
│   └── NO → Use emptyDir or configMap/secret volumes
├── Do multiple Pods on different nodes need read-write access?
│   ├── YES → Use RWX with NFS/CephFS/EFS StorageClass
│   └── NO ↓
├── Does exactly one Pod need exclusive write access?
│   ├── YES + K8s 1.29+ → Use RWOP access mode
│   ├── YES + K8s <1.29 → Use RWO access mode
│   └── NO ↓
├── Are you running a StatefulSet with per-replica storage?
│   ├── YES → Use volumeClaimTemplates (see StatefulSet section)
│   └── NO → Create a standalone PVC and reference in Pod spec
├── Is your cluster multi-zone?
│   ├── YES → Set volumeBindingMode: WaitForFirstConsumer
│   └── NO → Immediate binding is fine
└── Do you need snapshots or cloning?
    ├── YES → Verify CSI driver supports snapshots + install snapshot-controller
    └── NO → Standard PVC workflow
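For the ephemeral branches above, an emptyDir volume is declared inline in the Pod spec rather than through a PVC. A minimal sketch (the sizeLimit field is optional):

```yaml
# emptyDir: scratch space that lives and dies with the Pod.
# It survives container restarts within the Pod, but is deleted
# when the Pod is removed from the node.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: app
      image: nginx:1.27
      volumeMounts:
        - mountPath: /scratch
          name: scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 1Gi   # optional cap; exceeding it can evict the Pod
```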

Step-by-Step Guide

1. Create a StorageClass for dynamic provisioning

Define a StorageClass that tells Kubernetes which CSI provisioner to use and how volumes should behave. Most managed Kubernetes clusters come with a default StorageClass, but creating explicit ones gives you control over performance tiers and reclaim behavior. [src2]

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com          # AWS EBS CSI driver
parameters:
  type: gp3                            # General Purpose SSD
  fsType: ext4
reclaimPolicy: Retain                  # Keep data after PVC deletion
volumeBindingMode: WaitForFirstConsumer # Zone-aware provisioning
allowVolumeExpansion: true             # Allow PVC resize

Verify: kubectl get sc fast-ssd → shows the StorageClass with PROVISIONER and RECLAIMPOLICY columns.

2. Create a PersistentVolumeClaim

A PVC requests storage from the cluster. With dynamic provisioning, Kubernetes automatically creates a matching PV. [src1]

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi

Verify: kubectl get pvc app-data → STATUS should show Bound (or Pending if using WaitForFirstConsumer until a Pod is scheduled).

3. Mount the PVC in a Pod

Reference the PVC in the Pod's volumes section and mount it into a container. [src1]

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.27
      volumeMounts:
        - mountPath: /data
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: app-data

Verify: kubectl exec app -- df -h /data → shows the mounted filesystem with the requested capacity.

4. Expand a PVC (online resize)

To increase storage, patch the PVC's spec.resources.requests.storage field. Many CSI drivers support online expansion without a Pod restart; note that a PVC can only grow, never shrink. [src4]

kubectl patch pvc app-data -p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'

Verify: kubectl get pvc app-data → CAPACITY shows 50Gi. Check events: kubectl describe pvc app-data | grep -i resize → shows FileSystemResizeSuccessful.
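The same expansion can be done declaratively, which suits GitOps workflows: raise the storage request in the stored manifest and re-apply it. A sketch of the app-data PVC from step 2 after the change:

```yaml
# app-data PVC with the storage request raised from 20Gi to 50Gi.
# Applying this triggers the same resize as the imperative patch;
# the new value must be >= the current capacity.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 50Gi   # raised from 20Gi
```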

5. Create a VolumeSnapshot

Take a point-in-time snapshot of an existing PVC. Requires snapshot-controller and CSI driver snapshot support. [src3]

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap-20260227
spec:
  volumeSnapshotClassName: csi-aws-snapclass
  source:
    persistentVolumeClaimName: app-data

Verify: kubectl get volumesnapshot app-data-snap-20260227 → READYTOUSE should show true.

6. Restore a PVC from a snapshot

Create a new PVC with the dataSource field pointing to an existing VolumeSnapshot. The new volume is pre-populated with the snapshot data. [src3]

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-restored
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
  dataSource:
    name: app-data-snap-20260227
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io

Verify: kubectl get pvc app-data-restored → STATUS shows Bound and the data matches the snapshot contents.

7. Use volumeClaimTemplates in a StatefulSet

For StatefulSets, each replica automatically gets its own PVC via volumeClaimTemplates. PVCs are named {template-name}-{pod-name} (e.g., data-myapp-0). [src8]

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          volumeMounts:
            - mountPath: /data
              name: data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 10Gi

Verify: kubectl get pvc -l app=myapp → shows data-myapp-0, data-myapp-1, data-myapp-2, all Bound.

Code Examples

YAML: Static PV with NFS backend

# Input:  NFS server at 10.0.0.100, export path /exports/data
# Output: PV available for PVC binding with RWX access
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: 10.0.0.100
    path: /exports/data

YAML: PVC with label selector for static binding

# Input:  Pre-created PV with label tier=fast
# Output: PVC bound to a specific PV via selector
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: ""
  selector:
    matchLabels:
      tier: fast
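The selector above only matches a pre-created PV carrying the tier=fast label. Such a PV might look like the following sketch (the CSI driver and volumeHandle are illustrative placeholders):

```yaml
# Pre-provisioned PV labeled so the fast-storage PVC's selector can match it.
# Labels go on the PV's metadata; the PVC matches them via spec.selector.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fast-pv
  labels:
    tier: fast
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""        # must match the PVC's empty storageClassName
  csi:
    driver: ebs.csi.aws.com   # hypothetical backend
    volumeHandle: vol-0abc123 # illustrative volume ID
```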

YAML: VolumeSnapshotClass for AWS EBS

# Input:  AWS EBS CSI driver installed
# Output: Snapshot class for creating EBS snapshots
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-snapclass
driver: ebs.csi.aws.com
deletionPolicy: Retain

Bash: Clone a PVC (volume cloning)

# Input:  Existing PVC "source-pvc" in namespace "default"
# Output: New PVC "clone-pvc" with identical data
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
  dataSource:
    name: source-pvc
    kind: PersistentVolumeClaim
EOF

Anti-Patterns

Wrong: Hardcoding storageClassName to empty string unintentionally

# BAD -- storageClassName: "" disables dynamic provisioning entirely
# The PVC will never bind unless a matching PV exists with storageClassName: ""
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi

Correct: Omit storageClassName to use cluster default or specify explicitly

# GOOD -- uses the cluster's default StorageClass for dynamic provisioning
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi

Wrong: Using Immediate binding in multi-zone clusters

# BAD -- PV may be provisioned in zone-a but Pod scheduled in zone-b
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bad-multizone
provisioner: ebs.csi.aws.com
volumeBindingMode: Immediate

Correct: Use WaitForFirstConsumer for multi-zone awareness

# GOOD -- PV is provisioned in the same zone as the scheduled Pod
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: good-multizone
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
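When provisioning must additionally be restricted to specific zones, WaitForFirstConsumer can be combined with allowedTopologies. A sketch -- the topology key and zone values depend on your CSI driver and cluster:

```yaml
# Restrict dynamic provisioning to two zones while keeping
# zone-aware (Pod-driven) binding.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: multizone-pinned
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a   # example zones; adjust to your cluster
          - us-east-1b
```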

Wrong: Expecting PVCs to be deleted when StatefulSet is deleted

# BAD -- deleting a StatefulSet does NOT delete its PVCs
# Orphaned PVCs accumulate storage cost silently
kubectl delete statefulset myapp
# PVCs data-myapp-0, data-myapp-1, data-myapp-2 still exist

Correct: Explicitly clean up PVCs after StatefulSet deletion

# GOOD -- delete PVCs explicitly after confirming data is backed up
kubectl delete statefulset myapp
kubectl delete pvc -l app=myapp  # Only after verifying backups
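On newer clusters, the StatefulSet itself can manage this cleanup via persistentVolumeClaimRetentionPolicy (the StatefulSetAutoDeletePVC feature, beta since Kubernetes 1.27). A sketch of the relevant spec fragment:

```yaml
# StatefulSet fragment: delete PVCs automatically when the
# StatefulSet is deleted, but keep them when it is only scaled down.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete    # remove PVCs on StatefulSet deletion
    whenScaled: Retain     # keep PVCs for replicas removed by scale-down
  # ... serviceName, replicas, selector, template, volumeClaimTemplates ...
```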

Wrong: Trying to re-bind a Released PV to a new PVC

# BAD -- a PV in Released phase still has a claimRef to the old PVC
# New PVCs will NOT automatically bind to a Released PV

Correct: Clear claimRef to make a Retained PV available again

# GOOD -- remove the old claimRef so the PV becomes Available
kubectl patch pv my-pv -p '{"spec":{"claimRef": null}}'

Common Pitfalls

Diagnostic Commands

# List all PVCs across namespaces with capacity and status
kubectl get pvc -A -o wide

# Describe a specific PVC to see events, conditions, and binding
kubectl describe pvc app-data -n default

# List all PVs with reclaim policy and status
kubectl get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,ACCESS:.spec.accessModes[0],RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase,CLAIM:.spec.claimRef.name,STORAGECLASS:.spec.storageClassName

# Check StorageClasses and their provisioners
kubectl get sc -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,RECLAIM:.reclaimPolicy,BINDING:.volumeBindingMode,EXPANSION:.allowVolumeExpansion

# Check which CSI drivers are installed
kubectl get csidrivers

# Find PVCs in Pending state
kubectl get pvc -A --field-selector=status.phase=Pending

# Check volume expansion progress
kubectl describe pvc app-data | grep -A5 "Conditions"

# List VolumeSnapshots and their readiness
kubectl get volumesnapshot -A

# Check snapshot controller is running
kubectl get pods -n kube-system -l app=snapshot-controller

# Verify volume is mounted inside a Pod
kubectl exec app -- df -h /data

# Check PV finalizers (deletion protection)
kubectl get pv <name> -o jsonpath='{.metadata.finalizers}'

Version History & Compatibility

Version | Status | Key Changes | Migration Notes
Kubernetes 1.31+ | Current | PV deletion protection finalizers GA; VolumeAttributesClass beta | No migration needed from 1.29+
Kubernetes 1.29 | Supported | ReadWriteOncePod (RWOP) GA; SELinuxMount alpha | RWOP requires CSI driver support
Kubernetes 1.27 | Supported | In-tree to CSI migration complete for major plugins (the VolumeSnapshot v1 API has been GA since 1.20) | Migrate in-tree PVs to CSI
Kubernetes 1.24 | EOL | Volume expansion GA; non-graceful node shutdown alpha | All expansion features stable
Kubernetes 1.22 | EOL | Recycle reclaim policy deprecated; ephemeral volumes beta | Replace Recycle with Delete + dynamic provisioning
Kubernetes 1.13 | EOL | CSI 1.0 GA; dynamic provisioning GA | Upgrade from FlexVolume to CSI drivers

When to Use / When Not to Use

Use When | Don't Use When | Use Instead
Application needs data that survives Pod restarts and rescheduling | Data is temporary and can be regenerated | emptyDir volume
Running databases (PostgreSQL, MySQL, MongoDB) on Kubernetes | Using managed database services (RDS, Cloud SQL) | Cloud provider's managed DB
Need consistent snapshots and cloning for CI/CD test data | Backing up application-level data (SQL dumps preferred) | pg_dump, mysqldump, mongodump
Deploying a StatefulSet with per-replica storage | Running stateless services (APIs, web servers) | Deployment without PVCs
Multi-zone cluster needs zone-aware storage provisioning | Single-node development clusters (minikube, kind) | Default StorageClass with Immediate binding
Need shared read-write filesystem across Pods (RWX) | Block storage is sufficient for a single writer | RWO with block-backed StorageClass

Important Caveats

Related Units