
Docker on Block Storage with Limited IOPS

Modern container runtimes love fast storage. Put Docker’s /var/lib/docker on a volume that tops out at a few hundred IOPS and you’ll watch builds crawl, pulls stall, and the kubelet raise throttling alerts. Below is a quick, numbers-driven look at why an IOPS-limited block device is the wrong home for Docker data, plus a couple of low-friction fixes.

Docker is IOPS-hungry

In short, containers generate lots of small, random I/O: every docker pull unpacks thousands of little files into overlay2 layers, every docker build writes and discards layer after layer, and running containers add a steady trickle of log and copy-on-write writes. That is exactly the pattern cheap cloud volumes dislike.
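
To see the pattern for yourself, pull any ordinary base image and count what lands on disk. A minimal sketch, assuming Docker is installed locally; the python:3.12-slim tag and the iops-demo container name are just examples:

    # Every entry in the exported tar was at least one small write (plus inode and
    # directory metadata) when the pull unpacked its layers into /var/lib/docker.
    docker pull python:3.12-slim
    docker create --name iops-demo python:3.12-slim
    docker export iops-demo | tar -tf - | wc -l   # typically several thousand entries
    docker rm iops-demo

Multiply that by every image on a busy node, add the copy-on-write churn of running containers, and the per-second budget of a throttled volume disappears quickly.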

How stingy are cloud volumes?

Baseline IOPS for common cloud devices, and what each limit means for Docker:

AWS EBS gp2: 3 IOPS per GB (minimum 100), so a common 30 GB root volume gets just 100 IOPS (Amazon Web Services, Inc.). One busy docker build easily saturates the queue, so other containers block.

AWS magnetic (standard): ≈100 IOPS on average (AWS Documentation). A similar ceiling, but with 10× higher latency; builds can be minutes slower.

GCP PD Standard: 0.75 read / 1.5 write IOPS per GB, so a 50 GB disk offers about 38 read / 75 write IOPS (Google Cloud). Two image pulls in parallel can max it out.

General guidance (Red Hat): cloud IOPS throttling can overload CRI-O and the kubelet on I/O-intensive pods (Red Hat Documentation). Kubernetes loses heartbeats and evicts pods when writes back up.
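
The per-gigabyte baselines are easy to sanity-check; a couple of throwaway one-liners reproduce the figures above:

    # gp2: 3 IOPS per GB with a 100-IOPS floor, so a 30 GB root volume is capped at 100.
    awk 'BEGIN { print 3 * 30 }'               # 90, raised to the 100-IOPS minimum
    # PD Standard, 50 GB disk: 0.75 read and 1.5 write IOPS per GB.
    awk 'BEGIN { print 0.75 * 50, 1.5 * 50 }'  # 37.5 (≈38) read / 75 write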

Those numbers are orders of magnitude below what a laptop NVMe drive (hundreds of thousands of IOPS) delivers.

What does “IOPS-limited” feel like?

On a throttled volume, the symptoms read like the intro in slow motion: docker build spends most of its wall-clock time waiting on I/O, docker pull stalls while extracting layers, and the kubelet starts raising throttling alerts and evicting pods as writes queue up.

(Even cloud vendors highlight this: AWS recommends gp3/io2 for “bursty, low-latency container workloads,” while Google suggests SSD PD for “image-heavy builds.”)
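
If you want to put a number on that feeling, a short fio run against the same volume shows how little headroom there is. A minimal sketch, assuming fio is installed and /var/lib/docker lives on the volume in question; the scratch directory and job sizes are illustrative:

    # Random 4 KiB writes with a modest queue depth, roughly the shape of layer
    # extraction and copy-on-write traffic. On a 100-IOPS gp2 volume the reported
    # IOPS flatline at the cap and latency climbs almost immediately.
    sudo mkdir -p /var/lib/docker/fio-scratch
    sudo fio --name=docker-iops-check --directory=/var/lib/docker/fio-scratch \
        --rw=randwrite --bs=4k --size=256m --ioengine=libaio --iodepth=16 \
        --numjobs=2 --direct=1 --runtime=30 --time_based --group_reporting
    sudo rm -rf /var/lib/docker/fio-scratch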

Tips for taming IOPS for Docker
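
The low-friction fixes the intro promised mostly come down to two moves: give Docker’s data directory a faster home (gp3/io2, SSD PD, or at least provisioned IOPS on the existing volume), and stop a single pull or build from monopolizing whatever budget remains. A minimal sketch of the second half, assuming a faster disk is already mounted at /mnt/fast-ssd; the path and the exact limit are illustrative, not recommendations:

    # /etc/docker/daemon.json, written via a heredoc so the whole block stays shell.
    # "data-root" relocates /var/lib/docker onto the faster disk, and
    # "max-concurrent-downloads" (default 3) caps how many layers one pull extracts
    # in parallel on the slow volume.
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "data-root": "/mnt/fast-ssd/docker",
      "storage-driver": "overlay2",
      "max-concurrent-downloads": 2
    }
    EOF
    sudo systemctl restart docker   # data-root changes only take effect on a daemon restart

Changing data-root does not migrate existing images or volumes; re-pull them or copy the old directory across before restarting. Periodic docker system prune also keeps dead layers from eating the little I/O headroom the volume has.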

Bottom line

Cheap, low-IOPS volumes look tempting on the invoice, but Docker’s layer-rich, small-file workload will hit their ceilings fast, turning basic operations into patience tests. A modest upgrade to SSD-class storage (or at least provisioned IOPS) usually pays for itself in developer time saved and pod stability gained.