# Benchmarks

## Results
- PutFS on ZFS (HDD+SSD) is 3–4x faster than MinIO on XFS/SSD-only for concurrent reads
- PutFS writes are 2–6x faster than MinIO regardless of MinIO's filesystem
- PutFS directory listings are much faster than MinIO
- PutFS uses 30x less memory
- Large sequential reads are equal across all configurations
- MinIO wins on bulk parallel uploads (`mc mirror` vs `putfs sync`)
## Small file random reads (10K files, 1–100 KB, wrk, 30s)
| Concurrency | PutFS (ZFS) | MinIO (XFS/SSD) | Ratio |
|---|---|---|---|
| 1 | 11,448 req/s | 2,771 req/s | 4.1x |
| 10 | 52,676 req/s | 17,319 req/s | 3.0x |
| 100 | 82,891 req/s | 22,945 req/s | 3.6x |
| 1000 | 86,141 req/s | 24,493 req/s | 3.5x |
PutFS is 3–4x faster on concurrent small-file reads: nginx serves files via sendfile directly from the ZFS ARC, so Python is never involved on the read path.
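The nginx side of that path can be sketched as a minimal location block (the location prefix and data root are assumptions for illustration, not PutFS's actual config):

```nginx
# Hypothetical fragment: serve objects straight from the filesystem.
# sendfile lets the kernel copy from the page cache / ZFS ARC to the
# socket, so no application process touches the bytes on reads.
location /objects/ {
    root /srv/putfs/data;   # assumed data root
    sendfile on;
    tcp_nopush on;          # fill full packets before sending
}
```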
## Single-file write latency
| File size | PutFS (ZFS) | MinIO (ZFS) | MinIO (XFS/SSD) |
|---|---|---|---|
| 1 KB | 14.1ms | 45.8ms | 25.4ms |
| 10 KB | 14.3ms | 29.2ms | 25.8ms |
| 100 KB | 11.3ms | 27.0ms | 27.9ms |
| 1 MB | 10.6ms | 45.1ms | 55.2ms |
| 10 MB | 18.1ms | 100.1ms | 116.5ms |
| 100 MB | 103ms | 620ms | 620ms |
| 1 GB | 1.0s | 5.1s | 6.3s |
| 10 GB | 46.6s | 71.9s | 68.1s |
PutFS is 2–6x faster than both MinIO configurations across all file sizes.
## Bulk upload (10K files, 1–100 KB)
| System | Method | files/s |
|---|---|---|
| MinIO (ZFS) | `mc mirror` | 2,434 |
| MinIO (XFS/SSD) | `mc mirror` | 1,730 |
| PutFS (ZFS) | `putfs sync` | 1,349 |
MinIO's `mc mirror` wins on bulk parallel uploads – Go's native concurrency outperforms Python's threaded `putfs sync`. The gap has narrowed significantly since switching from sequential `curl` uploads to threaded sync.
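For reproducing this workload, a corpus of the shape described above (many small files with random sizes) can be generated with a short script. The file-naming scheme and directory layout here are assumptions, not what the benchmark scripts necessarily use:

```python
import os
import random

def make_corpus(root, count=10_000, min_size=1_024, max_size=100_000, seed=0):
    """Write `count` files with random sizes in [min_size, max_size] bytes.

    A fixed seed keeps the size distribution reproducible across runs,
    so different systems are benchmarked against identical data.
    """
    rng = random.Random(seed)
    os.makedirs(root, exist_ok=True)
    for i in range(count):
        size = rng.randint(min_size, max_size)
        with open(os.path.join(root, f"file-{i:05d}.bin"), "wb") as f:
            f.write(os.urandom(size))

# Small run for illustration; the benchmark uses count=10_000.
make_corpus("corpus", count=100)
```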
## Large file sequential read (1 GB)
| System | Time | Throughput |
|---|---|---|
| PutFS (ZFS) | 413ms | 2,478 MB/s |
| MinIO (ZFS) | 445ms | 2,301 MB/s |
| MinIO (XFS/SSD) | 433ms | 2,365 MB/s |
Nearly equal – all systems are memory/cache-bound on large sequential reads.
## Directory listing
| Dataset size | PutFS (ZFS) | MinIO (ZFS) | MinIO (XFS/SSD) |
|---|---|---|---|
| 1K objects | 9ms | 69ms | 72ms |
| 10K objects | 10ms | 670ms | 483ms |
| 100K objects | 11ms | 4,294ms | 4,290ms |
PutFS is 8–390x faster on listings. The near-constant ~10 ms regardless of dataset size comes from streaming `os.scandir` results directly to the client. MinIO's listing time grows linearly with object count – its internal metadata overhead dominates.
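The constant-time behaviour comes from never materialising the full listing before responding. A minimal sketch of the idea (not PutFS's actual code; the JSON-lines format is an assumption):

```python
import json
import os

def stream_listing(path):
    """Yield one JSON line per directory entry, without building a list.

    os.scandir returns a lazy iterator whose entries carry cached stat
    data, so each line can be flushed to the client as soon as the entry
    is read from the filesystem. Memory use stays flat even for 100K
    entries, and the first byte reaches the client almost immediately.
    """
    with os.scandir(path) as it:
        for entry in it:
            st = entry.stat(follow_symlinks=False)
            yield json.dumps({
                "name": entry.name,
                "size": st.st_size,
                "is_dir": entry.is_dir(follow_symlinks=False),
            }) + "\n"
```

A WSGI/ASGI handler can return this generator directly as a chunked response body; nginx then streams the chunks through, which is why response time barely depends on dataset size.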
## Memory footprint (idle)
| System | Total |
|---|---|
| PutFS (ZFS) | 186 MB (API: 159 MB + nginx: 27 MB) |
| MinIO (XFS/SSD) | 5,582 MB |
PutFS uses 30x less memory.
## Summary
| Metric | PutFS (ZFS) | MinIO (XFS/SSD) | Ratio |
|---|---|---|---|
| Reads (c=1000) | 86,141 req/s | 24,493 req/s | 3.5x |
| Write 1 KB | 14.1ms | 25.4ms | 1.8x |
| Write 1 GB | 1.0s | 6.3s | 6.3x |
| Large read 1 GB | 2,478 MB/s | 2,365 MB/s | ~equal |
| List 100K | 11ms | 4,290ms | 390x |
| Bulk upload | 1,349 files/s | 1,730 files/s | 0.8x |
| Memory idle | 186 MB | 5,582 MB | 30x |
## Tuning
Write latency has been reduced by ~15–20% by using a Unix socket between nginx and Granian. See Granian tuning and Nginx tuning.
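The nginx-to-Granian hop can look like this (the socket path is an assumption; Granian's docs cover how to bind the matching Unix socket on the app side):

```nginx
# Hypothetical fragment: proxy API requests over a Unix domain socket
# instead of TCP loopback, skipping the TCP/IP stack entirely.
upstream putfs_api {
    server unix:/run/putfs/granian.sock;  # assumed socket path
    keepalive 32;                         # reuse connections to the app
}

location /api/ {
    proxy_pass http://putfs_api;
    proxy_http_version 1.1;
    proxy_set_header Connection "";       # required for upstream keepalive
}
```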
### `access_log`
`access_log off` is critical for nginx performance. In our benchmarks, `access_log /dev/stdout` reduced throughput from 104K req/s to 3.8K req/s – a 27x penalty. See Nginx tuning for details.
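A sketch of the relevant directives (the buffered variant is a standard nginx option, shown here as an alternative when logs are actually needed):

```nginx
# Disabling the access log removes a synchronous write (and, with
# /dev/stdout in a container, a pipe write) from every request.
http {
    access_log off;
    # If request logs are required, buffering amortises the cost:
    # access_log /var/log/nginx/access.log combined buffer=64k flush=5s;
}
```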
## Test setup
Both PutFS and MinIO ran in Docker on the same server. PutFS used the best ZFS setup available for this hardware (HDD mirror, SSD special vdev, SSD SLOG, SSD L2ARC); MinIO was tested on its recommended setup (single-disk XFS on SSD, mounted `noatime,nodiratime`). PutFS's small files land on the ZFS special vdev (SSD), making the comparison SSD-vs-SSD and therefore fair.
### Tuning
- PutFS:
- MinIO:
  - `MINIO_API_REQUESTS_MAX: "0"`
  - `MINIO_SCANNER_SPEED: "fastest"`
  - `MINIO_BROWSER: "off"`
  - `GOMAXPROCS: "0"`
### Hardware
| Component | Spec |
|---|---|
| CPU | Intel Core i7-8700 @ 3.20 GHz (6 cores, 12 threads) |
| RAM | 64 GB DDR4 |
| Storage | ZFS mirror: 2x 8TB HGST HDD |
| Special vdev | ~900 GB SSD (metadata + small files) |
| SLOG | 32 GB SSD (write intent log) |
| L2ARC | 128 GB SSD (read cache) |
| MinIO XFS/SSD | single SSD partition, XFS |
| OS | Debian 13 (trixie), kernel 6.12 |
Note: the special vdev was not mirrored. SLOG and L2ARC are on SATA SSDs, not NVMe.
## Running
All benchmark scripts live in `benchmark/`. See the benchmark README for details.
```sh
cd benchmark
make putfs   # PutFS benchmark
make minio   # MinIO comparison
make s3      # PutFS S3 API benchmark
```
Extract results as CSV: