Compare SSDs and HDDs: Disk Throughput Tester Tips and Best Practices
Overview

This article explains how disk throughput differs between SSDs and HDDs, how to test throughput accurately, and practical tips to get reliable, comparable results. It’s aimed at engineers, sysadmins, and power users who need to evaluate storage performance for workloads.

Key differences: SSD vs HDD

  • Latency: SSDs have much lower latency (microseconds vs milliseconds), so throughput tests should account for small I/O wait times.
  • Random vs Sequential I/O: SSDs excel at random I/O; HDDs perform best with large sequential transfers due to mechanical seek limits.
  • Sustained throughput: SSDs (especially consumer NVMe/SATA) deliver higher sustained throughput; some TLC/QLC drives may throttle under long writes due to SLC caching.
  • I/O queue depth sensitivity: SSD throughput often increases with higher queue depths; HDDs show diminishing returns and increased seek overhead.
  • Wear and thermal throttling: SSDs can show variable throughput over long tests due to wear-leveling and thermal throttling; HDDs may warm up but are less affected by thermal throttling.
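The latency gap above explains most of the IOPS difference between the two device types. Under a simple queuing model (Little's law), achievable IOPS is roughly queue depth divided by average latency. A minimal sketch, using illustrative order-of-magnitude latencies rather than measurements of any specific drive:

```python
# Rough IOPS estimate from device latency via Little's law:
# concurrency = throughput * latency  =>  IOPS ~= queue_depth / avg_latency_s
# The latency figures below are illustrative assumptions, not measurements.

def estimate_iops(queue_depth: int, avg_latency_s: float) -> float:
    """Upper-bound IOPS for a device completing each I/O in avg_latency_s."""
    return queue_depth / avg_latency_s

# SSD: ~100 microseconds per 4K random read; HDD: ~10 ms (seek + rotation).
ssd_iops = estimate_iops(queue_depth=1, avg_latency_s=100e-6)  # ~10,000
hdd_iops = estimate_iops(queue_depth=1, avg_latency_s=10e-3)   # ~100

print(f"SSD ~{ssd_iops:,.0f} IOPS vs HDD ~{hdd_iops:,.0f} IOPS at QD=1")
```

This is an upper bound: real devices also hit controller, bus, and firmware limits, which is why measured results at high queue depth flatten out.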

Test setup recommendations

  1. Isolate the device
    • Run tests in single-user mode or ensure minimal background I/O (stop services, disable indexing, pause backups).
  2. Use fresh test files
    • For write tests, use large files and avoid filesystem cache effects (use direct I/O, e.g., the O_DIRECT flag).
  3. Choose representative block sizes
    • Test multiple block sizes: 4K (random), 64K, 256K, 1M (sequential). These reflect common workload patterns.
  4. Vary I/O patterns
    • Run: random read, random write, sequential read, sequential write, and mixed read/write (e.g., 70/30).
  5. Set appropriate queue depths
    • Test QD=1, 4, 16, 32, 128 to see how throughput scales. SSDs typically benefit from higher QD.
  6. Warm up and run sustained tests
    • For SSDs, include a warm-up phase (2–5 minutes) then run sustained tests (5–15 minutes) to observe throttling or performance drops.
  7. Control caching
    • Disable OS cache or use direct I/O flags. Record cached vs uncached results if relevant to real workloads.
  8. Repeat and average
    • Run each test multiple times and report median and standard deviation to account for variability.
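Step 8 can be sketched in a few lines with Python's standard library. The run values below are hypothetical MB/s results from five repeats of the same sequential-read test:

```python
# Aggregate repeated benchmark runs: report median and standard deviation
# rather than the single best run. Values are hypothetical MB/s results.
import statistics

runs_mb_s = [512.4, 498.7, 505.1, 611.0, 501.9]  # one outlier (cache hit?)

median = statistics.median(runs_mb_s)
stdev = statistics.stdev(runs_mb_s)  # sample standard deviation

print(f"median={median:.1f} MB/s, stdev={stdev:.1f} MB/s")
```

Note how the median (505.1) shrugs off the outlier run while the mean would not; the large standard deviation flags that the runs are not yet consistent enough to report.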

Tools and commands

  • Linux
    • fio: flexible, supports direct I/O, mixed workloads, queue depth, and runtime. Example fio job (random 4K read, QD16):

      Code

      [global]
      ioengine=libaio
      direct=1
      bs=4k
      iodepth=16
      runtime=300
      time_based

      [randread]
      rw=randread
      filename=/dev/nvme0n1
    • hdparm: quick sequential read checks (read-only).
    • dd (with oflag=direct) for simple sequential writes/reads.
  • Windows
    • DiskSpd: Microsoft tool for precise I/O patterns, queue depth, and duration.
    • CrystalDiskMark: user-friendly for quick checks (note caching behavior).
  • Cross-platform
    • iozone: filesystem and raw device testing across platforms.
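fio can emit machine-readable results with --output-format=json, which makes it easy to collect the headline numbers across many runs. A small parsing sketch; the field layout ("jobs" → per-job "read"/"write" dicts with "iops" and "bw" in KiB/s) follows fio's JSON schema in recent versions, and the sample string stands in for a real run:

```python
# Extract IOPS and bandwidth from fio JSON output (fio --output-format=json).
# "bw" is reported in KiB/s in fio's JSON schema. The embedded sample string
# stands in for output captured from a real fio run.
import json

fio_json = """{"jobs": [{"jobname": "randread",
                         "read": {"iops": 91234.5, "bw": 364938}}]}"""

result = json.loads(fio_json)
for job in result["jobs"]:
    iops = job["read"]["iops"]
    mib_s = job["read"]["bw"] / 1024  # KiB/s -> MiB/s
    print(f"{job['jobname']}: {iops:,.0f} IOPS, {mib_s:.1f} MiB/s")
```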

Interpreting results

  • Throughput vs IOPS: For small block sizes, report IOPS; for large block sizes, MB/s throughput is more meaningful.
  • Latency: Include average and tail latencies (p99, p99.9) — critical for database and interactive workloads.
  • Sustained vs burst performance: Note differences between initial peak performance and sustained numbers.
  • Thermal and power effects: Correlate performance drops with device temperature and power limits.
  • Consistency: Prefer median and percentiles over single best-run numbers.
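The IOPS-versus-throughput distinction above is just a unit conversion: throughput equals IOPS times block size. A short sketch with hypothetical numbers, plus a simple nearest-rank tail percentile of the kind recommended above:

```python
# Convert between IOPS and MB/s (throughput = IOPS * block size), and
# compute a tail-latency percentile. All numbers are illustrative.

def iops_to_mb_s(iops: float, block_bytes: int) -> float:
    return iops * block_bytes / 1_000_000  # decimal MB, as vendors quote

def percentile(samples, p):
    """Nearest-rank percentile; adequate for quick benchmark summaries."""
    ordered = sorted(samples)
    k = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[k]

# 100k IOPS at 4K blocks is only ~400 MB/s; the same drive may post several
# GB/s at 1M blocks, which is why both metrics belong in a report.
print(iops_to_mb_s(100_000, 4096))  # 409.6

latencies_us = [80, 85, 90, 95, 100, 110, 120, 150, 300, 900]  # fake samples
print(percentile(latencies_us, 99))  # tail dominated by the 900 us outlier
```

The p99 here is driven entirely by one slow sample, illustrating why averages alone hide exactly the behavior that hurts database and interactive workloads.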

Best practices by use case

  • Databases (OLTP): Focus on random 4K–16K reads/writes, low latency, and p99 latency. Use high QD and measure mixed workloads.
  • File servers / Media streaming: Emphasize large sequential throughput (64K–1M).
  • Virtualization: Test mixed read/write with many concurrent streams at moderate QD to simulate the aggregate I/O of multiple guests sharing one device.
