Dell 2950, PERC 6 hardware RAID with battery-backed cache (BBC), 6 x 15K RPM 3.5" disks in RAID-10 with a 256K stripe across two channels, using write-through cache, formatted as an ext3 filesystem with mkfs.ext3 -T largefile4.
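For reference, the -T largefile4 profile tells mke2fs to allocate roughly one inode per 4 MB, which suits a volume holding a small number of large files. A minimal sketch, with /dev/sdb1 standing in for the actual RAID device:

    mkfs.ext3 -T largefile4 /dev/sdb1                # ext3, few inodes
    tune2fs -l /dev/sdb1 | grep -i 'inode count'     # confirm the reduced inode count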
The theory is that the outer part of the spindles is the fastest and the inner portion is slower, and since data is laid out starting at the outer edge (thanks for the info, Benjamin Schweizer), one can conclude that the more disk space your application(s) use, the lower the throughput, because the heads have to move more. Brad F., my co-worker, ran a benchmark to test this. Our goal was to find the saturation point, given that we expect 22 MB/sec of random-access throughput.
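You can see the outer-vs-inner effect directly with a couple of sequential reads against the raw device, one at the start and one near the end. A rough sketch, assuming the array shows up as /dev/sdb (reads only, so it is safe on a live volume; the offsets are purely illustrative):

    dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct                 # ~outer tracks, start of the device
    dd if=/dev/sdb of=/dev/null bs=1M count=1024 skip=760000 iflag=direct     # ~inner tracks, near the end of an ~800G array

dd reports the throughput of each read, and the second number should come out noticeably lower.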
Why do we want 22 MB/sec of random-access throughput? We want to guarantee a certain level of performance when adding new apps to a common backend, which is I/O bound: we need to know when things will break.
Here is what Brad found. Total disk size for our RAID-10 setup =~ 800G. At what point does it FAIL to achieve our expectation of a sustained 22 MB/s?
rndrw test across 100G test / 750G LV =~ 35 MB/s # outer part of the spindles
rndrw test across 100G test / 300G LV =~ 32 MB/s # outer part of the spindles
rndrw test across 250G test / 300G LV =~ 24 MB/s # sweet spot
rndrw test across 350G test / 384G LV =~ 21 MB/s # saturation point
rndrw test across 750G test / 800G LV =~ 14 MB/s # waste of space
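The rndrw tests above came from a custom script; the exact tool isn't reproduced here, but a comparable mixed random read/write load can be generated with fio. A rough sketch, assuming the LV is mounted at /mnt/data and using a 100G working set (block size, read/write mix, and runtime are illustrative, not the parameters Brad used):

    fio --name=rndrw --directory=/mnt/data --rw=randrw --rwmixread=50 \
        --bs=64k --size=100g --direct=1 --ioengine=libaio \
        --runtime=600 --time_based --group_reporting

Repeating the run with --size set to each test size above, against LVs of the matching sizes, should reproduce the shape of the curve even if the absolute numbers differ.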
In conclusion, these tests show that even though the RAID-10 setup provides 800G of space, performance drops below our expectation once the data exceeds the sweet spot of roughly 250G-300G of the 800G usable array.
Disclaimers: there are many factors that can raise or lower the bar, such as different file systems, different I/O schedulers, and flush behavior. For my setups I like:
Deadline I/O scheduler (see the example after this list)
few inodes (don't need them)
ext3, since that's what's stable and available.
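A quick sketch of switching to the Deadline scheduler (sdb is a placeholder device name; the few-inodes part is already handled by mkfs.ext3 -T largefile4 shown earlier):

    echo deadline > /sys/block/sdb/queue/scheduler   # runtime switch for one device
    cat /sys/block/sdb/queue/scheduler               # the active scheduler shows in brackets
    # to make it the default for all devices, boot with: elevator=deadline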