Along with the maximal possible fsync/sec, it is interesting to see how different software RAID modes affect throughput on FusionIO cards.
The short conclusion: the RAID10 modes really disappointed me; the detailed numbers follow.
To get the numbers I ran the sysbench fileio
test with a 16KB block size, random reads and writes, 1 and 16 threads, and O_DIRECT mode.
The FusionIO cards are the same as in the previous experiment, and I am running XFS with the nobarrier mount option.
The OS is CentOS 5.3 with the 2.6.18-128.1.10.el5 kernel.
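For reference, a minimal sketch of how a card can be formatted and mounted this way (the /dev/fioa device name and /mnt/benchmark mount point are only placeholders, not taken from my setup):

# format the card with XFS and mount it with write barriers disabled
# (device name and mount point are placeholders, adjust to your layout)
mkfs.xfs -f /dev/fioa
mkdir -p /mnt/benchmark
mount -t xfs -o nobarrier /dev/fioa /mnt/benchmark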
For the RAID modes I used:
- single card (for baseline)
- RAID0 over 2 FusionIO cards
- RAID1 over 2 FusionIO cards
- RAID1 over 2 RAID0 arrays (4 cards in total)
- RAID0 over 2 RAID1 arrays (4 cards in total)
- the special RAID10 mode with the n2 layout (4 cards in total)
The last mode can be created as:
mdadm --create --verbose /dev/md0 --level=10 --layout=n2 --raid-devices=4 --chunk=64 /dev/fioa /dev/fiob /dev/fioc /dev/fiod
In all cases I used a 64KB chunk size (the effect of different chunk sizes is also an interesting question).
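For completeness, the other arrays can be assembled in a similar way; the commands below are only a sketch, with the /dev/fio* device names and /dev/md* numbers as placeholders (each variant would be created on its own, not all at once):

# RAID0 over 2 cards
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/fioa /dev/fiob
# RAID1 over 2 cards
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/fioa /dev/fiob
# RAID1 over 2 RAID0 arrays (4 cards): build the two stripes first, then mirror them
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/fioa /dev/fiob
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 --chunk=64 /dev/fioc /dev/fiod
mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
# RAID0 over 2 RAID1 arrays (4 cards): build the two mirrors first, then stripe over them
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/fioa /dev/fiob
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/fioc /dev/fiod
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2 --chunk=64 /dev/md0 /dev/md1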
There is a graph for the 16-thread runs, and the raw results are below.
As expected, RAID1 over 2 disks takes a hit on write throughput compared to a single disk
(a mirrored write has to be committed to both devices), but the RAID10 modes over 4 disks surprised me, showing an almost 2x drop in writes.
Only with the RAID10 n2 layout do random reads skyrocket, while writes are roughly equal to a single disk.
This makes me question whether RAID1 mode is really usable, and how it performs
on regular hard drives or SSDs.
The performance drop in the RAID configurations is unexpected; I am working with Fusion-io engineers to figure out the issue.
The next thing I am going to look into is different page sizes.
Raw results (in requests / seconds, more is better):
| mode | read/1 | read/16 | write/1 | write/16 |
|------|--------|---------|---------|----------|
| single disk | 12765.49 | 31604.86 | 14357.65 | 32447.07 |
| raid0, 2 disks | 12046.12 | 57410.58 | 12993.91 | 43023.12 |
| raid1, 2 disks | 11484.17 | 51084.02 | 9821.12 | 15220.57 |
| raid1 over raid0, 4 disks | 10227.13 | 61392.25 | 7395.75 | 13536.86 |
| raid0 over raid1, 4 disks | 10810.08 | 66316.29 | 8830.49 | 18687.97 |
| raid10 n2, 4 disks | 11612.89 | 99170.51 | 10634.62 | 31038.50 |
Script for reference:
#!/bin/sh
set -u
set -x
set -e

for size in 50G; do
  for mode in rndrd rndwr; do
  #for mode in rndwr; do
    #for blksize in 512 4096 8192 16384 32768 65536 ; do
    for blksize in 16384 ; do
      # prepare the 64 test files for this total size
      sysbench --test=fileio --file-num=64 --file-total-size=$size prepare
      #for threads in 1 4 8; do
      for threads in 1 16 ; do
        echo "====== testing $blksize in $threads threads"
        echo PARAMS $size $mode $threads $blksize > sysbench-size-$size-mode-$mode-threads-$threads-blksz-$blksize
        # 3 runs per configuration, 180 seconds each, O_DIRECT, fsync disabled
        for i in 1 2 3 ; do
          sysbench --test=fileio --file-total-size=$size --file-test-mode=$mode \
            --max-time=180 --max-requests=100000000 --num-threads=$threads --init-rng=on \
            --file-num=64 --file-extra-flags=direct --file-fsync-freq=0 --file-block-size=$blksize run \
            | tee -a sysbench-size-$size-mode-$mode-threads-$threads-blksz-$blksize 2>&1
        done
      done
      # remove the test files (same file-num as prepare)
      sysbench --test=fileio --file-num=64 --file-total-size=$size cleanup
    done
  done
done
Entry posted by Vadim