Performance Tests
Table of Contents
- 1. fio on 16 TB Toshiba disks (lxfs463)
- 2. fio on single 14 TB disks
- 3. fio on single 10 TB disks
- 4. fio on single 6 TB disks
- 5. fio on hebetest
- 6. lxmds12 with WD 60 disk enclosure
- 7. lxdr04
- 8. lxfs532 with 18 TB disks
- 8.1. iozone
- 8.2. fio: zfs recordsize 1M, fio blocksize 1M
- 8.3. fio: zfs recordsize 1M, fio blocksize 128k
- 8.4. fio: zfs recordsize 128k, fio blocksize 128k
- 8.5. fio: zfs recordsize 128k, fio blocksize 1M
- 8.6. fio: zfs recordsize 1M, fio blocksize 1M, size 4G, numJobs 320
- 8.7. iotest
- 8.8. iozone Christo
- 9. lxmds25 RAID-10 with 10 SSDs, Strip Size = 256 KB
- 9.1. iozone
- 9.2. fio: blocksize 1M, size 20G, numJobs 40
- 9.3. fio: fio blocksize 128k, size 20G, numJobs 40
- 9.4. fio: fio blocksize 1M, size 4G, numJobs 320
- 9.5. iozone Christo r=256k
- 9.6. iozone Christo r=1M
- 9.7. fio: fio blocksize 1M, size 10G, numJobs 320
- 9.8. fio: fio blocksize 128k, size 10G, numJobs 320
- 9.9. fio: fio blocksize 128k, size 1G, numJobs 3200
- 10. lxmds25 DRBD on Raid-10 with 10 SSDs, Strip Size = 256 KB, connected to lxmds26, protocol C
- 10.1. iozone
- 10.2. fio: blocksize 1M, size 20G, numJobs 40
- 10.3. fio: fio blocksize 128k, size 20G, numJobs 40
- 10.4. fio: fio blocksize 1M, size 4G, numJobs 320
- 10.5. iozone Christo r=256k
- 10.6. iozone Christo r=1M
- 10.7. fio: fio blocksize 1M, size 10G, numJobs 320
- 10.8. fio: fio blocksize 128k, size 10G, numJobs 320
- 10.9. fio: fio blocksize 128k, size 1G, numJobs 3200
- 10.10. fio: Thomas-Krenn write latency test
- 10.11. fio: Thomas-Krenn read latency test
- 10.12. fio: Thomas-Krenn IOPS write test
- 10.13. fio: Thomas-Krenn IOPS read test
- 10.14. fio IOPS write test 32 * 64
- 11. lxmds25 DRBD on Raid-10 with 10 SSDs, Strip Size = 256 KB, connected to lxmds26, protocol C, ldiskfs
- 11.1. iozone
- 11.2. fio: blocksize 1M, size 20G, numJobs 40
- 11.3. fio: fio blocksize 128k, size 20G, numJobs 40
- 11.4. fio: fio blocksize 1M, size 4G, numJobs 320
- 11.5. iozone Christo r=256k
- 11.6. iozone Christo r=1M
- 11.7. fio: fio blocksize 1M, size 10G, numJobs 320
- 11.8. fio: fio blocksize 128k, size 10G, numJobs 320
- 11.9. fio: fio blocksize 128k, size 1G, numJobs 3200
- 12. lxmds26
- 13. lxmds27
- 13.1. iozone
- 13.2. fio: blocksize 1M, size 20G, numJobs 40
- 13.3. fio: fio blocksize 128k, size 20G, numJobs 40
- 13.4. fio: fio blocksize 1M, size 4G, numJobs 320
- 13.5. iozone Christo r=256k
- 13.6. iozone Christo r=1M
- 13.7. fio: fio blocksize 1M, size 10G, numJobs 320
- 13.8. fio: fio blocksize 128k, size 10G, numJobs 320
- 13.9. fio: fio blocksize 128k, size 1G, numJobs 3200
- 13.10. fio: Thomas-Krenn write latency test
- 13.11. fio: Thomas-Krenn read latency test
- 13.12. fio: Thomas-Krenn IOPS write test
- 13.13. fio: Thomas-Krenn IOPS read test
- 14. lxmds28
- 14.1. iozone
- 14.2. fio: blocksize 1M, size 20G, numJobs 40
- 14.3. fio: fio blocksize 128k, size 20G, numJobs 40
- 14.4. fio: fio blocksize 1M, size 4G, numJobs 320
- 14.5. iozone Christo r=256k
- 14.6. iozone Christo r=1M
- 14.7. fio: fio blocksize 1M, size 10G, numJobs 320
- 14.8. fio: fio blocksize 128k, size 10G, numJobs 320
- 14.9. fio: fio blocksize 128k, size 1G, numJobs 3200
- 15. lxmds29
- 15.1. iozone
- 15.2. fio: blocksize 1M, size 20G, numJobs 40
- 15.3. fio: fio blocksize 128k, size 20G, numJobs 40
- 15.4. fio: fio blocksize 1M, size 4G, numJobs 320
- 15.5. iozone Christo r=256k
- 15.6. iozone Christo r=1M
- 15.7. fio: fio blocksize 1M, size 10G, numJobs 320
- 15.8. fio: fio blocksize 128k, size 10G, numJobs 320
- 15.9. fio: fio blocksize 128k, size 1G, numJobs 3200
- 15.10. fio: Thomas-Krenn IOPS write test
- 15.11. fio: Thomas-Krenn IOPS read test
- 15.12. fio: Thomas-Krenn write latency test
- 15.13. fio: Thomas-Krenn read latency test
- 15.14. fio IOPS write test 32 * 64
- 16. lxmds30
- 16.1. iozone
- 16.2. fio: blocksize 1M, size 20G, numJobs 40
- 16.3. fio: fio blocksize 128k, size 20G, numJobs 40
- 16.4. fio: fio blocksize 1M, size 4G, numJobs 320
- 16.5. iozone Christo r=256k
- 16.6. iozone Christo r=1M
- 16.7. fio: fio blocksize 1M, size 10G, numJobs 320
- 16.8. fio: fio blocksize 128k, size 10G, numJobs 320
- 16.9. fio: fio blocksize 128k, size 1G, numJobs 3200
- 17. cephfs on lxbk0377
- 17.1. iozone
- 17.2. fio: blocksize 1M, size 20G, numJobs 40
- 17.3. fio: fio blocksize 128k, size 20G, numJobs 40
- 17.4. fio: fio blocksize 1M, size 4G, numJobs 320
- 17.5. iozone Christo r=256k
- 17.6. iozone Christo r=1M
- 17.7. fio: Thomas-Krenn write latency test
- 17.8. fio: Thomas-Krenn read latency test
- 17.9. fio: Thomas-Krenn IOPS write test
- 17.10. fio: Thomas-Krenn IOPS read test
- 17.11. fio IOPS write test 32 * 64
1 fio on 16 TB Toshiba disks (lxfs463)
[root@lxfs463 ~]# zpool status
  pool: sechzehn
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        sechzehn    ONLINE       0     0     0
          srvb0     ONLINE       0     0     0
          srvb1     ONLINE       0     0     0
          srvb2     ONLINE       0     0     0
          srvb3     ONLINE       0     0     0
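For reference, a pool with this layout (four top-level vdevs, i.e. a plain stripe without redundancy) could be recreated as sketched below. The command is an assumption, not taken from the shell history; the /dev/disk/by-location paths follow the naming used for the single-disk pools in section 2.

# sketch: 4-disk striped pool matching the zpool status output above
zpool create sechzehn \
    /dev/disk/by-location/srvb0 \
    /dev/disk/by-location/srvb1 \
    /dev/disk/by-location/srvb2 \
    /dev/disk/by-location/srvb3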
1.1 zfs recordsize 128k, fio blocksize 1M
fio --rw=randrw --name=test1 --fallocate=none --size=4G --bs=1M --numj=21 --group_reporting --output=fio_randrw_s4G_bs128k_numj21_recs128k-1.out
1.2 zfs recordsize 128k, fio blocksize 128k
fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_numj40_recs128k-2.out
read: IOPS=749, BW=93.6MiB/s (98.2MB/s)(400GiB/4374180msec)
write: IOPS=749, BW=93.7MiB/s (98.2MB/s)(400GiB/4374180msec)
READ: bw=93.6MiB/s (98.2MB/s), 93.6MiB/s-93.6MiB/s (98.2MB/s-98.2MB/s), io=400GiB (429GB), run=4374180-4374180msec
WRITE: bw=93.7MiB/s (98.2MB/s), 93.7MiB/s-93.7MiB/s (98.2MB/s-98.2MB/s), io=400GiB (430GB), run=4374180-4374180msec
- Result
test2: (groupid=0, jobs=40): err= 0: pid=25675: Mon Nov 25 12:38:47 2019
  read: IOPS=749, BW=93.6MiB/s (98.2MB/s)(400GiB/4374180msec)
    clat (usec): min=70, max=1474.4k, avg=43564.48, stdev=40087.93
     lat (usec): min=70, max=1474.4k, avg=43565.02, stdev=40087.94
    clat percentiles (msec):
     |  1.00th=[    6],  5.00th=[   10], 10.00th=[   13], 20.00th=[   18],
     | 30.00th=[   22], 40.00th=[   26], 50.00th=[   31], 60.00th=[   38],
     | 70.00th=[   47], 80.00th=[   62], 90.00th=[   90], 95.00th=[  121],
     | 99.00th=[  201], 99.50th=[  239], 99.90th=[  342], 99.95th=[  397],
     | 99.99th=[  542]
   bw (  KiB/s): min=  256, max=22272, per=2.70%, avg=2591.82, stdev=907.95, samples=324751
   iops        : min=    2, max=  174, avg=20.17, stdev= 7.07, samples=324751
  write: IOPS=749, BW=93.7MiB/s (98.2MB/s)(400GiB/4374180msec)
    clat (usec): min=38, max=37815, avg=5994.46, stdev=4389.17
     lat (usec): min=41, max=37819, avg=6000.23, stdev=4389.72
    clat percentiles (usec):
     |  1.00th=[   64],  5.00th=[  206], 10.00th=[  783], 20.00th=[ 1680],
     | 30.00th=[ 2704], 40.00th=[ 3982], 50.00th=[ 5342], 60.00th=[ 6783],
     | 70.00th=[ 8291], 80.00th=[10028], 90.00th=[12256], 95.00th=[13960],
     | 99.00th=[16909], 99.50th=[17957], 99.90th=[20055], 99.95th=[20579],
     | 99.99th=[22414]
   bw (  KiB/s): min=  255, max=26624, per=2.71%, avg=2602.94, stdev=1337.69, samples=323450
   iops        : min=    1, max=  208, avg=20.26, stdev=10.41, samples=323450
  lat (usec)   : 50=0.01%, 100=2.28%, 250=0.43%, 500=0.37%, 750=1.47%
  lat (usec)   : 1000=1.39%
  lat (msec)   : 2=6.01%, 4=8.19%, 10=22.34%, 20=20.99%, 50=22.72%
  lat (msec)   : 100=9.90%, 250=3.70%, 500=0.20%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2000=0.01%
  cpu          : usr=0.05%, sys=0.53%, ctx=9839217, majf=0, minf=14584
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=3276387,3277213,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=93.6MiB/s (98.2MB/s), 93.6MiB/s-93.6MiB/s (98.2MB/s-98.2MB/s), io=400GiB (429GB), run=4374180-4374180msec
  WRITE: bw=93.7MiB/s (98.2MB/s), 93.7MiB/s-93.7MiB/s (98.2MB/s-98.2MB/s), io=400GiB (430GB), run=4374180-4374180msec
1.3 zfs recordsize 1M, fio blocksize 1M
fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_numj40_recs128k-4.out
read: IOPS=302, BW=303MiB/s (318MB/s)(399GiB/1349048msec)
write: IOPS=304, BW=304MiB/s (319MB/s)(401GiB/1349048msec)
READ: bw=303MiB/s (318MB/s), 303MiB/s-303MiB/s (318MB/s-318MB/s), io=399GiB (429GB), run=1349048-1349048msec
WRITE: bw=304MiB/s (319MB/s), 304MiB/s-304MiB/s (319MB/s-319MB/s), io=401GiB (430GB), run=1349048-1349048msec
- Result
test4: (groupid=0, jobs=40): err= 0: pid=28933: Mon Nov 25 15:33:38 2019
  read: IOPS=302, BW=303MiB/s (318MB/s)(399GiB/1349048msec)
    clat (usec): min=153, max=1262.5k, avg=111096.28, stdev=87015.49
     lat (usec): min=153, max=1262.5k, avg=111097.02, stdev=87015.49
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[   26], 10.00th=[   34], 20.00th=[   46],
     | 30.00th=[   58], 40.00th=[   71], 50.00th=[   86], 60.00th=[  104],
     | 70.00th=[  128], 80.00th=[  163], 90.00th=[  224], 95.00th=[  284],
     | 99.00th=[  426], 99.50th=[  485], 99.90th=[  642], 99.95th=[  709],
     | 99.99th=[  894]
   bw (  KiB/s): min= 1432, max=52813, per=2.59%, avg=8019.64, stdev=3182.37, samples=105321
   iops        : min=    1, max=   51, avg= 7.74, stdev= 3.07, samples=105321
  write: IOPS=304, BW=304MiB/s (319MB/s)(401GiB/1349048msec)
    clat (usec): min=185, max=125501, avg=18195.85, stdev=15561.01
     lat (usec): min=213, max=125605, avg=18271.91, stdev=15572.80
    clat percentiles (usec):
     |  1.00th=[   457],  5.00th=[  1401], 10.00th=[  1893], 20.00th=[  4047],
     | 30.00th=[  7570], 40.00th=[ 11469], 50.00th=[ 15664], 60.00th=[ 19530],
     | 70.00th=[ 23725], 80.00th=[ 28443], 90.00th=[ 35914], 95.00th=[ 46400],
     | 99.00th=[ 73925], 99.50th=[ 81265], 99.90th=[ 94897], 99.95th=[102237],
     | 99.99th=[113771]
   bw (  KiB/s): min= 1442, max=62566, per=2.79%, avg=8689.04, stdev=5500.91, samples=97654
   iops        : min=    1, max=   61, avg= 8.39, stdev= 5.31, samples=97654
  lat (usec)   : 250=0.01%, 500=0.56%, 750=0.48%, 1000=0.45%
  lat (msec)   : 2=3.98%, 4=4.49%, 10=8.28%, 20=13.49%, 50=27.88%
  lat (msec)   : 100=19.68%, 250=16.99%, 500=3.51%, 750=0.20%, 1000=0.01%
  lat (msec)   : 2000=0.01%
  cpu          : usr=0.08%, sys=0.70%, ctx=1404273, majf=0, minf=64376
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=408669,410531,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=303MiB/s (318MB/s), 303MiB/s-303MiB/s (318MB/s-318MB/s), io=399GiB (429GB), run=1349048-1349048msec
  WRITE: bw=304MiB/s (319MB/s), 304MiB/s-304MiB/s (319MB/s-319MB/s), io=401GiB (430GB), run=1349048-1349048msec
1.4 zfs recordsize 1M, fio blocksize 128k
fio --rw=randrw --name=test5 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_numj40_recs1M-5.out
read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(400GiB/19859966msec)
write: IOPS=165, BW=20.6MiB/s (21.6MB/s)(400GiB/19859966msec)
READ: bw=20.6MiB/s (21.6MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=400GiB (429GB), run=19859966-19859966msec
WRITE: bw=20.6MiB/s (21.6MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=400GiB (430GB), run=19859966-19859966msec
- Result
test5: (groupid=0, jobs=40): err= 0: pid=158131: Mon Nov 25 23:13:48 2019
  read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(400GiB/19859966msec)
    clat (usec): min=19, max=2055.3k, avg=101081.40, stdev=86004.49
     lat (usec): min=19, max=2055.3k, avg=101082.17, stdev=86004.49
    clat percentiles (usec):
     |  1.00th=[    53],  5.00th=[ 12911], 10.00th=[ 19792], 20.00th=[ 31065],
     | 30.00th=[ 43779], 40.00th=[ 58983], 50.00th=[ 77071], 60.00th=[ 98042],
     | 70.00th=[125305], 80.00th=[160433], 90.00th=[217056], 95.00th=[270533],
     | 99.00th=[387974], 99.50th=[442500], 99.90th=[574620], 99.95th=[641729],
     | 99.99th=[809501]
   bw (  KiB/s): min=  210, max= 8081, per=2.89%, avg=610.87, stdev=340.29, samples=1370384
   iops        : min=    1, max=   63, avg= 4.76, stdev= 2.65, samples=1370384
  write: IOPS=165, BW=20.6MiB/s (21.6MB/s)(400GiB/19859966msec)
    clat (usec): min=32, max=1891.4k, avg=138486.08, stdev=107780.43
     lat (usec): min=35, max=1891.4k, avg=138492.16, stdev=107780.45
    clat percentiles (usec):
     |  1.00th=[    84],  5.00th=[ 24511], 10.00th=[ 34341], 20.00th=[ 50594],
     | 30.00th=[ 68682], 40.00th=[ 88605], 50.00th=[111674], 60.00th=[137364],
     | 70.00th=[168821], 80.00th=[210764], 90.00th=[278922], 95.00th=[346031],
     | 99.00th=[505414], 99.50th=[574620], 99.90th=[750781], 99.95th=[826278],
     | 99.99th=[994051]
   bw (  KiB/s): min=  210, max=10146, per=2.75%, avg=580.20, stdev=298.68, samples=1443192
   iops        : min=    1, max=   79, avg= 4.52, stdev= 2.32, samples=1443193
  lat (usec)   : 20=0.01%, 50=0.43%, 100=1.35%, 250=0.15%, 500=0.45%
  lat (usec)   : 750=0.29%, 1000=0.01%
  lat (msec)   : 2=0.02%, 4=0.07%, 10=0.53%, 20=3.63%, 50=19.96%
  lat (msec)   : 100=26.02%, 250=37.11%, 500=9.35%, 750=0.58%, 1000=0.05%
  lat (msec)   : 2000=0.01%, >=2000=0.01%
  cpu          : usr=0.02%, sys=0.20%, ctx=10025840, majf=0, minf=26174
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=3276387,3277213,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=20.6MiB/s (21.6MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=400GiB (429GB), run=19859966-19859966msec
  WRITE: bw=20.6MiB/s (21.6MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=400GiB (430GB), run=19859966-19859966msec
2 fio on single 14 TB disks
zpool create pool0 /dev/disk/by-location/srvb0
[root@lxfs463 ~]# zfs get primarycache
NAME   PROPERTY      VALUE     SOURCE
pool0  primarycache  metadata  local
pool1  primarycache  none      local
pool2  primarycache  all       default
[root@lxfs463 ~]# zfs get sync
NAME   PROPERTY  VALUE     SOURCE
pool0  sync      disabled  local
pool1  sync      disabled  local
pool2  sync      standard  default
[root@lxfs463 pool0]# zfs get recordsize
NAME   PROPERTY    VALUE  SOURCE
pool0  recordsize  128K   default
pool1  recordsize  1M     local
pool2  recordsize  128K   default
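The non-default property values above would have been set roughly as follows (sketch only; these are standard OpenZFS properties, but the exact commands are not in the shell history):

zfs set primarycache=metadata pool0   # ARC caches metadata only
zfs set primarycache=none pool1       # bypass the ARC entirely
zfs set sync=disabled pool0           # treat synchronous writes as asynchronous
zfs set sync=disabled pool1
zfs set recordsize=1M pool1           # pool0 and pool2 keep the 128K default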
2.0.1 fio on single ZFS, recsz 128k
sync=standard  arc=all       recs=128k  bs=128k : done
sync=standard  arc=all       recs=128k  bs=1M   : done
sync=standard  arc=metadata  recs=128k  bs=128k : done
sync=standard  arc=metadata  recs=128k  bs=1M   : done
sync=standard  arc=none      recs=128k  bs=128k : done
sync=standard  arc=none      recs=128k  bs=1M   : done
sync=none      arc=all       recs=128k  bs=128k : done
sync=none      arc=all       recs=128k  bs=1M   : done
sync=none      arc=metadata  recs=128k  bs=128k : done
sync=none      arc=metadata  recs=128k  bs=1M   : done
sync=none      arc=none      recs=128k  bs=128k : done
sync=none      arc=none      recs=128k  bs=1M   : done
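The combinations above were apparently run one at a time; a loop like the following sketch could automate one pool's share of the matrix (the mount path /pool0 and the output-file naming are assumptions modeled on the file names below):

# sketch: iterate over sync x primarycache x fio blocksize for one pool
pool=pool0
for sync in standard disabled; do
    for arc in all metadata none; do
        zfs set sync=$sync $pool
        zfs set primarycache=$arc $pool
        for bs in 128k 1M; do
            fio --rw=randrw --name=matrix --directory=/$pool --direct=0 \
                --fallocate=none --bs=$bs --numjobs=40 --size=20G --group_reporting \
                --output=fio_randrw_s20G_bs${bs}_arc-${arc}_sync-${sync}_numj40_recs128k.out
        done
    done
done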
- pool0
[root@lxfs463 pool0]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-none_numj40_recs128k-0.out
read: IOPS=160, BW=20.0MiB/s (21.0MB/s)(400GiB/20443227msec)
write: IOPS=160, BW=20.0MiB/s (21.0MB/s)(400GiB/20443227msec)
READ: bw=20.0MiB/s (21.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=400GiB (429GB), run=20443227-20443227msec
WRITE: bw=20.0MiB/s (21.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=400GiB (430GB), run=20443227-20443227msec
[root@lxfs463 pool0]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-none_numj40_recs128k-1.out
read: IOPS=36, BW=36.3MiB/s (38.1MB/s)(399GiB/11252135msec)
write: IOPS=36, BW=36.5MiB/s (38.3MB/s)(401GiB/11252135msec)
READ: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=399GiB (429GB), run=11252135-11252135msec
WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=401GiB (430GB), run=11252135-11252135msec
[root@lxfs463 pool0]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-standard_numj40_recs128k-2.out
read: IOPS=146, BW=18.3MiB/s (19.2MB/s)(400GiB/22421959msec)
write: IOPS=146, BW=18.3MiB/s (19.2MB/s)(400GiB/22421959msec)
READ: bw=18.3MiB/s (19.2MB/s), 18.3MiB/s-18.3MiB/s (19.2MB/s-19.2MB/s), io=400GiB (429GB), run=22421959-22421959msec
WRITE: bw=18.3MiB/s (19.2MB/s), 18.3MiB/s-18.3MiB/s (19.2MB/s-19.2MB/s), io=400GiB (430GB), run=22421959-22421959msec
[root@lxfs463 pool0]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-standard_numj40_recs128k-3.out
read: IOPS=32, BW=32.7MiB/s (34.3MB/s)(399GiB/12482918msec)
write: IOPS=32, BW=32.9MiB/s (34.5MB/s)(401GiB/12482918msec)
READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=399GiB (429GB), run=12482918-12482918msec
WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=401GiB (430GB), run=12482918-12482918msec
- pool1
fio --rw=randrw --name=test5 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-none_numj40_recs128k-5.out
read: IOPS=29, BW=29.6MiB/s (31.0MB/s)(399GiB/13802675msec)
write: IOPS=29, BW=29.7MiB/s (31.2MB/s)(401GiB/13802675msec)
READ: bw=29.6MiB/s (31.0MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=399GiB (429GB), run=13802675-13802675msec
WRITE: bw=29.7MiB/s (31.2MB/s), 29.7MiB/s-29.7MiB/s (31.2MB/s-31.2MB/s), io=401GiB (430GB), run=13802675-13802675msec
fio --rw=randrw --name=test6 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-none_numj40_recs128k-6.out
read: IOPS=55, BW=7130KiB/s (7301kB/s)(400GiB/58819789msec)
write: IOPS=55, BW=7132KiB/s (7303kB/s)(400GiB/58819789msec)
READ: bw=7130KiB/s (7301kB/s), 7130KiB/s-7130KiB/s (7301kB/s-7301kB/s), io=400GiB (429GB), run=58819789-58819789msec
WRITE: bw=7132KiB/s (7303kB/s), 7132KiB/s-7132KiB/s (7303kB/s-7303kB/s), io=400GiB (430GB), run=58819789-58819789msec
[root@lxfs463 pool1]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-standard_numj40_recs128k-0.out
read: IOPS=126, BW=15.8MiB/s (16.5MB/s)(400GiB/25987763msec)
write: IOPS=126, BW=15.8MiB/s (16.5MB/s)(400GiB/25987763msec)
READ: bw=15.8MiB/s (16.5MB/s), 15.8MiB/s-15.8MiB/s (16.5MB/s-16.5MB/s), io=400GiB (429GB), run=25987763-25987763msec
WRITE: bw=15.8MiB/s (16.5MB/s), 15.8MiB/s-15.8MiB/s (16.5MB/s-16.5MB/s), io=400GiB (430GB), run=25987763-25987763msec
[root@lxfs463 pool1]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-standard_numj40_recs128k-1.out
read: IOPS=25, BW=25.3MiB/s (26.5MB/s)(399GiB/16161321msec)
write: IOPS=25, BW=25.4MiB/s (26.6MB/s)(401GiB/16161321msec)
READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=399GiB (429GB), run=16161321-16161321msec
WRITE: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=401GiB (430GB), run=16161321-16161321msec
- pool2
[root@lxfs463 pool2]# fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-enabled_numj40_recs128k-4.out
read: IOPS=166, BW=20.9MiB/s (21.9MB/s)(400GiB/19626177msec)
write: IOPS=166, BW=20.9MiB/s (21.9MB/s)(400GiB/19626177msec)
READ: bw=20.9MiB/s (21.9MB/s), 20.9MiB/s-20.9MiB/s (21.9MB/s-21.9MB/s), io=400GiB (429GB), run=19626177-19626177msec
WRITE: bw=20.9MiB/s (21.9MB/s), 20.9MiB/s-20.9MiB/s (21.9MB/s-21.9MB/s), io=400GiB (430GB), run=19626177-19626177msec
[root@lxfs463 pool2]# fio --rw=randrw --name=test5 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-enabled_numj40_recs128k-5.out
read: IOPS=37, BW=37.9MiB/s (39.7MB/s)(399GiB/10785033msec)
write: IOPS=38, BW=38.1MiB/s (39.9MB/s)(401GiB/10785033msec)
READ: bw=37.9MiB/s (39.7MB/s), 37.9MiB/s-37.9MiB/s (39.7MB/s-39.7MB/s), io=399GiB (429GB), run=10785033-10785033msec
WRITE: bw=38.1MiB/s (39.9MB/s), 38.1MiB/s-38.1MiB/s (39.9MB/s-39.9MB/s), io=401GiB (430GB), run=10785033-10785033msec
[root@lxfs463 pool2]# fio --rw=randrw --name=test8 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-none_numj40_recs128k-8.out
read: IOPS=129, BW=16.2MiB/s (16.0MB/s)(400GiB/25309580msec)
write: IOPS=129, BW=16.2MiB/s (16.0MB/s)(400GiB/25309580msec)
READ: bw=16.2MiB/s (16.0MB/s), 16.2MiB/s-16.2MiB/s (16.0MB/s-16.0MB/s), io=400GiB (429GB), run=25309580-25309580msec
WRITE: bw=16.2MiB/s (16.0MB/s), 16.2MiB/s-16.2MiB/s (16.0MB/s-16.0MB/s), io=400GiB (430GB), run=25309580-25309580msec
[root@lxfs463 pool2]# fio --rw=randrw --name=test9 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-none_numj40_recs128k-9.out
read: IOPS=43, BW=43.3MiB/s (45.4MB/s)(399GiB/9434569msec)
write: IOPS=43, BW=43.5MiB/s (45.6MB/s)(401GiB/9434569msec)
READ: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=399GiB (429GB), run=9434569-9434569msec
WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=401GiB (430GB), run=9434569-9434569msec
2.0.2 fio on single ZFS, recsz 1M
sync=standard  arc=all       recs=1M  bs=128k : done
sync=standard  arc=all       recs=1M  bs=1M   : done
sync=standard  arc=metadata  recs=1M  bs=128k : done
sync=standard  arc=metadata  recs=1M  bs=1M   : done
sync=standard  arc=none      recs=1M  bs=128k : done
sync=standard  arc=none      recs=1M  bs=1M   : done
sync=none      arc=all       recs=1M  bs=128k : done
sync=none      arc=all       recs=1M  bs=1M   : done
sync=none      arc=metadata  recs=1M  bs=128k : done
sync=none      arc=metadata  recs=1M  bs=1M   : done
sync=none      arc=none      recs=1M  bs=128k : done
sync=none      arc=none      recs=1M  bs=1M   : done
[root@lxfs463 pool1]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-disabled_numj40_recs1M-2.out
read: IOPS=29, BW=3814KiB/s (3906kB/s)(400GiB/109943775msec)
write: IOPS=29, BW=3815KiB/s (3907kB/s)(400GiB/109943775msec)
READ: bw=3814KiB/s (3906kB/s), 3814KiB/s-3814KiB/s (3906kB/s-3906kB/s), io=400GiB (429GB), run=109943775-109943775msec
WRITE: bw=3815KiB/s (3907kB/s), 3815KiB/s-3815KiB/s (3907kB/s-3907kB/s), io=400GiB (430GB), run=109943775-109943775msec
[root@lxfs463 pool1]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-disabled_numj40_recs1M-3.out
read: IOPS=70, BW=70.9MiB/s (74.3MB/s)(399GiB/5766415msec)
write: IOPS=71, BW=71.2MiB/s (74.7MB/s)(401GiB/5766415msec)
READ: bw=70.9MiB/s (74.3MB/s), 70.9MiB/s-70.9MiB/s (74.3MB/s-74.3MB/s), io=399GiB (429GB), run=5766415-5766415msec
WRITE: bw=71.2MiB/s (74.7MB/s), 71.2MiB/s-71.2MiB/s (74.7MB/s-74.7MB/s), io=401GiB (430GB), run=5766415-5766415msec
[root@lxfs463 pool1]# fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-disabled_numj40_recs1M-4.out
read: IOPS=67, BW=67.1MiB/s (70.4MB/s)(399GiB/6089386msec)
write: IOPS=67, BW=67.4MiB/s (70.7MB/s)(401GiB/6089386msec)
READ: bw=67.1MiB/s (70.4MB/s), 67.1MiB/s-67.1MiB/s (70.4MB/s-70.4MB/s), io=399GiB (429GB), run=6089386-6089386msec
WRITE: bw=67.4MiB/s (70.7MB/s), 67.4MiB/s-67.4MiB/s (70.7MB/s-70.7MB/s), io=401GiB (430GB), run=6089386-6089386msec
[root@lxfs463 pool2]# zfs get recordsize pool2
NAME   PROPERTY    VALUE  SOURCE
pool2  recordsize  1M     local
[root@lxfs463 pool2]# fio --rw=randrw --name=test6 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-enabled_numj40_recs1M-6.out
read: IOPS=47, BW=6023KiB/s (6167kB/s)(400GiB/69631684msec)
write: IOPS=47, BW=6024KiB/s (6169kB/s)(400GiB/69631684msec)
READ: bw=6023KiB/s (6167kB/s), 6023KiB/s-6023KiB/s (6167kB/s-6167kB/s), io=400GiB (429GB), run=69631684-69631684msec
WRITE: bw=6024KiB/s (6169kB/s), 6024KiB/s-6024KiB/s (6169kB/s-6169kB/s), io=400GiB (430GB), run=69631684-69631684msec
[root@lxfs463 pool2]# fio --rw=randrw --name=test7 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-enabled_numj40_recs1M-7.out
read: IOPS=58, BW=58.1MiB/s (60.0MB/s)(399GiB/7028766msec)
write: IOPS=58, BW=58.4MiB/s (61.2MB/s)(401GiB/7028766msec)
READ: bw=58.1MiB/s (60.0MB/s), 58.1MiB/s-58.1MiB/s (60.0MB/s-60.0MB/s), io=399GiB (429GB), run=7028766-7028766msec
WRITE: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=401GiB (430GB), run=7028766-7028766msec
[root@lxfs463 pool0]# zfs get recordsize pool0
NAME   PROPERTY    VALUE  SOURCE
pool0  recordsize  1M     local
fio --rw=randrw --name=test8 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-none_numj40_recs1M-8.out
read: IOPS=32, BW=4164KiB/s (4264kB/s)(400GiB/100717678msec)
write: IOPS=32, BW=4165KiB/s (4265kB/s)(400GiB/100717678msec)
READ: bw=4164KiB/s (4264kB/s), 4164KiB/s-4164KiB/s (4264kB/s-4264kB/s), io=400GiB (429GB), run=100717678-100717678msec
WRITE: bw=4165KiB/s (4265kB/s), 4165KiB/s-4165KiB/s (4265kB/s-4265kB/s), io=400GiB (430GB), run=100717678-100717678msec
fio --rw=randrw --name=test9 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-none_numj40_recs1M-9.out
read: IOPS=61, BW=61.7MiB/s (64.7MB/s)(399GiB/6623021msec)
write: IOPS=61, BW=61.0MiB/s (64.0MB/s)(401GiB/6623021msec)
READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=399GiB (429GB), run=6623021-6623021msec
WRITE: bw=61.0MiB/s (64.0MB/s), 61.0MiB/s-61.0MiB/s (64.0MB/s-64.0MB/s), io=401GiB (430GB), run=6623021-6623021msec
[root@lxfs463 pool0]# fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-standard_numj40_recs1M-4.out
read: IOPS=30, BW=3944KiB/s (4038kB/s)(400GiB/106344670msec)
write: IOPS=30, BW=3945KiB/s (4039kB/s)(400GiB/106344670msec)
READ: bw=3944KiB/s (4038kB/s), 3944KiB/s-3944KiB/s (4038kB/s-4038kB/s), io=400GiB (429GB), run=106344670-106344670msec
WRITE: bw=3945KiB/s (4039kB/s), 3945KiB/s-3945KiB/s (4039kB/s-4039kB/s), io=400GiB (430GB), run=106344670-106344670msec
[root@lxfs463 pool0]# fio --rw=randrw --name=test5 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-standard_numj40_recs1M-5.out
read: IOPS=57, BW=57.8MiB/s (60.6MB/s)(399GiB/7070926msec)
write: IOPS=58, BW=58.1MiB/s (60.9MB/s)(401GiB/7070926msec)
READ: bw=57.8MiB/s (60.6MB/s), 57.8MiB/s-57.8MiB/s (60.6MB/s-60.6MB/s), io=399GiB (429GB), run=7070926-7070926msec
WRITE: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=401GiB (430GB), run=7070926-7070926msec
[root@lxfs463 pool1]# fio --rw=randrw --name=test5 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-standard_numj40_recs1M-5.out
read: IOPS=23, BW=2972KiB/s (3043kB/s)(400GiB/141124135msec)
write: IOPS=23, BW=2972KiB/s (3044kB/s)(400GiB/141124135msec)
READ: bw=2972KiB/s (3043kB/s), 2972KiB/s-2972KiB/s (3043kB/s-3043kB/s), io=400GiB (429GB), run=141124135-141124135msec
WRITE: bw=2972KiB/s (3044kB/s), 2972KiB/s-2972KiB/s (3044kB/s-3044kB/s), io=400GiB (430GB), run=141124135-141124135msec
[root@lxfs463 pool1]# fio --rw=randrw --name=test6 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-standard_numj40_recs1M-6.out
read: IOPS=17, BW=17.2MiB/s (18.0MB/s)(399GiB/23741920msec)
write: IOPS=17, BW=17.3MiB/s (18.1MB/s)(401GiB/23741920msec)
READ: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=399GiB (429GB), run=23741920-23741920msec
WRITE: bw=17.3MiB/s (18.1MB/s), 17.3MiB/s-17.3MiB/s (18.1MB/s-18.1MB/s), io=401GiB (430GB), run=23741920-23741920msec
[root@lxfs463 pool2]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-none_numj40_recs1M-0.out
read: IOPS=42, BW=5500KiB/s (5632kB/s)(400GiB/76246028msec)
write: IOPS=42, BW=5502KiB/s (5634kB/s)(400GiB/76246028msec)
READ: bw=5500KiB/s (5632kB/s), 5500KiB/s-5500KiB/s (5632kB/s-5632kB/s), io=400GiB (429GB), run=76246028-76246028msec
WRITE: bw=5502KiB/s (5634kB/s), 5502KiB/s-5502KiB/s (5634kB/s-5634kB/s), io=400GiB (430GB), run=76246028-76246028msec
[root@lxfs463 pool2]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-none_numj40_recs1M-1.out
read: IOPS=62, BW=62.3MiB/s (65.4MB/s)(399GiB/6555194msec)
write: IOPS=62, BW=62.6MiB/s (65.7MB/s)(401GiB/6555194msec)
READ: bw=62.3MiB/s (65.4MB/s), 62.3MiB/s-62.3MiB/s (65.4MB/s-65.4MB/s), io=399GiB (429GB), run=6555194-6555194msec
WRITE: bw=62.6MiB/s (65.7MB/s), 62.6MiB/s-62.6MiB/s (65.7MB/s-65.7MB/s), io=401GiB (430GB), run=6555194-6555194msec
2.0.3 fio on single XFS
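How the XFS file system was prepared is not recorded; a typical setup would look like this sketch (device path and mount point are assumptions):

mkfs.xfs /dev/disk/by-location/srvb4   # assumed device
mkdir -p /xfstest
mount /dev/disk/by-location/srvb4 /xfstest
cd /xfstest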
fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-3.out
read: IOPS=162, BW=20.3MiB/s (21.3MB/s)(400GiB/20175092msec)
write: IOPS=162, BW=20.3MiB/s (21.3MB/s)(400GiB/20175092msec)
READ: bw=20.3MiB/s (21.3MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=400GiB (429GB), run=20175092-20175092msec
WRITE: bw=20.3MiB/s (21.3MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=400GiB (430GB), run=20175092-20175092msec
fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-4.out
read: IOPS=72, BW=72.2MiB/s (75.7MB/s)(399GiB/5660027msec)
write: IOPS=72, BW=72.5MiB/s (76.1MB/s)(401GiB/5660027msec)
READ: bw=72.2MiB/s (75.7MB/s), 72.2MiB/s-72.2MiB/s (75.7MB/s-75.7MB/s), io=399GiB (429GB), run=5660027-5660027msec
WRITE: bw=72.5MiB/s (76.1MB/s), 72.5MiB/s-72.5MiB/s (76.1MB/s-76.1MB/s), io=401GiB (430GB), run=5660027-5660027msec
3 fio on single 10 TB disks
3.0.1 recordsize 128k
sync=standard  arc=all       recs=128k  bs=128k : done
sync=standard  arc=all       recs=128k  bs=1M   : done
sync=standard  arc=metadata  recs=128k  bs=128k : done
sync=standard  arc=metadata  recs=128k  bs=1M   : done
sync=standard  arc=none      recs=128k  bs=128k : done
sync=standard  arc=none      recs=128k  bs=1M   : done
sync=none      arc=all       recs=128k  bs=128k : done
sync=none      arc=all       recs=128k  bs=1M   : done
sync=none      arc=metadata  recs=128k  bs=128k : done
sync=none      arc=metadata  recs=128k  bs=1M   : done
sync=none      arc=none      recs=128k  bs=128k : done
sync=none      arc=none      recs=128k  bs=1M   : done
[root@lxfs462 pool0]# zfs get recordsize
NAME   PROPERTY    VALUE  SOURCE
pool0  recordsize  128K   default
pool1  recordsize  128K   default
pool2  recordsize  128K   default
[root@lxfs462 pool0]# zfs get sync
NAME   PROPERTY  VALUE     SOURCE
pool0  sync      disabled  local
pool1  sync      disabled  local
pool2  sync      standard  default
[root@lxfs462 pool0]# zfs get primarycache
NAME   PROPERTY      VALUE     SOURCE
pool0  primarycache  metadata  local
pool1  primarycache  none      local
pool2  primarycache  all       default
[root@lxfs462 pool0]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-none_numj40_recs128k-0.out
read: IOPS=47, BW=6021KiB/s (6166kB/s)(400GiB/69647508msec)
write: IOPS=47, BW=6023KiB/s (6167kB/s)(400GiB/69647508msec)
READ: bw=6021KiB/s (6166kB/s), 6021KiB/s-6021KiB/s (6166kB/s-6166kB/s), io=400GiB (429GB), run=69647508-69647508msec
WRITE: bw=6023KiB/s (6167kB/s), 6023KiB/s-6023KiB/s (6167kB/s-6167kB/s), io=400GiB (430GB), run=69647508-69647508msec
[root@lxfs462 pool0]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-none_numj40_recs128k-1.out
read: IOPS=20, BW=20.4MiB/s (21.4MB/s)(399GiB/20019303msec)
write: IOPS=20, BW=20.5MiB/s (21.5MB/s)(401GiB/20019303msec)
READ: bw=20.4MiB/s (21.4MB/s), 20.4MiB/s-20.4MiB/s (21.4MB/s-21.4MB/s), io=399GiB (429GB), run=20019303-20019303msec
WRITE: bw=20.5MiB/s (21.5MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=401GiB (430GB), run=20019303-20019303msec
[root@lxfs462 pool1]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-none_numj40_recs128k-2.out
read: IOPS=45, BW=5832KiB/s (5972kB/s)(400GiB/71909334msec)
write: IOPS=45, BW=5834KiB/s (5974kB/s)(400GiB/71909334msec)
READ: bw=5832KiB/s (5972kB/s), 5832KiB/s-5832KiB/s (5972kB/s-5972kB/s), io=400GiB (429GB), run=71909334-71909334msec
WRITE: bw=5834KiB/s (5974kB/s), 5834KiB/s-5834KiB/s (5974kB/s-5974kB/s), io=400GiB (430GB), run=71909334-71909334msec
[root@lxfs462 pool1]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bsM_arc-none_sync-none_numj40_recs128k-3.out
read: IOPS=17, BW=17.7MiB/s (18.5MB/s)(399GiB/23104551msec)
write: IOPS=17, BW=17.8MiB/s (18.6MB/s)(401GiB/23104551msec)
READ: bw=17.7MiB/s (18.5MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=399GiB (429GB), run=23104551-23104551msec
WRITE: bw=17.8MiB/s (18.6MB/s), 17.8MiB/s-17.8MiB/s (18.6MB/s-18.6MB/s), io=401GiB (430GB), run=23104551-23104551msec
[root@lxfs462 pool2]# fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bsM_arc-all_sync-standard_numj40_recs128k-4.out
read: IOPS=24, BW=24.7MiB/s (25.9MB/s)(399GiB/16560354msec)
write: IOPS=24, BW=24.8MiB/s (25.0MB/s)(401GiB/16560354msec)
READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=399GiB (429GB), run=16560354-16560354msec
WRITE: bw=24.8MiB/s (25.0MB/s), 24.8MiB/s-24.8MiB/s (25.0MB/s-25.0MB/s), io=401GiB (430GB), run=16560354-16560354msec
[root@lxfs462 pool2]# fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-standard_numj40_recs128k-5.out
read: IOPS=45, BW=5789KiB/s (5928kB/s)(400GiB/72444830msec)
write: IOPS=45, BW=5790KiB/s (5929kB/s)(400GiB/72444830msec)
READ: bw=5789KiB/s (5928kB/s), 5789KiB/s-5789KiB/s (5928kB/s-5928kB/s), io=400GiB (429GB), run=72444830-72444830msec
WRITE: bw=5790KiB/s (5929kB/s), 5790KiB/s-5790KiB/s (5929kB/s-5929kB/s), io=400GiB (430GB), run=72444830-72444830msec
[root@lxfs462 pool2]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-standard_numj40_recs128k-1.out
read: IOPS=54, BW=7031KiB/s (7199kB/s)(400GiB/59649076msec)
write: IOPS=54, BW=7033KiB/s (7201kB/s)(400GiB/59649076msec)
READ: bw=7031KiB/s (7199kB/s), 7031KiB/s-7031KiB/s (7199kB/s-7199kB/s), io=400GiB (429GB), run=59649076-59649076msec
WRITE: bw=7033KiB/s (7201kB/s), 7033KiB/s-7033KiB/s (7201kB/s-7201kB/s), io=400GiB (430GB), run=59649076-59649076msec
[root@lxfs462 pool2]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-standard_numj40_recs128k-0.out
read: IOPS=25, BW=25.7MiB/s (26.9MB/s)(399GiB/15901047msec)
write: IOPS=25, BW=25.8MiB/s (27.1MB/s)(401GiB/15901047msec)
READ: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=399GiB (429GB), run=15901047-15901047msec
WRITE: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=401GiB (430GB), run=15901047-15901047msec
[root@lxfs463 pool5]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-standard_numj40_recs128k-0.out
read: IOPS=43, BW=5593KiB/s (5727kB/s)(400GiB/74987746msec)
write: IOPS=43, BW=5594KiB/s (5728kB/s)(400GiB/74987746msec)
READ: bw=5593KiB/s (5727kB/s), 5593KiB/s-5593KiB/s (5727kB/s-5727kB/s), io=400GiB (429GB), run=74987746-74987746msec
WRITE: bw=5594KiB/s (5728kB/s), 5594KiB/s-5594KiB/s (5728kB/s-5728kB/s), io=400GiB (430GB), run=74987746-74987746msec
[root@lxfs463 pool5]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-standard_numj40_recs128k-1.out
read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(399GiB/21734921msec)
write: IOPS=18, BW=18.9MiB/s (19.8MB/s)(401GiB/21734921msec)
READ: bw=18.8MiB/s (19.7MB/s), 18.8MiB/s-18.8MiB/s (19.7MB/s-19.7MB/s), io=399GiB (429GB), run=21734921-21734921msec
WRITE: bw=18.9MiB/s (19.8MB/s), 18.9MiB/s-18.9MiB/s (19.8MB/s-19.8MB/s), io=401GiB (430GB), run=21734921-21734921msec
[root@lxfs462 pool2]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-none_numj40_recs128k-2.out
read: IOPS=46, BW=5941KiB/s (6083kB/s)(400GiB/70592282msec)
write: IOPS=46, BW=5942KiB/s (6085kB/s)(400GiB/70592282msec)
READ: bw=5941KiB/s (6083kB/s), 5941KiB/s-5941KiB/s (6083kB/s-6083kB/s), io=400GiB (429GB), run=70592282-70592282msec
WRITE: bw=5942KiB/s (6085kB/s), 5942KiB/s-5942KiB/s (6085kB/s-6085kB/s), io=400GiB (430GB), run=70592282-70592282msec
[root@lxfs463 pool6]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-none_numj40_recs128k-1.out
read: IOPS=26, BW=26.3MiB/s (27.6MB/s)(399GiB/15553489msec)
write: IOPS=26, BW=26.4MiB/s (27.7MB/s)(401GiB/15553489msec)
READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=399GiB (429GB), run=15553489-15553489msec
WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=401GiB (430GB), run=15553489-15553489msec
3.0.2 recordsize 1M
sync=standard  arc=all       recs=1M  bs=128k : done
sync=standard  arc=all       recs=1M  bs=1M   : done
sync=standard  arc=metadata  recs=1M  bs=128k : done
sync=standard  arc=metadata  recs=1M  bs=1M   : done
sync=standard  arc=none      recs=1M  bs=128k : done
sync=standard  arc=none      recs=1M  bs=1M   : done
sync=none      arc=all       recs=1M  bs=128k : done
sync=none      arc=all       recs=1M  bs=1M   : done
sync=none      arc=metadata  recs=1M  bs=128k : done
sync=none      arc=metadata  recs=1M  bs=1M   : done
sync=none      arc=none      recs=1M  bs=128k : done
sync=none      arc=none      recs=1M  bs=1M   : done
[root@lxfs463 pool6]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-standard_numj40_recs1M-0.out
read: IOPS=33, BW=4256KiB/s (4359kB/s)(400GiB/98529901msec)
write: IOPS=33, BW=4257KiB/s (4360kB/s)(400GiB/98529901msec)
READ: bw=4256KiB/s (4359kB/s), 4256KiB/s-4256KiB/s (4359kB/s-4359kB/s), io=400GiB (429GB), run=98529901-98529901msec
WRITE: bw=4257KiB/s (4360kB/s), 4257KiB/s-4257KiB/s (4360kB/s-4360kB/s), io=400GiB (430GB), run=98529901-98529901msec
[root@lxfs463 pool6]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-standard_numj40_recs1M-2.out
read: IOPS=26, BW=26.7MiB/s (28.0MB/s)(399GiB/15297636msec)
write: IOPS=26, BW=26.8MiB/s (28.1MB/s)(401GiB/15297636msec)
READ: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=399GiB (429GB), run=15297636-15297636msec
WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=401GiB (430GB), run=15297636-15297636msec
[root@lxfs463 pool6]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-standard_numj40_recs1M-3.out
read: IOPS=23, BW=3042KiB/s (3115kB/s)(400GiB/137857616msec)
write: IOPS=23, BW=3043KiB/s (3116kB/s)(400GiB/137857616msec)
READ: bw=3042KiB/s (3115kB/s), 3042KiB/s-3042KiB/s (3115kB/s-3115kB/s), io=400GiB (429GB), run=137857616-137857616msec
WRITE: bw=3043KiB/s (3116kB/s), 3043KiB/s-3043KiB/s (3116kB/s-3116kB/s), io=400GiB (430GB), run=137857616-137857616msec
[root@lxfs463 pool6]# fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-standard_numj40_recs1M-4.out
read: IOPS=26, BW=26.6MiB/s (27.9MB/s)(399GiB/15376329msec)
write: IOPS=26, BW=26.7MiB/s (27.0MB/s)(401GiB/15376329msec)
READ: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=399GiB (429GB), run=15376329-15376329msec
WRITE: bw=26.7MiB/s (27.0MB/s), 26.7MiB/s-26.7MiB/s (27.0MB/s-27.0MB/s), io=401GiB (430GB), run=15376329-15376329msec
[root@lxfs463 pool5]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-standard_numj40_recs1M-2.out
read: IOPS=23, BW=3064KiB/s (3138kB/s)(400GiB/136871147msec)
write: IOPS=23, BW=3065KiB/s (3138kB/s)(400GiB/136871147msec)
READ: bw=3064KiB/s (3138kB/s), 3064KiB/s-3064KiB/s (3138kB/s-3138kB/s), io=400GiB (429GB), run=136871147-136871147msec
WRITE: bw=3065KiB/s (3138kB/s), 3065KiB/s-3065KiB/s (3138kB/s-3138kB/s), io=400GiB (430GB), run=136871147-136871147msec
[root@lxfs463 pool5]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-standard_numj40_recs1M-3.out
read: IOPS=26, BW=26.3MiB/s (27.5MB/s)(399GiB/15563408msec)
write: IOPS=26, BW=26.4MiB/s (27.7MB/s)(401GiB/15563408msec)
READ: bw=26.3MiB/s (27.5MB/s), 26.3MiB/s-26.3MiB/s (27.5MB/s-27.5MB/s), io=399GiB (429GB), run=15563408-15563408msec
WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=401GiB (430GB), run=15563408-15563408msec
[root@lxfs462 pool2]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-none_numj40_recs1M-3.out
read: IOPS=33, BW=4281KiB/s (4384kB/s)(400GiB/97966843msec)
write: IOPS=33, BW=4282KiB/s (4385kB/s)(400GiB/97966843msec)
READ: bw=4281KiB/s (4384kB/s), 4281KiB/s-4281KiB/s (4384kB/s-4384kB/s), io=400GiB (429GB), run=97966843-97966843msec
WRITE: bw=4282KiB/s (4385kB/s), 4282KiB/s-4282KiB/s (4385kB/s-4385kB/s), io=400GiB (430GB), run=97966843-97966843msec
[root@lxfs462 pool2]# fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-none_numj40_recs1M-4.out
read: IOPS=26, BW=26.7MiB/s (28.0MB/s)(399GiB/15277775msec)
write: IOPS=26, BW=26.9MiB/s (28.2MB/s)(401GiB/15277775msec)
READ: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=399GiB (429GB), run=15277775-15277775msec
WRITE: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=401GiB (430GB), run=15277775-15277775msec
[root@lxfs462 pool0]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-none_numj40_recs1M-1.out
read: IOPS=44, BW=5694KiB/s (5830kB/s)(400GiB/73657029msec)
write: IOPS=44, BW=5695KiB/s (5832kB/s)(400GiB/73657029msec)
READ: bw=5694KiB/s (5830kB/s), 5694KiB/s-5694KiB/s (5830kB/s-5830kB/s), io=400GiB (429GB), run=73657029-73657029msec
WRITE: bw=5695KiB/s (5832kB/s), 5695KiB/s-5695KiB/s (5832kB/s-5832kB/s), io=400GiB (430GB), run=73657029-73657029msec
[root@lxfs462 pool0]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-none_numj40_recs1M-2.out
read: IOPS=30, BW=30.3MiB/s (31.8MB/s)(399GiB/13485818msec)
write: IOPS=30, BW=30.4MiB/s (31.9MB/s)(401GiB/13485818msec)
READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=399GiB (429GB), run=13485818-13485818msec
WRITE: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=401GiB (430GB), run=13485818-13485818msec
[root@lxfs462 pool1]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-none_numj40_recs1M-0.out
read: IOPS=25, BW=3215KiB/s (3292kB/s)(400GiB/130434540msec)
write: IOPS=25, BW=3216KiB/s (3293kB/s)(400GiB/130434540msec)
READ: bw=3215KiB/s (3292kB/s), 3215KiB/s-3215KiB/s (3292kB/s-3292kB/s), io=400GiB (429GB), run=130434540-130434540msec
WRITE: bw=3216KiB/s (3293kB/s), 3216KiB/s-3216KiB/s (3293kB/s-3293kB/s), io=400GiB (430GB), run=130434540-130434540msec
[root@lxfs462 pool1]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-none_numj40_recs1M-1.out
read: IOPS=25, BW=25.2MiB/s (26.4MB/s)(399GiB/16248813msec)
write: IOPS=25, BW=25.3MiB/s (26.5MB/s)(401GiB/16248813msec)
READ: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=399GiB (429GB), run=16248813-16248813msec
WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=401GiB (430GB), run=16248813-16248813msec
4 fio on single 6 TB disks
[root@lxfs463 ~]# zfs get primarycache pool3
NAME   PROPERTY      VALUE     SOURCE
pool3  primarycache  metadata  local
[root@lxfs463 ~]# zfs get sync pool3
NAME   PROPERTY  VALUE     SOURCE
pool3  sync      disabled  local
4.0.1 recordsize 128k
sync=standard  arc=all       recs=128k  bs=128k : done
sync=standard  arc=all       recs=128k  bs=1M   : done
sync=standard  arc=metadata  recs=128k  bs=128k : done
sync=standard  arc=metadata  recs=128k  bs=1M   : done
sync=standard  arc=none      recs=128k  bs=128k : done
sync=standard  arc=none      recs=128k  bs=1M   : done
sync=none      arc=all       recs=128k  bs=128k : done
sync=none      arc=all       recs=128k  bs=1M   : done
sync=none      arc=metadata  recs=128k  bs=128k : done
sync=none      arc=metadata  recs=128k  bs=1M   : done
sync=none      arc=none      recs=128k  bs=128k : done
sync=none      arc=none      recs=128k  bs=1M   : done
[root@lxfs463 pool3]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-none_numj40_recs128k-0.out
read: IOPS=104, BW=13.0MiB/s (13.7MB/s)(400GiB/31385687msec)
write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(400GiB/31385687msec)
READ: bw=13.0MiB/s (13.7MB/s), 13.0MiB/s-13.0MiB/s (13.7MB/s-13.7MB/s), io=400GiB (429GB), run=31385687-31385687msec
WRITE: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=400GiB (430GB), run=31385687-31385687msec
[root@lxfs463 pool3]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-none_numj40_recs128k-1.out
read: IOPS=28, BW=28.6MiB/s (29.9MB/s)(399GiB/14308417msec)
write: IOPS=28, BW=28.7MiB/s (30.1MB/s)(401GiB/14308417msec)
READ: bw=28.6MiB/s (29.9MB/s), 28.6MiB/s-28.6MiB/s (29.9MB/s-29.9MB/s), io=399GiB (429GB), run=14308417-14308417msec
WRITE: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=401GiB (430GB), run=14308417-14308417msec
[root@lxfs463 pool4]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-none_numj40_recs128k-2.out
read: IOPS=26, BW=26.7MiB/s (28.0MB/s)(399GiB/15284877msec)
write: IOPS=26, BW=26.9MiB/s (28.2MB/s)(401GiB/15284877msec)
READ: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=399GiB (429GB), run=15284877-15284877msec
WRITE: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=401GiB (430GB), run=15284877-15284877msec
[root@lxfs463 pool4]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-none_numj40_recs128k-3.out
read: IOPS=92, BW=11.5MiB/s (12.1MB/s)(400GiB/35488359msec)
write: IOPS=92, BW=11.5MiB/s (12.1MB/s)(400GiB/35488359msec)
READ: bw=11.5MiB/s (12.1MB/s), 11.5MiB/s-11.5MiB/s (12.1MB/s-12.1MB/s), io=400GiB (429GB), run=35488359-35488359msec
WRITE: bw=11.5MiB/s (12.1MB/s), 11.5MiB/s-11.5MiB/s (12.1MB/s-12.1MB/s), io=400GiB (430GB), run=35488359-35488359msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-none_numj40_recs128k-5.out
read: IOPS=106, BW=13.4MiB/s (14.0MB/s)(400GiB/30665103msec)
write: IOPS=106, BW=13.4MiB/s (14.0MB/s)(400GiB/30665103msec)
READ: bw=13.4MiB/s (14.0MB/s), 13.4MiB/s-13.4MiB/s (14.0MB/s-14.0MB/s), io=400GiB (429GB), run=30665103-30665103msec
WRITE: bw=13.4MiB/s (14.0MB/s), 13.4MiB/s-13.4MiB/s (14.0MB/s-14.0MB/s), io=400GiB (430GB), run=30665103-30665103msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-none_numj40_recs128k-3.out
read: IOPS=25, BW=25.8MiB/s (27.1MB/s)(399GiB/15822076msec)
write: IOPS=25, BW=25.9MiB/s (27.2MB/s)(401GiB/15822076msec)
READ: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=399GiB (429GB), run=15822076-15822076msec
WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=401GiB (430GB), run=15822076-15822076msec
[root@lxfs462 pool4]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-standard_numj40_recs128k-0.out
read: IOPS=97, BW=12.2MiB/s (12.8MB/s)(400GiB/33679025msec)
write: IOPS=97, BW=12.2MiB/s (12.8MB/s)(400GiB/33679025msec)
READ: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=400GiB (429GB), run=33679025-33679025msec
WRITE: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=400GiB (430GB), run=33679025-33679025msec
[root@lxfs462 pool4]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-standard_numj40_recs128k-1.out
read: IOPS=38, BW=38.2MiB/s (40.0MB/s)(399GiB/10702562msec)
write: IOPS=38, BW=38.4MiB/s (40.2MB/s)(401GiB/10702562msec)
READ: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=399GiB (429GB), run=10702562-10702562msec
WRITE: bw=38.4MiB/s (40.2MB/s), 38.4MiB/s-38.4MiB/s (40.2MB/s-40.2MB/s), io=401GiB (430GB), run=10702562-10702562msec
[root@lxfs462 pool4]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-standard_numj40_recs128k-2.out
read: IOPS=87, BW=10.9MiB/s (11.4MB/s)(400GiB/37651727msec)
write: IOPS=87, BW=10.9MiB/s (11.4MB/s)(400GiB/37651727msec)
READ: bw=10.9MiB/s (11.4MB/s), 10.9MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=400GiB (429GB), run=37651727-37651727msec
WRITE: bw=10.9MiB/s (11.4MB/s), 10.9MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=400GiB (430GB), run=37651727-37651727msec
[root@lxfs462 pool4]# zfs set primarycache=all pool4
[root@lxfs462 pool4]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-standard_numj40_recs128k-3.out
read: IOPS=93, BW=11.6MiB/s (12.2MB/s)(400GiB/35197694msec)
write: IOPS=93, BW=11.6MiB/s (12.2MB/s)(400GiB/35197694msec)
READ: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=400GiB (429GB), run=35197694-35197694msec
WRITE: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=400GiB (430GB), run=35197694-35197694msec
[root@lxfs462 pool4]# zfs set primarycache=none pool4
[root@lxfs462 pool4]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-standard_numj40_recs128k-0.out
read: IOPS=93, BW=11.7MiB/s (12.2MB/s)(400GiB/35095486msec)
write: IOPS=93, BW=11.7MiB/s (12.2MB/s)(400GiB/35095486msec)
READ: bw=11.7MiB/s (12.2MB/s), 11.7MiB/s-11.7MiB/s (12.2MB/s-12.2MB/s), io=400GiB (429GB), run=35095486-35095486msec
WRITE: bw=11.7MiB/s (12.2MB/s), 11.7MiB/s-11.7MiB/s (12.2MB/s-12.2MB/s), io=400GiB (430GB), run=35095486-35095486msec
[root@lxfs462 pool4]# fio --rw=randrw --name=test4 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-standard_numj40_recs128k-4.out
read: IOPS=24, BW=24.5MiB/s (25.7MB/s)(399GiB/16703797msec)
write: IOPS=24, BW=24.6MiB/s (25.8MB/s)(401GiB/16703797msec)
READ: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=399GiB (429GB), run=16703797-16703797msec
WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=401GiB (430GB), run=16703797-16703797msec
4.0.2 recordsize 1M
sync=standard  arc=all       recs=1M  bs=128k : done
sync=standard  arc=all       recs=1M  bs=1M   : done
sync=standard  arc=metadata  recs=1M  bs=128k : done
sync=standard  arc=metadata  recs=1M  bs=1M   : done
sync=standard  arc=none      recs=1M  bs=128k : done
sync=standard  arc=none      recs=1M  bs=1M   : done
sync=none      arc=all       recs=1M  bs=128k : done
sync=none      arc=all       recs=1M  bs=1M   : done
sync=none      arc=metadata  recs=1M  bs=128k : done
sync=none      arc=metadata  recs=1M  bs=1M   : done
sync=none      arc=none      recs=1M  bs=128k : done
sync=none      arc=none      recs=1M  bs=1M   : done
[root@lxfs462 pool3]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-none_numj40_recs1M-1.out
read: IOPS=45, BW=45.4MiB/s (47.6MB/s)(399GiB/9005150msec)
write: IOPS=45, BW=45.6MiB/s (47.8MB/s)(401GiB/9005150msec)
READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=399GiB (429GB), run=9005150-9005150msec
WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=401GiB (430GB), run=9005150-9005150msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-none_numj40_recs1M-2.out
read: IOPS=42, BW=5394KiB/s (5524kB/s)(400GiB/77744254msec)
write: IOPS=42, BW=5396KiB/s (5525kB/s)(400GiB/77744254msec)
READ: bw=5394KiB/s (5524kB/s), 5394KiB/s-5394KiB/s (5524kB/s-5524kB/s), io=400GiB (429GB), run=77744254-77744254msec
WRITE: bw=5396KiB/s (5525kB/s), 5396KiB/s-5396KiB/s (5525kB/s-5525kB/s), io=400GiB (430GB), run=77744254-77744254msec
[root@lxfs463 pool3]# fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-none_numj40_recs1M-2.out
read: IOPS=28, BW=3644KiB/s (3731kB/s)(400GiB/115097851msec)
write: IOPS=28, BW=3645KiB/s (3732kB/s)(400GiB/115097851msec)
READ: bw=3644KiB/s (3731kB/s), 3644KiB/s-3644KiB/s (3731kB/s-3731kB/s), io=400GiB (429GB), run=115097851-115097851msec
WRITE: bw=3645KiB/s (3732kB/s), 3645KiB/s-3645KiB/s (3732kB/s-3732kB/s), io=400GiB (430GB), run=115097851-115097851msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-none_numj40_recs1M-1.out
read: IOPS=44, BW=44.7MiB/s (46.9MB/s)(399GiB/9142048msec)
write: IOPS=44, BW=44.9MiB/s (47.1MB/s)(401GiB/9142048msec)
READ: bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=399GiB (429GB), run=9142048-9142048msec
WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=401GiB (430GB), run=9142048-9142048msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-none_numj40_recs1M-3.out
read: IOPS=25, BW=3324KiB/s (3404kB/s)(400GiB/126166081msec)
write: IOPS=25, BW=3325KiB/s (3405kB/s)(400GiB/126166081msec)
READ: bw=3324KiB/s (3404kB/s), 3324KiB/s-3324KiB/s (3404kB/s-3404kB/s), io=400GiB (429GB), run=126166081-126166081msec
WRITE: bw=3325KiB/s (3405kB/s), 3325KiB/s-3325KiB/s (3405kB/s-3405kB/s), io=400GiB (430GB), run=126166081-126166081msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-none_numj40_recs1M-4.out
read: IOPS=42, BW=42.4MiB/s (44.5MB/s)(399GiB/9634005msec)
write: IOPS=42, BW=42.6MiB/s (44.7MB/s)(401GiB/9634005msec)
READ: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=399GiB (429GB), run=9634005-9634005msec
WRITE: bw=42.6MiB/s (44.7MB/s), 42.6MiB/s-42.6MiB/s (44.7MB/s-44.7MB/s), io=401GiB (430GB), run=9634005-9634005msec
zfs set sync=standard pool3
[root@lxfs462 pool3]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-none_sync-standard_numj40_recs1M-0.out
read: IOPS=26, BW=3410KiB/s (3492kB/s)(400GiB/122987808msec) write: IOPS=26, BW=3411KiB/s (3493kB/s)(400GiB/122987808msec) READ: bw=3410KiB/s (3492kB/s), 3410KiB/s-3410KiB/s (3492kB/s-3492kB/s), io=400GiB (429GB), run=122987808-122987808msec WRITE: bw=3411KiB/s (3493kB/s), 3411KiB/s-3411KiB/s (3493kB/s-3493kB/s), io=400GiB (430GB), run=122987808-122987808msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-none_sync-standard_numj40_recs1M-0.out
read: IOPS=45, BW=45.9MiB/s (48.1MB/s)(399GiB/8902828msec) write: IOPS=46, BW=46.1MiB/s (48.4MB/s)(401GiB/8902828msec) READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=399GiB (429GB), run=8902828-8902828msec WRITE: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=401GiB (430GB), run=8902828-8902828msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_sync-standard_numj40_recs1M-1.out
read: IOPS=28, BW=3644KiB/s (3732kB/s)(400GiB/115081425msec) write: IOPS=28, BW=3645KiB/s (3733kB/s)(400GiB/115081425msec) READ: bw=3644KiB/s (3732kB/s), 3644KiB/s-3644KiB/s (3732kB/s-3732kB/s), io=400GiB (429GB), run=115081425-115081425msec WRITE: bw=3645KiB/s (3733kB/s), 3645KiB/s-3645KiB/s (3733kB/s-3733kB/s), io=400GiB (430GB), run=115081425-115081425msec
[root@lxfs462 pool3]# fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_sync-standard_numj40_recs1M-3.out
read: IOPS=44, BW=44.4MiB/s (46.6MB/s)(399GiB/9194191msec) write: IOPS=44, BW=44.7MiB/s (46.8MB/s)(401GiB/9194191msec) READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=399GiB (429GB), run=9194191-9194191msec WRITE: bw=44.7MiB/s (46.8MB/s), 44.7MiB/s-44.7MiB/s (46.8MB/s-46.8MB/s), io=401GiB (430GB), run=9194191-9194191msec
[root@lxfs463 pool4]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-all_sync-standard_numj40_recs1M-0.out
read: IOPS=35, BW=4492KiB/s (4600kB/s)(400GiB/93357034msec) write: IOPS=35, BW=4493KiB/s (4601kB/s)(400GiB/93357034msec) READ: bw=4492KiB/s (4600kB/s), 4492KiB/s-4492KiB/s (4600kB/s-4600kB/s), io=400GiB (429GB), run=93357034-93357034msec WRITE: bw=4493KiB/s (4601kB/s), 4493KiB/s-4493KiB/s (4601kB/s-4601kB/s), io=400GiB (430GB), run=93357034-93357034msec
[root@lxfs462 pool4]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-all_sync-standard_numj40_recs1M-1.out
read: IOPS=47, BW=47.8MiB/s (50.2MB/s)(399GiB/8543243msec) write: IOPS=48, BW=48.1MiB/s (50.4MB/s)(401GiB/8543243msec) READ: bw=47.8MiB/s (50.2MB/s), 47.8MiB/s-47.8MiB/s (50.2MB/s-50.2MB/s), io=399GiB (429GB), run=8543243-8543243msec WRITE: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=401GiB (430GB), run=8543243-8543243msec
5 fio on hebetest
5.0.1 New LNet parameters
for i in {0..14}; do fio --rw=rw --name=test$i --size=4G --bs=1M --numjobs=4 --filename=OST$i/test$i --group_reporting --output=fio_rw_s4G_bs1M_numj4-$i.out; done
6 lxmds12 with WD 60 disk enclosure
6.1 fio
fio_randrw_s20G_bs1M_primarycache-all_sync-default_numj40_recs1M-0.out
read : io=409923MB, bw=403295KB/s, iops=393, runt=1040829msec write: io=409277MB, bw=402659KB/s, iops=393, runt=1040829msec READ: io=409923MB, aggrb=403295KB/s, minb=403295KB/s, maxb=403295KB/s, mint=1040829msec, maxt=1040829msec WRITE: io=409277MB, aggrb=402659KB/s, minb=402659KB/s, maxb=402659KB/s, mint=1040829msec, maxt=1040829msec
fio_randrw_s20G_bs128k_primarycache-all_sync-default_numj40_recs1M-1.out
read : io=409763MB, bw=47614KB/s, iops=371, runt=8812485msec write: io=409437MB, bw=47576KB/s, iops=371, runt=8812485msec READ: io=409763MB, aggrb=47613KB/s, minb=47613KB/s, maxb=47613KB/s, mint=8812485msec, maxt=8812485msec WRITE: io=409437MB, aggrb=47576KB/s, minb=47576KB/s, maxb=47576KB/s, mint=8812485msec, maxt=8812485msec
fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=1M --numj=100 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_primarycache-all_sync-default_numj100-2.out
read : io=999.61GB, bw=251860KB/s, iops=245, runt=4161624msec write: io=1000.5GB, bw=252067KB/s, iops=246, runt=4161624msec READ: io=999.61GB, aggrb=251859KB/s, minb=251859KB/s, maxb=251859KB/s, mint=4161624msec, maxt=4161624msec WRITE: io=1000.5GB, aggrb=252066KB/s, minb=252066KB/s, maxb=252066KB/s, mint=4161624msec, maxt=4161624msec
6.1.1 6 pools
for i in {0..5}; do cd /pool${i}; nohup fio --rw=randrw --name=test6 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=/pool${i}/fio_pool0${i}.out & done; wait
pool0: read : io=409923MB, bw=218329KB/s, iops=213, runt=1922607msec write: io=409277MB, bw=217985KB/s, iops=212, runt=1922607msec READ: io=409923MB, aggrb=218329KB/s, minb=218329KB/s, maxb=218329KB/s, mint=1922607msec, maxt=1922607msec WRITE: io=409277MB, aggrb=217985KB/s, minb=217985KB/s, maxb=217985KB/s, mint=1922607msec, maxt=1922607msec pool1: read : io=409923MB, bw=244905KB/s, iops=239, runt=1713976msec write: io=409277MB, bw=244519KB/s, iops=238, runt=1713976msec READ: io=409923MB, aggrb=244904KB/s, minb=244904KB/s, maxb=244904KB/s, mint=1713976msec, maxt=1713976msec WRITE: io=409277MB, aggrb=244518KB/s, minb=244518KB/s, maxb=244518KB/s, mint=1713976msec, maxt=1713976msec pool2: read : io=409923MB, bw=251181KB/s, iops=245, runt=1671152msec write: io=409277MB, bw=250785KB/s, iops=244, runt=1671152msec READ: io=409923MB, aggrb=251180KB/s, minb=251180KB/s, maxb=251180KB/s, mint=1671152msec, maxt=1671152msec WRITE: io=409277MB, aggrb=250784KB/s, minb=250784KB/s, maxb=250784KB/s, mint=1671152msec, maxt=1671152msec pool3: read : io=409923MB, bw=279976KB/s, iops=273, runt=1499275msec write: io=409277MB, bw=279535KB/s, iops=272, runt=1499275msec READ: io=409923MB, aggrb=279976KB/s, minb=279976KB/s, maxb=279976KB/s, mint=1499275msec, maxt=1499275msec WRITE: io=409277MB, aggrb=279534KB/s, minb=279534KB/s, maxb=279534KB/s, mint=1499275msec, maxt=1499275msec pool4: read : io=409923MB, bw=276797KB/s, iops=270, runt=1516496msec write: io=409277MB, bw=276361KB/s, iops=269, runt=1516496msec READ: io=409923MB, aggrb=276796KB/s, minb=276796KB/s, maxb=276796KB/s, mint=1516496msec, maxt=1516496msec WRITE: io=409277MB, aggrb=276360KB/s, minb=276360KB/s, maxb=276360KB/s, mint=1516496msec, maxt=1516496msec pool5: read : io=409923MB, bw=202293KB/s, iops=197, runt=2075014msec write: io=409277MB, bw=201974KB/s, iops=197, runt=2075014msec READ: io=409923MB, aggrb=202293KB/s, minb=202293KB/s, maxb=202293KB/s, mint=2075014msec, maxt=2075014msec WRITE: io=409277MB, aggrb=201974KB/s, minb=201974KB/s, maxb=201974KB/s, mint=2075014msec, maxt=2075014msec
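A quick way to aggregate the per-pool numbers from the output files written by the loop above (a sketch; the sum in the comment is computed from the results above):
#+begin_example
grep -h 'READ:' /pool{0..5}/fio_pool0*.out \
  | sed 's/.*aggrb=\([0-9]*\)KB.*/\1/' \
  | awk '{s+=$1} END {printf "aggregate read: %d KB/s = %.0f MB/s\n", s, s/1024}'
# 218329+244905+251181+279976+276797+202293 = 1473481 KB/s ≈ 1439 MB/s
# aggregate random read across the six pools; writes are nearly identical.
#+end_example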
6.1.2 Christo HVS fio
for i in {0..5}; do cd /pool${i}; nohup fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=40G --readwrite=randrw --rwmixread=75 --output=/pool${i}/christo_hvs_pool${i}.out & done; wait
pool0: read : io=30719MB, bw=11420KB/s, iops=2854, runt=2754528msec write: io=10241MB, bw=3807.5KB/s, iops=951, runt=2754528msec cpu : usr=0.73%, sys=13.86%, ctx=1160647, majf=0, minf=6 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=7864109/w=2621651/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=30719MB, aggrb=11419KB/s, minb=11419KB/s, maxb=11419KB/s, mint=2754528msec, maxt=2754528msec WRITE: io=10241MB, aggrb=3807KB/s, minb=3807KB/s, maxb=3807KB/s, mint=2754528msec, maxt=2754528msec pool1: read : io=30719MB, bw=12014KB/s, iops=3003, runt=2618330msec write: io=10241MB, bw=4005.8KB/s, iops=1001, runt=2618330msec cpu : usr=0.69%, sys=14.06%, ctx=1334270, majf=0, minf=7 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=7864109/w=2621651/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=30719MB, aggrb=12013KB/s, minb=12013KB/s, maxb=12013KB/s, mint=2618330msec, maxt=2618330msec WRITE: io=10241MB, aggrb=4005KB/s, minb=4005KB/s, maxb=4005KB/s, mint=2618330msec, maxt=2618330msec pool2: read : io=30719MB, bw=11250KB/s, iops=2812, runt=2796119msec write: io=10241MB, bw=3750.5KB/s, iops=937, runt=2796119msec cpu : usr=0.66%, sys=14.19%, ctx=1299320, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=7864109/w=2621651/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=30719MB, aggrb=11250KB/s, minb=11250KB/s, maxb=11250KB/s, mint=2796119msec, maxt=2796119msec WRITE: io=10241MB, aggrb=3750KB/s, minb=3750KB/s, maxb=3750KB/s, mint=2796119msec, maxt=2796119msec pool3: read : io=30719MB, bw=12191KB/s, iops=3047, runt=2580382msec write: io=10241MB, bw=4063.1KB/s, iops=1015, runt=2580382msec cpu : usr=0.78%, sys=15.77%, ctx=1348554, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=7864109/w=2621651/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=30719MB, aggrb=12190KB/s, minb=12190KB/s, maxb=12190KB/s, mint=2580382msec, maxt=2580382msec WRITE: io=10241MB, aggrb=4063KB/s, minb=4063KB/s, maxb=4063KB/s, mint=2580382msec, maxt=2580382msec pool4: read : io=30719MB, bw=11125KB/s, iops=2781, runt=2827573msec write: io=10241MB, bw=3708.8KB/s, iops=927, runt=2827573msec cpu : usr=0.73%, sys=14.08%, ctx=1374325, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=7864109/w=2621651/d=0, 
short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=30719MB, aggrb=11124KB/s, minb=11124KB/s, maxb=11124KB/s, mint=2827573msec, maxt=2827573msec WRITE: io=10241MB, aggrb=3708KB/s, minb=3708KB/s, maxb=3708KB/s, mint=2827573msec, maxt=2827573msec pool5: read : io=30719MB, bw=11134KB/s, iops=2783, runt=2825363msec write: io=10241MB, bw=3711.7KB/s, iops=927, runt=2825363msec cpu : usr=0.70%, sys=14.20%, ctx=1123533, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=7864109/w=2621651/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=30719MB, aggrb=11133KB/s, minb=11133KB/s, maxb=11133KB/s, mint=2825363msec, maxt=2825363msec WRITE: io=10241MB, aggrb=3711KB/s, minb=3711KB/s, maxb=3711KB/s, mint=2825363msec, maxt=2825363msec
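For readability, what the "Christo HVS" fio profile above does (standard fio options, annotated):
#+begin_example
# --ioengine=libaio --iodepth=64   asynchronous I/O, up to 64 requests in flight
# --bs=4k --readwrite=randrw       4 KiB random mixed I/O
# --rwmixread=75                   75% reads, 25% writes
# --randrepeat=1                   repeatable random sequence across runs
# --gtod_reduce=1                  fewer clock calls; detailed latency stats disabled
#+end_example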
6.2 iozone
iozone -s 520g -r 64 -i0 -i1 -t1
Children see throughput for 1 initial writers = 2227839.00 kB/sec Children see throughput for 1 rewriters = 83769.52 kB/sec Children see throughput for 1 readers = 4593807.50 kB/sec Children see throughput for 1 re-readers = 4089786.75 kB/sec
iozone -a -g 520g -n 100m -r 1024 -i0 -i1
kB reclen write rewrite read reread
102400 1024 2005955 3888509 6462600 6005115
204800 1024 2578856 3786851 5511604 6231372
409600 1024 3097325 3799445 5928492 6114511
819200 1024 3464889 4120244 6557004 6265774
1638400 1024 3271388 3957048 6380065 5739729
3276800 1024 3329401 4408538 7080854 6789999
6553600 1024 3283769 4191677 6391342 6719758
13107200 1024 3244877 3688342 6289214 6093557
26214400 1024 3271442 3661815 6041671 5859413
52428800 1024 3352489 3450823 6971899 7332373
104857600 1024 3079120 3704646 6414798 6552022
209715200 1024 2527180 3845256 6173794 6178438
419430400 1024 1628998 4051517 6708755 6615247
iozone -a -g 520g -n 100m -i0 -i1
kB reclen write rewrite read reread read write read rewrite read fwrite frewrite fread freread 102400 64 1111776 2917126 6164658 6006879 102400 128 2210996 2644768 6247697 5942386 102400 256 2178452 3648931 5828441 5849716 102400 512 1777219 4035492 6611934 6738598 102400 1024 3001798 4224967 6914674 7006036 102400 2048 3670385 4311964 6565340 5876287 102400 4096 3054702 4275905 6646724 6683649 102400 8192 2908662 3951294 6934029 7215046 102400 16384 3014457 3772071 5300805 5010667 204800 64 1909626 2858901 5633970 5559331 204800 128 2382811 2616522 5983231 5934004 204800 256 2320025 4061304 5268579 6327629 204800 512 2107650 4444645 6842181 6441461 204800 1024 2937424 4335858 7125926 6488218 204800 2048 3430364 4265242 6749071 6422246 204800 4096 2818758 4107583 6301541 6506747 204800 8192 3221441 4119244 6720715 6590718 204800 16384 2814960 3649470 4659742 4975021 409600 64 1769810 2507026 5217439 4983459 409600 128 1899117 2843295 5736206 5316231 409600 256 2496813 3369224 6277498 6036840 409600 512 2340907 4252139 6635772 6501694 409600 1024 3257157 4270350 6639978 6607189 409600 2048 3690287 4732031 7111349 6816899 409600 4096 4156897 4217380 7159209 6863387 409600 8192 3302082 4061196 6670995 6277590 409600 16384 2838255 3452408 4755096 4592285 819200 64 1777423 2428281 4946204 4901575 819200 128 2012687 2951272 4600080 4500995 819200 256 2159642 3122866 5509301 5322141 819200 512 2567904 4061454 6248852 6176534 819200 1024 3514067 4492139 6834470 6780829 819200 2048 3522396 4091376 6661254 6570835 819200 4096 3745890 4091683 6588285 6267637 819200 8192 3436067 3664257 6163427 6408811 819200 16384 2580191 3186381 3648395 3805790 1638400 64 2120318 2803069 5239830 5365695 1638400 128 2345574 3174350 5573546 5721207 1638400 256 2220243 3346885 5517740 5534293 1638400 512 2574202 3991881 6541878 6378424 1638400 1024 2910021 3429179 5569982 5308618 1638400 2048 3176726 4165670 6581267 6087059 1638400 4096 3214925 3124886 5618541 5805933 1638400 8192 2878314 3749019 5604319 6114165 1638400 16384 2735097 3107820 3347902 3251606 3276800 64 1937977 2287381 4787311 4812376 3276800 128 2066169 2893084 4987003 5012082 3276800 256 2304081 3476113 5737529 5505164 3276800 512 2392969 3770441 6180554 6100356 3276800 1024 3068118 3789176 6348050 6588226 3276800 2048 3739123 4310853 7137943 6965110 3276800 4096 3229316 3749726 5947568 5945237 3276800 8192 3113086 3676134 6254412 6167629 3276800 16384 2772836 3447256 4662665 4570861 6553600 64 2031868 2727426 6021852 6087910 6553600 128 2466826 2851566 5908087 5390134 6553600 256 2427398 3686721 7661767 7702768 6553600 512 2221045 3360661 6051516 6024576 6553600 1024 3496648 4205762 6813118 7347769 6553600 2048 3108436 3842020 5912517 6352781 6553600 4096 3489424 3979969 6427611 6165071 6553600 8192 3076000 3435108 5692772 5954169 6553600 16384 2784121 3478845 5052132 4809476 13107200 64 2050446 2644497 5164843 5609616 13107200 128 2282084 2650991 4831753 4991393 13107200 256 2484094 3683837 6971080 7049103 13107200 512 2351889 3540549 5816790 6058222 13107200 1024 3693481 4319841 7523689 6958690 13107200 2048 3437650 3547264 6594124 6884930 13107200 4096 3257207 3417022 5720220 5836088 13107200 8192 2958508 3316280 6513037 6063037 13107200 16384 2688393 3284886 3434006 3483084 26214400 64 2173929 2857263 5977249 5936332 26214400 128 2300675 2742829 4998823 5112676 26214400 256 2386066 3665939 6565933 6476156 26214400 512 2548652 3830738 6431919 6699280 26214400 1024 3346418 4161150 6958475 6947176 26214400 2048 3500526 4155306 6472058 6518661 
26214400 4096 3365852 4055479 6736786 6741845 26214400 8192 3314360 3932889 7109068 7094798 26214400 16384 2916067 3714455 4865172 4761081 52428800 64 2116056 2399555 5248981 5482738 52428800 128 2320039 2923936 5598295 5456380 52428800 256 2470436 3451854 5363937 5903453 52428800 512 2541528 3531166 6153175 6449052 52428800 1024 3431003 4231409 6928585 6965308 52428800 2048 3479436 3740235 6111487 6513918 52428800 4096 3233582 3618249 6136682 6109986 52428800 8192 2987575 3375362 6198420 6057771 52428800 16384 2664737 3181381 3358469 3513933 104857600 64 1947325 2460286 5413506 5455679 104857600 128 2133765 2928517 5422825 5539566 104857600 256 2205667 3060470 5633781 5673073 104857600 512 2249515 3707370 6373114 6286175 104857600 1024 3170245 3704393 6358946 6676533 104857600 2048 3090272 3852830 6701589 6649975 104857600 4096 2966668 3888624 6072420 6158134 104857600 8192 2738291 3426218 6279479 6323956 104857600 16384 2460631 3364976 4682368 4730174 209715200 64 1658165 2541695 5500680 5386131 209715200 128 1888633 3094732 5748980 5781055 209715200 256 1918157 3400482 6365405 6060659 209715200 512 1964041 3558998 6283904 6450258 209715200 1024 2508677 3799966 6550717 6588509 209715200 2048 2484028 3570248 6729743 6550425 209715200 4096 2292168 3690595 6285402 6324695 209715200 8192 2084752 3483297 6473346 6455647 209715200 16384 2010613 3408264 4644144 4629238 419430400 64 1227116 2564097 5268679 5353002 419430400 128 1341298 3221678 5676650 5700971 419430400 256 1382995 3370112 5961271 5977038 419430400 512 1407826 3658843 6113909 6155162 419430400 1024 1646897 4146758 6570343 7035537 419430400 2048 1568909 3969424 6454062 6450140 419430400 4096 1466634 3963037 6124784 6118815 419430400 8192 1253661 3852958 6595677 6714129 419430400 16384 1263686 3483866 4202950 3283788
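For reference, the iozone flags used in this and the following subsections (standard iozone options, annotated):
#+begin_example
# -s N       file size per process      -r N   record (transfer) size
# -i 0/1/2   tests: 0 = write/rewrite, 1 = read/reread, 2 = random read/write
# -t N       throughput mode with N parallel children
# -a         auto mode; -n/-g set the min/max file size for auto mode
# -e         include flush (fsync/fflush) in the timing
# -w         keep (do not unlink) the temporary test files
#+end_example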
6.2.1 6 pools
for i in {0..5}; do cd /pool${i}; nohup iozone -s 520g -r 1M -i0 -i1 -t1 > /pool${i}/iozone_pool0${i}.out & done; wait
pool0: Children see throughput for 1 initial writers = 1454790.62 kB/sec Children see throughput for 1 rewriters = 2594000.75 kB/sec Children see throughput for 1 readers = 2913904.25 kB/sec Children see throughput for 1 re-readers = 2984400.50 kB/sec pool1: Children see throughput for 1 initial writers = 1458839.50 kB/sec Children see throughput for 1 rewriters = 2224705.00 kB/sec Children see throughput for 1 readers = 3059833.00 kB/sec Children see throughput for 1 re-readers = 2928956.00 kB/sec pool2: Children see throughput for 1 initial writers = 1461552.88 kB/sec Children see throughput for 1 rewriters = 2194462.75 kB/sec Children see throughput for 1 readers = 3076502.00 kB/sec Children see throughput for 1 re-readers = 2935310.50 kB/sec pool3: Children see throughput for 1 initial writers = 1459408.50 kB/sec Children see throughput for 1 rewriters = 2190424.50 kB/sec Children see throughput for 1 readers = 3075660.00 kB/sec Children see throughput for 1 re-readers = 2952353.75 kB/sec pool4: Children see throughput for 1 initial writers = 1471717.25 kB/sec Children see throughput for 1 rewriters = 2584366.00 kB/sec Children see throughput for 1 readers = 2943485.00 kB/sec Children see throughput for 1 re-readers = 2980121.75 kB/sec pool5: Children see throughput for 1 initial writers = 1465863.62 kB/sec Children see throughput for 1 rewriters = 2215621.00 kB/sec Children see throughput for 1 readers = 3059053.00 kB/sec Children see throughput for 1 re-readers = 2920575.50 kB/sec
6.2.2 Christo HVS iozone command
root@lxmds12:~# for i in {0..5}; do cd /pool${i}; nohup iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > /pool${i}/iozone_pool${i}.out & done; wait
pool0: Children see throughput for 16 initial writers = 2212924.43 kB/sec Children see throughput for 16 rewriters = 2039193.58 kB/sec Children see throughput for 16 readers = 2932246.09 kB/sec Children see throughput for 16 re-readers = 3069638.73 kB/sec Children see throughput for 16 random readers = 376803.02 kB/sec pool1: Children see throughput for 16 initial writers = 2282083.12 kB/sec Children see throughput for 16 rewriters = 2039030.29 kB/sec Children see throughput for 16 readers = 2883482.70 kB/sec Children see throughput for 16 re-readers = 3005910.45 kB/sec Children see throughput for 16 random readers = 376314.08 kB/sec Children see throughput for 16 random writers = 217606.85 kB/sec pool2: Children see throughput for 16 initial writers = 2273513.69 kB/sec Children see throughput for 16 rewriters = 2084862.71 kB/sec Children see throughput for 16 readers = 3074259.67 kB/sec Children see throughput for 16 re-readers = 2988783.98 kB/sec Children see throughput for 16 random readers = 378769.07 kB/sec Children see throughput for 16 random writers = 219470.43 kB/sec pool3: Children see throughput for 16 initial writers = 2296605.92 kB/sec Children see throughput for 16 rewriters = 2033474.57 kB/sec Children see throughput for 16 readers = 2901075.34 kB/sec Children see throughput for 16 re-readers = 2950656.30 kB/sec Children see throughput for 16 random readers = 373782.30 kB/sec Children see throughput for 16 random writers = 219392.01 kB/sec pool4: Children see throughput for 16 initial writers = 2245470.23 kB/sec Children see throughput for 16 rewriters = 2029820.04 kB/sec Children see throughput for 16 readers = 2950151.30 kB/sec Children see throughput for 16 re-readers = 3004277.19 kB/sec Children see throughput for 16 random readers = 375007.84 kB/sec Children see throughput for 16 random writers = 217633.73 kB/sec pool5: Children see throughput for 16 initial writers = 2266218.69 kB/sec Children see throughput for 16 rewriters = 2034736.67 kB/sec Children see throughput for 16 readers = 2941146.58 kB/sec Children see throughput for 16 re-readers = 3027472.05 kB/sec Children see throughput for 16 random readers = 374811.35 kB/sec Children see throughput for 16 random writers = 219680.01 kB/sec
6.3 iotest
root@lxmds12:/bigpool/iotest0# /root/iotest/seq_io_test -b 64K -t 520G -w -f /bigpool/iotest0/iotest.test
2020-05-06 14:56:21|2020-05-06 15:06:19|598|933|iotest-64K-520G
root@lxmds12:/bigpool/iotest0# /root/iotest/seq_io_test -b 1M -t 520G -w -f /bigpool/iotest0/iotest0.test
2020-05-06 15:11:57|2020-05-06 15:14:16|139|4016|iotest-1M-520G
root@lxmds12:/bigpool/iotest0# /root/iotest/seq_io_test -b 128K -t 520G -w -f /bigpool/iotest0/iotest1.test
2020-05-06 15:24:49|2020-05-06 15:28:09|200|2791|iotest-128K-520G
6 pools with 10 disks each:
lxmds12-3|2020-05-06 16:43:15|2020-05-06 17:01:35|1100|507|iotest-1M-520G lxmds12-1|2020-05-06 16:43:15|2020-05-06 17:01:36|1101|507|iotest-1M-520G lxmds12-0|2020-05-06 16:43:15|2020-05-06 17:01:38|1103|506|iotest-1M-520G lxmds12-2|2020-05-06 16:43:15|2020-05-06 17:01:40|1105|505|iotest-1M-520G lxmds12-4|2020-05-06 16:43:15|2020-05-06 17:02:01|1126|495|iotest-1M-520G lxmds12-5|2020-05-06 16:43:15|2020-05-06 17:02:19|1144|488|iotest-1M-520G
root@lxmds12:~# for i in {0..5}; do /root/iotest/seq_io_test -b 1M -t 520G -w -f /pool${i}/test.tmp -S $((${i}+1)) | xargs -I {} echo `hostname`"-${i}|"{} & done; wait
lxmds12-5|2020-05-07 09:49:55|2020-05-07 10:09:56|1202|464|iotest-1M-520G
lxmds12-5|2020-05-07 09:49:33|2020-05-07 10:09:56|1223|456|iotest-1M-520G
lxmds12-4|2020-05-07 09:49:33|2020-05-07 10:10:07|1233|452|iotest-1M-520G
lxmds12-4|2020-05-07 09:49:55|2020-05-07 10:10:07|1212|460|iotest-1M-520G
lxmds12-1|2020-05-07 09:49:33|2020-05-07 10:10:11|1238|451|iotest-1M-520G
lxmds12-1|2020-05-07 09:49:54|2020-05-07 10:10:11|1217|458|iotest-1M-520G
lxmds12-3|2020-05-07 09:49:33|2020-05-07 10:10:13|1240|450|iotest-1M-520G
lxmds12-3|2020-05-07 09:49:55|2020-05-07 10:10:13|1219|458|iotest-1M-520G
lxmds12-2|2020-05-07 09:49:54|2020-05-07 10:10:19|1224|456|iotest-1M-520G
lxmds12-2|2020-05-07 09:49:33|2020-05-07 10:10:19|1246|448|iotest-1M-520G
lxmds12-0|2020-05-07 09:49:54|2020-05-07 10:10:45|1251|446|iotest-1M-520G
lxmds12-0|2020-05-07 09:49:33|2020-05-07 10:10:45|1272|438|iotest-1M-520G
root@lxmds12:~# for i in {0..5}; do /root/iotest/seq_io_test -b 128K -t 520G -w -f /pool${i}/test1.tmp -S $((${i}+1)) | xargs -I {} echo `hostname`"-${i}|"{} & done; wait
lxmds12-4|2020-05-07 10:19:51|2020-05-07 10:36:14|984|567|iotest-128K-520G
lxmds12-2|2020-05-07 10:19:51|2020-05-07 10:36:32|1001|557|iotest-128K-520G
lxmds12-0|2020-05-07 10:19:51|2020-05-07 10:36:34|1003|556|iotest-128K-520G
lxmds12-5|2020-05-07 10:19:51|2020-05-07 10:36:38|1007|554|iotest-128K-520G
lxmds12-1|2020-05-07 10:19:51|2020-05-07 10:37:36|1066|523|iotest-128K-520G
lxmds12-3|2020-05-07 10:19:51|2020-05-07 10:37:39|1068|522|iotest-128K-520G
root@lxmds12:~# for i in {0..5}; do /root/iotest/seq_io_test -b 128K -t 520G -w -f /pool${i}/test2.tmp -S $((${i}+1)) | xargs -I {} echo `hostname`"-${i}|"{} & done; wait
lxmds12-5|2020-05-07 11:21:07|2020-05-07 11:37:07|960|581|iotest-128K-520G
lxmds12-2|2020-05-07 11:21:07|2020-05-07 11:37:28|980|569|iotest-128K-520G
lxmds12-0|2020-05-07 11:21:07|2020-05-07 11:38:07|1019|547|iotest-128K-520G
lxmds12-4|2020-05-07 11:21:07|2020-05-07 11:38:20|1032|541|iotest-128K-520G
lxmds12-3|2020-05-07 11:21:07|2020-05-07 11:38:20|1032|541|iotest-128K-520G
lxmds12-1|2020-05-07 11:21:07|2020-05-07 11:38:24|1037|538|iotest-128K-520G
root@lxmds12:~# for i in {0..5}; do /root/iotest/seq_io_test -b 1M -t 520G -w -f /pool${i}/test3.tmp -S $((${i}+1)) | xargs -I {} echo `hostname`"-${i}|"{} & done; wait
lxmds12-2|2020-05-07 11:47:12|2020-05-07 12:03:11|959|582|iotest-1M-520G
lxmds12-4|2020-05-07 11:47:12|2020-05-07 12:03:18|966|577|iotest-1M-520G
lxmds12-5|2020-05-07 11:47:12|2020-05-07 12:03:21|969|576|iotest-1M-520G
lxmds12-1|2020-05-07 11:47:12|2020-05-07 12:03:26|974|573|iotest-1M-520G
lxmds12-3|2020-05-07 11:47:12|2020-05-07 12:03:55|1002|557|iotest-1M-520G
lxmds12-0|2020-05-07 11:47:12|2020-05-07 12:04:17|1025|544|iotest-1M-520G
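The pipe-separated seq_io_test lines appear to be host-pool|start|end|elapsed seconds|MB/s|label (consistent with 520 GiB ≈ 558000 MB written in 1100 s ≈ 507 MB/s). A sketch to aggregate one parallel run; iotest_run.txt is a placeholder for the collected lines:
#+begin_example
awk -F'|' '{s+=$5} END {printf "aggregate: %d MB/s\n", s}' iotest_run.txt
# e.g. the first 6-pool 1M run above: 507+507+506+505+495+488 = 3008 MB/s
#+end_example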
6.4 rm
root@lxmds12:/# ls pool*
pool0: christo_hvs_pool0.out iozone.DUMMY.10 iozone.DUMMY.13 iozone.DUMMY.2 iozone.DUMMY.5 iozone.DUMMY.8 nohup.out test2.tmp test5.tmp iozone.DUMMY.0 iozone.DUMMY.11 iozone.DUMMY.14 iozone.DUMMY.3 iozone.DUMMY.6 iozone.DUMMY.9 test test3.tmp test.tmp iozone.DUMMY.1 iozone.DUMMY.12 iozone.DUMMY.15 iozone.DUMMY.4 iozone.DUMMY.7 iozone_pool0.out test1.tmp test4.tmp
pool1: christo_hvs_pool1.out iozone.DUMMY.10 iozone.DUMMY.13 iozone.DUMMY.2 iozone.DUMMY.5 iozone.DUMMY.8 nohup.out test2.tmp test5.tmp iozone.DUMMY.0 iozone.DUMMY.11 iozone.DUMMY.14 iozone.DUMMY.3 iozone.DUMMY.6 iozone.DUMMY.9 test test3.tmp test.tmp iozone.DUMMY.1 iozone.DUMMY.12 iozone.DUMMY.15 iozone.DUMMY.4 iozone.DUMMY.7 iozone_pool1.out test1.tmp test4.tmp
pool2: christo_hvs_pool2.out iozone.DUMMY.10 iozone.DUMMY.13 iozone.DUMMY.2 iozone.DUMMY.5 iozone.DUMMY.8 nohup.out test2.tmp test5.tmp iozone.DUMMY.0 iozone.DUMMY.11 iozone.DUMMY.14 iozone.DUMMY.3 iozone.DUMMY.6 iozone.DUMMY.9 test test3.tmp test.tmp iozone.DUMMY.1 iozone.DUMMY.12 iozone.DUMMY.15 iozone.DUMMY.4 iozone.DUMMY.7 iozone_pool2.out test1.tmp test4.tmp
pool3: christo_hvs_pool3.out iozone.DUMMY.10 iozone.DUMMY.13 iozone.DUMMY.2 iozone.DUMMY.5 iozone.DUMMY.8 nohup.out test2.tmp test5.tmp iozone.DUMMY.0 iozone.DUMMY.11 iozone.DUMMY.14 iozone.DUMMY.3 iozone.DUMMY.6 iozone.DUMMY.9 test test3.tmp test.tmp iozone.DUMMY.1 iozone.DUMMY.12 iozone.DUMMY.15 iozone.DUMMY.4 iozone.DUMMY.7 iozone_pool3.out test1.tmp test4.tmp
pool4: christo_hvs_pool4.out iozone.DUMMY.10 iozone.DUMMY.13 iozone.DUMMY.2 iozone.DUMMY.5 iozone.DUMMY.8 nohup.out test2.tmp test5.tmp iozone.DUMMY.0 iozone.DUMMY.11 iozone.DUMMY.14 iozone.DUMMY.3 iozone.DUMMY.6 iozone.DUMMY.9 test test3.tmp test.tmp iozone.DUMMY.1 iozone.DUMMY.12 iozone.DUMMY.15 iozone.DUMMY.4 iozone.DUMMY.7 iozone_pool4.out test1.tmp test4.tmp
pool5: christo_hvs_pool5.out iozone.DUMMY.10 iozone.DUMMY.13 iozone.DUMMY.2 iozone.DUMMY.5 iozone.DUMMY.8 nohup.out test2.tmp test5.tmp iozone.DUMMY.0 iozone.DUMMY.11 iozone.DUMMY.14 iozone.DUMMY.3 iozone.DUMMY.6 iozone.DUMMY.9 test test3.tmp test.tmp iozone.DUMMY.1 iozone.DUMMY.12 iozone.DUMMY.15 iozone.DUMMY.4 iozone.DUMMY.7 iozone_pool5.out test1.tmp test4.tmp
root@lxmds12:/# time rm -f pool*/iozone.DUMMY*
real    0m7.553s
user    0m0.004s
sys     0m1.120s
6.5 resilver
- Destroy one pool to get a free disk, overwrite that disk, zpool offline a disk in pool2, then zpool replace it with the free disk.
pool: pool2 state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Wed May 20 09:10:12 2020 12.6T scanned out of 97.2T at 883M/s, 27h52m to go 1.26T resilvered, 13.01% done config: NAME STATE READ WRITE CKSUM pool2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 encf20 ONLINE 0 0 0 encf21 ONLINE 0 0 0 encf22 ONLINE 0 0 0 encf23 ONLINE 0 0 0 encf24 ONLINE 0 0 0 replacing-5 OFFLINE 0 0 0 encf25 OFFLINE 0 0 0 encf30 ONLINE 0 0 0 (resilvering) encf26 ONLINE 0 0 0 encf27 ONLINE 0 0 0 encf28 ONLINE 0 0 0 encf29 ONLINE 0 0 0
pool: pool2 state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Wed May 20 09:10:12 2020 17.2T scanned out of 97.2T at 502M/s, 46h28m to go 1.71T resilvered, 17.67% done config: NAME STATE READ WRITE CKSUM pool2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 encf20 ONLINE 0 0 0 encf21 ONLINE 0 0 0 encf22 ONLINE 0 0 0 encf23 ONLINE 0 0 0 encf24 ONLINE 0 0 0 replacing-5 OFFLINE 0 0 0 encf25 OFFLINE 0 0 0 encf30 ONLINE 0 0 0 (resilvering) encf26 ONLINE 0 0 0 encf27 ONLINE 0 0 0 encf28 ONLINE 0 0 0 encf29 ONLINE 0 0 0
pool: pool2 state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Wed May 20 09:10:12 2020 17.5T scanned out of 97.2T at 461M/s, 50h19m to go 1.74T resilvered, 17.99% done config: NAME STATE READ WRITE CKSUM pool2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 encf20 ONLINE 0 0 0 encf21 ONLINE 0 0 0 encf22 ONLINE 0 0 0 encf23 ONLINE 0 0 0 encf24 ONLINE 0 0 0 replacing-5 OFFLINE 0 0 0 encf25 OFFLINE 0 0 0 encf30 ONLINE 0 0 0 (resilvering) encf26 ONLINE 0 0 0 encf27 ONLINE 0 0 0 encf28 ONLINE 0 0 0 encf29 ONLINE 0 0 0
pool: pool2 state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Wed May 20 09:10:12 2020 58.6T scanned out of 97.2T at 738M/s, 15h14m to go 5.83T resilvered, 60.28% done config: NAME STATE READ WRITE CKSUM pool2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 encf20 ONLINE 0 0 0 encf21 ONLINE 0 0 0 encf22 ONLINE 0 0 0 encf23 ONLINE 0 0 0 encf24 ONLINE 0 0 0 replacing-5 OFFLINE 0 0 0 encf25 OFFLINE 0 0 0 encf30 ONLINE 0 0 0 (resilvering) encf26 ONLINE 0 0 0 encf27 ONLINE 0 0 0 encf28 ONLINE 0 0 0 encf29 ONLINE 0 0 0
pool: pool2 state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Wed May 20 09:10:12 2020 59.3T scanned out of 97.2T at 515M/s, 21h24m to go 5.90T resilvered, 61.03% done config: NAME STATE READ WRITE CKSUM pool2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 encf20 ONLINE 0 0 0 encf21 ONLINE 0 0 0 encf22 ONLINE 0 0 0 encf23 ONLINE 0 0 0 encf24 ONLINE 0 0 0 replacing-5 OFFLINE 0 0 0 encf25 OFFLINE 0 0 0 encf30 ONLINE 0 0 0 (resilvering) encf26 ONLINE 0 0 0 encf27 ONLINE 0 0 0 encf28 ONLINE 0 0 0 encf29 ONLINE 0 0 0
pool: pool2 state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Wed May 20 09:10:12 2020 59.4T scanned out of 97.2T at 432M/s, 25h28m to go 5.91T resilvered, 61.10% done config: NAME STATE READ WRITE CKSUM pool2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 encf20 ONLINE 0 0 0 encf21 ONLINE 0 0 0 encf22 ONLINE 0 0 0 encf23 ONLINE 0 0 0 encf24 ONLINE 0 0 0 replacing-5 OFFLINE 0 0 0 encf25 OFFLINE 0 0 0 encf30 ONLINE 0 0 0 (resilvering) encf26 ONLINE 0 0 0 encf27 ONLINE 0 0 0 encf28 ONLINE 0 0 0 encf29 ONLINE 0 0 0
pool: pool2 state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Wed May 20 09:10:12 2020 88.9T scanned out of 97.2T at 508M/s, 4h47m to go 8.84T resilvered, 91.41% done config: NAME STATE READ WRITE CKSUM pool2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 encf20 ONLINE 0 0 0 encf21 ONLINE 0 0 0 encf22 ONLINE 0 0 0 encf23 ONLINE 0 0 0 encf24 ONLINE 0 0 0 replacing-5 OFFLINE 0 0 0 encf25 OFFLINE 0 0 0 encf30 ONLINE 0 0 0 (resilvering) encf26 ONLINE 0 0 0 encf27 ONLINE 0 0 0 encf28 ONLINE 0 0 0 encf29 ONLINE 0 0 0
pool: pool2 state: ONLINE scan: resilvered 9.52T in 52h52m with 0 errors on Fri May 22 14:02:43 2020 config: NAME STATE READ WRITE CKSUM pool2 ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 encf20 ONLINE 0 0 0 encf21 ONLINE 0 0 0 encf22 ONLINE 0 0 0 encf23 ONLINE 0 0 0 encf24 ONLINE 0 0 0 encf30 ONLINE 0 0 0 encf26 ONLINE 0 0 0 encf27 ONLINE 0 0 0 encf28 ONLINE 0 0 0 encf29 ONLINE 0 0 0
- 9.52/0.83 = 11.4698795181
- 9.52 TiB resilvered = 9.52*1024*1024 = 9982443.52 MiB
- 52h52m = 52*3600 + 52*60 = 190320 s
- 9982443.52 MiB / 190320 s = 52.45 MiB/s average resilver rate
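The same average as a one-liner:
#+begin_example
# 9.52 TiB resilvered in 52h52m
echo "scale=2; 9.52*1024*1024/(52*3600+52*60)" | bc   # 52.45 MiB/s
#+end_example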
6.6 fio
- Run fio in parallel with the resilver:
root@lxmds12:/pool2# nohup fio --rw=randrw --name=testA --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=/pool2/fio_pool02_resilver.out &
read : io=409923MB, bw=306382KB/s, iops=299, runt=1370060msec write: io=409277MB, bw=305899KB/s, iops=298, runt=1370060msec READ: io=409923MB, aggrb=306381KB/s, minb=306381KB/s, maxb=306381KB/s, mint=1370060msec, maxt=1370060msec WRITE: io=409277MB, aggrb=305898KB/s, minb=305898KB/s, maxb=305898KB/s, mint=1370060msec, maxt=1370060msec
root@lxmds12:/pool2# nohup fio --rw=randrw --name=testB --rwmixread=75 --gtod_reduce=1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=/pool2/fio_pool2_resilver-1.out &
read : io=614430MB, bw=45884KB/s, iops=358, runt=13712381msec write: io=204771MB, bw=15292KB/s, iops=119, runt=13712381msec READ: io=614430MB, aggrb=45883KB/s, minb=45883KB/s, maxb=45883KB/s, mint=13712381msec, maxt=13712381msec WRITE: io=204771MB, aggrb=15291KB/s, minb=15291KB/s, maxb=15291KB/s, mint=13712381msec, maxt=13712381msec
- Run in parallel:
dd if=/pool2/test4.tmp of=test6.tmp conv=notrunc
root@lxmds12:/pool2# nohup fio --rw=randrw --name=testC --rwmixread=75 --gtod_reduce=1 --direct=0 --fallocate=none --bs=600k --numj=40 --group_reporting --size=20G --output=/pool2/fio_pool2_resilver-2.out &
read : io=614649MB, bw=240433KB/s, iops=400, runt=2617783msec write: io=204538MB, bw=80009KB/s, iops=133, runt=2617783msec READ: io=614649MB, aggrb=240432KB/s, minb=240432KB/s, maxb=240432KB/s, mint=2617783msec, maxt=2617783msec WRITE: io=204538MB, aggrb=80009KB/s, minb=80009KB/s, maxb=80009KB/s, mint=2617783msec, maxt=2617783msec
root@lxmds12:/pool2# nohup fio --rw=randrw --name=testD --rwmixread=75 --gtod_reduce=1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=60G --output=/pool2/fio_pool2_resilver-3.out &
read : io=1799.9GB, bw=32804KB/s, iops=256, runt=57529756msec write: io=614608MB, bw=10940KB/s, iops=85, runt=57529756msec READ: io=1799.9GB, aggrb=32804KB/s, minb=32804KB/s, maxb=32804KB/s, mint=57529756msec, maxt=57529756msec WRITE: io=614608MB, aggrb=10939KB/s, minb=10939KB/s, maxb=10939KB/s, mint=57529756msec, maxt=57529756msec
root@lxmds12:/pool2# nohup fio --rw=randrw --name=testE --rwmixread=75 --gtod_reduce=1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=60G --output=/pool2/fio_pool2_resilver-4.out &
testE: (groupid=0, jobs=40): err= 0: pid=2203: Sat May 23 19:48:55 2020 read : io=1799.9GB, bw=20716KB/s, iops=161, runt=91097816msec write: io=614608MB, bw=6908.7KB/s, iops=53, runt=91097816msec cpu : usr=0.00%, sys=0.06%, ctx=16065136, majf=0, minf=178 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=14743938/w=4916862/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=1 Run status group 0 (all jobs): READ: io=1799.9GB, aggrb=20716KB/s, minb=20716KB/s, maxb=20716KB/s, mint=91097816msec, maxt=91097816msec WRITE: io=614608MB, aggrb=6908KB/s, minb=6908KB/s, maxb=6908KB/s, mint=91097816msec, maxt=91097816msec
6.7 Half the RAM
Removed half of the RAM from lxmds12.
6.7.1 fio
- The previous fio (fio --rw=randrw --rwmixread=75 --bs=128k --numj=40 --size=60G) did not finish in a reasonable time; aborted.
root@lxmds12:/pool2# nohup fio --rw=randrw --name=testF --rwmixread=75 --gtod_reduce=1 --direct=0 --fallocate=none --bs=128k --numj=20 --group_reporting --size=60G --output=/pool2/fio_pool2_resilver-5.out &
read : io=921559MB, bw=28757KB/s, iops=224, runt=32815410msec write: io=307241MB, bw=9587.5KB/s, iops=74, runt=32815410msec READ: io=921559MB, aggrb=28757KB/s, minb=28757KB/s, maxb=28757KB/s, mint=32815410msec, maxt=32815410msec WRITE: io=307241MB, aggrb=9587KB/s, minb=9587KB/s, maxb=9587KB/s, mint=32815410msec, maxt=32815410msec
6.7.2 iozone
root@lxmds12:/pool2# nohup iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > /pool2/iozone_pool2-7.out &
Children see throughput for 16 initial writers = 6964797.25 kB/sec Children see throughput for 16 rewriters = 3513521.81 kB/sec Children see throughput for 16 readers = 8122763.59 kB/sec Children see throughput for 16 re-readers = 7408842.03 kB/sec Children see throughput for 16 random readers = 259079.39 kB/sec Children see throughput for 16 random writers = 171948.20 kB/sec
root@lxmds12:~# for i in {0..5}; do cd /pool${i}; nohup iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > /pool${i}/iozone_pool${i}.out & done;
pool0: Children see throughput for 16 initial writers = 1161573.14 kB/sec Children see throughput for 16 rewriters = 1061488.07 kB/sec Children see throughput for 16 readers = 1524302.39 kB/sec Children see throughput for 16 re-readers = 1573654.38 kB/sec Children see throughput for 16 random readers = 313600.74 kB/sec Children see throughput for 16 random writers = 184468.65 kB/sec pool1: Children see throughput for 16 initial writers = 1134758.05 kB/sec Children see throughput for 16 rewriters = 1096485.42 kB/sec Children see throughput for 16 readers = 1584781.29 kB/sec Children see throughput for 16 re-readers = 1504147.81 kB/sec Children see throughput for 16 random readers = 311832.94 kB/sec Children see throughput for 16 random writers = 186149.00 kB/sec pool2: Children see throughput for 16 initial writers = 470445.43 kB/sec Children see throughput for 16 rewriters = 969298.48 kB/sec Children see throughput for 16 readers = 4817592.12 kB/sec Children see throughput for 16 re-readers = 4777285.81 kB/sec Children see throughput for 16 random readers = 335100.19 kB/sec Children see throughput for 16 random writers = 178190.00 kB/sec pool3: Children see throughput for 16 initial writers = 1146502.84 kB/sec Children see throughput for 16 rewriters = 1089688.30 kB/sec Children see throughput for 16 readers = 1560928.18 kB/sec Children see throughput for 16 re-readers = 1532869.68 kB/sec Children see throughput for 16 random readers = 312915.50 kB/sec Children see throughput for 16 random writers = 192156.75 kB/sec pool4: Children see throughput for 16 initial writers = 1122061.44 kB/sec Children see throughput for 16 rewriters = 1076516.93 kB/sec Children see throughput for 16 readers = 1527571.52 kB/sec Children see throughput for 16 re-readers = 1517127.38 kB/sec Children see throughput for 16 random readers = 312267.35 kB/sec Children see throughput for 16 random writers = 191226.53 kB/sec pool5: Children see throughput for 16 initial writers = 1137396.80 kB/sec Children see throughput for 16 rewriters = 1090363.91 kB/sec Children see throughput for 16 readers = 1591561.07 kB/sec Children see throughput for 16 re-readers = 1495204.04 kB/sec Children see throughput for 16 random readers = 312947.40 kB/sec Children see throughput for 16 random writers = 192246.54 kB/sec
root@lxmds12:~# for i in {0..5}; do cd /pool${i}; nohup iozone -s 520g -r 1M -i0 -i1 -t1 > /pool${i}/iozone_poolA0${i}.out & done;
pool0: Children see throughput for 1 initial writers = 1520617.88 kB/sec Children see throughput for 1 rewriters = 1485256.00 kB/sec Children see throughput for 1 readers = 1545185.38 kB/sec Children see throughput for 1 re-readers = 1565792.50 kB/sec pool1: Children see throughput for 1 initial writers = 1525333.00 kB/sec Children see throughput for 1 rewriters = 1504559.75 kB/sec Children see throughput for 1 readers = 1565434.38 kB/sec Children see throughput for 1 re-readers = 1529915.50 kB/sec pool2: Children see throughput for 1 initial writers = 1531136.75 kB/sec Children see throughput for 1 rewriters = 1492354.50 kB/sec Children see throughput for 1 readers = 1554275.25 kB/sec Children see throughput for 1 re-readers = 1542847.38 kB/sec pool3: Children see throughput for 1 initial writers = 1496990.75 kB/sec Children see throughput for 1 rewriters = 1457950.62 kB/sec Children see throughput for 1 readers = 1534591.00 kB/sec Children see throughput for 1 re-readers = 1578089.50 kB/sec pool4: Children see throughput for 1 initial writers = 1499981.12 kB/sec Children see throughput for 1 rewriters = 1452061.75 kB/sec Children see throughput for 1 readers = 1543243.88 kB/sec Children see throughput for 1 re-readers = 1584515.88 kB/sec pool5: Children see throughput for 1 initial writers = 1522634.50 kB/sec Children see throughput for 1 rewriters = 1388333.38 kB/sec Children see throughput for 1 readers = 1553251.62 kB/sec Children see throughput for 1 re-readers = 1606568.00 kB/sec
7 lxdr04
7.1 iotest
7.1.1 sipool with 30 disks
root@lxdr04:/sipool# /root/seq_io_test -b 128K -t 400G -w -f /sipool/iotest1.test
2020-05-07 10:05:34|2020-05-07 10:08:27|173|2482|iotest-128K-400G
root@lxdr04:/sipool# /root/seq_io_test -b 128K -t 400G -w -f /sipool/iotest2.test
2020-05-07 10:20:24|2020-05-07 10:23:13|169|2541|iotest-128K-400G
root@lxdr04:/sipool# /root/seq_io_test -b 128K -t 400G -w -f /sipool/iotest3.test
2020-05-07 10:24:15|2020-05-07 10:27:08|174|2468|iotest-128K-400G
root@lxdr04:/sipool# /root/seq_io_test -b 1M -t 400G -w -f /sipool/iotest4.test
2020-05-07 10:29:13|2020-05-07 10:32:00|167|2571|iotest-1M-400G
root@lxdr04:/sipool# /root/seq_io_test -b 1M -t 400G -w -f /sipool/iotest5.test
2020-05-07 11:14:12|2020-05-07 11:16:58|166|2587|iotest-1M-400G
- (2482+2541+2468+2571+2587)/5 = 2530 MB/s
7.1.2 3 pools with 10, 10, 12 disks
root@lxdr04:~# for i in {0..2}; do /root/seq_io_test -b 1M -t 400G -w -f /sipool${i}/test.tmp -S $((${i}+1)) | xargs -I {} echo `hostname`"-${i}|"{} & done; wait
lxdr04-2|2020-05-07 16:44:12|2020-05-07 16:50:15|363|1183|iotest-1M-400G lxdr04-0|2020-05-07 16:44:12|2020-05-07 16:54:43|631|680|iotest-1M-400G lxdr04-1|2020-05-07 16:44:12|2020-05-07 16:54:47|635|676|iotest-1M-400G
- 1183+680+676 = 2539 MB/s (sum of the per-pool rates)
7.2 iozone
iozone -s 520g -r 64 -i0 -i1 -t1
Children see throughput for 1 initial writers = 2254835.00 kB/sec Children see throughput for 1 rewriters = 268091.91 kB/sec Children see throughput for 1 readers = 4501600.00 kB/sec Children see throughput for 1 re-readers = 4393497.00 kB/sec
- Write: 2254835/1024 = 2202 MB/s
- Read: 4501600/1024 = 4396 MB/s
7.2.1 3 pools
for i in {0..2}; do cd /sipool${i}; iozone -s 400g -r 1M -i0 -i1 -t1 -w > /sipool${i}/iozone_sipool0${i}.out & done; wait
pool0: Children see throughput for 1 initial writers = 3815857.50 kB/sec Children see throughput for 1 rewriters = 2168645.50 kB/sec Children see throughput for 1 readers = 4126025.00 kB/sec Children see throughput for 1 re-readers = 4256738.00 kB/sec pool1: Children see throughput for 1 initial writers = 3668504.25 kB/sec Children see throughput for 1 rewriters = 2178494.25 kB/sec Children see throughput for 1 readers = 4371497.50 kB/sec Children see throughput for 1 re-readers = 4863591.50 kB/sec pool2: Children see throughput for 1 initial writers = 3717757.00 kB/sec Children see throughput for 1 rewriters = 2351752.75 kB/sec Children see throughput for 1 readers = 4660108.50 kB/sec Children see throughput for 1 re-readers = 4672842.00 kB/sec
- Write: (3815857.50 + 3668504.25+ 3717757.00)/1024 = 10940 MB/s
- Read: (4126025 + 4371497.5+ 4660108.5)/1024 = 12849 MB/s
7.2.2 Christo HVS iozone command
root@lxdr04:/# for i in {0..2}; do cd /sipool${i}; nohup iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > /sipool${i}/iozone_sipool1${i}.out & done; wait
pool0: Children see throughput for 16 initial writers = 5338606.73 kB/sec Children see throughput for 16 rewriters = 1897715.18 kB/sec Children see throughput for 16 readers = 11497237.69 kB/sec Children see throughput for 16 re-readers = 13217280.19 kB/sec Children see throughput for 16 random readers = 297445.17 kB/sec Children see throughput for 16 random writers = 144266.70 kB/sec pool1: Children see throughput for 16 initial writers = 5509333.81 kB/sec Children see throughput for 16 rewriters = 1920630.30 kB/sec Children see throughput for 16 readers = 14468874.62 kB/sec Children see throughput for 16 re-readers = 13946359.19 kB/sec Children see throughput for 16 random readers = 297305.86 kB/sec Children see throughput for 16 random writers = 143709.06 kB/sec pool2: Children see throughput for 16 initial writers = 5574532.69 kB/sec Children see throughput for 16 rewriters = 2815750.33 kB/sec Children see throughput for 16 readers = 16874660.44 kB/sec Children see throughput for 16 re-readers = 16844021.25 kB/sec Children see throughput for 16 random readers = 300778.95 kB/sec Children see throughput for 16 random writers = 158320.99 kB/sec
- 16 seq w: (5338606.73 + 5509333.81 + 5574532.69)/1024 = 16038 MB/s
- 16 seq r: (11497237.69+14468874.62+16874660.44)/1024 = 41837 MB/s
- 16 rand w: (144266.70+143709.06+158320.99)/1024 = 436 MB/s
- 16 rand r: (297445.17+297305.86+300778.95)/1024 = 875 MB/s
root@lxdr04:/sipool0# nohup iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out &
root@lxdr04:/sipool1# nohup iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out &
root@lxdr04:/sipool2# nohup iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out &
pool0: Children see throughput for 16 initial writers = 3064535.12 kB/sec Children see throughput for 16 rewriters = 917678.12 kB/sec Children see throughput for 16 readers = 15025873.25 kB/sec Children see throughput for 16 re-readers = 11652204.88 kB/sec Children see throughput for 16 random readers = 218416.57 kB/sec Children see throughput for 16 random writers = 132769.98 kB/sec pool1: Children see throughput for 16 initial writers = 6670249.44 kB/sec Children see throughput for 16 rewriters = 1947937.26 kB/sec Children see throughput for 16 readers = 16891559.38 kB/sec Children see throughput for 16 re-readers = 14317268.62 kB/sec Children see throughput for 16 random readers = 323277.88 kB/sec Children see throughput for 16 random writers = 156469.99 kB/sec pool2: Children see throughput for 16 initial writers = 6443300.75 kB/sec Children see throughput for 16 rewriters = 2481872.39 kB/sec Children see throughput for 16 readers = 18979199.38 kB/sec Children see throughput for 16 re-readers = 21768917.38 kB/sec Children see throughput for 16 random readers = 288363.62 kB/sec Children see throughput for 16 random writers = 156084.20 kB/sec
- 48 w: ( 3064535.12+6670249.44+6443300.75)/1024 = 15799 MB/s
- 48 r: (15025873.25+16891559.38+18979199.38)/1024 = 49704 MB/s
- 48 rand r: (218416.57+323277.88+288363.62)/1024 = 811 MB/s
- 48 rand w: (132769.98+156469.99+156084.20)/1024 = 435 MB/s
nohup iozone -s 30g -r 1m -i0 -i1 -i2 -t16 -e -w > christoiozone1.out &
pool0: Children see throughput for 16 initial writers = 7413708.72 kB/sec Children see throughput for 16 rewriters = 7214219.72 kB/sec Children see throughput for 16 readers = 7370694.31 kB/sec Children see throughput for 16 re-readers = 7180936.69 kB/sec Children see throughput for 16 random readers = 798112.94 kB/sec Children see throughput for 16 random writers = 11655951.56 kB/sec pool1: Children see throughput for 16 initial writers = 7325560.66 kB/sec Children see throughput for 16 rewriters = 7497495.94 kB/sec Children see throughput for 16 readers = 7260772.91 kB/sec Children see throughput for 16 re-readers = 7297687.09 kB/sec Children see throughput for 16 random readers = 1060871.28 kB/sec Children see throughput for 16 random writers = 11948918.25 kB/sec pool2: Children see throughput for 16 initial writers = 7527573.19 kB/sec Children see throughput for 16 rewriters = 7482336.53 kB/sec Children see throughput for 16 readers = 7525685.72 kB/sec Children see throughput for 16 re-readers = 7523540.06 kB/sec Children see throughput for 16 random readers = 678778.61 kB/sec Children see throughput for 16 random writers = 11721735.88 kB/sec
- 48 w: (7413708.72+7325560.66+7527573.19)/1024 = 21745 MB/s
- 48 r: (7370694.31+7260772.91+7525685.72)/1024 = 21638 MB/s
- 48 rand r: (798112.94+1060871.28+678778.61)/1024 = 2478 MB/s
- 48 rand w: (11655951.56+11948918.25+11721735.88)/1024 = 34499 MB/s
nohup iozone -s 420g -r 1m -i0 -i1 -i2 -t6 -e -w > christo_iozone2.out &
pool0: Children see throughput for 6 initial writers = 5911728.31 kB/sec Children see throughput for 6 rewriters = 7224224.88 kB/sec Children see throughput for 6 readers = 7968046.88 kB/sec Children see throughput for 6 re-readers = 10683283.75 kB/sec Children see throughput for 6 random readers = 405492.87 kB/sec Children see throughput for 6 random writers = 9212703.72 kB/sec pool1: Children see throughput for 6 initial writers = 8699651.00 kB/sec Children see throughput for 6 rewriters = 8137111.25 kB/sec Children see throughput for 6 readers = 8244528.88 kB/sec Children see throughput for 6 re-readers = 8193490.12 kB/sec Children see throughput for 6 random readers = 447095.05 kB/sec Children see throughput for 6 random writers = 6930127.35 kB/sec pool2: Children see throughput for 6 initial writers = 8788334.75 kB/sec Children see throughput for 6 rewriters = 8327840.00 kB/sec Children see throughput for 6 readers = 8562543.88 kB/sec Children see throughput for 6 re-readers = 8219184.75 kB/sec Children see throughput for 6 random readers = 393103.42 kB/sec Children see throughput for 6 random writers = 3776512.93 kB/sec
- 18 w: (5911728.31+8699651.00+8788334.75)/1024 = 22851 MB/s
- 18 r: (7968046.88+8244528.88+8562543.88)/1024 = 24194 MB/s
- 18 rand r: (405492.87+447095.05+393103.42)/1024 = 1216 MB/s
- 18 rand w: (9212703.72+6930127.35+3776512.93)/1024 = 19452 MB/s
7.3 fio
fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_primarycache-all_sync-default_numj40-0.out
read : io=409923MB, bw=269123KB/s, iops=262, runt=1559735msec write: io=409277MB, bw=268699KB/s, iops=262, runt=1559735msec READ: io=409923MB, aggrb=269123KB/s, minb=269123KB/s, maxb=269123KB/s, mint=1559735msec, maxt=1559735msec WRITE: io=409277MB, aggrb=268699KB/s, minb=268699KB/s, maxb=268699KB/s, mint=1559735msec, maxt=1559735msec
- rand read: 269123/1024 = 263 MB/s, 262 iops
- rand write: 268699/1024 = 262 MB/s, 262 iops
7.3.1 Christo HVS fio
root@lxdr04:/sipool/fio# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=40G --readwrite=randrw --rwmixread=75
read : io=30719MB, bw=25992KB/s, iops=6498, runt=1210225msec write: io=10241MB, bw=8665.3KB/s, iops=2166, runt=1210225msec cpu : usr=2.26%, sys=42.55%, ctx=2577866, majf=0, minf=9 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=7864109/w=2621651/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=30719MB, aggrb=25992KB/s, minb=25992KB/s, maxb=25992KB/s, mint=1210225msec, maxt=1210225msec WRITE: io=10241MB, aggrb=8665KB/s, minb=8665KB/s, maxb=8665KB/s, mint=1210225msec, maxt=1210225msec
- rand read: 25992/1024 = 25 MB/s, 6498 iops
- rand write: 8665/1024 = 8.5 MB/s, 2166 iops
7.3.2 3 pools
for i in {0..2}; do cd /sipool${i}; nohup fio --rw=randrw --name=tes52 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=/sipool${i}/fio_sipool1${i}.out & done; wait
pool0: read : io=409923MB, bw=175914KB/s, iops=171, runt=2386166msec write: io=409277MB, bw=175637KB/s, iops=171, runt=2386166msec READ: io=409923MB, aggrb=175914KB/s, minb=175914KB/s, maxb=175914KB/s, mint=2386166msec, maxt=2386166msec WRITE: io=409277MB, aggrb=175637KB/s, minb=175637KB/s, maxb=175637KB/s, mint=2386166msec, maxt=2386166msec pool1: read : io=409923MB, bw=204542KB/s, iops=199, runt=2052197msec write: io=409277MB, bw=204220KB/s, iops=199, runt=2052197msec READ: io=409923MB, aggrb=204542KB/s, minb=204542KB/s, maxb=204542KB/s, mint=2052197msec, maxt=2052197msec WRITE: io=409277MB, aggrb=204219KB/s, minb=204219KB/s, maxb=204219KB/s, mint=2052197msec, maxt=2052197msec pool2: read : io=409923MB, bw=203649KB/s, iops=198, runt=2061202msec write: io=409277MB, bw=203328KB/s, iops=198, runt=2061202msec READ: io=409923MB, aggrb=203648KB/s, minb=203648KB/s, maxb=203648KB/s, mint=2061202msec, maxt=2061202msec WRITE: io=409277MB, aggrb=203327KB/s, minb=203327KB/s, maxb=203327KB/s, mint=2061202msec, maxt=2061202msec
- rand read: (175914+204542+203649)/1024 = 570 MB/s, 171+199+198 = 568 iops
- rand write: (175637+204220+203328)/1024 = 570 MB/s, 568 iops
7.3.3 Christo HVS fio
root@lxdr04:/# for i in {0..2}; do cd /sipool${i}; nohup fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=40G --readwrite=randrw --rwmixread=7 --output=/sipool${i}/fio_sipool2${i}.out & done; wait
pool0: read : io=2868.3MB, bw=250787B/s, iops=61, runt=11992566msec write: io=38092MB, bw=3252.6KB/s, iops=813, runt=11992566msec cpu : usr=0.56%, sys=9.41%, ctx=9225044, majf=0, minf=6 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=734274/w=9751486/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=2868.3MB, aggrb=244KB/s, minb=244KB/s, maxb=244KB/s, mint=11992566msec, maxt=11992566msec WRITE: io=38092MB, aggrb=3252KB/s, minb=3252KB/s, maxb=3252KB/s, mint=11992566msec, maxt=11992566msec pool1: read : io=2868.3MB, bw=391978B/s, iops=95, runt=7672830msec write: io=38092MB, bw=5083.7KB/s, iops=1270, runt=7672830msec cpu : usr=0.68%, sys=14.10%, ctx=10042887, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=734274/w=9751486/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=2868.3MB, aggrb=382KB/s, minb=382KB/s, maxb=382KB/s, mint=7672830msec, maxt=7672830msec WRITE: io=38092MB, aggrb=5083KB/s, minb=5083KB/s, maxb=5083KB/s, mint=7672830msec, maxt=7672830msec pool2: read : io=2868.3MB, bw=664257B/s, iops=162, runt=4527742msec write: io=38092MB, bw=8614.9KB/s, iops=2153, runt=4527742msec cpu : usr=0.90%, sys=21.74%, ctx=8557571, majf=0, minf=9 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued : total=r=734274/w=9751486/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: io=2868.3MB, aggrb=648KB/s, minb=648KB/s, maxb=648KB/s, mint=4527742msec, maxt=4527742msec WRITE: io=38092MB, aggrb=8614KB/s, minb=8614KB/s, maxb=8614KB/s, mint=4527742msec, maxt=4527742msec
- rand read: 244+382+648 = 1274 KB/s, 61+95+162 = 318 iops
- rand write: 3252+5083+8614 = 16949 KB/s, 813+1270+2153 = 4236 iops
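For reference, the one-liner above maps to roughly this fio job file; note that rwmixread=7 means only 7% of the mix are reads, which matches the roughly 2.9 GB read versus 38 GB written per pool (a sketch; the per-pool cd and output redirection stay in the shell loop):

  [global]
  randrepeat=1
  ioengine=libaio
  gtod_reduce=1
  bs=4k
  iodepth=64
  size=40G
  rw=randrw
  rwmixread=7   ; percentage of reads in the mix

  [test]
  filename=test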
root@lxdr04:~# for i in {0..2}; do cd /sipool${i}; nohup fio --randrepeat=1 --gtod_reduce=1 --name=test4 --filename=test4 --bs=4k --numj=32 --size=8G --readwrite=randrw --rwmixread=7 --group_reporting --output=/sipool${i}/fio_sipool4${i}.out & done
7.3.4 More fio, trying to avoid the cache: lxdr04 has 376 GB RAM
- Ran for several days before being killed:
for i in {0..2}; do cd /sipool${i}; nohup fio --randrepeat=1 --gtod_reduce=1 --name=test5 --bs=4k --numj=32 --size=400G --readwrite=randrw --rwmixread=7 --group_reporting --output=/sipool${i}/fio_sipool5${i}.out & done
pool0:
  fio: terminating on signal 15
  test5: (groupid=0, jobs=32): err= 0: pid=129053: Mon May 18 14:41:27 2020
  read : io=7096.5MB, bw=33941B/s, iops=8, runt=219234562msec
  write: io=94258MB, bw=450827B/s, iops=110, runt=219234562msec
  cpu : usr=0.00%, sys=0.06%, ctx=40625893, majf=0, minf=1427
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=1816682/w=24130114/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
  latency : target=0, window=0, percentile=100.00%, depth=1
  Run status group 0 (all jobs):
  READ: io=7096.5MB, aggrb=33KB/s, minb=33KB/s, maxb=33KB/s, mint=219234562msec, maxt=219234562msec
  WRITE: io=94258MB, aggrb=440KB/s, minb=440KB/s, maxb=440KB/s, mint=219234562msec, maxt=219234562msec
pool1:
  fio: terminating on signal 15
  test5: (groupid=0, jobs=32): err= 0: pid=129381: Mon May 18 14:41:27 2020
  read : io=6950.4MB, bw=33257B/s, iops=8, runt=219138361msec
  write: io=92307MB, bw=441688B/s, iops=107, runt=219138361msec
  cpu : usr=0.00%, sys=0.06%, ctx=40130481, majf=0, minf=1436
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=1779278/w=23630608/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
  latency : target=0, window=0, percentile=100.00%, depth=1
  Run status group 0 (all jobs):
  READ: io=6950.4MB, aggrb=32KB/s, minb=32KB/s, maxb=32KB/s, mint=219138361msec, maxt=219138361msec
  WRITE: io=92307MB, aggrb=431KB/s, minb=431KB/s, maxb=431KB/s, mint=219138361msec, maxt=219138361msec
pool2:
  fio: terminating on signal 15
  test5: (groupid=0, jobs=32): err= 0: pid=129480: Mon May 18 14:41:27 2020
  read : io=7419.7MB, bw=35506B/s, iops=8, runt=219115101msec
  write: io=98583MB, bw=471769B/s, iops=115, runt=219115101msec
  cpu : usr=0.00%, sys=0.07%, ctx=42501907, majf=0, minf=1395
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=1899436/w=25237261/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
  latency : target=0, window=0, percentile=100.00%, depth=1
  Run status group 0 (all jobs):
  READ: io=7419.7MB, aggrb=34KB/s, minb=34KB/s, maxb=34KB/s, mint=219115101msec, maxt=219115101msec
  WRITE: io=98583MB, aggrb=460KB/s, minb=460KB/s, maxb=460KB/s, mint=219115101msec, maxt=219115101msec
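"fio: terminating on signal 15" shows the jobs were stopped with SIGTERM, which fio catches so that it still writes the final summaries seen above. A sketch for stopping all background runs at once:

  pkill -TERM fio   # fio logs "terminating on signal 15" and emits its summary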
- Ran for several days before being killed:
root@lxdr04:/sipool1# nohup fio --randrepeat=1 --gtod_reduce=1 --name=test5 --bs=4k --numj=32 --size=400G --readwrite=randrw --rwmixread=75 --group_reporting --output=/sipool1/fio_sipool61.out &
fio: terminating on signal 15
test5: (groupid=0, jobs=32): err= 0: pid=139540: Tue May 19 18:12:09 2020
  read : io=43259MB, bw=459137B/s, iops=112, runt=98794968msec
  write: io=14422MB, bw=153066B/s, iops=37, runt=98794968msec
  cpu : usr=0.00%, sys=0.07%, ctx=15714211, majf=0, minf=1311
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=11074329/w=3691953/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
  latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  READ: io=43259MB, aggrb=448KB/s, minb=448KB/s, maxb=448KB/s, mint=98794968msec, maxt=98794968msec
  WRITE: io=14422MB, aggrb=149KB/s, minb=149KB/s, maxb=149KB/s, mint=98794968msec, maxt=98794968msec
- Ran for several days before being killed:
root@lxdr04:/sipool0# nohup fio --randrepeat=1 --gtod_reduce=1 --name=test7 --bs=128k --numj=32 --size=400G --readwrite=randrw --rwmixread=75 --group_reporting --output=/sipool0/fio_sipool70.out &
fio: terminating on signal 15
test7: (groupid=0, jobs=32): err= 0: pid=100057: Thu May 21 08:22:18 2020
  read : io=958401MB, bw=12967KB/s, iops=101, runt=75686355msec
  write: io=319492MB, bw=4322.6KB/s, iops=33, runt=75686355msec
  cpu : usr=0.00%, sys=0.08%, ctx=10912661, majf=0, minf=1472
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=7667208/w=2555936/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
  latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  READ: io=958401MB, aggrb=12966KB/s, minb=12966KB/s, maxb=12966KB/s, mint=75686355msec, maxt=75686355msec
- Instead of a very large file size, try a large number of processes (320 jobs x 4 GiB = 1280 GiB per pool, far more than the 376 GB of RAM):
root@lxdr04:/sipool2# nohup fio --gtod_reduce=1 --name=test8 --bs=128k --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=/sipool2/fio_sipool28.out &
read : io=983203MB, bw=25208KB/s, iops=196, runt=39939265msec
write: io=327517MB, bw=8397.2KB/s, iops=65, runt=39939265msec
READ: io=983203MB, aggrb=25208KB/s, minb=25208KB/s, maxb=25208KB/s, mint=39939265msec, maxt=39939265msec
WRITE: io=327517MB, aggrb=8397KB/s, minb=8397KB/s, maxb=8397KB/s, mint=39939265msec, maxt=39939265msec
- 2 pools of 16 disks
root@lxdr04:/sipool0# nohup fio --gtod_reduce=1 --name=test9 --bs=128k --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=/sipool0/fio_sipool09.out &
read : io=983203MB, bw=42616KB/s, iops=332, runt=23624886msec
write: io=327517MB, bw=14196KB/s, iops=110, runt=23624886msec
READ: io=983203MB, aggrb=42616KB/s, minb=42616KB/s, maxb=42616KB/s, mint=23624886msec, maxt=23624886msec
WRITE: io=327517MB, aggrb=14195KB/s, minb=14195KB/s, maxb=14195KB/s, mint=23624886msec, maxt=23624886msec
root@lxdr04:/sipool0# nohup fio --gtod_reduce=1 --name=testa --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=/sipool0/fio_sipool0a.out &
read : io=982875MB, bw=318922KB/s, iops=311, runt=3155831msec
write: io=327845MB, bw=106379KB/s, iops=103, runt=3155831msec
READ: io=982875MB, aggrb=318922KB/s, minb=318922KB/s, maxb=318922KB/s, mint=3155831msec, maxt=3155831msec
WRITE: io=327845MB, aggrb=106378KB/s, minb=106378KB/s, maxb=106378KB/s, mint=3155831msec, maxt=3155831msec
root@lxdr04:/sipool1# nohup fio --gtod_reduce=1 --name=testb --bs=128k --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=/sipool0/fio_sipool0b.out &
read : io=983203MB, bw=42324KB/s, iops=330, runt=23788140msec
write: io=327517MB, bw=14099KB/s, iops=110, runt=23788140msec
READ: io=983203MB, aggrb=42323KB/s, minb=42323KB/s, maxb=42323KB/s, mint=23788140msec, maxt=23788140msec
WRITE: io=327517MB, aggrb=14098KB/s, minb=14098KB/s, maxb=14098KB/s, mint=23788140msec, maxt=23788140msec
8 lxfs532 with 18 TB disks
8.1 iozone
[root@lxfs532 bigpool]# iozone -s 520g -r 64 -i0 -i1 -t1
  Children see throughput for 1 initial writers = 2,163,126.00 kB/sec
  Children see throughput for 1 rewriters = 2,552,040.75 kB/sec
  Children see throughput for 1 readers = 5,151,977.00 kB/sec
  Children see throughput for 1 re-readers = 4,289,855.50 kB/sec
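The iozone options used throughout the following sections, for reference:

  # -s <size>  file size per process
  # -r <size>  record (block) size
  # -i <n>     test number: 0 = write/rewrite, 1 = read/re-read, 2 = random read/write
  # -t <n>     throughput mode with <n> processes
  # -e         include flush (fsync/fclose) in the timing
  # -w         keep (do not unlink) the temporary files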
8.2 fio: zfs recordsize 1M, fio blocksize 1M
[root@lxfs532 bigpool]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_primarycache-all_sync-default_numj40-0.out
read: IOPS=389, BW=390MiB/s (409MB/s)(399GiB/1048029msec)
write: IOPS=391, BW=392MiB/s (411MB/s)(401GiB/1048029msec)
READ: bw=390MiB/s (409MB/s), 390MiB/s-390MiB/s (409MB/s-409MB/s), io=399GiB (429GB), run=1048029-1048029msec
WRITE: bw=392MiB/s (411MB/s), 392MiB/s-392MiB/s (411MB/s-411MB/s), io=401GiB (430GB), run=1048029-1048029msec
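Sections 8.2 to 8.5 alternate the ZFS recordsize between 1M and 128k. The property would be switched between runs with the standard zfs commands (a sketch; the dataset name bigpool is taken from the prompts above, and recordsize only affects files written after the change):

  zfs set recordsize=1M bigpool
  zfs get recordsize bigpool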
8.3 fio: zfs recordsize 1M, fio blocksize 128k
fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_numj40_recs1M-5.out
read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(400GiB/11247586msec)
write: IOPS=291, BW=36.4MiB/s (38.2MB/s)(400GiB/11247586msec)
READ: bw=36.4MiB/s (38.2MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=400GiB (429GB), run=11247586-11247586msec
WRITE: bw=36.4MiB/s (38.2MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=400GiB (430GB), run=11247586-11247586msec
8.4 fio: zfs recordsize 128k, fio blocksize 128k
fio --rw=randrw --name=test2 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_arc-metadata_numj40_recs128k-2.out
read: IOPS=884, BW=111MiB/s (116MB/s)(400GiB/3703190msec)
write: IOPS=884, BW=111MiB/s (116MB/s)(400GiB/3703190msec)
READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=400GiB (429GB), run=3703190-3703190msec
WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=400GiB (430GB), run=3703190-3703190msec
8.5 fio: zfs recordsize 128k, fio blocksize 1M
fio --rw=randrw --name=test3 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_arc-metadata_numj40_recs128k-3.out
read: IOPS=257, BW=257MiB/s (270MB/s)(399GiB/1589845msec)
write: IOPS=258, BW=258MiB/s (271MB/s)(401GiB/1589845msec)
READ: bw=257MiB/s (270MB/s), 257MiB/s-257MiB/s (270MB/s-270MB/s), io=399GiB (429GB), run=1589845-1589845msec
WRITE: bw=258MiB/s (271MB/s), 258MiB/s-258MiB/s (271MB/s-271MB/s), io=401GiB (430GB), run=1589845-1589845msec
8.6 fio: zfs recordsize 1M, fio blocksize 1M, size 4G, numJobs 320
fio --gtod_reduce=1 --name=test9 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=/bigpool/fio_randrw_s4G_bs1M_numj320_recs1M.out
read: IOPS=457, BW=458MiB/s (480MB/s)(960GiB/2147553msec)
write: IOPS=152, BW=153MiB/s (160MB/s)(320GiB/2147553msec)
READ: bw=458MiB/s (480MB/s), 458MiB/s-458MiB/s (480MB/s-480MB/s), io=960GiB (1030GB), run=2147553-2147553msec
WRITE: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=320GiB (344GB), run=2147553-2147553msec
8.6.1 No compression
fio --gtod_reduce=1 --name=test4 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=/bigpool/fio_randrw_s4G_bs1M_numj320_recs1M_noComp.out
read: IOPS=418, BW=419MiB/s (439MB/s)(960GiB/2348079msec)
write: IOPS=139, BW=140MiB/s (146MB/s)(320GiB/2348079msec)
READ: bw=419MiB/s (439MB/s), 419MiB/s-419MiB/s (439MB/s-439MB/s), io=960GiB (1030GB), run=2348079-2348079msec
WRITE: bw=140MiB/s (146MB/s), 140MiB/s-140MiB/s (146MB/s-146MB/s), io=320GiB (344GB), run=2348079-2348079msec
fio --gtod_reduce=1 --name=test5 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=/bigpool/fio_randrw_s4G_bs1M_numj320_recs1M_noComp2.out
read: IOPS=402, BW=402MiB/s (422MB/s)(960GiB/2443275msec)
write: IOPS=134, BW=134MiB/s (141MB/s)(320GiB/2443275msec)
READ: bw=402MiB/s (422MB/s), 402MiB/s-402MiB/s (422MB/s-422MB/s), io=960GiB (1030GB), run=2443275-2443275msec
WRITE: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=320GiB (344GB), run=2443275-2443275msec
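The no-compression runs presumably toggled the compression property on the dataset (a sketch; lz4 is assumed as the value to restore and is not recorded in this log):

  zfs set compression=off bigpool
  # after the test, restore the previous value:
  zfs set compression=lz4 bigpool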
8.7 iotest
/root/seq_io_test -b 128K -t 400G -w -f /bigpool/iotest2.test
2021-07-13 19:01:47|2021-07-13 19:10:53|546|786|seq_io_test-write-128K-400G|/bigpool/iotest2.test|
/root/seq_io_test -b 1M -t 400G -w -f /bigpool/iotest3.test
2021-07-13 19:23:12|2021-07-13 19:32:17|544|789|seq_io_test-write-1M-400G|/bigpool/iotest3.test|
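The pipe-separated fields appear to be: start time | end time | runtime in s | throughput in MB/s | test label | file. This is consistent with the timestamps (19:01:47 to 19:10:53 is 546 s) and with 400 GiB in 546 s = 786 MB/s. A sketch for tabulating such lines, with a hypothetical collection file iotest_results.log:

  awk -F'|' '{printf "%-28s %6s s %6s MB/s %s\n", $5, $3, $4, $6}' iotest_results.log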
8.8 iozone Christo
zfs recordsize 128k
iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out
  Children see throughput for 16 initial writers = 7291827.94 kB/sec
  Children see throughput for 16 rewriters = 7340038.00 kB/sec
  Children see throughput for 16 readers = 47921452.50 kB/sec
  Children see throughput for 16 re-readers = 48137217.75 kB/sec
  Children see throughput for 16 random readers = 34497709.62 kB/sec
  Children see throughput for 16 random writers = 5712746.75 kB/sec
iozone -s 30g -r 128k -i0 -i1 -i2 -t16 -e -w > christo_iozoner128k.out
  Children see throughput for 16 initial writers = 7189762.28 kB/sec
  Children see throughput for 16 rewriters = 7173958.69 kB/sec
  Children see throughput for 16 readers = 47765274.00 kB/sec
  Children see throughput for 16 re-readers = 47667285.75 kB/sec
  Children see throughput for 16 random readers = 24304307.12 kB/sec
  Children see throughput for 16 random writers = 5700506.75 kB/sec
zfs recordsize 1M
iozone -s 30g -r 128k -i0 -i1 -i2 -t16 -e -w > christo_iozoner1M.out
  Children see throughput for 16 initial writers = 7117030.44 kB/sec
  Children see throughput for 16 rewriters = 7238060.03 kB/sec
  Children see throughput for 16 readers = 45702379.50 kB/sec
  Children see throughput for 16 re-readers = 45598533.75 kB/sec
  Children see throughput for 16 random readers = 23223005.00 kB/sec
  Children see throughput for 16 random writers = 5356821.84 kB/sec
nohup fio --gtod_reduce=1 --name=test8 --bs=128k --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=/bigpool/fio_randrw_s4G_bs128k_numj320_recs1M.out &
9 lxmds25 RAID-10 with 10 SSDs, Strip Size = 256 KB
9.1 iozone
[root@lxmds25 mnt]# iozone -s 520g -r 64 -i0 -i1 -t1
  Children see throughput for 1 initial writers = 2,997,948.00 kB/sec
  Children see throughput for 1 rewriters = 6,251,930.50 kB/sec
  Children see throughput for 1 readers = 15,936,730.00 kB/sec
  Children see throughput for 1 re-readers = 16,177,189.00 kB/sec
9.2 fio: blocksize 1M, size 20G, numJobs 40
[root@lxmds25 mnt]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-0.out
read: IOPS=3366, BW=3366MiB/s (3530MB/s)(399GiB/121410msec)
write: IOPS=3381, BW=3381MiB/s (3546MB/s)(401GiB/121410msec)
READ: bw=3366MiB/s (3530MB/s), 3366MiB/s-3366MiB/s (3530MB/s-3530MB/s), io=399GiB (429GB), run=121410-121410msec
WRITE: bw=3381MiB/s (3546MB/s), 3381MiB/s-3381MiB/s (3546MB/s-3546MB/s), io=401GiB (430GB), run=121410-121410msec
Disk stats (read/write):
  sdb: ios=3268326/1469417, merge=0/2, ticks=4035640/9145773, in_queue=13198467, util=100.00%
9.3 fio: fio blocksize 128k, size 20G, numJobs 40
[root@lxmds25 mnt]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-1.out
read: IOPS=28.2k, BW=3530MiB/s (3702MB/s)(400GiB/116014msec)
write: IOPS=28.2k, BW=3531MiB/s (3703MB/s)(400GiB/116014msec)
READ: bw=3530MiB/s (3702MB/s), 3530MiB/s-3530MiB/s (3702MB/s-3702MB/s), io=400GiB (429GB), run=116014-116014msec
WRITE: bw=3531MiB/s (3703MB/s), 3531MiB/s-3531MiB/s (3703MB/s-3703MB/s), io=400GiB (430GB), run=116014-116014msec
Disk stats (read/write):
  sdb: ios=3276360/2244343, merge=0/2, ticks=3591974/5704787, in_queue=9316231, util=100.00%
9.4 fio: fio blocksize 1M, size 4G, numJobs 320
[root@lxmds25 mnt]# fio --gtod_reduce=1 --name=test2 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s4G_bs1M_numj320.out
read: IOPS=7735, BW=7736MiB/s (8112MB/s)(960GiB/127034msec)
write: IOPS=2581, BW=2582MiB/s (2707MB/s)(320GiB/127034msec)
READ: bw=7736MiB/s (8112MB/s), 7736MiB/s-7736MiB/s (8112MB/s-8112MB/s), io=960GiB (1030GB), run=127034-127034msec
WRITE: bw=2582MiB/s (2707MB/s), 2582MiB/s-2582MiB/s (2707MB/s-2707MB/s), io=320GiB (344GB), run=127034-127034msec
Disk stats (read/write):
  sdb: ios=7861611/1173488, merge=0/1, ticks=15802639/1041824, in_queue=16978260, util=100.00%
9.5 iozone Christo r=256k
[root@lxmds25 mnt]# iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out
  Children see throughput for 16 initial writers = 3,423,072.70 kB/sec
  Children see throughput for 16 rewriters = 3,780,593.44 kB/sec
  Children see throughput for 16 readers = 144,292,631.00 kB/sec
  Children see throughput for 16 re-readers = 150,382,733.00 kB/sec
  Children see throughput for 16 random readers = 144,393,505.50 kB/sec
  Children see throughput for 16 random writers = 3,819,433.31 kB/sec
9.6 iozone Christo r=1M
[root@lxmds25 mnt]# iozone -s 30g -r 1M -i0 -i1 -i2 -t16 -e -w > iozone_s30g_r1M_t16.out
  Children see throughput for 16 initial writers = 3,705,251.45 kB/sec
  Children see throughput for 16 rewriters = 4,105,476.19 kB/sec
  Children see throughput for 16 readers = 127,483,759.00 kB/sec
  Children see throughput for 16 re-readers = 129,871,680.00 kB/sec
  Children see throughput for 16 random readers = 132,957,427.00 kB/sec
  Children see throughput for 16 random writers = 3,780,068.27 kB/sec
9.7 fio: fio blocksize 1M, size 10G, numJobs 320
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test3 --bs=1M --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs1M_numj320.out &
read: IOPS=7316, BW=7317MiB/s (7672MB/s)(2400GiB/335894msec)
write: IOPS=2438, BW=2439MiB/s (2557MB/s)(800GiB/335894msec)
READ: bw=7317MiB/s (7672MB/s), 7317MiB/s-7317MiB/s (7672MB/s-7672MB/s), io=2400GiB (2577GB), run=335894-335894msec
WRITE: bw=2439MiB/s (2557MB/s), 2439MiB/s-2439MiB/s (2557MB/s-2557MB/s), io=800GiB (859GB), run=335894-335894msec
Disk stats (read/write):
  sdb: ios=19660716/3196856, merge=0/2, ticks=41931382/1184424, in_queue=43423106, util=99.64%
9.8 fio: fio blocksize 128k, size 10G, numJobs 320
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test4 --bs=128k --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs128k_numj320.out &
read: IOPS=58.5k, BW=7310MiB/s (7665MB/s)(2400GiB/336179msec)
write: IOPS=19.5k, BW=2437MiB/s (2556MB/s)(800GiB/336179msec)
READ: bw=7310MiB/s (7665MB/s), 7310MiB/s-7310MiB/s (7665MB/s-7665MB/s), io=2400GiB (2577GB), run=336179-336179msec
WRITE: bw=2437MiB/s (2556MB/s), 2437MiB/s-2437MiB/s (2556MB/s-2556MB/s), io=800GiB (859GB), run=336179-336179msec
Disk stats (read/write):
  sdb: ios=19659437/3545128, merge=0/20, ticks=31279715/1313336, in_queue=32898973, util=99.56%
9.9 fio: fio blocksize 128k, size 1G, numJobs 3200
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test5 --bs=128k --numj=3200 --size=1G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s1G_bs128k_numj3200.out &
read: IOPS=52.1k, BW=6512MiB/s (6828MB/s)(2400GiB/377395msec)
write: IOPS=17.4k, BW=2171MiB/s (2276MB/s)(800GiB/377395msec)
READ: bw=6512MiB/s (6828MB/s), 6512MiB/s-6512MiB/s (6828MB/s-6828MB/s), io=2400GiB (2577GB), run=377395-377395msec
WRITE: bw=2171MiB/s (2276MB/s), 2171MiB/s-2171MiB/s (2276MB/s-2276MB/s), io=800GiB (859GB), run=377395-377395msec
Disk stats (read/write):
  sdb: ios=19661130/3350390, merge=0/110, ticks=30206783/1210199, in_queue=31688930, util=90.53%
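With numjobs=3200, fio forks 3200 worker processes; this is not recorded in the log, but on a default installation the per-user process and file-descriptor limits may need raising first (a sketch only):

  ulimit -u 8192    # max user processes
  ulimit -n 65536   # max open file descriptors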
10 lxmds25 DRBD on Raid-10 with 10 SSDs, Strip Size = 256 KB, connected to lxmds26, protocol C
10.1 iozone
[root@lxmds25 mnt]# iozone -s 520g -r 64 -i0 -i1 -t1
  Children see throughput for 1 initial writers = 2,209,934.50 kB/sec
  Children see throughput for 1 rewriters = 2,243,636.00 kB/sec
  Children see throughput for 1 readers = 14,361,019.00 kB/sec
  Children see throughput for 1 re-readers = 14,626,622.00 kB/sec
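Before benchmarking through DRBD it is worth confirming that the peer is connected and replicating. A sketch (the resource name r0 is hypothetical):

  drbdadm status r0   # DRBD 9 style status
  cat /proc/drbd      # DRBD 8 style: the protocol letter (C) appears in the status line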
10.2 fio: blocksize 1M, size 20G, numJobs 40
[root@lxmds25 mnt]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-0.out
read: IOPS=4708, BW=4709MiB/s (4937MB/s)(399GiB/86792msec)
write: IOPS=4730, BW=4730MiB/s (4960MB/s)(401GiB/86792msec)
READ: bw=4709MiB/s (4937MB/s), 4709MiB/s-4709MiB/s (4937MB/s-4937MB/s), io=399GiB (429GB), run=86792-86792msec
WRITE: bw=4730MiB/s (4960MB/s), 4730MiB/s-4730MiB/s (4960MB/s-4960MB/s), io=401GiB (430GB), run=86792-86792msec
Disk stats (read/write):
  drbd0: ios=3269296/240272, merge=0/0, ticks=1878640/486262473, in_queue=499369200, util=100.00%, aggrios=3269352/272388, aggrmerge=0/1, aggrticks=1581153/37987, aggrin_queue=1623237, aggrutil=87.91%
  sdb: ios=3269352/272388, merge=0/1, ticks=1581153/37987, in_queue=1623237, util=87.91%
10.3 fio: fio blocksize 128k, size 20G, numJobs 40
[root@lxmds25 mnt]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-1.out
read: IOPS=34.8k, BW=4355MiB/s (4567MB/s)(400GiB/94037msec)
write: IOPS=34.8k, BW=4356MiB/s (4568MB/s)(400GiB/94037msec)
READ: bw=4355MiB/s (4567MB/s), 4355MiB/s-4355MiB/s (4567MB/s-4567MB/s), io=400GiB (429GB), run=94037-94037msec
WRITE: bw=4356MiB/s (4568MB/s), 4356MiB/s-4356MiB/s (4568MB/s-4568MB/s), io=400GiB (430GB), run=94037-94037msec
Disk stats (read/write):
  drbd0: ios=3276298/353717, merge=0/0, ticks=1887678/543526495, in_queue=554828148, util=100.00%, aggrios=3276387/392433, aggrmerge=0/1, aggrticks=1643331/41510, aggrin_queue=1689529, aggrutil=95.23%
  sdb: ios=3276387/392433, merge=0/1, ticks=1643331/41510, in_queue=1689529, util=95.23%
10.4 fio: fio blocksize 1M, size 4G, numJobs 320
[root@lxmds25 mnt]# fio --gtod_reduce=1 --name=test2 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s4G_bs1M_numj320.out
read: IOPS=8674, BW=8675MiB/s (9096MB/s)(960GiB/113286msec)
write: IOPS=2895, BW=2895MiB/s (3036MB/s)(320GiB/113286msec)
READ: bw=8675MiB/s (9096MB/s), 8675MiB/s-8675MiB/s (9096MB/s-9096MB/s), io=960GiB (1030GB), run=113286-113286msec
WRITE: bw=2895MiB/s (3036MB/s), 2895MiB/s-2895MiB/s (3036MB/s-3036MB/s), io=320GiB (344GB), run=113286-113286msec
Disk stats (read/write):
  drbd0: ios=7861828/167730, merge=0/0, ticks=29054989/595319277, in_queue=634418813, util=100.00%, aggrios=7861848/175694, aggrmerge=0/3, aggrticks=12290607/80030, aggrin_queue=12452126, aggrutil=93.16%
  sdb: ios=7861848/175694, merge=0/3, ticks=12290607/80030, in_queue=12452126, util=93.16%
10.5 iozone Christo r=256k
[root@lxmds25 mnt]# iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out
  Children see throughput for 16 initial writers = 856,253.89 kB/sec
  Children see throughput for 16 rewriters = 890,500.39 kB/sec
  Children see throughput for 16 readers = 133,765,743.00 kB/sec
  Children see throughput for 16 re-readers = 146,717,865.50 kB/sec
  Children see throughput for 16 random readers = 133,288,741.50 kB/sec
  Children see throughput for 16 random writers = 923,164.14 kB/sec
10.6 iozone Christo r=1M
[root@lxmds25 mnt]# iozone -s 30g -r 1M -i0 -i1 -i2 -t16 -e -w > iozone_s30g_r1M_t16.out
  Children see throughput for 16 initial writers = 855735.22 kB/sec
  Children see throughput for 16 rewriters = 998390.23 kB/sec
  Children see throughput for 16 readers = 159075864.00 kB/sec
  Children see throughput for 16 re-readers = 122061260.00 kB/sec
  Children see throughput for 16 random readers = 160518307.00 kB/sec
  Children see throughput for 16 random writers = 884795.45 kB/sec
10.7 fio: fio blocksize 1M, size 10G, numJobs 320
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test3 --bs=1M --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs1M_numj320.out &
read: IOPS=3661, BW=3662MiB/s (3840MB/s)(2400GiB/671156msec)
write: IOPS=1220, BW=1221MiB/s (1280MB/s)(800GiB/671156msec)
READ: bw=3662MiB/s (3840MB/s), 3662MiB/s-3662MiB/s (3840MB/s-3840MB/s), io=2400GiB (2577GB), run=671156-671156msec
WRITE: bw=1221MiB/s (1280MB/s), 1221MiB/s-1221MiB/s (1280MB/s-1280MB/s), io=800GiB (859GB), run=671156-671156msec
Disk stats (read/write):
  drbd0: ios=19660810/1917583, merge=0/0, ticks=41130376/740632639, in_queue=810098484, util=100.00%, aggrios=19660907/2260020, aggrmerge=0/3075, aggrticks=20137054/470524, aggrin_queue=20707418, aggrutil=96.83%
  sdb: ios=19660907/2260020, merge=0/3075, ticks=20137054/470524, in_queue=20707418, util=96.83%
10.8 fio: fio blocksize 128k, size 10G, numJobs 320
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test4 --bs=128k --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs128k_numj320.out &
read: IOPS=3661, BW=3662MiB/s (3840MB/s)(2400GiB/671156msec)
write: IOPS=1220, BW=1221MiB/s (1280MB/s)(800GiB/671156msec)
READ: bw=3662MiB/s (3840MB/s), 3662MiB/s-3662MiB/s (3840MB/s-3840MB/s), io=2400GiB (2577GB), run=671156-671156msec
WRITE: bw=1221MiB/s (1280MB/s), 1221MiB/s-1221MiB/s (1280MB/s-1280MB/s), io=800GiB (859GB), run=671156-671156msec
Disk stats (read/write):
  drbd0: ios=19660810/1917583, merge=0/0, ticks=41130376/740632639, in_queue=810098484, util=100.00%, aggrios=19660907/2260020, aggrmerge=0/3075, aggrticks=20137054/470524, aggrin_queue=20707418, aggrutil=96.83%
  sdb: ios=19660907/2260020, merge=0/3075, ticks=20137054/470524, in_queue=20707418, util=96.83%
10.9 fio: fio blocksize 128k, size 1G, numJobs 3200
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test5 --bs=128k --numj=3200 --size=1G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s1G_bs128k_numj3200.out &
read: IOPS=25.1k, BW=3144MiB/s (3296MB/s)(2400GiB/781803msec)
write: IOPS=2135, BW=267MiB/s (280MB/s)(800GiB/3069168msec)
READ: bw=801MiB/s (840MB/s), 801MiB/s-801MiB/s (840MB/s-840MB/s), io=2400GiB (2577GB), run=3069168-3069168msec
WRITE: bw=267MiB/s (280MB/s), 267MiB/s-267MiB/s (280MB/s-280MB/s), io=800GiB (859GB), run=3069168-3069168msec
Disk stats (read/write):
  drbd0: ios=629155936/121331602, merge=0/0, ticks=1394049998/32778154, in_queue=18446744070963296870, util=100.00%, aggrios=19661134/118173320, aggrmerge=609495154/3174089, aggrticks=153075863/21298595, aggrin_queue=174489473, aggrutil=99.92%
  sdb: ios=19661134/118173320, merge=609495154/3174089, ticks=153075863/21298595, in_queue=174489473, util=99.92%
(The drbd0 in_queue value has apparently wrapped around the 64-bit counter.)
10.10 fio: Thomas-Krenn write latency test
[root@lxmds25 ~]# fio --rw=randwrite --name=latency_write-1 --bs=4k --direct=1 --filename=/dev/drbd0 --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_latency_write-1.out
  write: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(984MiB/60000msec)
  slat (usec): min=5, max=199, avg= 9.49, stdev= 2.08
  clat (usec): min=43, max=25898, avg=226.19, stdev=131.96
  lat (usec): min=140, max=25906, avg=235.82, stdev=131.94
  clat percentiles (usec):
   | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215],
   | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227],
   | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 239], 95.00th=[ 245],
   | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 627], 99.95th=[ 635],
   | 99.99th=[ 3228]
  bw ( KiB/s): min=15488, max=18240, per=100.00%, avg=16791.51, stdev=512.04, samples=119
  iops : min= 3872, max= 4560, avg=4197.85, stdev=128.02, samples=119
  lat (usec) : 50=0.01%, 100=0.01%, 250=96.64%, 500=3.06%, 750=0.27%
  lat (usec) : 1000=0.01%
  lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu : usr=1.48%, sys=6.28%, ctx=253691, majf=0, minf=2765
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued rwts: total=0,251831,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  WRITE: bw=16.4MiB/s (17.2MB/s), 16.4MiB/s-16.4MiB/s (17.2MB/s-17.2MB/s), io=984MiB (1031MB), run=60000-60000msec
Disk stats (read/write):
  drbd0: ios=41/251186, merge=0/0, ticks=3/53758, in_queue=53323, util=88.94%, aggrios=82/503292, aggrmerge=0/0, aggrticks=8/10894, aggrin_queue=10856, aggrutil=18.04%
  sdb: ios=82/503292, merge=0/0, ticks=8/10894, in_queue=10856, util=18.04%
10.11 fio: Thomas-Krenn read latency test
[root@lxmds25 ~]# fio --rw=randread --name=latency_read-1 --bs=4k --direct=1 --filename=/dev/drbd0 --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_latency_read-1.out
  read: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(2496MiB/60000msec)
  slat (usec): min=2, max=309, avg= 5.34, stdev= 1.62
  clat (nsec): min=820, max=1003.2k, avg=87487.18, stdev=28645.69
  lat (usec): min=23, max=1008, avg=92.93, stdev=28.78
  clat percentiles (usec):
   | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 84],
   | 30.00th=[ 94], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 97],
   | 70.00th=[ 99], 80.00th=[ 111], 90.00th=[ 112], 95.00th=[ 113],
   | 99.00th=[ 117], 99.50th=[ 131], 99.90th=[ 149], 99.95th=[ 151],
   | 99.99th=[ 169]
  bw ( KiB/s): min=37264, max=51664, per=99.92%, avg=42560.82, stdev=4098.26, samples=119
  iops : min= 9316, max=12916, avg=10640.21, stdev=1024.58, samples=119
  lat (nsec) : 1000=0.01%
  lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=17.59%
  lat (usec) : 100=53.13%, 250=29.26%, 500=0.01%
  lat (msec) : 2=0.01%
  cpu : usr=4.17%, sys=7.06%, ctx=640489, majf=0, minf=2902
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued rwts: total=638922,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=2496MiB (2617MB), run=60000-60000msec
Disk stats (read/write):
  drbd0: ios=636993/0, merge=0/0, ticks=54528/0, in_queue=54364, util=90.67%, aggrios=638922/0, aggrmerge=0/0, aggrticks=54310/0, aggrin_queue=54257, aggrutil=90.21%
  sdb: ios=638922/0, merge=0/0, ticks=54310/0, in_queue=54257, util=90.21%
10.12 fio: Thomas-Krenn IOPS write test
[root@lxmds25 ~]# fio --rw=randwrite --name=iops_write-1 --bs=4k --direct=1 --filename=/dev/drbd0 --numjobs=4 --ioengine=libaio --iodepth=32 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_iops_write-1.out
  write: IOPS=43.3k, BW=169MiB/s (178MB/s)(9.92GiB/60004msec)
  slat (nsec): min=810, max=1739.0k, avg=7918.37, stdev=10310.07
  clat (usec): min=110, max=48979, avg=2941.76, stdev=944.07
  lat (usec): min=133, max=48986, avg=2949.76, stdev=944.01
  clat percentiles (usec):
   | 1.00th=[ 1434], 5.00th=[ 1860], 10.00th=[ 2474], 20.00th=[ 2769],
   | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064],
   | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3294], 95.00th=[ 3425],
   | 99.00th=[ 3654], 99.50th=[ 3785], 99.90th=[17695], 99.95th=[21627],
   | 99.99th=[39584]
  bw ( KiB/s): min=38080, max=80632, per=24.99%, avg=43328.62, stdev=6322.37, samples=480
  iops : min= 9520, max=20158, avg=10832.09, stdev=1580.57, samples=480
  lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.03%
  lat (msec) : 2=6.66%, 4=93.02%, 10=0.12%, 20=0.05%, 50=0.08%
  cpu : usr=2.45%, sys=11.51%, ctx=1893715, majf=0, minf=10955
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
  issued rwts: total=0,2600646,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
  WRITE: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=9.92GiB (10.7GB), run=60004-60004msec
Disk stats (read/write):
  drbd0: ios=40/2592779, merge=0/0, ticks=7/7616106, in_queue=7630826, util=99.97%, aggrios=82/2647324, aggrmerge=0/0, aggrticks=14/1182513, aggrin_queue=1173009, aggrutil=97.54%
  sdb: ios=82/2647324, merge=0/0, ticks=14/1182513, in_queue=1173009, util=97.54%
10.13 fio: Thomas-Krenn IOPS read test
[root@lxmds25 ~]# fio --rw=randread --name=iops_read-1 --bs=4k --direct=1 --filename=/dev/drbd0 --numjobs=4 --ioengine=libaio --iodepth=32 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_iops_read-1.out
  read: IOPS=92.1k, BW=360MiB/s (377MB/s)(21.1GiB/60001msec)
  slat (nsec): min=1960, max=1666.8k, avg=41769.48, stdev=22404.71
  clat (usec): min=99, max=39152, avg=1346.24, stdev=422.62
  lat (usec): min=101, max=39265, avg=1388.10, stdev=432.97
  clat percentiles (usec):
   | 1.00th=[ 717], 5.00th=[ 914], 10.00th=[ 938], 20.00th=[ 988],
   | 30.00th=[ 1090], 40.00th=[ 1401], 50.00th=[ 1467], 60.00th=[ 1500],
   | 70.00th=[ 1532], 80.00th=[ 1565], 90.00th=[ 1598], 95.00th=[ 1614],
   | 99.00th=[ 1680], 99.50th=[ 1778], 99.90th=[ 5669], 99.95th=[ 9372],
   | 99.99th=[15401]
  bw ( KiB/s): min= 8144, max=171376, per=24.99%, avg=92076.60, stdev=20943.61, samples=476
  iops : min= 2036, max=42844, avg=23019.12, stdev=5235.90, samples=476
  lat (usec) : 100=0.01%, 250=0.01%, 500=0.25%, 750=1.30%, 1000=20.18%
  lat (msec) : 2=78.09%, 4=0.04%, 10=0.09%, 20=0.03%, 50=0.01%
  cpu : usr=1.62%, sys=97.26%, ctx=126227, majf=0, minf=54915
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
  issued rwts: total=5526691,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
  READ: bw=360MiB/s (377MB/s), 360MiB/s-360MiB/s (377MB/s-377MB/s), io=21.1GiB (22.6GB), run=60001-60001msec
Disk stats (read/write):
  drbd0: ios=5506971/0, merge=0/0, ticks=974987/0, in_queue=975372, util=100.00%, aggrios=5526691/0, aggrmerge=0/0, aggrticks=938164/0, aggrin_queue=940008, aggrutil=100.00%
  sdb: ios=5526691/0, merge=0/0, ticks=938164/0, in_queue=940008, util=100.00%
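The four Thomas-Krenn tests above differ only in rw, numjobs, and iodepth, so they can be run back to back with a small loop (a sketch under the same parameters as above):

  for t in "randwrite 1 1 latency_write" "randread 1 1 latency_read" \
           "randwrite 4 32 iops_write" "randread 4 32 iops_read"; do
    set -- $t
    fio --rw=$1 --name=${4}-1 --bs=4k --direct=1 --filename=/dev/drbd0 \
        --numjobs=$2 --ioengine=libaio --iodepth=$3 --refill_buffers \
        --group_reporting --runtime=60 --time_based > krenn_fio_${4}-1.out
  done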
10.14 fio IOPS write test 32 * 64
[root@lxmds25 ~]# nohup fio --rw=randwrite --name=iops_write-4 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=32 --ioengine=libaio --iodepth=64 --refill_buffers --runtime=10m --time_based --group_reporting --output=fio_iops_write-4.out &
  write: IOPS=47.5k, BW=186MiB/s (195MB/s)(109GiB/600007msec)
  slat (nsec): min=1600, max=36836k, avg=666528.71, stdev=2174356.09
  clat (usec): min=444, max=128980, avg=42405.91, stdev=9752.57
  lat (usec): min=467, max=129183, avg=43072.66, stdev=9776.77
  clat percentiles (usec):
   | 1.00th=[20841], 5.00th=[27132], 10.00th=[30278], 20.00th=[34341],
   | 30.00th=[37487], 40.00th=[39584], 50.00th=[42206], 60.00th=[44303],
   | 70.00th=[46924], 80.00th=[50070], 90.00th=[54789], 95.00th=[58983],
   | 99.00th=[67634], 99.50th=[70779], 99.90th=[78119], 99.95th=[81265],
   | 99.99th=[88605]
  bw ( KiB/s): min= 4504, max=39648, per=3.12%, avg=5939.63, stdev=589.59, samples=38385
  iops : min= 1126, max= 9912, avg=1484.88, stdev=147.39, samples=38385
  lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec) : 2=0.02%, 4=0.04%, 10=0.03%, 20=0.73%, 50=78.81%
  lat (msec) : 100=20.37%, 250=0.01%
  cpu : usr=0.58%, sys=15.07%, ctx=2787277, majf=0, minf=1596908
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued rwts: total=0,28517936,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
  WRITE: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=109GiB (117GB), run=600007-600007msec
Disk stats (read/write):
  sdb: ios=45/28515617, merge=0/0, ticks=83/94734465, in_queue=95627624, util=100.00%
11 lxmds25 DRBD on Raid-10 with 10 SSDs, Strip Size = 256 KB, connected to lxmds26, protocol C, ldiskfs
11.1 iozone
  Children see throughput for 1 initial writers = 1,512,894.88 kB/sec
  Children see throughput for 1 rewriters = 2,335,991.75 kB/sec
  Children see throughput for 1 readers = 15,371,228.00 kB/sec
  Children see throughput for 1 re-readers = 15,834,251.00 kB/sec
11.2 fio: blocksize 1M, size 20G, numJobs 40
[root@lxmds25 mnt]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-0.out
read: IOPS=4772, BW=4772MiB/s (5004MB/s)(399GiB/85637msec)
write: IOPS=4793, BW=4794MiB/s (5027MB/s)(401GiB/85637msec)
READ: bw=4772MiB/s (5004MB/s), 4772MiB/s-4772MiB/s (5004MB/s-5004MB/s), io=399GiB (429GB), run=85637-85637msec
WRITE: bw=4794MiB/s (5027MB/s), 4794MiB/s-4794MiB/s (5027MB/s-5027MB/s), io=401GiB (430GB), run=85637-85637msec
Disk stats (read/write):
  drbd0: ios=3271397/220011, merge=0/0, ticks=1824242/418810080, in_queue=432469238, util=100.00%, aggrios=3271461/244772, aggrmerge=0/320, aggrticks=1523927/96331, aggrin_queue=1620173, aggrutil=85.54%
  sdb: ios=3271461/244772, merge=0/320, ticks=1523927/96331, in_queue=1620173, util=85.54%
11.3 fio: fio blocksize 128k, size 20G, numJobs 40
[root@lxmds25 mnt]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-1.out
read: IOPS=32.7k, BW=4081MiB/s (4280MB/s)(400GiB/100345msec)
write: IOPS=32.7k, BW=4082MiB/s (4281MB/s)(400GiB/100345msec)
READ: bw=4081MiB/s (4280MB/s), 4081MiB/s-4081MiB/s (4280MB/s-4280MB/s), io=400GiB (429GB), run=100345-100345msec
WRITE: bw=4082MiB/s (4281MB/s), 4082MiB/s-4082MiB/s (4281MB/s-4281MB/s), io=400GiB (430GB), run=100345-100345msec
Disk stats (read/write):
  drbd0: ios=3277699/356244, merge=0/0, ticks=1813992/545903795, in_queue=557333718, util=100.00%, aggrios=3277750/396455, aggrmerge=0/343, aggrticks=1485640/78771, aggrin_queue=1561483, aggrutil=91.45%
  sdb: ios=3277750/396455, merge=0/343, ticks=1485640/78771, in_queue=1561483, util=91.45%
11.4 fio: fio blocksize 1M, size 4G, numJobs 320
[root@lxmds25 mnt]# fio --gtod_reduce=1 --name=test2 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s4G_bs1M_numj320.out
read: IOPS=8543, BW=8544MiB/s (8959MB/s)(960GiB/115026msec)
write: IOPS=2851, BW=2851MiB/s (2990MB/s)(320GiB/115026msec)
READ: bw=8544MiB/s (8959MB/s), 8544MiB/s-8544MiB/s (8959MB/s-8959MB/s), io=960GiB (1030GB), run=115026-115026msec
WRITE: bw=2851MiB/s (2990MB/s), 2851MiB/s-2851MiB/s (2990MB/s-2990MB/s), io=320GiB (344GB), run=115026-115026msec
Disk stats (read/write):
  drbd0: ios=7861817/194971, merge=0/0, ticks=25774764/631262537, in_queue=668288214, util=100.00%, aggrios=7861848/205599, aggrmerge=0/1126, aggrticks=11841398/104955, aggrin_queue=12022812, aggrutil=95.06%
  sdb: ios=7861848/205599, merge=0/1126, ticks=11841398/104955, in_queue=12022812, util=95.06%
11.5 iozone Christo r=256k
[root@lxmds25 mnt]# iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out
  Children see throughput for 16 initial writers = 898,300.05 kB/sec
  Children see throughput for 16 rewriters = 1,030,727.07 kB/sec
  Children see throughput for 16 readers = 124,012,785.00 kB/sec
  Children see throughput for 16 re-readers = 132,555,624.00 kB/sec
  Children see throughput for 16 random readers = 124,728,047.50 kB/sec
  Children see throughput for 16 random writers = 893,957.63 kB/sec
11.6 iozone Christo r=1M
[root@lxmds25 mnt]# iozone -s 30g -r 1M -i0 -i1 -i2 -t16 -e -w > iozone_s30g_r1M_t16.out
  Children see throughput for 16 initial writers = 914542.80 kB/sec
  Children see throughput for 16 rewriters = 1102565.16 kB/sec
  Children see throughput for 16 readers = 128206478.50 kB/sec
  Children see throughput for 16 re-readers = 122366258.50 kB/sec
  Children see throughput for 16 random readers = 122365320.00 kB/sec
  Children see throughput for 16 random writers = 884909.11 kB/sec
11.7 fio: fio blocksize 1M, size 10G, numJobs 320
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test3 --bs=1M --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs1M_numj320.out &
read: IOPS=3708, BW=3708MiB/s (3888MB/s)(2400GiB/662778msec)
write: IOPS=1235, BW=1236MiB/s (1296MB/s)(800GiB/662778msec)
READ: bw=3708MiB/s (3888MB/s), 3708MiB/s-3708MiB/s (3888MB/s-3888MB/s), io=2400GiB (2577GB), run=662778-662778msec
WRITE: bw=1236MiB/s (1296MB/s), 1236MiB/s-1236MiB/s (1296MB/s-1296MB/s), io=800GiB (859GB), run=662778-662778msec
Disk stats (read/write):
  drbd0: ios=19660367/1922356, merge=0/0, ticks=43979895/674646406, in_queue=743769193, util=100.00%, aggrios=19660905/2235165, aggrmerge=0/5321, aggrticks=20586767/562937, aggrin_queue=21238733, aggrutil=91.75%
  sdb: ios=19660905/2235165, merge=0/5321, ticks=20586767/562937, in_queue=21238733, util=91.75%
11.8 fio: fio blocksize 128k, size 10G, numJobs 320
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test4 --bs=128k --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs128k_numj320.out &
read: IOPS=28.9k, BW=3611MiB/s (3786MB/s)(2400GiB/680571msec)
write: IOPS=9631, BW=1204MiB/s (1262MB/s)(800GiB/680571msec)
READ: bw=3611MiB/s (3786MB/s), 3611MiB/s-3611MiB/s (3786MB/s-3786MB/s), io=2400GiB (2577GB), run=680571-680571msec
WRITE: bw=1204MiB/s (1262MB/s), 1204MiB/s-1204MiB/s (1262MB/s-1262MB/s), io=800GiB (859GB), run=680571-680571msec
Disk stats (read/write):
  drbd0: ios=19659160/3300743, merge=0/0, ticks=43494597/766828550, in_queue=837710563, util=99.99%, aggrios=19659453/3866551, aggrmerge=0/14442, aggrticks=21099042/850112, aggrin_queue=22029262, aggrutil=92.51%
  sdb: ios=19659453/3866551, merge=0/14442, ticks=21099042/850112, in_queue=22029262, util=92.51%
11.9 fio: fio blocksize 128k, size 1G, numJobs 3200
[root@lxmds25 mnt]# nohup fio --gtod_reduce=1 --name=test5 --bs=128k --numj=3200 --size=1G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s1G_bs128k_numj3200.out &
read: IOPS=25.6k, BW=3196MiB/s (3351MB/s)(2400GiB/769053msec)
write: IOPS=8521, BW=1065MiB/s (1117MB/s)(800GiB/769053msec)
READ: bw=3196MiB/s (3351MB/s), 3196MiB/s-3196MiB/s (3351MB/s-3351MB/s), io=2400GiB (2577GB), run=769053-769053msec
WRITE: bw=1065MiB/s (1117MB/s), 1065MiB/s-1065MiB/s (1117MB/s-1117MB/s), io=800GiB (859GB), run=769053-769053msec
Disk stats (read/write):
  drbd0: ios=19660990/3316786, merge=0/0, ticks=126645202/1072466474, in_queue=1202208926, util=97.47%, aggrios=19661141/3922641, aggrmerge=0/35467, aggrticks=17534359/833479, aggrin_queue=18447364, aggrutil=88.59%
  sdb: ios=19661141/3922641, merge=0/35467, ticks=17534359/833479, in_queue=18447364, util=88.59%
12 lxmds26
- RAID-10 with 10 SSDs, Strip Size = 256 KB
12.1 iozone s=30g r=256k t=100
iozone -s 30g -r 256k -i0 -i1 -i2 -t100 -e -w
  Children see throughput for 100 initial writers = 4,350,142.45 kB/sec
  Children see throughput for 100 rewriters = 4,249,757.45 kB/sec
  Children see throughput for 100 readers = 10,353,915.52 kB/sec
  Children see throughput for 100 re-readers = 10,257,988.84 kB/sec
  Children see throughput for 100 random readers = 12,850,446.80 kB/sec
  Children see throughput for 100 random writers = 4,046,748.79 kB/sec
12.2 iozone s=30g r=1M t=100
iozone -s 30g -r 1M -i0 -i1 -i2 -t100 -e -w
  Children see throughput for 100 initial writers = 4,349,301.95 kB/sec
  Children see throughput for 100 rewriters = 4,273,346.77 kB/sec
  Children see throughput for 100 readers = 10,394,703.91 kB/sec
  Children see throughput for 100 re-readers = 10,293,047.98 kB/sec
  Children see throughput for 100 random readers = 13,652,094.05 kB/sec
  Children see throughput for 100 random writers = 4,151,209.18 kB/sec
12.3 fio: blocksize 1M, size 800G, numJobs 4
[root@lxmds26 mnt]# nohup fio --gtod_reduce=1 --name=test4 --bs=1M --numj=4 --size=800G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s800G_bs1M_numj4.out &
read: IOPS=1289, BW=1289MiB/s (1352MB/s)(2397GiB/1904403msec)
write: IOPS=431, BW=432MiB/s (452MB/s)(803GiB/1904403msec)
READ: bw=1289MiB/s (1352MB/s), 1289MiB/s-1289MiB/s (1352MB/s-1352MB/s), io=2397GiB (2574GB), run=1904403-1904403msec
WRITE: bw=432MiB/s (452MB/s), 432MiB/s-432MiB/s (452MB/s-452MB/s), io=803GiB (862GB), run=1904403-1904403msec
Disk stats (read/write):
  sdb: ios=19639713/3265576, merge=0/0, ticks=5136338/6896096, in_queue=12026097, util=94.08%
12.4 fio: blocksize 128k, size 800G, numJobs 4
[root@lxmds26 mnt]# nohup fio --gtod_reduce=1 --name=test5 --bs=128k --numj=4 --size=800G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s800G_bs128k_numj4.out &
read: IOPS=9620, BW=1203MiB/s (1261MB/s)(2400GiB/2043785msec)
write: IOPS=3206, BW=401MiB/s (420MB/s)(800GiB/2043785msec)
READ: bw=1203MiB/s (1261MB/s), 1203MiB/s-1203MiB/s (1261MB/s-1261MB/s), io=2400GiB (2577GB), run=2043785-2043785msec
WRITE: bw=401MiB/s (420MB/s), 401MiB/s-401MiB/s (420MB/s-420MB/s), io=800GiB (859GB), run=2043785-2043785msec
Disk stats (read/write):
  sdb: ios=19660830/6351876, merge=0/0, ticks=5588904/19364195, in_queue=24951655, util=94.53%
13 lxmds27
- RAID-6 with 12 SSDs, Strip Size = 256 KB
13.1 iozone
[root@lxmds27 mnt]# iozone -s 520g -r 64 -i0 -i1 -t1
  Children see throughput for 1 initial writers = 3,344,782.00 kB/sec
  Children see throughput for 1 rewriters = 6,401,479.00 kB/sec
  Children see throughput for 1 readers = 16,041,008.00 kB/sec
  Children see throughput for 1 re-readers = 16,043,880.00 kB/sec
13.2 fio: blocksize 1M, size 20G, numJobs 40
[root@lxmds27 mnt]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-0.out
read: IOPS=2339, BW=2340MiB/s (2453MB/s)(399GiB/174672msec)
write: IOPS=2350, BW=2350MiB/s (2464MB/s)(401GiB/174672msec)
READ: bw=2340MiB/s (2453MB/s), 2340MiB/s-2340MiB/s (2453MB/s-2453MB/s), io=399GiB (429GB), run=174672-174672msec
WRITE: bw=2350MiB/s (2464MB/s), 2350MiB/s-2350MiB/s (2464MB/s-2464MB/s), io=401GiB (430GB), run=174672-174672msec
Disk stats (read/write):
  sdb: ios=3269113/274786, merge=0/4, ticks=6487118/17020674, in_queue=23518570, util=100.00%
13.3 fio: fio blocksize 128k, size 20G, numJobs 40
[root@lxmds27 mnt]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-1.out
read: IOPS=26.7k, BW=3335MiB/s (3497MB/s)(400GiB/122800msec)
write: IOPS=26.7k, BW=3336MiB/s (3498MB/s)(400GiB/122800msec)
READ: bw=3335MiB/s (3497MB/s), 3335MiB/s-3335MiB/s (3497MB/s-3497MB/s), io=400GiB (429GB), run=122800-122800msec
WRITE: bw=3336MiB/s (3498MB/s), 3336MiB/s-3336MiB/s (3498MB/s-3498MB/s), io=400GiB (430GB), run=122800-122800msec
Disk stats (read/write):
  sdb: ios=3276311/821576, merge=0/2, ticks=4316232/12482473, in_queue=16836624, util=100.00%
13.4 fio: fio blocksize 1M, size 4G, numJobs 320
[root@lxmds27 mnt]# fio --gtod_reduce=1 --name=test2 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s4G_bs1M_numj320.out
read: IOPS=4043, BW=4043MiB/s (4240MB/s)(960GiB/243047msec)
write: IOPS=1349, BW=1349MiB/s (1415MB/s)(320GiB/243047msec)
READ: bw=4043MiB/s (4240MB/s), 4043MiB/s-4043MiB/s (4240MB/s-4240MB/s), io=960GiB (1030GB), run=243047-243047msec
WRITE: bw=1349MiB/s (1415MB/s), 1349MiB/s-1349MiB/s (1415MB/s-1415MB/s), io=320GiB (344GB), run=243047-243047msec
Disk stats (read/write):
  sdb: ios=7860182/308667, merge=0/1, ticks=31497746/21830521, in_queue=53478863, util=100.00%
13.5 iozone Christo r=256k
[root@lxmds27 mnt]# iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out
  Children see throughput for 16 initial writers = 3,953,940.58 kB/sec
  Children see throughput for 16 rewriters = 4,378,503.91 kB/sec
  Children see throughput for 16 readers = 131,526,953.00 kB/sec
  Children see throughput for 16 re-readers = 128,003,283.00 kB/sec
  Children see throughput for 16 random readers = 146,831,536.50 kB/sec
  Children see throughput for 16 random writers = 3,767,646.69 kB/sec
13.6 iozone Christo r=1M
[root@lxmds27 mnt]# iozone -s 30g -r 1M -i0 -i1 -i2 -t16 -e -w > iozone_s30g_r1M_t16.out
  Children see throughput for 16 initial writers = 4,244,890.48 kB/sec
  Children see throughput for 16 rewriters = 4,520,549.94 kB/sec
  Children see throughput for 16 readers = 155,816,645.50 kB/sec
  Children see throughput for 16 re-readers = 162,639,869.50 kB/sec
  Children see throughput for 16 random readers = 135,203,236.00 kB/sec
  Children see throughput for 16 random writers = 3,989,166.41 kB/sec
13.7 fio: fio blocksize 1M, size 10G, numJobs 320
[root@lxmds27 mnt]# nohup fio --gtod_reduce=1 --name=test3 --bs=1M --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs1M_numj320.out &
read: IOPS=3817, BW=3818MiB/s (4003MB/s)(2400GiB/643727msec)
write: IOPS=1272, BW=1273MiB/s (1334MB/s)(800GiB/643727msec)
READ: bw=3818MiB/s (4003MB/s), 3818MiB/s-3818MiB/s (4003MB/s-4003MB/s), io=2400GiB (2577GB), run=643727-643727msec
WRITE: bw=1273MiB/s (1334MB/s), 1273MiB/s-1273MiB/s (1334MB/s-1334MB/s), io=800GiB (859GB), run=643727-643727msec
Disk stats (read/write):
  sdb: ios=19660240/781713, merge=0/4, ticks=81655026/57367163, in_queue=139382322, util=100.00%
13.8 fio: fio blocksize 128k, size 10G, numJobs 320
[root@lxmds27 mnt]# nohup fio --gtod_reduce=1 --name=test4 --bs=128k --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs128k_numj320.out &
read: IOPS=41.1k, BW=5133MiB/s (5382MB/s)(2400GiB/478770msec)
write: IOPS=13.7k, BW=1711MiB/s (1795MB/s)(800GiB/478770msec)
READ: bw=5133MiB/s (5382MB/s), 5133MiB/s-5133MiB/s (5382MB/s-5382MB/s), io=2400GiB (2577GB), run=478770-478770msec
WRITE: bw=1711MiB/s (1795MB/s), 1711MiB/s-1711MiB/s (1795MB/s-1795MB/s), io=800GiB (859GB), run=478770-478770msec
Disk stats (read/write):
  sdb: ios=19659432/4062978, merge=0/8, ticks=56021669/49133623, in_queue=105724962, util=99.20%
13.9 fio: fio blocksize 128k, size 1G, numJobs 3200
[root@lxmds27 mnt]# nohup fio --gtod_reduce=1 --name=test5 --bs=128k --numj=3200 --size=1G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s1G_bs128k_numj3200.out &
read: IOPS=40.1k, BW=5007MiB/s (5250MB/s)(2400GiB/490825msec)
write: IOPS=13.4k, BW=1669MiB/s (1750MB/s)(800GiB/490825msec)
READ: bw=5007MiB/s (5250MB/s), 5007MiB/s-5007MiB/s (5250MB/s-5250MB/s), io=2400GiB (2577GB), run=490825-490825msec
WRITE: bw=1669MiB/s (1750MB/s), 1669MiB/s-1669MiB/s (1750MB/s-1750MB/s), io=800GiB (859GB), run=490825-490825msec
Disk stats (read/write):
  sdb: ios=19660810/3217503, merge=0/202, ticks=49613969/42179338, in_queue=92308635, util=91.19%
13.10 fio: Thomas-Krenn write latency test
[root@lxmds27 ~]# fio --rw=randwrite --name=latency_write-1 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_latency_write-1.out
latency_write-1: (groupid=0, jobs=1): err= 0: pid=10633: Sun Jan 9 16:29:24 2022
  write: IOPS=18.2k, BW=71.0MiB/s (74.5MB/s)(4261MiB/60000msec)
  slat (nsec): min=1740, max=138301, avg=4201.10, stdev=1034.81
  clat (nsec): min=460, max=1246.4k, avg=48425.08, stdev=117339.03
  lat (usec): min=19, max=1251, avg=52.71, stdev=117.41
  clat percentiles (usec):
   | 1.00th=[ 20], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 22],
   | 30.00th=[ 22], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 23],
   | 70.00th=[ 23], 80.00th=[ 23], 90.00th=[ 48], 95.00th=[ 227],
   | 99.00th=[ 979], 99.50th=[ 988], 99.90th=[ 996], 99.95th=[ 1004],
   | 99.99th=[ 1012]
  bw ( KiB/s): min=58792, max=162256, per=100.00%, avg=72792.33, stdev=26981.11, samples=119
  iops : min=14698, max=40564, avg=18198.09, stdev=6745.26, samples=119
  lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=8.44%, 50=81.61%
  lat (usec) : 100=2.66%, 250=3.83%, 500=2.25%, 750=0.01%, 1000=1.14%
  lat (msec) : 2=0.07%
  cpu : usr=9.07%, sys=9.50%, ctx=1092725, majf=0, minf=4522
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued rwts: total=0,1090843,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=4261MiB (4468MB), run=60000-60000msec
Disk stats (read/write):
  sdb: ios=43/1087554, merge=0/0, ticks=7/50660, in_queue=50569, util=84.34%
13.11 fio: Thomas-Krenn read latency test
[root@lxmds27 ~]# fio --rw=randread --name=latency_read-1 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_latency_read-1.out
  read: IOPS=9546, BW=37.3MiB/s (39.1MB/s)(2238MiB/60001msec)
  slat (nsec): min=1720, max=147602, avg=3961.13, stdev=1685.73
  clat (nsec): min=1430, max=300043, avg=99768.53, stdev=8229.69
  lat (usec): min=88, max=302, avg=103.81, stdev= 8.87
  clat percentiles (usec):
   | 1.00th=[ 90], 5.00th=[ 92], 10.00th=[ 92], 20.00th=[ 94],
   | 30.00th=[ 95], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 97],
   | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 112], 95.00th=[ 112],
   | 99.00th=[ 116], 99.50th=[ 131], 99.90th=[ 147], 99.95th=[ 149],
   | 99.99th=[ 165]
  bw ( KiB/s): min=36072, max=40416, per=99.99%, avg=38183.09, stdev=962.57, samples=120
  iops : min= 9018, max=10104, avg=9545.77, stdev=240.64, samples=120
  lat (usec) : 2=0.01%, 50=0.01%, 100=65.65%, 250=34.35%, 500=0.01%
  cpu : usr=2.78%, sys=5.23%, ctx=574066, majf=0, minf=2865
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued rwts: total=572814,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  READ: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=2238MiB (2346MB), run=60001-60001msec
Disk stats (read/write):
  sdb: ios=570847/0, merge=0/0, ticks=55856/0, in_queue=55820, util=93.10%
13.12 fio: Thomas-Krenn IOPS write test
[root@lxmds27 ~]# fio --rw=randwrite --name=iops_write-1 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=4 --ioengine=libaio --iodepth=32 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_iops_write-1.out
iops_write-1: (groupid=0, jobs=4): err= 0: pid=28463: Mon Jan 10 09:12:54 2022
  write: IOPS=23.2k, BW=90.4MiB/s (94.8MB/s)(10.6GiB/120010msec)
  slat (nsec): min=1499, max=3731.1k, avg=15217.14, stdev=12563.38
  clat (usec): min=147, max=34182, avg=5506.13, stdev=3258.60
  lat (usec): min=151, max=34187, avg=5521.48, stdev=3255.83
  clat percentiles (usec):
   | 1.00th=[ 742], 5.00th=[ 988], 10.00th=[ 1401], 20.00th=[ 2573],
   | 30.00th=[ 3294], 40.00th=[ 4752], 50.00th=[ 5800], 60.00th=[ 6194],
   | 70.00th=[ 6652], 80.00th=[ 7504], 90.00th=[ 9241], 95.00th=[11338],
   | 99.00th=[15795], 99.50th=[17171], 99.90th=[22676], 99.95th=[23725],
   | 99.99th=[28181]
  bw ( KiB/s): min=18288, max=115336, per=25.00%, avg=23149.72, stdev=10972.94, samples=960
  iops : min= 4572, max=28834, avg=5787.40, stdev=2743.24, samples=960
  lat (usec) : 250=0.03%, 500=0.03%, 750=0.97%, 1000=4.23%
  lat (msec) : 2=10.39%, 4=20.17%, 10=56.79%, 20=7.19%, 50=0.21%
  cpu : usr=2.37%, sys=9.86%, ctx=806562, majf=0, minf=40521
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
  issued rwts: total=0,2778668,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
  WRITE: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=10.6GiB (11.4GB), run=120010-120010msec
Disk stats (read/write):
  sdb: ios=46/2778643, merge=0/0, ticks=3/14900038, in_queue=14940892, util=99.98%
13.13 fio: Thomas-Krenn IOPS read test
[root@lxmds27 ~]# fio --rw=randread --name=iops_read-1 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=4 --ioengine=libaio --iodepth=32 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_iops_read-1.out
  read: IOPS=95.4k, BW=373MiB/s (391MB/s)(21.8GiB/60001msec)
  slat (nsec): min=1480, max=410124, avg=40701.89, stdev=18506.00
  clat (usec): min=77, max=7547, avg=1298.91, stdev=181.14
  lat (usec): min=80, max=7565, avg=1339.69, stdev=186.36
  clat percentiles (usec):
   | 1.00th=[ 709], 5.00th=[ 873], 10.00th=[ 1123], 20.00th=[ 1270],
   | 30.00th=[ 1303], 40.00th=[ 1319], 50.00th=[ 1336], 60.00th=[ 1369],
   | 70.00th=[ 1385], 80.00th=[ 1401], 90.00th=[ 1434], 95.00th=[ 1450],
   | 99.00th=[ 1483], 99.50th=[ 1500], 99.90th=[ 1860], 99.95th=[ 2073],
   | 99.99th=[ 4015]
  bw ( KiB/s): min=81344, max=172544, per=25.00%, avg=95369.04, stdev=12206.09, samples=477
  iops : min=20336, max=43136, avg=23842.22, stdev=3051.50, samples=477
  lat (usec) : 100=0.01%, 250=0.37%, 500=0.16%, 750=0.92%, 1000=6.46%
  lat (msec) : 2=92.02%, 4=0.05%, 10=0.01%
  cpu : usr=1.56%, sys=98.37%, ctx=12430, majf=0, minf=54861
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
  issued rwts: total=5722449,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
  READ: bw=373MiB/s (391MB/s), 373MiB/s-373MiB/s (391MB/s-391MB/s), io=21.8GiB (23.4GB), run=60001-60001msec
Disk stats (read/write):
  sdb: ios=5699101/0, merge=0/0, ticks=697397/0, in_queue=698420, util=100.00%
14 lxmds28
- RAID-6 with 12 SSDs, Strip Size = 1 MB
14.1 iozone
[root@lxmds28 mnt]# iozone -s 520g -r 64 -i0 -i1 -t1
Children see throughput for 1 initial writers = 2,654,844.75 kB/sec
Children see throughput for 1 rewriters       = 6,239,911.50 kB/sec
Children see throughput for 1 readers         = 14,225,136.00 kB/sec
Children see throughput for 1 re-readers      = 13,524,429.00 kB/sec
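For reference, the flags of this single-stream run break down as follows (per the iozone man page; the intent behind the 520g size is an assumption):

  iozone -s 520g \   # per-worker file size: 520 GB, presumably chosen to exceed RAM
         -r 64 \     # record (transfer) size: 64 KB
         -i0 -i1 \   # test 0 = write/rewrite, test 1 = read/re-read
         -t1         # throughput mode with a single worker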
14.2 fio: blocksize 1M, size 20G, numJobs 40
[root@lxmds28 mnt]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-0.out
  read: IOPS=1222, BW=1222MiB/s (1282MB/s)(399GiB/334293msec)
  write: IOPS=1228, BW=1228MiB/s (1288MB/s)(401GiB/334293msec)
   READ: bw=1222MiB/s (1282MB/s), 1222MiB/s-1222MiB/s (1282MB/s-1282MB/s), io=399GiB (429GB), run=334293-334293msec
  WRITE: bw=1228MiB/s (1288MB/s), 1228MiB/s-1228MiB/s (1288MB/s-1288MB/s), io=401GiB (430GB), run=334293-334293msec
Disk stats (read/write):
  sdb: ios=3269308/360845, merge=0/4, ticks=12863341/37891139, in_queue=50766053, util=100.00%
14.3 fio: fio blocksize 128k, size 20G, numJobs 40
[root@lxmds28 mnt]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-1.out
  read: IOPS=29.5k, BW=3688MiB/s (3867MB/s)(400GiB/111039msec)
  write: IOPS=29.5k, BW=3689MiB/s (3868MB/s)(400GiB/111039msec)
   READ: bw=3688MiB/s (3867MB/s), 3688MiB/s-3688MiB/s (3867MB/s-3867MB/s), io=400GiB (429GB), run=111039-111039msec
  WRITE: bw=3689MiB/s (3868MB/s), 3689MiB/s-3689MiB/s (3868MB/s-3868MB/s), io=400GiB (430GB), run=111039-111039msec
Disk stats (read/write):
  sdb: ios=3276227/614535, merge=0/2, ticks=3883605/9422503, in_queue=13335405, util=100.00%
14.4 fio: fio blocksize 1M, size 4G, numJobs 320
[root@lxmds28 mnt]# fio --gtod_reduce=1 --name=test2 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s4G_bs1M_numj320.out
  read: IOPS=2216, BW=2217MiB/s (2324MB/s)(960GiB/443327msec)
  write: IOPS=739, BW=740MiB/s (776MB/s)(320GiB/443327msec)
   READ: bw=2217MiB/s (2324MB/s), 2217MiB/s-2217MiB/s (2324MB/s-2324MB/s), io=960GiB (1030GB), run=443327-443327msec
  WRITE: bw=740MiB/s (776MB/s), 740MiB/s-740MiB/s (776MB/s-776MB/s), io=320GiB (344GB), run=443327-443327msec
Disk stats (read/write):
  sdb: ios=7861621/305076, merge=0/3, ticks=57631930/48488062, in_queue=106266209, util=100.00%
14.5 iozone Christo r=256k
[root@lxmds28 mnt]# iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out
Children see throughput for 16 initial writers = 4029544.72 kB/sec
Children see throughput for 16 rewriters       = 4424115.84 kB/sec
Children see throughput for 16 readers         = 136470258.50 kB/sec
Children see throughput for 16 re-readers      = 128284862.00 kB/sec
Children see throughput for 16 random readers  = 133986308.50 kB/sec
Children see throughput for 16 random writers  = 3844770.73 kB/sec
14.6 iozone Christo r=1M
[root@lxmds28 mnt]# iozone -s 30g -r 1M -i0 -i1 -i2 -t16 -e -w > iozone_s30g_r1M_t16.out
Children see throughput for 16 initial writers = 4313629.92 kB/sec
Children see throughput for 16 rewriters       = 4726156.25 kB/sec
Children see throughput for 16 readers         = 150859357.50 kB/sec
Children see throughput for 16 re-readers      = 163217793.50 kB/sec
Children see throughput for 16 random readers  = 153726537.00 kB/sec
Children see throughput for 16 random writers  = 3914092.16 kB/sec
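The "Christo" variant adds -i2 (random read/write), -e (include flush/fsync time in the timings), and -w (keep the temporary files between runs), with 16 workers at 30 GB each. The ~150 GB/s read figures far exceed what 12 SSDs can deliver, so the reads are presumably served largely from the page cache (an interpretation, not stated in the logs). To measure the disks rather than the cache on re-reads, one could drop the page cache between passes, which was not done in these runs:

  sync && echo 3 > /proc/sys/vm/drop_caches   # root required; flushes page cache, dentries, inodes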
14.7 fio: fio blocksize 1M, size 10G, numJobs 320
[root@lxmds28 mnt]# nohup fio --gtod_reduce=1 --name=test3 --bs=1M --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs1M_numj320.out &
  read: IOPS=4377, BW=4378MiB/s (4590MB/s)(2400GiB/561377msec)
  write: IOPS=1459, BW=1459MiB/s (1530MB/s)(800GiB/561377msec)
   READ: bw=4378MiB/s (4590MB/s), 4378MiB/s-4378MiB/s (4590MB/s-4590MB/s), io=2400GiB (2577GB), run=561377-561377msec
  WRITE: bw=1459MiB/s (1530MB/s), 1459MiB/s-1459MiB/s (1530MB/s-1530MB/s), io=800GiB (859GB), run=561377-561377msec
Disk stats (read/write):
  sdb: ios=19659778/811547, merge=0/47, ticks=54931349/64324173, in_queue=119810514, util=100.00%
14.8 fio: fio blocksize 128k, size 10G, numJobs 320
[root@lxmds28 mnt]# nohup fio --gtod_reduce=1 --name=test4 --bs=128k --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs128k_numj320.out &
  read: IOPS=28.6k, BW=3572MiB/s (3745MB/s)(2400GiB/688048msec)
  write: IOPS=9526, BW=1191MiB/s (1249MB/s)(800GiB/688048msec)
   READ: bw=3572MiB/s (3745MB/s), 3572MiB/s-3572MiB/s (3745MB/s-3745MB/s), io=2400GiB (2577GB), run=688048-688048msec
  WRITE: bw=1191MiB/s (1249MB/s), 1191MiB/s-1191MiB/s (1249MB/s-1249MB/s), io=800GiB (859GB), run=688048-688048msec
Disk stats (read/write):
  sdb: ios=19659414/3277418, merge=0/153, ticks=71813076/77644569, in_queue=149944462, util=100.00%
14.9 fio: fio blocksize 128k, size 1G, numJobs 3200
[root@lxmds28 mnt]# nohup fio --gtod_reduce=1 --name=test5 --bs=128k --numj=3200 --size=1G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s1G_bs128k_numj3200.out &
  read: IOPS=27.4k, BW=3431MiB/s (3598MB/s)(2400GiB/716265msec)
  write: IOPS=9149, BW=1144MiB/s (1199MB/s)(800GiB/716265msec)
   READ: bw=3431MiB/s (3598MB/s), 3431MiB/s-3431MiB/s (3598MB/s-3598MB/s), io=2400GiB (2577GB), run=716265-716265msec
  WRITE: bw=1144MiB/s (1199MB/s), 1144MiB/s-1144MiB/s (1199MB/s-1199MB/s), io=800GiB (859GB), run=716265-716265msec
Disk stats (read/write):
  sdb: ios=19661035/3241742, merge=0/930, ticks=75988546/77718334, in_queue=154214436, util=95.55%
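Sections 14.4 and 14.7 through 14.9 repeat one randrw recipe over a small parameter matrix; a driver of the following shape reproduces that family (a sketch under the assumption that fio defaults match the ad-hoc commands above; run_fio is hypothetical and does not cover the --direct=0/--fallocate=none runs of 14.2 and 14.3):

  # args: blocksize size numjobs jobname
  run_fio() {
      nohup fio --gtod_reduce=1 --name=$4 --bs=$1 --numj=$3 --size=$2 \
            --readwrite=randrw --rwmixread=75 --group_reporting \
            --output=fio_randrw_s${2}_bs${1}_numj${3}.out &
  }
  run_fio 1M   4G  320  test2
  run_fio 1M   10G 320  test3
  run_fio 128k 10G 320  test4
  run_fio 128k 1G  3200 test5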
15 lxmds29
- RAID-10 with 10 SSDs, Strip Size = 1 MB
15.1 iozone
[root@lxmds29 mnt]# iozone -s 520g -r 64 -i0 -i1 -t1
Children see throughput for 1 initial writers = 3,033,598.00 kB/sec
Children see throughput for 1 rewriters       = 6,449,934.50 kB/sec
Children see throughput for 1 readers         = 15,191,724.00 kB/sec
Children see throughput for 1 re-readers      = 14,227,332.00 kB/sec
15.2 fio: blocksize 1M, size 20G, numJobs 40
[root@lxmds29 mnt]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-0.out
  read: IOPS=2875, BW=2875MiB/s (3015MB/s)(399GiB/142138msec)
  write: IOPS=2888, BW=2888MiB/s (3029MB/s)(401GiB/142138msec)
   READ: bw=2875MiB/s (3015MB/s), 2875MiB/s-2875MiB/s (3015MB/s-3015MB/s), io=399GiB (429GB), run=142138-142138msec
  WRITE: bw=2888MiB/s (3029MB/s), 2888MiB/s-2888MiB/s (3029MB/s-3029MB/s), io=401GiB (430GB), run=142138-142138msec
Disk stats (read/write):
  sdb: ios=3269236/249070, merge=0/1, ticks=4229192/8973863, in_queue=13208813, util=89.06%
15.3 fio: fio blocksize 128k, size 20G, numJobs 40
[root@lxmds29 mnt]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-1.out
  read: IOPS=33.6k, BW=4203MiB/s (4407MB/s)(400GiB/97449msec)
  write: IOPS=33.6k, BW=4204MiB/s (4408MB/s)(400GiB/97449msec)
   READ: bw=4203MiB/s (4407MB/s), 4203MiB/s-4203MiB/s (4407MB/s-4407MB/s), io=400GiB (429GB), run=97449-97449msec
  WRITE: bw=4204MiB/s (4408MB/s), 4204MiB/s-4204MiB/s (4408MB/s-4408MB/s), io=400GiB (430GB), run=97449-97449msec
Disk stats (read/write):
  sdb: ios=3276373/1333430, merge=0/2, ticks=3015679/5549748, in_queue=8580766, util=100.00%
15.4 fio: fio blocksize 1M, size 4G, numJobs 320
[root@lxmds29 mnt]# fio --gtod_reduce=1 --name=test2 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s4G_bs1M_numj320.out
  read: IOPS=8026, BW=8027MiB/s (8417MB/s)(960GiB/122429msec)
  write: IOPS=2679, BW=2679MiB/s (2809MB/s)(320GiB/122429msec)
   READ: bw=8027MiB/s (8417MB/s), 8027MiB/s-8027MiB/s (8417MB/s-8417MB/s), io=960GiB (1030GB), run=122429-122429msec
  WRITE: bw=2679MiB/s (2809MB/s), 2679MiB/s-2679MiB/s (2809MB/s-2809MB/s), io=320GiB (344GB), run=122429-122429msec
Disk stats (read/write):
  sdb: ios=7861563/271143, merge=0/1, ticks=15928265/4469758, in_queue=20510935, util=100.00%
15.5 iozone Christo r=256k
[root@lxmds29 mnt]# iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out
Children see throughput for 16 initial writers = 3409461.02 kB/sec
Children see throughput for 16 rewriters       = 3605250.38 kB/sec
Children see throughput for 16 readers         = 144537406.50 kB/sec
Children see throughput for 16 re-readers      = 146408546.00 kB/sec
Children see throughput for 16 random readers  = 135727478.50 kB/sec
Children see throughput for 16 random writers  = 3404319.97 kB/sec
15.6 iozone Christo r=1M
[root@lxmds29 mnt]# iozone -s 30g -r 1M -i0 -i1 -i2 -t16 -e -w > iozone_s30g_r1M_t16.out
Children see throughput for 16 initial writers = 3435893.30 kB/sec
Children see throughput for 16 rewriters       = 3833443.08 kB/sec
Children see throughput for 16 readers         = 144459361.50 kB/sec
Children see throughput for 16 re-readers      = 147963702.50 kB/sec
Children see throughput for 16 random readers  = 130343504.00 kB/sec
Children see throughput for 16 random writers  = 3561341.97 kB/sec
15.7 fio: fio blocksize 1M, size 10G, numJobs 320
[root@lxmds29 mnt]# nohup fio --gtod_reduce=1 --name=test3 --bs=1M --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs1M_numj320.out &
  read: IOPS=7890, BW=7891MiB/s (8274MB/s)(2400GiB/311450msec)
  write: IOPS=2630, BW=2630MiB/s (2758MB/s)(800GiB/311450msec)
   READ: bw=7891MiB/s (8274MB/s), 7891MiB/s-7891MiB/s (8274MB/s-8274MB/s), io=2400GiB (2577GB), run=311450-311450msec
  WRITE: bw=2630MiB/s (2758MB/s), 2630MiB/s-2630MiB/s (2758MB/s-2758MB/s), io=800GiB (859GB), run=311450-311450msec
Disk stats (read/write):
  sdb: ios=19660228/771293, merge=0/2, ticks=40174333/18681842, in_queue=59178337, util=100.00%
15.8 fio: fio blocksize 128k, size 10G, numJobs 320
[root@lxmds29 mnt]# nohup fio --gtod_reduce=1 --name=test4 --bs=128k --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs128k_numj320.out &
  read: IOPS=59.5k, BW=7441MiB/s (7803MB/s)(2400GiB/330234msec)
  write: IOPS=19.8k, BW=2481MiB/s (2602MB/s)(800GiB/330234msec)
   READ: bw=7441MiB/s (7803MB/s), 7441MiB/s-7441MiB/s (7803MB/s-7803MB/s), io=2400GiB (2577GB), run=330234-330234msec
  WRITE: bw=2481MiB/s (2602MB/s), 2481MiB/s-2481MiB/s (2602MB/s-2602MB/s), io=800GiB (859GB), run=330234-330234msec
Disk stats (read/write):
  sdb: ios=19659449/4489157, merge=0/6, ticks=34759572/2792264, in_queue=37824581, util=98.21%
15.9 fio: fio blocksize 128k, size 1G, numJobs 3200
[root@lxmds29 mnt]# nohup fio --gtod_reduce=1 --name=test5 --bs=128k --numj=3200 --size=1G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s1G_bs128k_numj3200.out &
  read: IOPS=54.7k, BW=6837MiB/s (7169MB/s)(2400GiB/359463msec)
  write: IOPS=18.2k, BW=2279MiB/s (2390MB/s)(800GiB/359463msec)
   READ: bw=6837MiB/s (7169MB/s), 6837MiB/s-6837MiB/s (7169MB/s-7169MB/s), io=2400GiB (2577GB), run=359463-359463msec
  WRITE: bw=2279MiB/s (2390MB/s), 2279MiB/s-2279MiB/s (2390MB/s-2390MB/s), io=800GiB (859GB), run=359463-359463msec
Disk stats (read/write):
  sdb: ios=19660990/3249206, merge=0/108, ticks=33290413/1365063, in_queue=34913024, util=88.81%
15.10 fio: Thomas-Krenn IOPS write test
[root@lxmds29 ~]# fio --rw=randwrite --name=iops_write-1 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=4 --ioengine=libaio --iodepth=32 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_iops_write-1.out
iops_write-1: (groupid=0, jobs=4): err= 0: pid=62730: Sun Jan 9 16:30:17 2022
  write: IOPS=46.7k, BW=182MiB/s (191MB/s)(10.7GiB/60003msec)
    slat (nsec): min=1520, max=4019.6k, avg=11351.72, stdev=9634.15
    clat (usec): min=190, max=10678, avg=2725.73, stdev=493.65
     lat (usec): min=193, max=10713, avg=2737.20, stdev=494.78
    clat percentiles (usec):
     |  1.00th=[ 1090],  5.00th=[ 1156], 10.00th=[ 2343], 20.00th=[ 2769],
     | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900],
     | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2933], 95.00th=[ 2966],
     | 99.00th=[ 3097], 99.50th=[ 3392], 99.90th=[ 3425], 99.95th=[ 3458],
     | 99.99th=[ 5800]
   bw (  KiB/s): min=43368, max=115288, per=25.00%, avg=46684.06, stdev=11786.27, samples=479
   iops        : min=10842, max=28822, avg=11670.99, stdev=2946.57, samples=479
  lat (usec)   : 250=0.01%, 500=0.07%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=8.14%, 4=91.74%, 10=0.02%, 20=0.01%
  cpu          : usr=4.42%, sys=15.22%, ctx=917076, majf=0, minf=25544
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2801493,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=10.7GiB (11.5GB), run=60003-60003msec

Disk stats (read/write):
  sdb: ios=88/2793153, merge=0/0, ticks=4/7530281, in_queue=7558293, util=99.98%
15.11 fio: Thomas-Krenn IOPS read test
iops_read-1: (groupid=0, jobs=4): err= 0: pid=62820: Sun Jan 9 16:31:45 2022
  read: IOPS=96.1k, BW=375MiB/s (394MB/s)(21.0GiB/60001msec)
    slat (nsec): min=1660, max=374094, avg=40596.58, stdev=19486.53
    clat (usec): min=71, max=10108, avg=1289.82, stdev=156.86
     lat (usec): min=74, max=10168, avg=1330.49, stdev=161.34
    clat percentiles (usec):
     |  1.00th=[  832],  5.00th=[  889], 10.00th=[ 1090], 20.00th=[ 1254],
     | 30.00th=[ 1303], 40.00th=[ 1319], 50.00th=[ 1336], 60.00th=[ 1352],
     | 70.00th=[ 1369], 80.00th=[ 1385], 90.00th=[ 1401], 95.00th=[ 1418],
     | 99.00th=[ 1450], 99.50th=[ 1467], 99.90th=[ 1549], 99.95th=[ 1680],
     | 99.99th=[ 4113]
   bw (  KiB/s): min=87200, max=142984, per=25.00%, avg=96113.79, stdev=10797.34, samples=476
   iops        : min=21800, max=35746, avg=24028.42, stdev=2699.34, samples=476
  lat (usec)   : 100=0.01%, 250=0.02%, 500=0.01%, 750=0.21%, 1000=8.98%
  lat (msec)   : 2=90.76%, 4=0.03%, 10=0.01%, 20=0.01%
  cpu          : usr=1.41%, sys=98.53%, ctx=12364, majf=0, minf=30756
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=5766502,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=375MiB/s (394MB/s), 375MiB/s-375MiB/s (394MB/s-394MB/s), io=21.0GiB (23.6GB), run=60001-60001msec

Disk stats (read/write):
  sdb: ios=5750480/0, merge=0/0, ticks=729488/0, in_queue=733106, util=100.00%
15.12 fio: Thomas-Krenn write latency test
[root@lxmds29 ~]# fio --rw=randwrite --name=latency_write-1 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_latency_write-1.out
  write: IOPS=24.4k, BW=95.4MiB/s (100MB/s)(11.2GiB/120000msec)
    slat (nsec): min=1609, max=147212, avg=4320.96, stdev=858.73
    clat (nsec): min=690, max=3448.6k, avg=34225.76, stdev=72353.64
     lat (usec): min=18, max=3454, avg=38.64, stdev=72.38
    clat percentiles (usec):
     |  1.00th=[   20],  5.00th=[   22], 10.00th=[   22], 20.00th=[   22],
     | 30.00th=[   22], 40.00th=[   23], 50.00th=[   23], 60.00th=[   23],
     | 70.00th=[   23], 80.00th=[   23], 90.00th=[   24], 95.00th=[   26],
     | 99.00th=[  586], 99.50th=[  594], 99.90th=[  603], 99.95th=[  603],
     | 99.99th=[  603]
   bw (  KiB/s): min=94384, max=167024, per=100.00%, avg=97669.77, stdev=11486.01, samples=239
   iops        : min=23596, max=41756, avg=24417.42, stdev=2871.51, samples=239
  lat (nsec)   : 750=0.01%, 1000=0.01%
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=1.39%, 50=94.23%
  lat (usec)   : 100=1.53%, 250=0.02%, 500=1.41%, 750=1.42%, 1000=0.01%
  lat (msec)   : 4=0.01%
  cpu          : usr=13.00%, sys=12.68%, ctx=2934667, majf=0, minf=3953
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2930000,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=95.4MiB/s (100MB/s), 95.4MiB/s-95.4MiB/s (100MB/s-100MB/s), io=11.2GiB (12.0GB), run=120000-120000msec

Disk stats (read/write):
  sdb: ios=45/2926205, merge=0/0, ticks=7/93835, in_queue=93578, util=78.03%
15.13 fio: Thomas-Krenn read latency test
[root@lxmds29 ~]# fio --rw=randread --name=latency_read-1 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_latency_read-1.out
  read: IOPS=9574, BW=37.4MiB/s (39.2MB/s)(2244MiB/60001msec)
    slat (nsec): min=1719, max=344914, avg=4531.12, stdev=1620.33
    clat (nsec): min=1030, max=2459.0k, avg=98847.34, stdev=13868.12
     lat (usec): min=23, max=2464, avg=103.48, stdev=14.17
    clat percentiles (usec):
     |  1.00th=[   23],  5.00th=[   92], 10.00th=[   93], 20.00th=[   94],
     | 30.00th=[   94], 40.00th=[   95], 50.00th=[   96], 60.00th=[   98],
     | 70.00th=[  110], 80.00th=[  111], 90.00th=[  111], 95.00th=[  114],
     | 99.00th=[  117], 99.50th=[  131], 99.90th=[  147], 99.95th=[  153],
     | 99.99th=[  172]
   bw (  KiB/s): min=37584, max=50992, per=100.00%, avg=38298.04, stdev=1583.30, samples=119
   iops        : min= 9396, max=12748, avg=9574.50, stdev=395.83, samples=119
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.04%, 50=1.58%
  lat (usec)   : 100=63.09%, 250=35.28%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=2.98%, sys=6.21%, ctx=575827, majf=0, minf=3510
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=574464,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=2244MiB (2353MB), run=60001-60001msec

Disk stats (read/write):
  sdb: ios=572935/0, merge=0/0, ticks=55310/0, in_queue=55262, util=92.16%
15.14 fio IOPS write test 32 * 64
[root@lxmds29 ~]# nohup fio --rw=randwrite --name=iops_write-4 --bs=4k --direct=1 --filename=/dev/sdb --numjobs=32 --ioengine=libaio --iodepth=64 --refill_buffers --runtime=10m --time_based --group_reporting --output=fio_iops_write-4.out &
  write: IOPS=42.0k, BW=164MiB/s (172MB/s)(96.2GiB/600012msec)
    slat (nsec): min=1670, max=61463k, avg=751583.49, stdev=2060675.68
    clat (usec): min=476, max=211261, avg=47949.59, stdev=12798.96
     lat (usec): min=506, max=211287, avg=48701.63, stdev=12897.32
    clat percentiles (msec):
     |  1.00th=[   21],  5.00th=[   29], 10.00th=[   33], 20.00th=[   38],
     | 30.00th=[   42], 40.00th=[   45], 50.00th=[   48], 60.00th=[   51],
     | 70.00th=[   54], 80.00th=[   58], 90.00th=[   64], 95.00th=[   70],
     | 99.00th=[   84], 99.50th=[   89], 99.90th=[  106], 99.95th=[  115],
     | 99.99th=[  138]
   bw (  KiB/s): min= 2072, max=19385, per=3.12%, avg=5252.92, stdev=636.74, samples=38374
   iops        : min=  518, max= 4846, avg=1313.21, stdev=159.19, samples=38374
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.02%, 20=0.79%, 50=58.36%
  lat (msec)   : 100=40.64%, 250=0.18%
  cpu          : usr=0.52%, sys=25.67%, ctx=3619220, majf=0, minf=4327344
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,25223266,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=96.2GiB (103GB), run=600012-600012msec

Disk stats (read/write):
  sdb: ios=43/25217078, merge=0/0, ticks=10/91342286, in_queue=92063260, util=100.00%
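With --numjobs=32 and --iodepth=64 the device sees up to 32 x 64 = 2048 outstanding 4k writes; by Little's law that depth at ~42k IOPS implies a mean latency of about 2048 / 42000 s ≈ 49 ms, which matches the ~48 ms average clat reported above. A quick check of this arithmetic (not part of the benchmark itself):

  echo $((32 * 64))                        # 2048 in-flight requests
  python3 -c 'print(2048 / 42000 * 1e3)'   # ~48.8 ms expected mean latency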
16 lxmds30
- DRBD on RAID-10 with 10 SSDs, Strip Size = 256 KB
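Writes in this section traverse synchronous protocol-C replication to the peer, so the numbers are only meaningful while the resource is connected and up to date. A minimal pre-flight check, assuming a standard DRBD install (resource names omitted):

  drbdadm status    # DRBD 9: expect Connected / UpToDate on both nodes
  cat /proc/drbd    # DRBD 8.x equivalent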
16.1 iozone
[root@lxmds30 mnt]# iozone -s 520g -r 64 -i0 -i1 -t1
Children see throughput for 1 initial writers = 1,697,386.88 kB/sec
Children see throughput for 1 rewriters       = 1,558,910.12 kB/sec
Children see throughput for 1 readers         = 12,504,567.00 kB/sec
Children see throughput for 1 re-readers      = 13,474,678.00 kB/sec
16.2 fio: blocksize 1M, size 20G, numJobs 40
[root@lxmds30 mnt]# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-0.out
  read: IOPS=935, BW=935MiB/s (981MB/s)(399GiB/437029msec)
  write: IOPS=939, BW=939MiB/s (985MB/s)(401GiB/437029msec)
   READ: bw=935MiB/s (981MB/s), 935MiB/s-935MiB/s (981MB/s-981MB/s), io=399GiB (429GB), run=437029-437029msec
  WRITE: bw=939MiB/s (985MB/s), 939MiB/s-939MiB/s (985MB/s-985MB/s), io=401GiB (430GB), run=437029-437029msec
Disk stats (read/write):
  drbd0: ios=104615872/15229160, merge=0/0, ticks=227412048/10465563, in_queue=240761896, util=100.00%, aggrios=3269352/15250506, aggrmerge=101349912/49, aggrticks=12693874/10206056, aggrin_queue=22906471, aggrutil=100.00%
  sdb: ios=3269352/15250506, merge=101349912/49, ticks=12693874/10206056, in_queue=22906471, util=100.00%
16.3 fio: fio blocksize 128k, size 20G, numJobs 40
[root@lxmds30 mnt]# fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-1.out
  read: IOPS=7520, BW=940MiB/s (986MB/s)(400GiB/435659msec)
  write: IOPS=7522, BW=940MiB/s (986MB/s)(400GiB/435659msec)
   READ: bw=940MiB/s (986MB/s), 940MiB/s-940MiB/s (986MB/s-986MB/s), io=400GiB (429GB), run=435659-435659msec
  WRITE: bw=940MiB/s (986MB/s), 940MiB/s-940MiB/s (986MB/s-986MB/s), io=400GiB (430GB), run=435659-435659msec
Disk stats (read/write):
  drbd0: ios=104843072/15257070, merge=0/0, ticks=222885191/7413117, in_queue=233038667, util=100.00%, aggrios=3276387/15287539, aggrmerge=101567997/64, aggrticks=12494138/7174284, aggrin_queue=19674262, aggrutil=100.00%
  sdb: ios=3276387/15287539, merge=101567997/64, ticks=12494138/7174284, in_queue=19674262, util=100.00%
16.4 fio: fio blocksize 1M, size 4G, numJobs 320
[root@lxmds30 mnt]# fio --gtod_reduce=1 --name=test2 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s4G_bs1M_numj320.out
  read: IOPS=1052, BW=1052MiB/s (1103MB/s)(960GiB/934089msec)
  write: IOPS=351, BW=351MiB/s (368MB/s)(320GiB/934089msec)
   READ: bw=1052MiB/s (1103MB/s), 1052MiB/s-1052MiB/s (1103MB/s-1103MB/s), io=960GiB (1030GB), run=934089-934089msec
  WRITE: bw=351MiB/s (368MB/s), 351MiB/s-351MiB/s (368MB/s-368MB/s), io=320GiB (344GB), run=934089-934089msec
Disk stats (read/write):
  drbd0: ios=251573952/2974215, merge=0/0, ticks=2873434645/25607272, in_queue=18446744072380214778, util=100.00%, aggrios=7861848/3004407, aggrmerge=243717288/744, aggrticks=120298687/24069954, aggrin_queue=144468130, aggrutil=100.00%
  sdb: ios=7861848/3004407, merge=243717288/744, ticks=120298687/24069954, in_queue=144468130, util=100.00%
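The in_queue figure of 18446744072380214778 on drbd0 is not a real tick count: it sits just below 2^64 = 18446744073709551616, i.e. an unsigned 64-bit counter that wrapped below zero (the same artefact appears in the drbd0 lines of 16.7 through 16.9). The "negative" remainder is small:

  python3 -c 'print(2**64 - 18446744072380214778)'   # 1329336838 ticks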
16.5 iozone Christo r=256k
[root@lxmds30 mnt]# iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out
Children see throughput for 16 initial writers = 326883.97 kB/sec
Children see throughput for 16 rewriters       = 416329.61 kB/sec
Children see throughput for 16 readers         = 143612569.00 kB/sec
Children see throughput for 16 re-readers      = 136355560.50 kB/sec
Children see throughput for 16 random readers  = 143895947.50 kB/sec
Children see throughput for 16 random writers  = 394584.65 kB/sec
16.6 iozone Christo r=1M
[root@lxmds30 mnt]# iozone -s 30g -r 1M -i0 -i1 -i2 -t16 -e -w > iozone_s30g_r1M_t16.out
Children see throughput for 16 initial writers = 391769.19 kB/sec
Children see throughput for 16 rewriters       = 464184.56 kB/sec
Children see throughput for 16 readers         = 148018322.50 kB/sec
Children see throughput for 16 re-readers      = 154949658.00 kB/sec
Children see throughput for 16 random readers  = 141972244.00 kB/sec
Children see throughput for 16 random writers  = 396501.99 kB/sec
16.7 fio: fio blocksize 1M, size 10G, numJobs 320
[root@lxmds30 mnt]# nohup fio --gtod_reduce=1 --name=test3 --bs=1M --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs1M_numj320.out &
  read: IOPS=802, BW=802MiB/s (841MB/s)(2400GiB/3062884msec)
  write: IOPS=267, BW=267MiB/s (280MB/s)(800GiB/3062884msec)
   READ: bw=802MiB/s (841MB/s), 802MiB/s-802MiB/s (841MB/s-841MB/s), io=2400GiB (2577GB), run=3062884-3062884msec
  WRITE: bw=267MiB/s (280MB/s), 267MiB/s-267MiB/s (280MB/s-280MB/s), io=800GiB (859GB), run=3062884-3062884msec
Disk stats (read/write):
  drbd0: ios=629148352/120850249, merge=0/0, ticks=18446744072727822636/36635615, in_queue=3418624698, util=100.00%, aggrios=19660906/114779213, aggrmerge=609488022/6074379, aggrticks=140712931/22314917, aggrin_queue=163116604, aggrutil=100.00%
  sdb: ios=19660906/114779213, merge=609488022/6074379, ticks=140712931/22314917, in_queue=163116604, util=100.00%
16.8 fio: fio blocksize 128k, size 10G, numJobs 320
[root@lxmds30 mnt]# nohup fio --gtod_reduce=1 --name=test4 --bs=128k --numj=320 --size=10G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s10G_bs128k_numj320.out &
  read: IOPS=6000, BW=750MiB/s (786MB/s)(2400GiB/3276558msec)
  write: IOPS=2000, BW=250MiB/s (262MB/s)(800GiB/3276558msec)
   READ: bw=750MiB/s (786MB/s), 750MiB/s-750MiB/s (786MB/s-786MB/s), io=2400GiB (2577GB), run=3276558-3276558msec
  WRITE: bw=250MiB/s (262MB/s), 250MiB/s-250MiB/s (262MB/s-262MB/s), io=800GiB (859GB), run=3276558-3276558msec
Disk stats (read/write):
  drbd0: ios=629101888/121223656, merge=0/0, ticks=18446744072849689368/30608634, in_queue=18446744072936853552, util=100.00%, aggrios=19659456/115544978, aggrmerge=609443040/5688591, aggrticks=153900115/17575856, aggrin_queue=171570579, aggrutil=100.00%
16.9 fio: fio blocksize 128k, size 1G, numJobs 3200
[root@lxmds30 mnt]# nohup fio --gtod_reduce=1 --name=test5 --bs=128k --numj=3200 --size=1G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s1G_bs128k_numj3200.out &
  read: IOPS=6406, BW=801MiB/s (840MB/s)(2400GiB/3069168msec)
  write: IOPS=2135, BW=267MiB/s (280MB/s)(800GiB/3069168msec)
   READ: bw=801MiB/s (840MB/s), 801MiB/s-801MiB/s (840MB/s-840MB/s), io=2400GiB (2577GB), run=3069168-3069168msec
  WRITE: bw=267MiB/s (280MB/s), 267MiB/s-267MiB/s (280MB/s-280MB/s), io=800GiB (859GB), run=3069168-3069168msec
Disk stats (read/write):
  drbd0: ios=629155936/121331602, merge=0/0, ticks=1394049998/32778154, in_queue=18446744070963296870, util=100.00%, aggrios=19661134/118173320, aggrmerge=609495154/3174089, aggrticks=153075863/21298595, aggrin_queue=174489473, aggrutil=99.92%
  sdb: ios=19661134/118173320, merge=609495154/3174089, ticks=153075863/21298595, in_queue=174489473, util=99.92%
17 cephfs on lxbk0377
17.1 iozone
root@lxbk0377:/cephfs/test# iozone -s 520g -r 64 -i0 -i1 -t1
Children see throughput for 1 initial writers = 853049.25 kB/sec
Children see throughput for 1 rewriters       = 853982.25 kB/sec
Children see throughput for 1 readers         = 613430.50 kB/sec
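CephFS results depend on cluster health and OSD load as much as on the client, so it is worth snapshotting cluster state alongside each run. A quick check with the standard ceph CLI (assuming an admin keyring is available on lxbk0377):

  ceph -s         # overall health and current client IO rates
  ceph df         # pool usage
  ceph osd perf   # per-OSD commit/apply latencies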
17.2 fio: blocksize 1M, size 20G, numJobs 40
root@lxbk0377:/cephfs/test# fio --rw=randrw --name=test0 --direct=0 --fallocate=none --bs=1M --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs1M_numj40-0.out
  read: IOPS=95, BW=95.9MiB/s (101MB/s)(399GiB/4261571msec)
  write: IOPS=96, BW=96.3MiB/s (101MB/s)(401GiB/4261571msec); 0 zone resets
   READ: bw=95.9MiB/s (101MB/s), 95.9MiB/s-95.9MiB/s (101MB/s-101MB/s), io=399GiB (429GB), run=4261571-4261571msec
  WRITE: bw=96.3MiB/s (101MB/s), 96.3MiB/s-96.3MiB/s (101MB/s-101MB/s), io=401GiB (430GB), run=4261571-4261571msec
17.3 fio: fio blocksize 128k, size 20G, numJobs 40
root@lxbk0377:/cephfs/test# nohup fio --rw=randrw --name=test1 --direct=0 --fallocate=none --bs=128k --numj=40 --group_reporting --size=20G --output=fio_randrw_s20G_bs128k_numj40-1.out &
  read: IOPS=519, BW=64.0MiB/s (68.1MB/s)(400GiB/6305463msec)
  write: IOPS=519, BW=64.0MiB/s (68.1MB/s)(400GiB/6305463msec); 0 zone resets
   READ: bw=64.0MiB/s (68.1MB/s), 64.0MiB/s-64.0MiB/s (68.1MB/s-68.1MB/s), io=400GiB (429GB), run=6305463-6305463msec
  WRITE: bw=64.0MiB/s (68.1MB/s), 64.0MiB/s-64.0MiB/s (68.1MB/s-68.1MB/s), io=400GiB (430GB), run=6305463-6305463msec
17.4 fio: fio blocksize 1M, size 4G, numJobs 320
root@lxbk0377:/cephfs/test# nohup fio --gtod_reduce=1 --name=test2 --bs=1M --numj=320 --size=4G --readwrite=randrw --rwmixread=75 --group_reporting --output=fio_randrw_s4G_bs1M_numj320.out &
  read: IOPS=139, BW=139MiB/s (146MB/s)(960GiB/7058440msec)
  write: IOPS=46, BW=46.5MiB/s (48.7MB/s)(320GiB/7058440msec); 0 zone resets
   READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=960GiB (1030GB), run=7058440-7058440msec
  WRITE: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=320GiB (344GB), run=7058440-7058440msec
17.5 iozone Christo r=256k
root@lxbk0377:/cephfs/test# nohup iozone -s 30g -r 256k -i0 -i1 -i2 -t16 -e -w > christo_iozone.out &
Children see throughput for 16 initial writers = 768733.39 kB/sec
Children see throughput for 16 rewriters       = 798154.96 kB/sec
Children see throughput for 16 readers         = 894907.47 kB/sec
Children see throughput for 16 re-readers      = 895743.62 kB/sec
Children see throughput for 16 random readers  = 658520.19 kB/sec
Children see throughput for 16 random writers  = 498290.33 kB/sec
17.6 iozone Christo r=1M
root@lxbk0377:/cephfs/test# iozone -s 30g -r 1M -i0 -i1 -i2 -t16 -e -w > iozone_s30g_r1M_t16.out
Children see throughput for 16 initial writers = 748407.37 kB/sec
Children see throughput for 16 rewriters       = 773186.92 kB/sec
Children see throughput for 16 readers         = 874338.89 kB/sec
Children see throughput for 16 re-readers      = 1014372.55 kB/sec
Children see throughput for 16 random readers  = 862001.12 kB/sec
Children see throughput for 16 random writers  = 532786.21 kB/sec
17.7 fio: Thomas-Krenn write latency test
root@lxbk0377:/cephfs/test# fio --rw=randwrite --name=latency_write-1 --bs=4k --direct=1 --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based --size=1M > krenn_fio_latency_write-1.out
latency_write-1: (groupid=0, jobs=1): err= 0: pid=500571: Wed Mar 9 13:45:55 2022
  write: IOPS=18, BW=75.7KiB/s (77.5kB/s)(4548KiB/60056msec); 0 zone resets
    slat (usec): min=25, max=41061, avg=75.13, stdev=1216.63
    clat (msec): min=27, max=154, avg=52.74, stdev=21.29
     lat (msec): min=27, max=154, avg=52.81, stdev=21.41
    clat percentiles (msec):
     |  1.00th=[   34],  5.00th=[   35], 10.00th=[   36], 20.00th=[   38],
     | 30.00th=[   41], 40.00th=[   44], 50.00th=[   46], 60.00th=[   48],
     | 70.00th=[   54], 80.00th=[   63], 90.00th=[   83], 95.00th=[  103],
     | 99.00th=[  133], 99.50th=[  144], 99.90th=[  153], 99.95th=[  155],
     | 99.99th=[  155]
   bw (  KiB/s): min=   40, max=   96, per=100.00%, avg=75.86, stdev=14.77, samples=119
   iops        : min=   10, max=   24, avg=18.96, stdev= 3.71, samples=119
  lat (msec)   : 50=63.50%, 100=31.05%, 250=5.45%
  cpu          : usr=0.02%, sys=0.14%, ctx=1140, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1137,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=75.7KiB/s (77.5kB/s), 75.7KiB/s-75.7KiB/s (77.5kB/s-77.5kB/s), io=4548KiB (4657kB), run=60056-60056msec
root@lxbk0377:/cephfs/test# fio --rw=randwrite --name=latency_write-2 --bs=4k --direct=1 --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based --size=1G > krenn_fio_latency_write-2.out
latency_write-2: (groupid=0, jobs=1): err= 0: pid=499973: Wed Mar 9 13:30:58 2022
  write: IOPS=15, BW=63.0KiB/s (65.5kB/s)(3840KiB/60009msec); 0 zone resets
    slat (usec): min=24, max=73611, avg=152.32, stdev=2627.80
    clat (msec): min=20, max=1365, avg=62.35, stdev=79.46
     lat (msec): min=20, max=1365, avg=62.50, stdev=79.49
    clat percentiles (msec):
     |  1.00th=[   29],  5.00th=[   35], 10.00th=[   35], 20.00th=[   36],
     | 30.00th=[   37], 40.00th=[   43], 50.00th=[   47], 60.00th=[   54],
     | 70.00th=[   66], 80.00th=[   80], 90.00th=[   95], 95.00th=[  109],
     | 99.00th=[  205], 99.50th=[  393], 99.90th=[ 1368], 99.95th=[ 1368],
     | 99.99th=[ 1368]
   bw (  KiB/s): min=    8, max=   96, per=100.00%, avg=66.96, stdev=18.09, samples=114
   iops        : min=    2, max=   24, avg=16.74, stdev= 4.53, samples=114
  lat (msec)   : 50=54.90%, 100=37.29%, 250=6.98%, 500=0.42%, 1000=0.10%
  lat (msec)   : 2000=0.31%
  cpu          : usr=0.01%, sys=0.09%, ctx=963, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,960,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=63.0KiB/s (65.5kB/s), 63.0KiB/s-63.0KiB/s (65.5kB/s-65.5kB/s), io=3840KiB (3932kB), run=60009-60009msec
17.8 fio: Thomas-Krenn read latency test
root@lxbk0377:/cephfs/test# fio --rw=randread --name=latency_read-1 --bs=4k --direct=1 --size=2617M --numjobs=1 --ioengine=libaio --iodepth=1 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_latency_read-1.out
latency_read-1: (groupid=0, jobs=1): err= 0: pid=500633: Wed Mar 9 13:49:25 2022
  read: IOPS=1778, BW=7114KiB/s (7284kB/s)(417MiB/60001msec)
    slat (usec): min=7, max=136, avg=28.18, stdev= 9.90
    clat (usec): min=118, max=11227, avg=529.29, stdev=153.20
     lat (usec): min=132, max=11244, avg=557.91, stdev=154.61
    clat percentiles (usec):
     |  1.00th=[  206],  5.00th=[  338], 10.00th=[  379], 20.00th=[  420],
     | 30.00th=[  453], 40.00th=[  490], 50.00th=[  529], 60.00th=[  562],
     | 70.00th=[  603], 80.00th=[  644], 90.00th=[  693], 95.00th=[  734],
     | 99.00th=[  791], 99.50th=[  816], 99.90th=[ 1467], 99.95th=[ 1991],
     | 99.99th=[ 4817]
   bw (  KiB/s): min= 6264, max=12008, per=100.00%, avg=7126.10, stdev=841.96, samples=119
   iops        : min= 1566, max= 3002, avg=1781.52, stdev=210.49, samples=119
  lat (usec)   : 250=2.97%, 500=39.97%, 750=53.88%, 1000=2.99%
  lat (msec)   : 2=0.15%, 4=0.04%, 10=0.01%, 20=0.01%
  cpu          : usr=2.56%, sys=9.79%, ctx=106713, majf=0, minf=15
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=106708,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=7114KiB/s (7284kB/s), 7114KiB/s-7114KiB/s (7284kB/s-7284kB/s), io=417MiB (437MB), run=60001-60001msec
17.9 fio: Thomas-Krenn IOPS write test
root@lxbk0377:/cephfs/test# nohup fio --rw=randwrite --name=iops_write-1 --bs=4k --direct=1 --size=10G --numjobs=4 --ioengine=libaio --iodepth=32 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_iops_write-1.out &
iops_write-1: (groupid=0, jobs=4): err= 0: pid=502216: Wed Mar 9 15:26:53 2022
  write: IOPS=1280, BW=5124KiB/s (5247kB/s)(301MiB/60161msec); 0 zone resets
    slat (usec): min=5, max=94014, avg=34.21, stdev=743.17
    clat (msec): min=26, max=339, avg=99.67, stdev=42.91
     lat (msec): min=26, max=339, avg=99.71, stdev=42.92
    clat percentiles (msec):
     |  1.00th=[   49],  5.00th=[   57], 10.00th=[   62], 20.00th=[   69],
     | 30.00th=[   74], 40.00th=[   81], 50.00th=[   86], 60.00th=[   93],
     | 70.00th=[  104], 80.00th=[  126], 90.00th=[  167], 95.00th=[  197],
     | 99.00th=[  239], 99.50th=[  249], 99.90th=[  284], 99.95th=[  292],
     | 99.99th=[  330]
   bw (  KiB/s): min= 3617, max= 6400, per=100.00%, avg=5142.24, stdev=137.96, samples=476
   iops        : min=  903, max= 1600, avg=1285.55, stdev=34.50, samples=476
  lat (msec)   : 50=1.53%, 100=65.70%, 250=32.31%, 500=0.46%
  cpu          : usr=0.36%, sys=1.41%, ctx=49394, majf=0, minf=51
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.8%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,77061,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=5124KiB/s (5247kB/s), 5124KiB/s-5124KiB/s (5247kB/s-5247kB/s), io=301MiB (316MB), run=60161-60161msec
17.10 fio: Thomas-Krenn IOPS read test
root@lxbk0377:/cephfs/test# nohup fio --rw=randread --name=iops_read-1 --bs=4k --direct=1 --size=10G --numjobs=4 --ioengine=libaio --iodepth=32 --refill_buffers --group_reporting --runtime=60 --time_based > krenn_fio_iops_read-1.out &
iops_read-1: (groupid=0, jobs=4): err= 0: pid=502751: Wed Mar 9 15:46:51 2022
  read: IOPS=76.2k, BW=298MiB/s (312MB/s)(17.4GiB/60002msec)
    slat (usec): min=4, max=437, avg= 9.02, stdev= 4.67
    clat (usec): min=126, max=30544, avg=1669.29, stdev=409.84
     lat (usec): min=137, max=30551, avg=1678.38, stdev=409.40
    clat percentiles (usec):
     |  1.00th=[  930],  5.00th=[ 1106], 10.00th=[ 1172], 20.00th=[ 1287],
     | 30.00th=[ 1418], 40.00th=[ 1532], 50.00th=[ 1663], 60.00th=[ 1795],
     | 70.00th=[ 1909], 80.00th=[ 2040], 90.00th=[ 2180], 95.00th=[ 2278],
     | 99.00th=[ 2409], 99.50th=[ 2474], 99.90th=[ 2671], 99.95th=[ 3064],
     | 99.99th=[ 9503]
   bw (  KiB/s): min=285096, max=327888, per=100.00%, avg=305257.33, stdev=2994.79, samples=476
   iops        : min=71274, max=81972, avg=76314.34, stdev=748.70, samples=476
  lat (usec)   : 250=0.05%, 500=0.17%, 750=0.31%, 1000=1.05%
  lat (msec)   : 2=75.40%, 4=22.99%, 10=0.02%, 20=0.01%, 50=0.01%
  cpu          : usr=2.82%, sys=20.73%, ctx=3557679, majf=0, minf=1002
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=4574129,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=298MiB/s (312MB/s), 298MiB/s-298MiB/s (312MB/s-312MB/s), io=17.4GiB (18.7GB), run=60002-60002msec
17.11 fio IOPS write test 32 * 64
root@lxbk0377:/cephfs/test# nohup fio --rw=randwrite --name=iops_write-4 --bs=4k --direct=1 --size=2G --numjobs=32 --ioengine=libaio --iodepth=64 --refill_buffers --runtime=10m --time_based --group_reporting --output=fio_iops_write-4.out &
iops_write-4: (groupid=0, jobs=32): err= 0: pid=505736: Wed Mar 9 19:19:53 2022
  write: IOPS=9408, BW=36.8MiB/s (38.5MB/s)(21.6GiB/600613msec); 0 zone resets
    slat (usec): min=5, max=438743, avg=27.67, stdev=1067.29
    clat (msec): min=45, max=1645, avg=217.44, stdev=113.40
     lat (msec): min=45, max=1806, avg=217.47, stdev=113.42
    clat percentiles (msec):
     |  1.00th=[   85],  5.00th=[  103], 10.00th=[  115], 20.00th=[  133],
     | 30.00th=[  150], 40.00th=[  169], 50.00th=[  192], 60.00th=[  218],
     | 70.00th=[  247], 80.00th=[  284], 90.00th=[  342], 95.00th=[  418],
     | 99.00th=[  617], 99.50th=[  735], 99.90th=[ 1183], 99.95th=[ 1301],
     | 99.99th=[ 1469]
   bw (  KiB/s): min= 1622, max=56584, per=100.00%, avg=37701.59, stdev=211.43, samples=38354
   iops        : min=  400, max=14146, avg=9425.38, stdev=52.86, samples=38354
  lat (msec)   : 50=0.01%, 100=4.12%, 250=66.97%, 500=26.47%, 750=1.99%
  lat (msec)   : 1000=0.24%, 2000=0.21%
  cpu          : usr=0.24%, sys=1.00%, ctx=5101020, majf=0, minf=9335
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,5650600,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=36.8MiB/s (38.5MB/s), 36.8MiB/s-36.8MiB/s (38.5MB/s-38.5MB/s), io=21.6GiB (23.1GB), run=600613-600613msec
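All runs above leave their full reports in *.out files on the respective hosts; a one-liner pulls the aggregate bandwidth lines into a comparable summary (a post-processing sketch, not part of the benchmarks; the glob patterns assume the file names used above):

  grep -H -E '^[[:space:]]*(READ|WRITE):' fio_*.out krenn_fio_*.out | sort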