Hello!
I've posted before about which SLOG to pick for my NAS, but before tackling that, I got some test gear to run benchmarks and see where I stand and what I can expect from the system.
What I have:
- IBM System x3550 M4
- 2x Intel Xeon E5-2640
- 164GB memory
- 2x SAS drives for system
- LSI SAS 9207-8e SAS HBA
- 2x Supermicro SC837E26-RJBOD1 JBOD case for drives
- 42x 4TB Seagate Constellation drives
- 4x Intel S3700 100GB SSDs for SLOG
- OmniOS as OS
My final configuration will have 52 SAS drives, but for now I have to work with 42 SATA drives.
This system lets me try different configurations, but since I have 42 drives, I'm starting with a pool of 7x 6-drive RAIDZ2 vdevs plus 2x mirrored SSDs for the SLOG, roughly like the sketch below.
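For reference, I'd create that layout roughly like this (just a sketch; the cXtYdZ device names are placeholders, not my real devices, and I'm assuming the four S3700s go in as two mirrored log pairs):

    # sketch only -- device names below are made up for illustration
    zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
      raidz2 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 \
      raidz2 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0 \
      raidz2 c1t24d0 c1t25d0 c1t26d0 c1t27d0 c1t28d0 c1t29d0 \
      raidz2 c1t30d0 c1t31d0 c1t32d0 c1t33d0 c1t34d0 c1t35d0 \
      raidz2 c1t36d0 c1t37d0 c1t38d0 c1t39d0 c1t40d0 c1t41d0 \
      log mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0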
My current workload looks like 50% sequential / 50% random (according to the iopattern DTrace script), and very little of it is sync writes, at least according to zilstat (a sync write only every 4-5 seconds). Is there a way to see the average I/O request size, so I can better define my test scenario?
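For the I/O size question, I was thinking something like this DTrace one-liner on the io provider might show me the size distribution (not sure it's the right approach, so correct me if there's a better way):

    # histogram of physical I/O request sizes, Ctrl-C to print
    dtrace -n 'io:::start { @["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'

Or I guess dividing kw/s by w/s from 'iostat -xn' would at least give a rough average.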
The storage is mostly used for backups, and a few clients use it for active data. Clients mostly write and almost never read; reads only happen when data is copied to another OmniOS server for backup (a sequential transfer, I guess).
All clients connect through iSCSI, so I want to test both iSCSI and local speeds.
The biggest problem I have right now is how to accurately measure the system's performance, so I have a few questions for you guys:
- What tool do you use? I did some tests with 'fio'. Would you recommend it, or do you use something else?
- What parameters do you use with 'fio'? (My current attempt is in the sketch after this list.)
- How do I see the actual transfer speed in OmniOS? 'iostat' gives weirdly high numbers; 'zpool iostat' looks a bit more realistic.
- Given the hardware, what speeds can I expect for sequential and random read/write? Can I expect to saturate a 10Gbps link doing sequential read/write? (My back-of-the-envelope math is after this list.)
- How would you test such a configuration?
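On the fio question, here's the kind of invocation I've been experimenting with (the block sizes, job counts and the /tank/fiotest path are just my guesses, so please correct me):

    # random write, 8k blocks, 4 parallel jobs
    fio --name=randwrite --directory=/tank/fiotest --rw=randwrite \
        --bs=8k --size=10g --numjobs=4 --ioengine=psync \
        --runtime=120 --time_based --group_reporting

    # sequential read, 1m blocks, single job
    fio --name=seqread --directory=/tank/fiotest --rw=read \
        --bs=1m --size=10g --numjobs=1 --ioengine=psync \
        --runtime=120 --time_based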
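And on the expected-speeds question, my back-of-the-envelope math, assuming ~150 MB/s and ~150 IOPS per 7200 rpm Constellation (please sanity-check me):

    seq:    7 vdevs x 4 data disks x ~150 MB/s ≈ 4.2 GB/s theoretical ceiling
    random: 7 vdevs x ~150 IOPS ≈ ~1000 IOPS (a RAIDZ2 vdev ≈ one disk for random I/O)

So on paper the ~1.2 GB/s of a 10Gbps link should be reachable for sequential transfers, but random workloads will be far more limited.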
Matej