I want to maximize the speed of the 12 Gbps SAS interfaces in my 2U 4-node QUANTA servers.
I know my choices are to either go full SSD, or SSD cached Hybrid HDD.
I have 6 SAS 12Gbps slots per server node.
Right now I have an SSD in slot 1 and the remaining 5 slots are HDD.
When I benchmark my node-to-node transfer rate using only RAM drives as targets, I get the full wire speed of 5.67 GB/s (4200 jumbo MTU).
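For reference, that wire-speed figure can be sanity-checked against the raw link rate. A rough calculation (assuming decimal units, 1 GB/s = 8 Gb/s; the ceiling here ignores Ethernet/IP framing overhead, so real efficiency is a bit better than it looks):

```python
# Sanity check on the RAM-drive benchmark above.
# Assumes decimal units throughout: 1 GB/s = 8 Gb/s.

link_gbps = 56.0            # ConnectX-3 link rate, Gb/s
raw_gbs = link_gbps / 8     # theoretical payload ceiling in GB/s
measured_gbs = 5.67         # node-to-node result with RAM-drive targets

efficiency = measured_gbs / raw_gbs
print(f"raw ceiling: {raw_gbs:.2f} GB/s")
print(f"measured:    {measured_gbs:.2f} GB/s ({efficiency:.0%} of raw)")
```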
I know there is NO WAY the drives I currently have will support that speed, so...
The Question Becomes:
Which drives will make the most of the SAS 12Gbps bus?
(i.e., which SSDs and HDDs would you select to reduce the bottleneck as much as possible?)
The Current Inventory:
=============================
The Current SSD - 1 Per Node:
=============================
TOSHIBA
PX02SMF080
800 GB (745.213 GiB)
12.0 Gbps
400 MB/s Write / 900 MB/s Read <--- individual performance* of a single SSD
=============================
The Current HDD - 5 Per Node:
=============================
HGST
H101812SFSUN12T
1.2 TB (1.092 TiB)
206 MB/s Write / 206 MB/s Read <--- individual performance* of a single HDD
*(as tested by ATTO at a queue depth of 16)
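Putting those ATTO numbers together gives the rough aggregate ceiling of the current per-node drive mix, assuming throughput scales linearly across drives (real RAID striping won't quite achieve this):

```python
# Aggregate ceiling of the current per-node drive mix, using the
# ATTO QD16 figures quoted above (MB/s). Linear scaling is assumed.

ssd_read, ssd_write = 900, 400       # 1x Toshiba PX02SMF080
hdd_read, hdd_write = 206, 206       # 5x HGST H101812SFSUN12T
n_hdd = 5

agg_read = ssd_read + n_hdd * hdd_read      # MB/s
agg_write = ssd_write + n_hdd * hdd_write   # MB/s
wire = 5670                                 # measured node-to-node, MB/s

print(f"aggregate read:  {agg_read} MB/s ({agg_read / wire:.0%} of wire speed)")
print(f"aggregate write: {agg_write} MB/s ({agg_write / wire:.0%} of wire speed)")
```

Even under that optimistic assumption, the current mix tops out at roughly a third of the measured wire speed on reads, which is the gap the drive upgrade needs to close.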
=============================
SAS Controller in each Node:
=============================
(LSI) AVAGO MegaRAID SAS PCI Express ROMB (image below)
A.K.A. - DAS2BTH7CB0 QUANTA SAS3108 12 Gbps RAID CARD
A.K.A. - AVAGO MegaRAID SAS 3108
Firmware Version: 4.680.00-8555
BIOS Version: 6.36.00.3_4.1908.00_0x06180205
Firmware Package Version: 24:21.0-0148
Vendor ID: 0x1000
SubVendor ID: 0x152d
Device ID: 0x5d
The server is driving it at PCIe 3.x, so the controller's host link isn't a bottleneck at all.
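A quick check of that claim, assuming the SAS3108's usual x8 link width (worth confirming with lspci on this particular board):

```python
# PCIe 3.0 host-link ceiling vs. the measured network wire speed.
# Assumes an x8 link, which is typical for the SAS3108.

lanes = 8
lane_gbs = 8.0 * (128 / 130) / 8   # 8 GT/s with 128b/130b coding -> ~0.985 GB/s/lane
pcie_gbs = lane_gbs * lanes        # ~7.88 GB/s usable for x8
wire_gbs = 5.67                    # measured node-to-node wire speed

print(f"PCIe 3.0 x{lanes} ceiling: {pcie_gbs:.2f} GB/s vs {wire_gbs} GB/s wire speed")
```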
Benchmark of the ConnectX-3 Ethernet link running at 56 Gb/s - jumbo MTU (4200):