I have the need for stupid amounts of storage and no redundancy (don't ask, not my call).
1.2 Petabytes spread over a max of 28 units.
I have already sorted out a solution using 4U Supermicro cases, but I would like to keep that as a fallback and investigate other options, as this project is unlikely to require delivery for another four months.
I came across the Backblaze pod blog a while ago and wondered what could be done with that idea. I intend this thread to be a scratchpad / brainstorming / innovation place to see if it can be bettered for less than the cost of the 28x 4U servers.
Please note I am in Singapore, so prices here are very different from the US, UK and many other places. Distribution is very limited, and distributors work hard to live up to the 'charge as much as the market will allow' motto of business.
I am currently looking at DAS units for the bulk of the storage, with separate one- or two-CPU servers (E3-1200 or E5-2600 series) running ESXi to create the number of VMs required for the processing function.
Requirements are:
- 1.2 Petabytes of storage
- Low-end 4-core CPU for every 22 drives (plus 2 spare empty slots)
- Dual LAN for every 24 drive slots (22 filled & 2 empty)
- Each set of 24 drives (as above) to be directly controlled by its VM (VT-d)
- Less than 112U total size (4Ux28 units)
- Each VM to run Win 7 Pro (again, out of my control).
- Hard drives will be spanned or striped with no redundancy (I know, I know, not my call either).
- Redundant PSUs required in all machines.
- Less than S$250,000
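As a quick sanity check on the requirements above, the capacity figures can be worked through explicitly (a rough tally only; it uses vendor terabytes, not TiB, and ignores filesystem overhead):

```python
# Does 28 units x 22 populated slots of 2TB drives reach 1.2 PB?
# (Figures taken from the requirements list above.)
import math

DRIVE_TB = 2             # 2TB SATA drives
POPULATED_PER_UNIT = 22  # 22 populated + 2 spare slots per 24-slot unit
MAX_UNITS = 28
TARGET_TB = 1200         # 1.2 petabytes

total_tb = MAX_UNITS * POPULATED_PER_UNIT * DRIVE_TB
print(f"Total raw capacity: {total_tb} TB")              # 1232 TB
print(f"Meets 1.2 PB target: {total_tb >= TARGET_TB}")   # True

# With 3TB drives, fewer populated slots are needed per unit:
slots_3tb = math.ceil(TARGET_TB / (3 * MAX_UNITS))
print(f"Populated slots per unit with 3TB drives: {slots_3tb}")  # 15
```

So the 2TB layout lands at 1232 TB raw, a small margin over the target, and 3TB drives would leave roughly nine slots free per unit.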
The 22 populated and 2 spare (unpopulated) slots are based on 2TB drives. 3TB drives can be used, with the resulting changes in the number of populated and free slots taken into account. The drives are SATA.
I am currently looking at a custom 4U case with a couple of Supermicro 24-drive hot-swap backplanes on the bottom, with the drives mounted vertically (tail end down). The backplanes have a SAS expander allowing dual mini-SAS connections (8 lanes total). I did look at three backplanes in the DAS case, but you would need to remove the entire top of the case to swap dead drives out, and that would be difficult in a fully populated rack.
The four SAS cables from the two backplanes would connect to internal-to-external converters, which would then link to something like a HighPoint 2744 PCIe 2.0 x16 controller. This would be housed in a server built on a Supermicro X9DRD-iF motherboard (I need to confirm ESXi can run on this board, but the i350 network chipset seems to be supported) with dual E5-26XX CPUs and a stack of RAM (spec as yet undecided). The SAS controller is PCIe 2.0 x16, so it can handle 8GB/s on the PCIe bus. The four SAS cables from the backplanes can handle 9.6GB/s (600MB/s x 4 lanes x 4 cables). This means the PCIe bus is the limit, but it still allows each drive around 166MB/s, so plenty for the DAS.
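The bandwidth arithmetic in that paragraph can be laid out as a back-of-envelope budget (assuming 6Gb/s SAS at ~600MB/s per lane and PCIe 2.0 at ~500MB/s per lane; real-world protocol overhead will shave something off both):

```python
# Rough bandwidth budget for one DAS box behind a single controller.
# Assumptions: 6Gb/s SAS lanes at ~600 MB/s each, PCIe 2.0 at
# ~500 MB/s per lane; overhead is ignored.

SAS_LANE_MBS = 600     # MB/s per 6Gb/s SAS lane
LANES_PER_CABLE = 4    # each mini-SAS cable carries 4 lanes
CABLES = 4             # two backplanes x two mini-SAS links each
PCIE2_LANE_MBS = 500   # MB/s per PCIe 2.0 lane
PCIE_LANES = 16        # HighPoint 2744 is PCIe 2.0 x16
DRIVES = 48            # two fully populated 24-slot backplanes

sas_total = SAS_LANE_MBS * LANES_PER_CABLE * CABLES   # 9600 MB/s
pcie_total = PCIE2_LANE_MBS * PCIE_LANES              # 8000 MB/s
bottleneck = min(sas_total, pcie_total)
per_drive = bottleneck // DRIVES

print(f"SAS link total: {sas_total} MB/s")    # 9600
print(f"PCIe bus total: {pcie_total} MB/s")   # 8000
print(f"Per-drive share at the bottleneck: {per_drive} MB/s")  # 166
```

The PCIe bus is the narrower pipe, but 166MB/s per drive is still more than a 7200rpm SATA disk can sustain.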
After Patrick's great review of the Silicom PEG6I 6-port network card, it seems this may be ideal for the network connectivity. Six ports shared via VT-d in pairs, plus the two on the motherboard, gives dual LAN for 4 VMs. Two more HighPoint 2722s will give 4 more external SAS connectors, allowing a second DAS box to be connected so there will be enough drives for the 4 VMs. I would imagine low(ish)-end E5s should be fine.
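Tallying the per-host resources described above confirms the counts line up (assumed figures: PEG6I plus onboard ports paired off for dual LAN, and two DAS boxes of 48 drives each):

```python
# Per-ESXi-host tally for the plan above: one PEG6I (6 ports) plus
# 2 onboard ports paired off per VM, and two 48-drive DAS boxes.

PEG6I_PORTS = 6
ONBOARD_PORTS = 2
PORTS_PER_VM = 2       # dual-LAN requirement per VM

DAS_BOXES = 2          # one on the 2744, one via the two 2722s
DRIVES_PER_DAS = 48    # two 24-slot backplanes per box
SLOTS_PER_VM = 24      # 22 populated + 2 spares

vms = (PEG6I_PORTS + ONBOARD_PORTS) // PORTS_PER_VM  # 4 VMs
slots = DAS_BOXES * DRIVES_PER_DAS                   # 96 slots
print(f"VMs per host: {vms}")                            # 4
print(f"Drive slots per host: {slots}")                  # 96
print(f"Slots needed for {vms} VMs: {vms * SLOTS_PER_VM}")  # 96
```

So each ESXi host would carry exactly four VMs, with the NIC ports and drive slots both coming out even.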
I will separate this into the different machine builds later, when I get home, to make it clearer.
Notes, suggestions and stupid mistakes pointed out are all welcome.
RB