Dual DAS setup?


Jaesii

New Member
Feb 6, 2016
I'm in the process of building a new SAN for my VMware lab.

My new storage server consists of:
Chenbro RM23612 12-Bay Chassis
Supermicro X8DT6-F Motherboard with the onboard LSI 2008 flashed to IT Mode.

I want to have the fastest connection to my ESXi servers using DAS.
I've seen a lot of people mention the HP SAS Expander card, but it looks like that only has a single DAS connection.
I am looking to connect two servers to the SAN. Are there any dual-DAS expanders out there?

My other option was to just use a dual-port Fibre Channel card running point-to-point.

My current storage server is just iSCSI over Ethernet. I've hit a bottleneck over Ethernet and need to move to direct-attached storage.

Are there any recommendations I should look into?
 

Patrick

Administrator
Staff member
Dec 21, 2010
These days I might be inclined to get something newer than the HP SAS Expander card.

Maybe a dumb question here, but do you want to move everything to dual-port SAS (drives, backplanes, etc.)? If you wanted to change that storage server, I might be inclined to instead get some Mellanox FDR VPI cards and just upgrade to 40GbE.
 

Jaesii

New Member
Feb 6, 2016
Well, since this is for a home lab, I don't think I need 40GbE.

My ESXi hosts are 1U servers with room for only one disk each. I would like to present the large disks from my JBOD server to the hosts, but I would need a card with two outputs; most cards I look at are single output.
 

KioskAdmin

Active Member
Jan 20, 2015
Jaesii said:
Well, since this is for a home lab, I don't think I need 40GbE.
My ESXi hosts are 1U servers with room for only one disk each. I would like to present the large disks from my JBOD server to the hosts, but I would need a card with two outputs; most cards I look at are single output.
The problem you're running into is that you're trying to do something that hardware is not really built for.

So you have two hosts and want one disk array that you share to both, right?

You will need to:
  1. Get dual port SAS drives
  2. Get a chassis that supports dual port SAS backplanes
  3. Have an expander that supports dual ports (actually the easy part)
  4. Get an extra cable to connect the chassis to the second server

On the other hand, faster Ethernet is way easier.
  1. Get dual port adapter for the storage server
  2. Get single port adapter for each ESXi host
  3. Get cables
Just moving to 10GbE is cheap.
  1. 1x Dual port adapter (storage server): 100% GENUINE INTEL 10gbs Ethernet Server Adapter DP PCI-E X520-DA2 E10G42BTDA $135
  2. 2x Single port adapter (ESXi): Intel X520-DA1 E10G42BTDAG1P5 10GbE Ethernet Converged Network Adapter $43 each
  3. 2x Cables: just get SFP+ DACs (direct attach cables), $15-30; just tell people here how long the cables need to be.
Figure you'd need to spend about $300 to go to 10GbE. If you're using FreeNAS on your storage server, get this dual port adapter instead: Chelsio T420-CR 110-1120-40 Dual Port 10GbE Unified Wire adapter w Short Bracket for slightly better support/performance. Otherwise, stick with Intel.
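
To put some rough numbers on why 10GbE fixes your bottleneck, here's a bit of back-of-envelope math (my own approximation, assuming roughly 10% protocol overhead; it's an illustration, not a benchmark):

  # Rough usable throughput per link, assuming ~10% protocol overhead (an assumption, not measured)
  def usable_mb_per_s(link_gbps, overhead=0.10):
      return link_gbps * 1000 / 8 * (1 - overhead)

  for name, gbps in [("1GbE (one LACP member)", 1), ("10GbE", 10), ("40GbE", 40)]:
      print(f"{name}: ~{usable_mb_per_s(gbps):.0f} MB/s")
  # prints roughly 112, 1125 and 4500 MB/s

With your teamed 1GbE setup, a single iSCSI session only ever rides one member link, so you're looking at roughly 112 MB/s today versus roughly 1,125 MB/s on a single 10GbE link.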

For 40GbE, Mellanox cards are the cheapest. Make sure you get ConnectX-3 FCBT cards (see http://www.mellanox.com/related-docs/prod_adapter_cards/ConnectX3_VPI_Card.pdf )
  1. 1x Dual port adapter: MCX354A-FCBT Mellanox Dual-Port ConnectX-3 FDR InfiniBand 40GigE PCI-E Card MCX354A-FCBT
  2. 2x Single port adapters: Mellanox ConnectX-3 MCX353A-FCBT VPI Adapter Card Half Height Bracket
  3. Cables: again, you can use copper DACs, but budget $40-75 per cable depending on the length and whether they're available cheap

With your setup, I'd stick with 10GbE. You really need PCIe 3.0 for 40GbE (a PCIe 2.0 x8 slot tops out around 32 Gb/s of usable bandwidth, less than a single 40GbE port), so you'd need to upgrade the motherboard/CPUs on your storage server to get full 40GbE speeds.

The BEST part about this: if you ever want to move away from using this setup as direct attach to your 2x ESXi hosts and go to more hosts, you can get cheap SFP+ switches and expand your architecture. Heck, you could trade that dual-port 10GbE adapter for a single-port one and get something like the D-Link DGS-1510-28X (28-port gigabit SmartPro stackable switch with 4x 10GbE SFP+ ports), and you'd have another 10GbE port and be able to have 1GbE connectivity too.

I was going to try going crazy with a DAS until I found this site. I'm happy I didn't waste all that money as I now have 12 virtualization hosts and four storage servers in the lab.
 

Jaesii

New Member
Feb 6, 2016
If fibre is easier, I already own two HP ProCurve 2810 switches that each have 4 SFP+ ports on them.

But I also like the idea of using 10GbE CNAs with direct attach cables. That seems to be the best way to go.
 

Jaesii

New Member
Feb 6, 2016
Thank you KioskAdmin, you answered my question. I've never gone down the DAS route before, and I would probably have spent a lot more money than I needed to on trial-and-error attempts. This is why I love communities like this.

I will keep those Intel CNAs and the Mellanox dual-port in my eBay watch list.
I will be running FreeNAS on my storage server, which is going to be another learning curve. I'm currently running Server 2012 R2 as an iSCSI target with four 1GbE NICs teamed into an iSCSI VLAN on my switch. The ESXi servers both have four NICs: two dedicated to vMotion and iSCSI, and the other two for VM network and management.

I'm really surprised how much of a bottleneck there is over Ethernet, even when using LACP.
 

KioskAdmin

Active Member
Jan 20, 2015
That's really easy, then: you should go SFP+ 10GbE.

That Chelsio adapter I linked, the T420, is good for FreeBSD and FreeBSD-based applications like FreeNAS and pfSense. You can just get single-port adapters for all three machines. You don't need fiber; you can use DACs, which will cost you less than optics. You just need to ensure they work with the HP switch.
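
And on the LACP surprise from your last post: that bottleneck is expected behavior rather than a misconfiguration. A team hashes each flow onto a single member link, so one iSCSI session never gets more than one NIC's worth of bandwidth, no matter how many ports are in the team. A toy sketch of the idea (my own illustration, assuming a simple address/port hash policy, not any vendor's exact algorithm):

  # Toy model of LACP-style load balancing: each flow's hash picks exactly one
  # member link, so a single iSCSI TCP session stays pinned to one 1GbE port.
  LINKS = ["nic1", "nic2", "nic3", "nic4"]

  def pick_link(src_ip, dst_ip, src_port, dst_port):
      return LINKS[hash((src_ip, dst_ip, src_port, dst_port)) % len(LINKS)]

  # One iSCSI session = one flow = one link, every time it sends:
  flow = ("10.0.10.5", "10.0.10.20", 51123, 3260)  # made-up lab addresses/ports
  print(pick_link(*flow))  # same link on every call within a run -> capped at ~1 Gb/s

Only lots of separate flows (many initiators or sessions) actually spread across the team, which is why a single 10GbE link feels so much faster than 4x 1GbE for one host.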
 

Chuntzu

Active Member
Jun 30, 2013
You could give ScaleIO a shot; I think it requires 3 nodes though. It worked great in the short time I have used it.