Question for Community: Cheap Storage Backend for ESXi 5.5


DBayPlaya2k3

Member
I am looking to redo my current ESXi box that I use for learning and home systems.

Please help me with some ideas.


--Purpose--

I would like to build a shared storage server for VMware that can move data at 4Gbps or more and service 2 hosts.


-- My Plan--

Decouple the integrated storage from my current ESXi whitebox and move it to a standalone storage server.

Build 1 more ESXi host, for a total of 2, for HA redundancy.

The storage server will be at least 15-20 feet away from the ESXi hosts (apartment laundry room closet).


--Requirements--

1. Able to move data at a minimum of 4Gbps, but faster is welcome (quick bandwidth math in the sketch after this list)

2. Must be Directly Connected Shared Storage

3. Semi-affordable if possible (I can't afford a $2,000 FC solution lol)

4. Quietish if possible (no loud switches or GF might beat me lol)
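
For a rough sense of scale on requirement 1, here is a quick back-of-the-envelope sketch. The per-disk throughput figure is an assumed number for illustration only, not a measurement of this hardware:

```python
# Rough bandwidth math for the ~4Gbps shared-storage target.
# per_disk_mbs is an assumption for illustration, not a benchmark result.

def gbps_to_mbytes_per_sec(gbps: float) -> float:
    """Convert gigabits per second to megabytes per second."""
    return gbps * 1000 / 8

target_gbps = 4.0
per_disk_mbs = 120.0  # assumed sequential MB/s for a 3TB WD Red class drive

target_mbs = gbps_to_mbytes_per_sec(target_gbps)   # 500 MB/s
disks_needed = target_mbs / per_disk_mbs           # roughly 4 drives streaming

print(f"{target_gbps} Gbps is about {target_mbs:.0f} MB/s")
print(f"That is roughly {disks_needed:.1f} drives' worth of sequential throughput")

# For comparison, raw line rates of the links discussed in this thread:
for name, gbps in [("1 GbE", 1), ("10 GbE", 10), ("QDR InfiniBand (32 Gbps data rate)", 32)]:
    print(f"{name}: ~{gbps_to_mbytes_per_sec(gbps):.0f} MB/s")
```

In other words, a single gigabit link is nowhere near the target, while either 10GbE or QDR InfiniBand has plenty of headroom.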



--Questions--

1. Does anyone know if it is possible to do Fibre Channel on the cheap (with SATA drives, not FC drives)? Most home setups here seem to use InfiniBand (which sounds great), but I am not sure about long cables.

I would probably skip the FC switch and just use point-to-point connections for each host.


2. Can a Windows server with a Fibre Channel HBA serve up raw storage to VMware?


3. What OS is recommended for the FC backend if I don't use Windows? I have used FreeNAS before, which was great, but it was not able to manage the RAID controller I had at the time (an Adaptec 5805).


4. If there is an option besides Fibre Channel, what would you recommend? I am trying to stay away from iSCSI and only use directly connected solutions.


--Current Setup--

https://docs.google.com/spreadsheet/ccc?key=0AhFarcJDQ74UdG9oQXpjZlV6Z0xLb0lMZW1oS3pjaVE

ESXi Server + Integrated Storage:

3.4 GHz Xeon 1245V2

32 GB RAM

8x 3TB WD Reds

2x Intel 520 240GB

2x Samsung 840 Pro 256GB

LSI 9265-8i RAID controller


External Storage:

Synology DS1812

8x 4TB Seagate HDDs (don't remember the model off the top of my head)

--------------------------------------------------------------------
 

TangoWhiskey9

Active Member
There are fairly inexpensive optical cables for Infiniband: IBM Tyco 30M Optical QDR Infiniband 4x10 QSFP Cable TE Acoo 2123272 2 49Y0494 | eBay

100ft and cheap. Point-to-point is certainly the way to go if you only want 2-3 machines connected. Not having a big switch will save a ton of power, space, and noise.

Run a subnet manager on an ESXi storage server (you could even use this guide: http://www.servethehome.com/omnios-napp-it-zfs-applianc-vmware-esxi-minutes/ to get a ZFS system running), and then just fill it with cards.
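
As a side note on the point-to-point approach: once the cables are in, the usual sanity check is that each port reports a link and, once a subnet manager is running somewhere on that link, goes ACTIVE. A minimal sketch, assuming a Linux-based storage host with the in-kernel Mellanox/InfiniBand stack loaded (paths will differ on OmniOS or ESXi):

```python
# Quick check of InfiniBand port state on a Linux host with Mellanox cards.
# Assumes the sysfs layout exposed by the in-kernel IB stack (mlx4 driver).
import glob
import os

for state_file in sorted(glob.glob("/sys/class/infiniband/*/ports/*/state")):
    parts = state_file.split(os.sep)
    device, port = parts[4], parts[6]
    with open(state_file) as f:
        state = f.read().strip()          # e.g. "4: ACTIVE" or "1: DOWN"
    with open(state_file.replace("state", "rate")) as f:
        rate = f.read().strip()           # e.g. "40 Gb/sec (4X QDR)"
    print(f"{device} port {port}: {state}, {rate}")

# "INIT" usually means the link is physically up but no subnet manager has
# configured it yet; start opensm on one end and it should flip to ACTIVE.
```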

If I were you, I might see if you can get one of those Supermicro boards with built-in components. You might even be able to score one with dual-port 10 gigabit Ethernet for around $500-600, and maybe even with an LSI controller. That would save you many hundreds on your setup, and you could use Ethernet. This is a case where going E5-2600 might actually save a huge amount of money. Plus, you then have many expansion slots, and with 5.5 you can load up on memory.
 

DBayPlaya2k3

Member

That sounds great. I am new to InfiniBand; what cards would you recommend for the InfiniBand HBA?

Also, how would I manage the LSI card (besides the boot-up BIOS)? Currently, under VMware, I can manage the LSI card after loading a specific CIM provider.

I do not know if I would be able to do the same thing with other OSes (well, besides Windows, and possibly Linux, but I am unsure on that one).
 

DBayPlaya2k3

Member
Sweet thanks for the suggestion.

Few questions.

1. What cables would I use for these?

2. I currently have Cat 6 drops around the apartment that I did myself. Is it even possible to do a network drop for this type of network (without super expensive tools)?

3. I am still not sure how I would manage my RAID controller in the storage server besides WebBIOS.

Thanks for your help
 

Patrick

Administrator
Staff member
Here are a few thoughts:

1. Following the above, for QDR InfiniBand, I actually just picked up 2x QDR 30m cables for $80 shipped (that is for both). Very inexpensive for this type of cable, but not something that you can really terminate easily on your own.

2. If you already have Cat 6 drops, the idea of going 10GBase-T is a fairly decent one. You could get something like the Supermicro X9DRH-7TF and then get an adapter for each server. It may be close in price, but you would save time/money by not having to run your own cabling.

That would keep everything Ethernet and easy to manage. It is a $650-700 motherboard, but it also has an LSI controller built in and the 10GBase-T adapter. Certainly upside there.

3. Check out LSI MegaRAID. Not my favorite tool, but it does let you manage multiple controllers, for example, if you had one in a storage server and one on the Supermicro motherboard noted above.
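
For the no-GUI case (e.g. a non-Windows storage OS where MegaRAID Storage Manager is awkward), the MegaCli command-line utility covers basic monitoring without rebooting into WebBIOS. A rough sketch, assuming MegaCli64 is installed at its usual Linux path (the package name and install path vary by OS):

```python
# Hedged sketch: query a MegaRAID controller (e.g. the 9265-8i) via MegaCli
# instead of the GUI tools. Assumes MegaCli64 is installed; path varies by OS.
import subprocess

MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"  # common install location on Linux

def megacli(*args: str) -> str:
    """Run a MegaCli command and return its output as text."""
    return subprocess.check_output([MEGACLI, *args], text=True)

# Adapter summary (model, firmware, cache memory, etc.) for all controllers
print(megacli("-AdpAllInfo", "-aAll"))

# Logical drive (virtual disk) status: RAID level, size, state
print(megacli("-LDInfo", "-LAll", "-aAll"))

# Physical drive list: look for "Firmware state: Online, Spun Up"
print(megacli("-PDList", "-aAll"))
```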

General thoughts
Key questions I have reading this are: how much power per server do you need? What features do you need?

For a low power lab, you can get away with a Core i3, E3 or even the new Avoton platform. Very inexpensive/ easy.
For a higher power lab, that is when you start looking at more memory and more expansion cards.

If you are mainly connecting three nodes, and there is a shared/common node (e.g. the storage server), I would probably use the shared node as my switch. Once there are 4+ nodes, that is when I would start looking seriously at picking up a dedicated switch. These high-speed switches cost a lot and use a good amount of power, so keeping one on for a year has a measurable power/cost impact.
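
To make the "shared node as switch" idea concrete, here is an illustrative (entirely made-up) addressing plan for two ESXi hosts hanging off one storage server, with each point-to-point link treated as its own small subnet:

```python
# Illustrative addressing plan for a point-to-point storage network with
# two ESXi hosts and one storage server (all names and subnets are made up).
links = [
    # (storage-server port, storage IP, remote host, remote IP, subnet)
    ("port 1", "10.10.1.1", "esxi-host1", "10.10.1.2", "10.10.1.0/24"),
    ("port 2", "10.10.2.1", "esxi-host2", "10.10.2.2", "10.10.2.0/24"),
]

print("Storage server acts as the hub; no switch needed for 3 nodes:")
for port, local_ip, peer, peer_ip, subnet in links:
    print(f"  {port}: {local_ip} <-> {peer} ({peer_ip})  [{subnet}]")

# Each link is an isolated subnet, so storage traffic to each host simply uses
# that host's dedicated cable; a 4th node is when a switch starts to make sense.
```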
 

dba

Moderator
This is a good card for QDR IB. It is also the B model, which would be better for SMB3.

HP Infiniband QDR Mellanox Connectx 2 VPI 593412 001 592520 B21 MHQH29B XTR | eBay

You could then get a single-port card for the current machine, and a third box if you wanted. Like these.

I can second the vote for Mellanox ConnectX-2 cards. For comparison, the earlier ConnectX cards are cheaper but less compatible and don't have RDMA, and the ConnectX-3 cards are significantly more expensive in return for PCIe 3.0. The great thing about these cards is that they can be used in InfiniBand mode or 10GbE Ethernet mode, or both at the same time if you have dual-port cards.
 

DBayPlaya2k3

Member
Here are a few thoughts:

1. Following the above, for QDR Infiniband, I actually just picked up 2x Used IBM Mellanox Connectx 2 VPI QSFP QDR IB 81Y1533 Adapter MHQH19B XTR | eBay for $80 shipped (that is for both). Very inexpensive for this type of cable but not something that you can really terminate easily on your own.

OK, this seems like it might be the way to go since most people here recommend it. I am not sure about InfiniBand, as at work I only deal with FC and iSCSI.

Are these cards compatible with ESXi 5.5?

Also, if I loaded the storage backend with Server 2012, would VMware still be able to use it for raw data?



2. If you already have Cat 6 drops - the idea of going 10 Gbase-T is a fairly decent one. You could get something like this Supermicro X9DRH-7TF and then get an adapter for each server. It may be close in price but you would save time/ money not having to run your own.

That would keep everything Ethernet and easy to manage. It is a $650-700 motherboard, but it also has an LSI controller built in and the 10GBase-T adapter. Certainly upside there.

Yes, I read your review of this motherboard and was amazed at the value (I so wish this was out when I built my first ESXi node).

I did think about going 10G, but the cost still seemed high the last time I looked at it seriously. Have prices come down on the Intel 10G cards? The cheapest switch I saw was $900ish from Netgear.

Maybe 4 NICs total: 2 for the storage node and 1 for each ESXi node? (Thoughts?)

3. Check out LSI MegaRAID. Not my favorite tool, but it does let you manage multiple controllers, for example, if you had one in a storage server and one on the Supermicro motherboard noted above.

I currently use MegaRAID Storage Manager to manage my 9265-8i RAID controller, but that only works because I am running VMware. I am not sure if I would still be able to manage my RAID without rebooting into the BIOS if, for instance, I used FreeNAS.

General thoughts
Key questions I have reading this are: how much power/ server do you need? What features do you need?

Well, I am currently running a 3.4GHz Xeon 1245 V2 (Ivy Bridge based), which I think is equivalent to an Ivy Bridge Core i7.

So I am probably low-end compared to some setups I have seen here. Quiet, cool performance is more my style.


For a low power lab, you can get away with a Core i3, E3 or even the new Avoton platform. Very inexpensive/ easy.
For a higher power lab, that is when you start looking at more memory and more expansion cards.

If you are mainly connecting three nodes, and there is a shared/ common node (e.g. the storage server) I would probably use the shared node as my switch. Once there are 4+ nodes, that is when I would strongly start looking at picking up a dedicated switch. These high-speed switches cost a lot and use a good amount of power so keeping one on for a year has a measurable power/ cost impact.

I am only looking at 3 nodes if you include the storage server, so I think point-to-point would be ideal for me.
 

DBayPlaya2k3

Member
I can second the vote for Mellanox ConnectX-2 cards. For comparison, the earlier ConnectX cards are cheaper but less compatible and don't have RDMA and the ConnectX-3 cards are significantly more expensive in return for PCIe3. The great thing about these cards is the fact that they can be used in Infiniband mode or 10GbE Ethernet mode - or both at the same time if you have dual-port cards.
That's sweet. How do you change the mode they operate in?
 

dba

Moderator

I am only looking at 3 Nodes if you include the Storage Server so point to point I think would be ideal for me.
A single-port IB card on each VM node and a dual-port card on the storage server would make a very low-cost storage network for you. Run the IB cards in IPoIB mode, or switch them to 10GbE mode to make configuration easy, and use iSCSI or SMB3 as your protocol.
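
On the "how do you change the mode" question from above: on a Linux host, the mlx4 driver exposes a per-port sysfs attribute that accepts "ib", "eth", or "auto" (Windows and ESXi use the Mellanox tools instead). A rough sketch, with the PCI address as a placeholder you would replace with your own card's:

```python
# Rough sketch: flip a Mellanox ConnectX-2 port between InfiniBand and Ethernet
# on a Linux host via the mlx4 driver's sysfs attribute. The PCI address below
# is a placeholder; find yours with lspci. Needs root.
import pathlib

PCI_ADDR = "0000:03:00.0"          # placeholder, replace with your card's address
port1 = pathlib.Path(f"/sys/bus/pci/devices/{PCI_ADDR}/mlx4_port1")

print("current mode:", port1.read_text().strip())   # "ib", "eth", or "auto"

# Switch port 1 to Ethernet mode (to persist across reboots, use your distro's
# mlx4 config or Mellanox's connectx_port_config script instead).
port1.write_text("eth\n")
print("new mode:", port1.read_text().strip())
```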
 

DBayPlaya2k3

Member
Guys, I think my plan is set: go with InfiniBand, as it seems highly recommended for low-cost setups with gobs of throughput.

TangoWhiskey9, Patrick, and dba all recommend the Mellanox cards.

Since I know very little about InfiniBand, I have a few questions.

1. What do these cards show up as in VMware ESXi?

2. Is it as simple as dropping the cards in a PCIe slot and connecting the cables, or is there configuration that needs to be done on each card?


3. Do the cards work like network adapters (in that you assign them an IP address), or do they work like Fibre Channel (in that you connect them and they see the storage)?

4. I am looking at purchasing the following:

(x1) HP Infiniband QDR Mellanox ConnectX-2 VPI (for Storage Backend Server)

(x2) IBM Mellanox ConnectX-2 VPI (For 2 ESXI Nodes)

(x4) Cables to connect them all, but I am not sure what kind I need

Does anyone see any issue with this config for ESXi with a Windows backend, or possibly something else?


5. What do you guys use for your storage backend OS?
 

mrkrad

Well-Known Member
ESXi is not semi-affordable! It's insanely expensive! But as the price to pay to run your entire enterprise business on 3 servers, it's worth every penny.
 

Patrick

Administrator
Staff member
Hey, do you guys have any info on setting up InfiniBand with ESXi or Windows?
With Windows it is very easy. With ESXi, drivers are not an issue, and if you do want to run a subnet manager (the key Google term for the "hardest" part of the setup) on ESXi, you can do that too.

It is relatively simple these days, especially if you are using Mellanox ConnectX-2 or newer cards. They are used in many high-end environments, so that makes life easier.
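
Once the driver VIB is installed, a quick sanity check is that the ports show up as vmnics on the host. A small sketch that just shells out to the ESXi host over SSH from a workstation (the host name is a placeholder; esxcli network nic list is a stock ESXi command):

```python
# Sketch: confirm an ESXi host sees the Mellanox ports as network adapters.
# Runs "esxcli network nic list" over SSH; the host name below is a placeholder.
import subprocess

ESXI_HOST = "root@esxi-node1.lab.local"   # placeholder, replace with your host

output = subprocess.check_output(
    ["ssh", ESXI_HOST, "esxcli network nic list"],
    text=True,
)
print(output)

# In 10GbE mode the ConnectX-2 ports should appear as additional vmnic entries;
# for IPoIB they show up once the Mellanox OFED driver VIB is installed.
```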