Advice for a C6100 ESXi home lab build-out.


cmpufxr

New Member
Apr 25, 2015
I currently have a 1905 running a pfSense firewall, three 1950s running ESXi, and one 2050 running Openfiler. The Openfiler and ESXi nodes are connected via QLogic Fibre Channel cards. I also have a Cisco WS-C2960G-24TC-L switch for networking.

I ordered a C6100 XS23-TY3 to replace them all. I want to run pfSense on node 1, Openfiler on node 2 (6 drives, shared storage), and ESXi on nodes 3 and 4 (3 drives each, local storage). All of the OSes will run on 16 GB USB flash drives.

My question is: what would be the most cost-effective way to connect the storage node to the ESXi nodes, since the QLogic Fibre Channel cards I have now (PCIe) are not half-height, half-length cards?

I was thinking of going with InfiniBand: a one-port card in each ESXi node and a two-port card in the storage node. I don't know which cards will fit, though, and while I believe I can connect the cards directly without a switch, I have no experience with InfiniBand.

EDIT: Or are there QLogic cards that would fit in the C6100? I am more familiar with setting those up and with their software. Do InfiniBand cards use WWNs? Like I said, I have never used InfiniBand cards before. Also, the C6100 has both PCIe and mezzanine options; which would be better?

What would your recommendation be?
 

PnoT

Active Member
Mar 1, 2015
You have a decision to make: whether you want to pony up the money for the mezzanine cards or use a dual-port IB PCIe card. The mezz cards are part # JR3P1, can be found on eBay for around ~$120, and leave you the ability to add something in the PCIe slot later on. If you're absolutely sure you want to go with just a PCIe IB card, you can get a dual-port card for $29 off eBay HERE and direct-connect 3 servers without using a switch, which I think is what you're after. You'll have to run OpenSM on two of the nodes, but it's painless to set up as a service and it just works.
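If the node running the subnet manager ends up on a Linux-based distro, a minimal sketch of getting OpenSM going as a service looks like this (package and service names assume a RHEL/CentOS-style install; adjust for your distro):

# Install the subnet manager and the basic IB diagnostic tools
yum install -y opensm infiniband-diags

# Start it now and have it come back after reboots
systemctl enable --now opensm
# (on older init systems: chkconfig opensm on && service opensm start)

# Verify the link trained up - the port should show "State: Active" and an SM LID
ibstat

Run the same thing on a second node so the fabric still has a subnet manager if the first one goes down.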

I don't have any experience with these cards in ESXi or Openfiler because I mainly run Hyper-V 2012 R2. Maybe someone else can chime in, but the Mellanox stuff is pretty well supported and widely used by members around these forums.

P.S. Congrats on the new system; I absolutely love mine.
 

cmpufxr

New Member
Apr 25, 2015
OK, I just received the C6100 from FedEx. It looks like if I get low-profile brackets the 2-port QLE2462 Fibre Channel cards will fit, but the 4-port QLE2464-NAP won't.

It also came with four Dell Nvidia P797 PCIe 2.0 x16 host interface cards (R562T); I don't know what I will do with those.

Looks like I only need one card, but then again they just accepted my offer of $100.00 apiece for three of those new mezzanine cards, so I will go with them.

Now to find cables...
 

cmpufxr

New Member
Apr 25, 2015

Thank you for your help!!
 

cmpufxr

New Member
Apr 25, 2015
I could use some more help. I have configured the XS23-TY3 and have ESXi loaded on all four nodes. I have also purchased a Mellanox MTS3600Q-1BNC switch to connect all the nodes together.

The question I have is: what would be the best way to connect the Mellanox switch to my Cisco 2960G? The 2960G has four SFP ports. I did try a QSFP-to-SFP+ breakout cable, but I need to either somehow convert the 10G SFP+ down to 1G SFP, since the 2960G cannot use SFP+, or find a way to go directly from QSFP to SFP.
 

cmpufxr

New Member
Apr 25, 2015
Since I was going to virtualize my gateway/firewall (pfSense) anyway, I have decided I can just use it to route traffic between the InfiniBand and Ethernet networks.
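On the ESXi side that mostly means giving the pfSense VM a leg on both networks. A rough sketch of carving out a vSwitch and port group on the IPoIB uplink from the ESXi shell (the vmnic name and labels here are only placeholders; yours will differ):

# New vSwitch backed by the IPoIB uplink (vmnic4 is just an example name)
esxcli network vswitch standard add --vswitch-name=vSwitchIB
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitchIB

# Port group for the pfSense VM's second vNIC (and anything else on the IB network)
esxcli network vswitch standard portgroup add --portgroup-name=IB-LAN --vswitch-name=vSwitchIB

With one vNIC in IB-LAN and one on the regular Ethernet port group, pfSense routes between the two interfaces like any other LAN/OPT pair.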
 

CreoleLakerFan

Active Member
Oct 29, 2013
Any particular reason you are not planning to migrate from OpenFiler to ZFS?
 

cmpufxr

New Member
Apr 25, 2015

The way I have set this up is:

I configured the C6100 with all four nodes running ESXi. The top two nodes each have a 10 TB datastore (six 2 TB drives in RAID 5 on an M5015 RAID card in each), and I use Openfiler VMs to present two 2 TB LUNs for shared storage (one from each datastore). All the hosts are connected by 40 Gb InfiniBand cards and the switch. Since the datastores are local drives, I will only be using the InfiniBand for network access, unless I build a new storage device.
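For what it's worth, attaching one of those Openfiler LUNs to the ESXi nodes boils down to the software iSCSI initiator, assuming the LUNs are exported over iSCSI on the IPoIB network (the address and vmhba number below are just placeholders):

# Enable the software iSCSI initiator on each ESXi node
esxcli iscsi software set --enabled=true

# Point it at the Openfiler VM's IPoIB address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.2:3260

# Rescan so the LUN shows up and can be formatted as a VMFS datastore
esxcli storage core adapter rescan --adapter=vmhba33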

I use Openfiler because, most importantly, I am familiar with it, and second, it just works. Once I set it up I won't ever touch it again.

Also, everything important is backed up to disk and to my CDP device.
 

CreoleLakerFan

Active Member
Oct 29, 2013
OpenFiler is deprecated. There are lots of really good reasons to use something else. My lab would be going to waste if I only used it for things I was already familiar with.
 

cmpufxr

New Member
Apr 25, 2015

That's true, and sometime in the future, when my interests turn toward storage, I will most likely build something new. Right now, though, I am concentrating on vSphere and VDI, as that is the direction my work is taking me.