Is augmenting my existing setup with a SAN build a good idea?


Ch33rios

Member
Nov 29, 2016
I'm currently running an 'all-in-one' ESXi 6.5 build where my storage tier is a quasi-SAN built into the existing host. The VM's root filesystem is a VMDK housed on one of the SSD datastores, and I have 4 HDDs passed through to the NAS OS via RDM. The HDDs are set up as a RAID 10 array.

However, I'm really wondering if I should look at offloading that VM (right now it has 8GB of RAM and 2 cores allocated) into a small dedicated NAS build, to free up some resources for my other ideas on the actual compute side. Part of me would like to go crazy and get a Xeon D, but that's probably overkill. A low-powered quad core (the Celeron/Pentium line-up seems strong enough) can be had for minimal cost, along with a case from the folks at U-NAS. I'd simply transfer over my existing 4 drives to that setup and be done.

The only other thing I'd be concerned about is read/write performance across the wire. Right now I'm getting really good read/write speeds on the HDDs, and while I can absolutely stand to lose a little performance (I'm not running a production network here, it's just my home lab), I don't want to back myself into a corner if I have 5 VMs accessing the NAS NFS datastore at the same time. Granted, I don't have a lot of systems that need extremely high IOPS, but I'm trying to reduce hiccups where I can.
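
For what it's worth, here's the kind of back-of-the-envelope math I've been doing before buying anything (just a sketch in Python; the throughput figures are assumed ballpark numbers, not measurements from my array):

# Back-of-the-envelope: what each VM gets if all of them hit the NFS
# datastore at once. All numbers here are assumed ballpark figures, not benchmarks.
usable_mb_s = {"1GbE": 117, "10GbE": 1170}  # rough usable throughput after protocol overhead
raid10_hdd_seq = 350                        # guess at sequential MB/s for a 4-HDD RAID 10
vms = 5

for link, mb_s in usable_mb_s.items():
    effective = min(mb_s, raid10_hdd_seq)   # whichever is slower is the ceiling
    print(f"{link}: ~{effective / vms:.0f} MB/s per VM with {vms} VMs streaming at once")

On those assumptions the drives, not a 10GbE link, end up being the limit, while a single 1GbE link is where things could get tight.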

I recently read a post by a homelabber about using InfiniBand NICs to connect his new home NAS to his server. It seemed to produce good results, and the cost was right too, so that's one more thing I'd look at if I build a remote NAS for the ESXi host.

At the end of the day, I'm still pretty green when it comes to all this stuff, so be gentle :) I just want a good, resilient setup that isn't going to break the bank. Thanks, all!
 

K D

Well-Known Member
Dec 24, 2016
I have tried both options and both have their own advantages. Currently, while I complete my rebuild, I have one box with a FreeNAS VM acting as an NFS datastore for all the VMs on that host. Performance over a 1GbE link is acceptable when I access it from another host, though I haven't done any benchmarks. I eventually plan to move to a 10GbE network for storage access.

With just 2 hosts, one for compute and one for storage, you can direct-connect them via 10GbE or 40GbE (maybe overkill) without a switch.

Following this thread with interest.
 

Ch33rios

Member
Nov 29, 2016
I was thinking about 10GbE as well, but two 10Gb NICs weren't exactly cheap compared to an InfiniBand add-on card.


THIS is the article that led me down this path regarding the InfiniBand stuff.
 

Rand__

Well-Known Member
Mar 6, 2014
InfiniBand NICs work well for a direct connection as long as both sides support the cards. If you make sure of that, it's no problem.
Of course, you have double the failure probability with two sets of boards/CPUs/PSUs/RAM modules, unless you can move either role over to the surviving box ;)
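
Roughly speaking (a toy calculation with a made-up per-box failure rate, just to show the scaling):

# Toy numbers only: assume each box independently has a 5% chance of a
# hardware failure in a given year (made-up figure, purely for illustration).
p_box = 0.05

p_one_box = p_box                   # single all-in-one host
p_two_box = 1 - (1 - p_box) ** 2    # separate compute + storage: either one failing hurts

print(f"one box  : {p_one_box:.1%} chance of an outage per year")
print(f"two boxes: {p_two_box:.1%} chance that at least one of them fails")
# For small p this is roughly 2*p, hence "double the failure probability".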
 

Ch33rios

Member
Nov 29, 2016
Yeah, that's the thing: am I really gaining THAT much by separating things, given what I want to do? Probably not... for the moment :D It might just be better to wait a bit until I truly hit my limit, and then invest in a better CPU with more cores and more RAM. But dang it, I hate waiting!
 

fractal

Active Member
Jun 7, 2016
MNPA19-XTR 10GB MELLANOX CONNECTX-2 PCIe X8 10Gbe SFP+ NETWORK CARD W/CABLE | eBay gets you a card and a cable for 20 bucks, and cards by themselves for 17 (MNPA19-XTR 10GB MELLANOX CONNECTX-2 PCIe X8 10Gbe SFP+ NETWORK CARD). How cheap are your InfiniBand options?

You probably won't saturate 1G with four hard drives. You probably won't saturate 10G with a couple of SSDs.
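
The back-of-the-envelope version, with made-up but typical numbers (a sketch, not a benchmark of anyone's actual array):

# Why four spinning disks rarely saturate even 1GbE under typical VM load:
# random I/O on HDDs is seek-bound. All figures are assumed ballpark values.
hdd_iops = 150        # generous estimate for a 7200 rpm drive doing random I/O
disks = 4
io_size_kb = 64       # fairly large random I/O size for a mixed VM workload

mb_s = disks * hdd_iops * io_size_kb / 1024
print(f"~{mb_s:.0f} MB/s of random I/O from {disks} HDDs, "
      f"vs ~117 MB/s usable on 1GbE and ~1170 MB/s on 10GbE")
# Large sequential reads are the exception (a 4-disk RAID 10 can push past
# 1GbE there), but that's not the usual pattern for a handful of VMs.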
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha, Florida
10GbE is nice. I got this LOT OF 2 RT8N1 DELL 10GB ETHERNET NETWORK TYPE HIGH PROFILE W/ CABLE | eBay
for $36, plus 2 Dell PowerConnect 5524 switches for $225, and now have a super cheap 10GbE setup. I still have two more SFP+ ports open, which will be occupied by a third server that will become my storage box. The cards and the Dells were plug and play, and the ESXi host loaded the driver without any interaction on my part. Not sure if all 10GbE setups are that easy, as this was my first. I'm sure I can also optimize further for iSCSI, but that's after I get my setup going.