SAN Build Advice Needed


asintado08

Member
Sep 16, 2014
81
10
8
46
I have a homelab right now.

Xeon E3-1270 v3
X10SLM+-LN4F
32GB RAM
3x 256GB SSDs
48-port gigabit switch

I need additional storage space because I am going to buy another ESXi host. Right now, the disks are connected directly to the ESXi host. I need about 10TB of additional storage for datastores.

I am planning to buy an HBA card and pass it through to FreeNAS. Can FreeNAS handle 4x 1Gb NICs? I'm pretty sure I will not get a single 4Gb connection out of them. What I want is to make sure that each host can have its own 1Gb connection to the SAN. So if my other host has 4 NICs as well, and I dedicate those NICs to one guest each, I want to make sure that FreeNAS can spread the traffic across its 4 NICs as well instead of pushing everything through just one NIC. I forgot the term for this.

I need to achieve at least 90MB/s read/write and transfer speed from each host to the NAS. So if I am planning to allocate four NICs to FreeNAS, I should make sure that the read/write speed of my FreeNAS pool is at least ~400MB/s, right?

I am planning to get those SAS drives from eBay and make a RAIDZ2 pool out of them. Or I could just get WD Blacks and put an SSD cache in front of them. Which one is better?

I need a suggestion for a new case. My current lab is in a desktop case. I am okay with barebones servers because I could remove the internals and I would get a free PSU.

ESXi question: do I need to get VMUG for this? Currently, I am happy with the free version of ESXi. Do I need a vSAN license if I want to connect my other host to a SAN?

End result for this server will be:

FreeNAS
pfSense
and some other guests.
 

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
Let me answer these in order:

1. Not quite on the 400MB/s. Bonded network interfaces don't scale like that, even when using LACP. I would play with the Link Aggregation settings in FreeNAS, there are several modes, but others here may know them better. You would want to pass the physical NIC(s), and the HBA for that matter, to the FreeNAS VM using DirectPath I/O. Also make sure that VT-d is enabled in your BIOS. Another option, if I'm reading you right, is to add some 10Gb cards to both boxes and direct connect them. 10Gb NICs are cheap these days, as is TwinAx, which can be found on eBay. I'm saying this because it sounds like you have other "guests" on another ESXi server? If they are clients, like PCs, then it's a different story, unless you get a switch with 10Gb uplinks and connect the FreeNAS server to that. The clients, in that case, could be 1Gb. I think that's where I would need some more clarification.

[screenshots attached]
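
Rough numbers on why the four 1Gb links don't add up to one 400MB/s pipe (assuming ~110MB/s of usable payload per gigabit link after protocol overhead, which is a typical real-world figure):

1 GbE = 1,000Mb/s ≈ 125MB/s raw, call it ~110MB/s usable
4 links x ~110MB/s ≈ ~440MB/s aggregate, but only when spread across four or more separate flows
LACP hashes each flow (by MAC/IP/port) onto one physical link, so a single host-to-FreeNAS session still tops out around ~110MB/s

That is still comfortably above the 90MB/s-per-host target, it just never shows up as one big 400MB/s connection.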

2. What type of workload? If you're just serving up files like media, etc., then don't worry about the cache. If you can, try to provide 2GB of RAM for each 1TB of storage, so in your case 20GB, which hopefully leaves you enough for the other workloads you want to run. If it's providing storage to other ESXi host(s), you might see some benefit from a mirrored cache, but RAM is the priority. I see this all the time: people want to throw SSDs at a ZFS variant because "everyone is doing it" but don't address the RAM requirements first.
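
A quick budget using that rule of thumb against the 32GB in the box (the exact split is just an example):

10TB x 2GB/TB = 20GB for the FreeNAS VM
32GB - 20GB = ~12GB left over for ESXi itself, pfSense, and the other guests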

3. If you don't want any of the advanced features, then no, you don't need the VMUG licensing. Connecting the other ESXi host to FreeNAS will not be a problem and won't require additional licensing. You wouldn't benefit from features like HA anyway, since the FreeNAS server resides on the one host and if that host goes down, well... you get the idea.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
I'm getting hung up on the 90MB/s to each host as well. If you have 2-4 hosts and are just going to assign a NIC to each of them, that will work. You can give each host its own dedicated NIC and IP on the FreeNAS side and that will be fine even without LACP. Not recommended, but fine.
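
A rough example of that layout, one small subnet per host so nothing needs LACP (addresses and interface names are made up):

FreeNAS igb0 10.10.1.1/24 <-> host 1 vmk1 10.10.1.2/24
FreeNAS igb1 10.10.2.1/24 <-> host 2 vmk1 10.10.2.2/24

Each host mounts the NFS/iSCSI datastore against "its" FreeNAS address, so its traffic stays on its own link.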

SAS on eBay or SATA WD Blacks is a good question. Can I vote for neither? I would get WD Red or WD/HGST He drives, 6TB or bigger. If you "only" need 10TB, then get four drives and mirror them in pairs (striped mirrors). SAS drives are good, as are WD Blacks, but if you already have SSDs you can save money by skipping SAS controllers. Get mass storage from HDDs and performance from SSDs.
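
Rough raw capacities for the layouts being kicked around (ZFS overhead and TB-vs-TiB will shave these down a bit):

4x 6TB as two mirrored pairs = ~12TB usable
4x 6TB in RAIDZ2 = ~12TB usable
6x 4TB SAS in RAIDZ2 = ~16TB usable

Any of those clears the ~10TB target.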

Case - do you want a tower again? What is the other host going to be? That E3? Something different?

@virtualfng is spot on with the licensing. If you wanted to go vSAN then VMUG would be good.
 

bds1904

Active Member
Aug 30, 2013
271
76
28
Buy 2x ConnectX-2 cards & a DAC cable on eBay and set up a private subnet between the 2 ESXi hosts. It'll run you less than $100.

That will allow you to use the NFS share on the first host and mount the same NFS share on the second host at 10Gb.
 

asintado08

Member
Sep 16, 2014
81
10
8
46
I think you guys already answered my biggest concern, which is storage throughput from one host to the other. If I understand this correctly, I can buy 10Gb cards, pass one through to my FreeNAS, and connect the other card to the other host without buying a 10Gb switch.

Regarding the case, I want a rackmount case so that I can properly set up the rack.
 

bds1904

Active Member
Aug 30, 2013
271
76
28
asintado08 said:
I think you guys already answered my biggest concern, which is storage throughput from one host to the other. If I understand this correctly, I can buy 10Gb cards, pass one through to my FreeNAS, and connect the other card to the other host without buying a 10Gb switch.
Don't pass through the 10Gb card. Have ESXi put a VMkernel/management interface on it with an IP of 172.16.0.2, and also set it up as a separate VM network. Then add a vmxnet3 adapter to FreeNAS with a static IP of 172.16.0.1. The other 10Gb card in the other machine should also get a management interface, at 172.16.0.3.

Setting it up this way means you have NFS access to the FreeNAS disk array on both ESXi boxes.
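
If it helps, a minimal sketch of the second host's side from the ESXi shell (the vSwitch/port group/vmk names and the NFS share path are placeholders; the first host is the same idea with 172.16.0.2 on its vmk and the FreeNAS vmxnet3 NIC at 172.16.0.1 sitting in the same port group):

# storage vSwitch with the 10Gb NIC as its uplink
esxcli network vswitch standard add --vswitch-name=vSwitch-storage
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-storage --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-storage --portgroup-name=Storage

# VMkernel interface on the private storage subnet
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.0.3 --netmask=255.255.255.0 --type=static

# mount the FreeNAS NFS export as a datastore
esxcli storage nfs add --host=172.16.0.1 --share=/mnt/tank/vmstore --volume-name=freenas-nfs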

Look into an "ESXi AIO" (all-in-one) setup. Think of it as an all-in-one box with an additional server alongside it.

For the record, I really recommend against FreeNAS for this purpose. OmniOS + napp-it works much better and is easier to deploy.
 