Need help brainstorming home storage


marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
Well, thanks to the Great Deals section, I just picked up 4 x 8TB hard drives, which now has me thinking about how to set them up in my home environment. I don't have a lot of content to store. Most of my home files live on a WS2012R2 Essentials server VM with two 6TB drives: one drive holds the content and the other is a backup of that content. I also back up to one external 5TB drive and to online cloud storage. So as you can see, 32TB of raw storage is overkill for just storing my content. :)

On to my brainstorming.

I have 4 ESXi nodes, all with local storage. My Intel server has 8 x 3.5" HDD trays and 8 x 2.5" 7mm trays (only good for SSDs). The Dell server has 8 x 2.5" HDD trays. The Supermicro is in a PC case and can hold a mix of 3.5"/2.5" drives. The fourth is just my HTPC acting as a node :) for testing, so it is very limited.

What I'd like to do is use the 4 x 8TB drives as the main storage for home content (5-6TB) and then set up the rest of the pool as VM storage for the four ESXi nodes via NFS/iSCSI. Then recycle the other HDDs I have into a backup pool for the home content and VMs (2 x 6TB RED 3.5", 3 x 3TB RED 3.5", plus various other drives). I'd also like to keep the design somewhat simple.

My thought is to put the 4 x 8TB drives in the Supermicro server, since it's always on and runs my home prod servers. It's limited, as it only has 2 PCIe slots and maxes out at 32GB RAM. Currently I have the onboard LSI controller flashed to IT mode and passed through to the WS2012R2E server VM, with the 2 x 6TB RED drives attached. I was thinking of adding the 4 x 8TB drives to the LSI card, since it has 8 ports, and maybe adding 1-2 SSDs. Then Windows Server would share the storage out to the ESXi nodes via iSCSI. I haven't researched this yet, but I think StarWind VSAN can do this, and it's free?

Any thoughts on doing the above on Windows Server? If possible, I was planning to use the two onboard NICs for VM traffic and replace the quad-port PCIe NIC with a 10GbE card plugged into my switch. The Dell and Intel are already hooked up with 10GbE. The 10GbE link would be how the storage is shared out.
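As a rough back-of-the-envelope sketch of what each layout would leave over for VM storage after the 5-6TB of home content (ideal capacities, ignoring filesystem overhead, so the numbers are only an assumption):

```python
# Rough usable-capacity sketch for 4 x 8TB drives under common layouts.
# Assumes ideal capacities and ignores filesystem overhead/reservations.
DRIVES = 4
SIZE_TB = 8
HOME_CONTENT_TB = 6  # upper end of the 5-6TB estimate

layouts = {
    "RAID 0 / stripe":          DRIVES * SIZE_TB,        # no redundancy
    "RAID 10 / striped mirror": DRIVES * SIZE_TB / 2,    # half lost to mirroring
    "RAID 5 / RAID-Z1":         (DRIVES - 1) * SIZE_TB,  # one drive of parity
    "RAID 6 / RAID-Z2":         (DRIVES - 2) * SIZE_TB,  # two drives of parity
}

for name, usable in layouts.items():
    left_for_vms = usable - HOME_CONTENT_TB
    print(f"{name:26s} usable ~{usable:4.0f} TB, ~{left_for_vms:4.0f} TB left for VMs")
```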
 

K D

Well-Known Member
Dec 24, 2016
Windows Server itself can be used to share the storage via iSCSI. If it is just one node, then I think you can just use that instead of StarWind.

I am currently using a similar setup while I restructure my stuff. I have an LSI card passed through to a Windows Server VM, with 6 x 8TB drives connected to it and pooled via DrivePool to present a single volume. Access is over a single GbE link and I get ~120MB/s transfers.
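That ~120MB/s is essentially wire speed for a single GbE link; a quick sanity check (the protocol overhead figure is an assumed ballpark):

```python
# Gigabit Ethernet line rate vs. observed transfer speed.
line_rate_gbps = 1.0                      # 1GbE
raw_mb_per_s = line_rate_gbps * 1000 / 8  # = 125 MB/s before overhead
overhead = 0.06                           # assumed ~6% for TCP/IP + SMB framing
practical = raw_mb_per_s * (1 - overhead)

print(f"Raw line rate:      {raw_mb_per_s:.0f} MB/s")
print(f"Practical estimate: ~{practical:.0f} MB/s")
# Anything in the ~110-120 MB/s range means the link, not the disks, is the limit.
```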

I am not using it for any VM storage though, just media and backups.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
Currently my Windows Server acts as my home file server, and I can access the content at 110 MB/s from any of the networked PCs, so I know it can max out the 1GbE connection. My concern is what happens when I add VM datastores to the mix.

The problem I've always heard about with Windows is the speed of its software RAID versus hardware RAID versus FreeNAS. I have a 10GbE link and want to make sure I get enough throughput/IO to use the storage as VMFS datastores for the 2-3 other servers. Not that I'll be running heavy VMs or software on them, but I'd like performance similar to or better than local storage. That is why I was thinking about StarWind VSAN, or maybe their other product, the StarWind iSCSI target, if it's still available and free.
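For the sequential side, here is a rough estimate of what 4 striped 8TB drives could push against a 10GbE link (the per-drive rate is an assumption; the real number depends on the drives and where on the platter the data sits):

```python
# Can 4 striped spinners approach 10GbE line speed for sequential I/O?
DRIVES = 4
SEQ_MB_S_PER_DRIVE = 180       # assumed sequential rate for an 8TB 7200rpm drive
TEN_GBE_MB_S = 10 * 1000 / 8   # ~1250 MB/s raw, before protocol overhead

stripe_seq = DRIVES * SEQ_MB_S_PER_DRIVE
print(f"Striped sequential estimate: ~{stripe_seq} MB/s")
print(f"10GbE raw line rate:         ~{TEN_GBE_MB_S:.0f} MB/s")
print(f"Fraction of line rate:       ~{stripe_seq / TEN_GBE_MB_S:.0%}")
# Sequential can cover over half of the line rate, but random VM I/O is
# limited by IOPS, not MB/s.
```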

I think I'll play with a Windows Server VM with a few virtual disks added and see what I can do with it.

Any other experiences would be greatly appreciated. :)
 

Rand__

Well-Known Member
Mar 6, 2014
I am afraid 4 x 8TB drives are not a good basis for a VM datastore... at least not without a SAS or NVMe layer in front of them.
I have used neither StarWind nor WS iSCSI, but from a pure hardware point of view (IOPS) this is going to be quite slow.
It might still be OK for you, depending on the number of VMs, but nowhere near 10GbE line speed.
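To put rough numbers on that (the per-device IOPS figures are typical ballpark assumptions, not measurements of these specific drives):

```python
# Ballpark random-IOPS comparison: 4 striped 7200rpm drives vs. a single SATA SSD.
HDD_RANDOM_IOPS = 150      # assumed per 7200rpm drive
SSD_RANDOM_IOPS = 50_000   # assumed for a consumer SATA SSD on small random reads
DRIVES = 4

hdd_pool_iops = DRIVES * HDD_RANDOM_IOPS  # striping scales IOPS roughly linearly
print(f"4 x HDD stripe: ~{hdd_pool_iops} random IOPS")
print(f"1 x SATA SSD:   ~{SSD_RANDOM_IOPS} random IOPS")
# A handful of VMs doing random 4K I/O will hit the ~600 IOPS ceiling long
# before the 10GbE link is anywhere near saturated.
```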
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
It will probably be fine for my use case, especially for a home lab. I just want to optimize what I have to the best of my hardware's ability, and of my ability to configure it. :)

A few points that would be helpful while testing things out:
1. What kind of RAID setup works best for VMs? RAID 0, 1, 5, etc.?
1.a. I will have a VM backup solution in place, so RAID for protection against HDD failure is not a primary need; it's a nice to have, though.
2. What is the best way to test IO for VMs when trying out different configs?
2.a. For example, can I use CrystalDiskMark? If so, what settings should I use? This is on a Windows Server box for now, so Linux utilities are out.
3. Currently I have 4 x 250GB EVO 850 SSDs I can use. If I keep to my plans above, I can use 2-4 of them with the WS2012R2E VM by swapping out the 2 x 6TB drives, ending up with 4 x 8TB plus 4 x 250GB SSD. In RAID 0 the 4 SSDs can do the following:
[benchmark screenshot: ssd1.JPG]
Would making the SSDs a cache layer provide enough IOPS to cover the bulk storage?
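A crude way to reason about that: effective IOPS is roughly a weighted mix of cache hits and misses (the hit ratios and per-device IOPS below are assumptions for illustration only):

```python
# Effective IOPS of an SSD-cached HDD pool as a function of cache hit ratio.
HDD_POOL_IOPS = 600      # assumed 4 x 7200rpm drives striped
SSD_CACHE_IOPS = 50_000  # assumed consumer SATA SSD

def effective_iops(hit_ratio: float) -> float:
    """Average IOPS when hit_ratio of requests are served from the SSD cache."""
    avg_latency = hit_ratio / SSD_CACHE_IOPS + (1 - hit_ratio) / HDD_POOL_IOPS
    return 1 / avg_latency

for hit in (0.50, 0.80, 0.95, 0.99):
    print(f"hit ratio {hit:.0%}: ~{effective_iops(hit):,.0f} IOPS")
# The cache only pays off if the VM working set actually fits in it.
```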

I'll keep posting as I play around. :)
 

Rand__

Well-Known Member
Mar 6, 2014
For most VMware deployments IOPS is key, so if you want some protection -> RAID 1, else RAID 0 (quite risky ;)).
Testing - deploy a VM on your data pool and run whatever you are going to run later - a database, games, dunno ;)

EVO 850s are OK for short bursts of activity; they will not sustain longer writes. I saw a significant drop in write speed during vMotions when using them as ESXi datastores, on both my 1TB and 500GB drives.
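For reference, the usual back-of-the-envelope write-penalty numbers behind that RAID recommendation (classic estimates, assuming identical drives and the standard penalty factors):

```python
# Classic RAID IOPS estimates: reads scale with drive count,
# writes are divided by the level's write penalty.
def raid_iops(n_drives: int, per_drive_iops: int, write_penalty: int):
    reads = n_drives * per_drive_iops
    writes = n_drives * per_drive_iops / write_penalty
    return reads, writes

PER_DRIVE = 150  # assumed random IOPS for one 7200rpm drive
for level, penalty in (("RAID 0", 1), ("RAID 1/10", 2), ("RAID 5", 4), ("RAID 6", 6)):
    r, w = raid_iops(4, PER_DRIVE, penalty)
    print(f"{level:9s} (4 drives): ~{r:.0f} read IOPS, ~{w:.0f} write IOPS")
```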
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
Side note: this is testing StarWind over 10GbE between two ESXi servers.
The server host VM is a WS2012R2E test box, and the test client is a W8 test box.
I'm assuming (I haven't played with 10GbE much since I put the cards in) that I'm maxing out the 10GbE connection?

[benchmark screenshot: upload_2017-4-24_19-23-47.png]
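A quick way to check is to convert the CrystalDiskMark sequential number back into Gbit/s (the 1,100 MB/s below is only a placeholder - substitute whatever the sequential read line actually shows):

```python
# Convert a CrystalDiskMark sequential result back into link utilisation.
def link_utilisation(seq_mb_per_s: float, link_gbps: float = 10.0) -> float:
    gbits = seq_mb_per_s * 8 / 1000
    return gbits / link_gbps

measured = 1100  # placeholder MB/s - replace with the actual sequential read result
print(f"{measured} MB/s ~= {measured * 8 / 1000:.1f} Gbit/s "
      f"({link_utilisation(measured):.0%} of a 10GbE link)")
# Much above ~1.0-1.1 GB/s sequential means the wire, not the storage, is the limit.
```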
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
So I moved on to testing FreeNAS for now, and I have a question. I created two pools: one of the 4 x 8TB drives in a stripe, and another of the 4 x 250GB SSDs. Both are showing the same benchmark results, so something is off in my config. Any thoughts?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
@marcoi - What kind of benchmarks? Local or network? Random or sequential? What tool are you testing with? As much info as possible will help.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
Over the network, but on the same ESXi host. Both FreeNAS and the client VM (W8) are on the same vSwitch using VMXNET3, and a 9k MTU is set on both NICs.
On the W8 client I attach the storage via iSCSI, then use CrystalDiskMark to benchmark the attached disk after it is initialized/formatted. I set CrystalDiskMark to 1GB and run all the tests for 2 rounds.

I've been playing around with different configs: setting up a volume of HDDs and a volume of SSDs, then creating a target backed by a file on each volume (I might be using the wrong terminology :)).
I just found out that if I create a zvol on a pool I can back the target with a device extent instead of a file extent.

I also bumped the RAM on the FreeNAS server up to 16GB to test whether it makes a difference.

Any high-level guidelines would be useful as I play around.
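One thing possibly worth checking with that methodology: a 1GB CrystalDiskMark run against a server with 16GB of RAM may be served largely from cache, which could make two very different pools benchmark the same. A rough sizing rule of thumb (the cache fraction and safety factor are assumptions):

```python
# Rule of thumb: make the benchmark working set several times larger than the
# RAM the storage server can use as read cache, or you end up measuring the cache.
def min_test_size_gb(server_ram_gb: float, cache_fraction: float = 0.8,
                     safety_factor: float = 3.0) -> float:
    """Smallest test size that should comfortably exceed the read cache.

    cache_fraction: assumed share of RAM available as cache (e.g. ZFS ARC).
    safety_factor:  how many times larger than the cache the test should be.
    """
    return server_ram_gb * cache_fraction * safety_factor

print(f"FreeNAS VM with 16GB RAM -> test with ~{min_test_size_gb(16):.0f} GB, not 1 GB")
```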
 

Rand__

Well-Known Member
Mar 6, 2014
Can you show us the results from the other pool as well? There should be some differences, after all ;)
 

Rand__

Well-Known Member
Mar 6, 2014
I assume it's still running?
Make sure to pick random data that isn't compressible :)
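If the test tool only offers compressible patterns, one way to get incompressible data for copy tests is just to write random bytes; a minimal sketch (the file name and size are placeholders):

```python
import os

# Write a file of incompressible random data for copy/throughput tests.
# Filesystem compression (lz4 is typically on by default in FreeNAS) makes
# zero-filled or repeating test data look unrealistically fast.
def make_random_file(path: str, size_mb: int) -> None:
    chunk = 1024 * 1024  # write 1 MiB of fresh random bytes per iteration
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(os.urandom(chunk))

make_random_file("testdata.bin", size_mb=4096)  # 4 GiB of random bytes (placeholder size)
```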
 

Rand__

Well-Known Member
Mar 6, 2014
Can you test with only 1 or 2 SSDs?
No real clue here, just trying to establish a baseline.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
I'm trying a WS2012R2E VM now to see if I get the same speeds. That might help determine whether it's a hardware or a software issue.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
First set of WS2012 tests.

Local on the server, 4 x 8TB in stripe mode:
[benchmark screenshot: upload_2017-4-27_10-14-22.png]

Local on the server, 4 x 250GB SSD in stripe mode:
[benchmark screenshot: upload_2017-4-27_10-19-40.png]

Test on the W8.1 client, 4 x 8TB striped HDD over iSCSI on the same ESXi host:
[benchmark screenshot: upload_2017-4-27_10-36-6.png]

Test on the W8.1 client, 4 x 250GB striped SSD over iSCSI on the same ESXi host:
[benchmark screenshot: upload_2017-4-27_10-40-43.png]

So something seems off with FreeNAS. I would expect better performance from FreeNAS than from Windows Server.