Need help brainstorming home storage


NISMO1968

[ ... ]
Oct 19, 2013
87
13
8
San Antonio, TX
www.vmware.com
You don't want your single Windows host as an iSCSI target - it's rather slow (Microsoft's iSCSI target does no caching), it's not certified for VM hosts (OK, you don't use that, but still...), and it makes your storage a single point of failure (SPOF).

For backups and bulk media dumps you're better off sticking with either NFS or SMB3; block protocols like iSCSI & Co. aren't your best friends here :)
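
A quick sketch of the Windows side, just to show how little it takes (the folder path, share name and account below are placeholders, not anything from this thread):
Code:
# Create a folder and publish it over SMB (Server 2012+ negotiates SMB3 automatically).
# "D:\Backups", "Backups" and "HOME\backupuser" are made-up example names.
New-Item -ItemType Directory -Path "D:\Backups"
New-SmbShare -Name "Backups" -Path "D:\Backups" -FullAccess "HOME\backupuser"

# After a client connects, confirm which SMB dialect it negotiated:
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect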

Windows Server itself can be used to share the storage via iSCSI. If it is just one node, then I think you can just use it instead of StarWind.
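
Roughly, the built-in target is set up like this (a sketch only; the target name, VHDX path, size and initiator IQN are made-up examples):
Code:
# Install the Microsoft iSCSI Target Server role and publish a VHDX-backed LUN.
Install-WindowsFeature -Name FS-iSCSITarget-Server

New-IscsiServerTarget -TargetName "esxi-vms" -InitiatorIds @("IQN:iqn.1998-01.com.vmware:esxi01")
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\vms.vhdx" -SizeBytes 1TB
Add-IscsiVirtualDiskTargetMapping -TargetName "esxi-vms" -Path "D:\iSCSIVirtualDisks\vms.vhdx"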

I am currently using a similar setup while I restructure my stuff. I have an LSI card passed through to a Windows Server VM, with 6x 8TB drives connected to it and pooled via DrivePool to be presented as a single volume. Access is via a single GbE link and I get ~120 MB/s transfers, which is essentially line rate for gigabit.

I am not using it for any VM storage though, just media and backups.
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
You don't want your single Windows host as an iSCSI target - it's rather slow (Microsoft's iSCSI target does no caching), it's not certified for VM hosts (OK, you don't use that, but still...), and it makes your storage a single point of failure (SPOF).

For backups and bulk media dumps you're better off sticking with either NFS or SMB3; block protocols like iSCSI & Co. aren't your best friends here :)
This is an always-on media server: an ESXi host with the HBA passed through to a Windows guest running DrivePool. This host also runs my VCSA, DNS, UniFi, Plex and a few other VMs. I just need the storage server here to be able to serve 2 concurrent Plex streams, and it does that.

This is definitely not a setup that I would suggest for any performance-oriented workload, or anywhere you care about avoiding data loss due to drive failure.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
So I think I will test out FreeNAS. I have the 8TB drives in RAID 10, with 4x 250GB SSDs striped as L2ARC cache. These are the results so far:
[screenshot: CrystalDiskMark results]

Going to hook it up to an ESXi node tomorrow and see how many W10 VMs I can run with CrystalDiskMark running :)
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Got FreeNAS connected up to an ESXi node. I had 6 clones of Windows 10 running off the one iSCSI target, all of them running CrystalDiskMark at roughly the same time. Seems pretty good to me. Any thoughts?


[screenshot: CrystalDiskMark results from the 6 VMs]
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
You do know the caveats of iSCSI/FreeNAS, I assume, so I'll save the lecture ;)
CDM is not really representative of actual ESXi client performance, but it's good enough as long as you are happy with it :)
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Yeah, I know using CDM isn't the best, but I feel it at least gives me a good apples-to-apples comparison when testing different configs.

A question for the storage people: is it better to have two sets of RAID 1 (2x 8TB each), each with a 2x SSD RAID 0 as cache, or one RAID 10 of 4x 8TB with a RAID 0 of 4x SSDs as cache?

Another thought: I have an Intel RAID card: Intel® RAID Controller RS25AB080 Product Specifications
It's essentially an LSI 2208 card with 1GB cache and a BBU, and I have the SSD cache key for it. Currently I have it set up for local storage on one of my ESXi nodes with 3x 3TB Red HDDs and 4x 250GB SSDs as RAID 10 cache, since the cache limit is only 500GB. Would the Intel card provide better performance than FreeNAS with an IT-mode LSI card? I was thinking of using the RAID card with 4x 8TB in RAID 10 and 4x 250GB SSDs as cache, then passing it into the recently released StarWind Linux VSA for ESXi and using StarWind to provide the iSCSI layer, etc.

Thoughts or comments welcome.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
So I'm getting a new server to replace my cube server. It's an Intel server in which I plan on using 2x E5-2670 v1 CPUs and 128GB of RAM. It has an Intel RAID card based on the LSI 2208 with 1GB cache and a capacitor BBU. The only thing missing is the key to enable SSD caching, which I have on my current Intel server. This server will have 2x 10GbE connections back to the Dell switch. It will be my home prod box, so all my always-on home VMs will be on it, and it will act as the storage box for the other two ESXi servers (Intel and Dell).

I have the 4x 8TB Reds and 4x 250GB SSDs that will be placed in the new server and used as storage space for VMs and VM backups. I can't decide which is the better approach: using the hardware RAID card and configuring the drives on that, or just putting a RAID card in IT mode and using FreeNAS. I could use the RAID card to configure the drives, then pass the datastores off to a StarWind VSA VM to provide iSCSI to the other two nodes.

If I decide on IT mode with FreeNAS, will I lose the benefit of the RAID card's 1GB RAM cache and BBU? I'm waiting for the new box to arrive, so I can't do any testing ATM.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
You can't flash a 1GB RAID card to HBA IT mode AFAIK.
The card is the RMS25CB080 6Gb/s SAS/SATA Integrated RAID Module (LSI SAS2208 RAID with 1GB cache)
with an AXXRMFBU2 Intel RAID Maintenance Free Backup Unit and an RES2SV240 SAS Expander (to support 12 drives).
Anyone know for sure whether IT mode is supported? If it isn't, no big deal; I already have an M1015 card flashed to IT mode.
So my original question still stands: which would be the better approach, hardware RAID using the above, or FreeNAS with the M1015? Not sure which would give better overall performance.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
So I'm getting my new Intel server set up to replace my cube server.
Specs:
* Intel R2312GL4GS 2U with 12x 3.5" HDD bays
* RMS25CB080 6Gb/s SAS/SATA Integrated RAID Module (LSI SAS2208 RAID with 1GB cache) with an AXXRMFBU2 Intel RAID Maintenance Free Backup Unit and RES2SV240 SAS Expander
* M1015 card flashed to IT mode
* 2x E5-2670 v1 CPUs
* 64GB DDR3 1333 ECC RAM (will upgrade down the road to 128GB)
* Quad NICs
* Dual 750W PSUs
* 2x RT8N1 Dell 10GbE network cards
* 4x 8TB WD Red
* 4x 250GB EVO SSD
* Coming soon: 2x 6TB Red when I move off the cube server
The server has been updated and I got the fans down to a manageable speed to reduce noise. As configured, with ESXi 6.5 and one W10 test VM, it idles around 188 watts. Not too bad considering the specs.

So I've been playing with the RAID card. I decided on RAID 10 for the 4x 8TB drives and created two VDs: one at 10TB for VMs, to be served over iSCSI to the two other ESXi nodes, and the other at 4TB for VM backups.

I'm currently testing the settings on each VD, but so far I like:

VM VD - 10TB - no read ahead, always write back, direct I/O.
[screenshot: CrystalDiskMark results for the VM VD]

Backup VD - 4TB - always read ahead, write through, direct I/O.
[screenshot: CrystalDiskMark results for the backup VD]
*Note: I made the benchmark working set 2GB to bypass the cache.
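
For anyone who would rather script this than click through the MegaRAID GUI, the equivalent StorCLI calls look roughly like this (controller and VD numbers are just examples, not taken from this box):
Code:
# VM VD (here /c0/v0): no read ahead, always write back, direct I/O.
# Backup VD (here /c0/v1): read ahead, write through, direct I/O.
# Controller/VD indexes are placeholders - check "storcli /c0 show" first.
storcli /c0/v0 set rdcache=nora wrcache=awb iopolicy=direct
storcli /c0/v1 set rdcache=ra wrcache=wt iopolicy=direct

# Verify the resulting cache policies
storcli /c0/vall show all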

I don't have the SSD unlock key for this RAID card to use the SSDs as a second-level cache, so I might not use the SSDs at all, or I may just do a full SSD RAID 0 for high-performance VMs.
Next on the list is to figure out how I want to share the storage out. I can either pass the card through to a VM such as WS2012 and use StarWind, etc., or add the storage as datastores to the local ESXi host, then create virtual disks and pass those into the StarWind Linux appliance.
Suggestions welcome.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Updates:
[Picture of the server ;)]

So I decided to get another 64GB of RAM, which will bring my total up to 128GB. I also got the Intel I/O module for dual 10GbE, so when those come in I'll remove the two RT8N1 Dell 10GbE network cards. Hoping the onboard module will use less power than two cards. We shall see.

So far my configuration looks like this:
1. Internal RAID card with cache and BBU on the first four 3.5" slots, holding the 4x 8TB drives. That is all that will be put on the RAID card for now. The disks are configured as described in the prior post.
2. I'm using the two internal SATA 6G ports for two SSDs to act as local datastores for the ESXi host: one a 500GB EVO, the other a 250GB EVO. (Eventually, when I recoup from all the spending, I'll get some better SSDs.)
3. The motherboard has another two SFF ports, with the first port active. It's 3G speeds only, but that should be fine for what I plan to use it for. Right now the active port is in pass-through mode (per the BIOS settings) and connected to the second set of four 3.5" slots. I plan to put in a 1TB 3.5" WD Black HDD to be used as a local backup point for VMs and a dumping ground for misc stuff. I also plan to put 2x 250GB 3G SSDs in those slots, since I have them. Each prod VM will have its own storage drive.
4. The M1015 card is in IT mode and connected to the last set of four 3.5" slots. I plan to move my two 6TB Red drives over from the other server when I move this one into "production". The card will be passed through to the WS2012E VM, which keeps all my home files and backups.

I still have more testing to do to find the best iSCSI setup for serving the 14TB of storage back to the other two ESXi nodes. Right now I've decided to use WS2012R2 storage with the Intel RAID card passed through; I decided that is best since I can monitor the volumes using the MegaRAID GUI. I also have StarWind set up on the OS to provide the iSCSI. I plan to give this VM 32-64GB of RAM (depending on what makes sense for the StarWind RAM cache).

So that is where I'm at. If you have any ideas, feel free to post.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Update: just added another 64GB of RAM to the box and also replaced the two ConnectX-2 cards with the Intel IOM 10GbE card.
Side note: you can get one of these assemblies and take it apart to get the 10GbE module and install it in the server:
Intel Dual Port 82599EB 10GbE I/O Module With NIC Card G23589-251 | eBay

I tested StarWind, redoing the RAID setup as a single 14TB RAID 10 virtual drive with no read ahead, always write back, direct I/O. I then set up StarWind with two targets, one using LSFS and the other using just an image file. LSFS does require a certain amount of memory per TB, so since I had the VM at 50GB I was able to create a non-dedup 7TB LSFS device; it grows as data is stored, like a thin image. The other target is just a straight image file of a fixed size; I created a 4TB one to use for backups.

I decided to test LSFS vs using direct device target based on this pdf:
https://www.starwindsoftware.com/technical_papers/StarWind-High-Availability-Best-practices.pdf

For now I'm disabling StarWind (the GUI trial expired and I don't want to learn the PowerShell ATM) and testing the built-in WS2012R2 iSCSI target to see what difference the StarWind software makes.

I do eventually need to decide on which config to keep, as I want to get the server in place :)
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Still playing with the storage configuration. I'm onto FreeNAS now. I'm thinking of staying with FreeNAS; I just need to determine where it will live (i.e. as a VM on the new Intel server, or on a separate box limited to 32GB RAM).

This is my pool configuration; I still need to test, but I went with the RAID 10 equivalent: two mirrored pairs of 8TB drives striped together, with two mirrored pairs of SSDs striped as the log devices. See below if my description doesn't make sense.

Code:
 pool: ESXI_Storage
 state: ONLINE
  scan: none requested
config:

        NAME                   STATE     READ WRITE CKSUM
        ESXI_Storage           ONLINE       0     0     0
          mirror-0             ONLINE       0     0     0
            gptid/<8TB drive>  ONLINE       0     0     0
            gptid/<8TB drive>  ONLINE       0     0     0
          mirror-2             ONLINE       0     0     0
            gptid/<8TB drive>  ONLINE       0     0     0
            gptid/<8TB drive>  ONLINE       0     0     0
        logs
          mirror-1             ONLINE       0     0     0
            gptid/<250GB SSD>  ONLINE       0     0     0
            gptid/<250GB SSD>  ONLINE       0     0     0
          mirror-3             ONLINE       0     0     0
            gptid/<250GB SSD>  ONLINE       0     0     0
            gptid/<250GB SSD>  ONLINE       0     0     0
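
For reference, the same layout can be built from the shell in one command; a rough sketch with made-up device names (FreeNAS actually uses gptid labels, and the GUI is the supported way to do this):
Code:
# Two striped mirrors of 8TB drives for data, plus striped mirrored SLOG devices.
# da0-da3 = 8TB drives, da4-da7 = 250GB SSDs (placeholder device names).
zpool create ESXI_Storage \
    mirror da0 da1 \
    mirror da2 da3 \
    log mirror da4 da5 mirror da6 da7

zpool status ESXI_Storage
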
On a side note, I did get a new rack; it's not the best looking, but it keeps the server quiet.
[photo of the new rack]
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
Update:
[screenshot: benchmark results]
I tested FreeNAS with the 8TB drives in RAID 10 and the mirrored SSD SLOG striped as above. With sync=always on the pool and the datastore mounted in ESXi over NFS, I get ~1000MB/s on reads and only 70-80MB/s on writes. With sync off on the NFS dataset I was getting ~1000MB/s writes.
Next I tested iSCSI with the same RAID 10 pool setup as listed above. With sync=always I was getting ~1000MB/s read and ~275MB/s write. With sync off, writes got up to 1157MB/s.
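
In case anyone wants to reproduce the sync on/off comparison, it's just a property flip on the pool or dataset; a quick sketch (the dataset/zvol names below are placeholders):
Code:
# Force every write to be treated as a sync write (so it hits the SLOG),
# honor whatever the client requests ("standard"), or ignore sync requests entirely.
# "ESXI_Storage/nfs_ds" and "ESXI_Storage/iscsi_zvol" are made-up names.
zfs set sync=always ESXI_Storage
zfs set sync=standard ESXI_Storage/nfs_ds
zfs set sync=disabled ESXI_Storage/iscsi_zvol

# Check the current setting
zfs get sync ESXI_Storage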

Few questions:
1. Any idea why NFS won't get higher than 70MB/s? (When I didn't have the SSD SLOG it was like 7-8MB/s write.)
2. Does it make sense to keep using the 4x 250GB EVO SSDs for SLOG? What is everyone's take on that? Is there a better way to set them up as a SLOG?
3. What would be a better SLOG device that is reasonably priced?
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
On my Windows home server running WS2012R2E as a VM on one of my Intel servers, I'm now looking to replace the two 6TB Reds with the two new 8TB Reds I got from the BB deal. I'm also considering removing the passed-through IT-mode LSI card to make VM backups easier.

My idea is to add the two new 8TB drives to the Intel RAID card I have in the server, which is set up with the BBU cap, 1GB of cache RAM, and 4x 250GB SSDs using the CacheCade key (they are in RAID 10 for a max of ~500GB, which is near the limit of the card). Currently that Intel RAID card has 3x 3TB drives in RAID 5 with SSD cache enabled, and that storage is attached to ESXi as the datastore where I currently run all my prod VMs, about 6 VMs in total. I moved my dev VMs to FreeNAS storage and run them off the 2nd and 3rd servers I've got.

What do you think would be the better setup? The two drives are going to replace the 2x 6TB drives in the home server, which stores my files and PC backups.
1. Connect the two drives to the Intel RAID card, set each drive up as a single-drive RAID 0, create a datastore in ESXi for each, then create a 7.8TB virtual disk on the VM for each.
1.1 Should I enable the SSD cache for each? I don't think I need it, since the drives are plenty fast for file storage.
2. Leave the IBM M1015 card in IT mode and passed through; just add the new drives, copy the data over, remove the old 6TB drives, etc.

I like option 1 because it removes another card from the server, reducing power and complexity, and it makes use of the better RAID card. But I don't know how the performance of a VMFS-6 datastore compares to attaching the drives directly to the VM via pass-through on the IT-mode card.

Option 2 is the simplest and fastest way to get the change done, but it leaves me with the same VM backup issue: I need to stop the VM to use my backup software.

Thoughts welcome.
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
With option 1 I think you are adding another layer of complexity to the storage. If you ever want to move the drives to another system for backup, recovery, or reconfiguration, it will be more complicated. I would recommend option 2.