Changing Home Server Configuration: Hypervisor, Linux or Windows

What to choose

  • Windows with FlexRaid

    Votes: 2 28.6%
  • Hypervisor with Windows + FlexRaid

    Votes: 1 14.3%
  • Linux with mdraid

    Votes: 4 57.1%

  • Total voters
    7

Sidiox

New Member
Dec 22, 2015
9
0
1
27
So currently I have a small server running as a media server and for some software and VMs (Ubiquiti controller, PXE boot, etc.).
The server has 4x4TB drives, 12GB of RAM, and a Xeon E3-1220, with no RAID card. The OS is installed on a 120GB SSD.
As you can see, it's pretty basic stuff.

The device is currently running Windows Server 2012 R2, with the drives in RAID 10 via some kind of ugly hack I found online.

Now I'd like to change this, but I'm not sure to what.
I could get a hypervisor; then the only real choice is Hyper-V, since I'd need raw device mapping because all the drives (OS drive and data drives) are hooked up to the mobo. I don't have any RAID cards that can handle drives larger than 2TB, unfortunately (and I'd rather not buy one). Then I'd have a couple of VMs, the main one being a Windows Server 2012 R2 file server, but then I'd need either my ugly hack or something like FlexRAID for the RAID 10.

I could go for Linux (probably Debian with GNOME) with mdraid 10, running VMware or VirtualBox.
Then I could choose to either have a VM manage the media server access or have the Linux OS run a Samba share.
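For reference, the mdraid 10 + Samba route sketches out roughly like this. The device names (/dev/sd[b-e]), mount point, and share name are illustrative, not from my actual setup:

```shell
# Build a 4-drive RAID 10 array from the whole disks (run as root)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and mount it
mkfs.ext4 /dev/md0
mkdir -p /srv/media
mount /dev/md0 /srv/media

# Persist the array definition and the mount across reboots
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /srv/media ext4 defaults 0 2' >> /etc/fstab

# Minimal Samba share for the media directory
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /srv/media
   read only = no
EOF
systemctl restart smbd
```

These are hardware-dependent commands, so treat them as a sketch rather than something to paste verbatim.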

Or I could again go for Windows, and just switch the RAID from the ugly Windows hack to FlexRaid.

I don't know what would be my best choice, so I'd like some advice.
 

RTM

Well-Known Member
Jan 26, 2014
956
359
63
I could go for Linux (probably Debian with GNOME) with mdraid 10, running VMware or VirtualBox.
Then I could choose to either have a VM manage the media server access or have the Linux OS run a Samba share.
I assume you mean VMware Workstation; in that case I would not go Linux.

However, there are alternatives to VMware Workstation and Oracle VirtualBox for full virtualization: KVM (and Xen).
KVM is very easy to set up, and there are many options if you want to manage the server remotely.
A good example of a tool that helps manage a KVM host remotely is virt-manager (included in the larger distributions), a graphical client for Linux desktops.
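On a Debian-family distribution, getting KVM plus virt-manager going is roughly this; package names shift a bit between releases, so treat it as a sketch:

```shell
# Check that the CPU exposes hardware virtualization (vmx = Intel, svm = AMD)
egrep -c '(vmx|svm)' /proc/cpuinfo    # a count > 0 means it's available

# Install KVM/QEMU, libvirt, and the virt-manager GUI (run as root)
apt-get install qemu-kvm libvirt-daemon-system virtinst virt-manager

# Allow a regular user to manage VMs without root
adduser youruser libvirt

# Create a test VM from an install ISO (name and ISO path are examples)
virt-install --name testvm --memory 2048 --vcpus 2 \
    --disk size=20 --cdrom /var/lib/libvirt/images/debian.iso
```

virt-manager can also connect to the host over SSH, which is handy for a headless server.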
 
Last edited:

Quasduco

Active Member
Nov 16, 2015
129
47
28
113
Tennessee
I look forward to seeing how bhyve works out for simpler AIO needs like this.

Personally, depending on the rest of the system specs, I would run a Proxmox setup on it with ZFS on Linux. Good control of everything, easy to use. The only thing is, you'd need more RAM.
 
  • Like
Reactions: Patrick

RobertFontaine

Active Member
Dec 17, 2015
663
148
43
57
Winterpeg, Canuckistan
The advantage of ZFS, above and beyond it being a better file system, is the flexibility in disk selection and the cheap cards you can use to build and maintain an array. Proxmox is definitely home-user friendly, and you can find video tutorials galore. I would tend to pick Xen/ESXi for the added flexibility/security, but Proxmox could be much better for many people for many reasons. Either way, I would want my disk server/media server to grow over time, with other VMs providing general services (SMTP, LAMP, anything that serves up files).

This could get religious so I will bow out here.
 

CreoleLakerFan

Active Member
Oct 29, 2013
485
180
43
Wouldn't ZFS eat up quite a bit of RAM? Or is that Btrfs? I don't really know how it would give me an advantage.
ZFS would mean, though, that I would be running Linux as the main OS.
Does your motherboard support VT-d? If so, you could pass your motherboard's SATA controller through to a VM under ESXi or KVM and set up a ZFS server with a striped/mirrored array. Then serve block storage to your VMs via iSCSI.

It is a misconception that ZFS requires lots of RAM; that said, adding RAM will definitely improve performance, as ZFS uses "extra" RAM to cache frequently accessed data.
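For the 4x4TB case discussed here, the striped/mirrored ("RAID 10"-style) layout would look roughly like this under ZFS on Linux; device names and dataset names are illustrative:

```shell
# Two mirror vdevs; ZFS stripes writes across them automatically,
# giving RAID 10-equivalent redundancy and speed (run as root)
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# A filesystem dataset for file sharing
zfs create tank/media

# A zvol (block device) that could be exported to the VMs over iSCSI
zfs create -V 500G tank/vmstore

# Verify the layout: should show two mirror vdevs striped together
zpool status tank
```

In practice you'd want to reference disks by /dev/disk/by-id/ paths rather than sdX names, since those can change between boots.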
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
ZFS AIO (pick your poison for the hypervisor; as others have stated, ESXi, KVM, Proxmox, Xen, etc.)... 'nuff said :-D 2 vCPU/8 GB RAM and be done with it!
 
  • Like
Reactions: CreoleLakerFan

Sidiox

New Member
Dec 22, 2015
9
0
1
27
I assume you mean VMware Workstation; in that case I would not go Linux.

However, there are alternatives to VMware Workstation and Oracle VirtualBox for full virtualization: KVM (and Xen).
KVM is very easy to set up, and there are many options if you want to manage the server remotely.
A good example of a tool that helps manage a KVM host remotely is virt-manager (included in the larger distributions), a graphical client for Linux desktops.
KVM? You mean Kernel-based Virtual Machine? I've never looked into that. I will now. Thanks.

I look forward to seeing how bhyve works out for simpler AIO needs like this.

Personally, depending on the rest of the system specs, I would run a Proxmox setup on it with ZFS on Linux. Good control of everything, easy to use. The only thing is, you'd need more RAM.
I'll have a look at that as well; never really heard of it.

The advantage of ZFS, above and beyond it being a better file system, is the flexibility in disk selection and the cheap cards you can use to build and maintain an array. Proxmox is definitely home-user friendly, and you can find video tutorials galore. I would tend to pick Xen/ESXi for the added flexibility/security, but Proxmox could be much better for many people for many reasons. Either way, I would want my disk server/media server to grow over time, with other VMs providing general services (SMTP, LAMP, anything that serves up files).

This could get religious so I will bow out here.
Again, I'll look into Proxmox.

Does your motherboard support VT-d? If so, you could pass your motherboard's SATA controller through to a VM under ESXi or KVM and set up a ZFS server with a striped/mirrored array. Then serve block storage to your VMs via iSCSI.

It is a misconception that ZFS requires lots of RAM; that said, adding RAM will definitely improve performance, as ZFS uses "extra" RAM to cache frequently accessed data.
The mobo (Supermicro X9SCM) supports VT-d, I think; the CPU does, that much I know. The problem is that if I pass through the entire controller, I wouldn't be able to put my host OS/hypervisor on the SSD. I could put the OS on the SSD if I hook it up to one of my RAID cards, but honestly that seems like asking for trouble.

Thanks for all the great suggestions. I'll look into the solutions offered.
 
Last edited:

CreoleLakerFan

Active Member
Oct 29, 2013
485
180
43
The mobo (Supermicro X9SCM) supports VT-d, I think; the CPU does, that much I know. The problem is that if I pass through the entire controller, I wouldn't be able to put my host OS/hypervisor on the SSD. I could put the OS on the SSD if I hook it up to one of my RAID cards, but honestly that seems like asking for trouble.
The X9SCM has separate SATA II and SATA III controllers. Use one of the SATA III ports for your OS SSD and pass the SATA II controller through to your storage VM.
 
  • Like
Reactions: Quasduco

Sidiox

New Member
Dec 22, 2015
9
0
1
27
You could set up Storage Spaces on Server 2012 and run storage off the base install using the motherboard controllers. SS uses the disks you assign to it for storage and doesn't care about the controller. Use Hyper-V for virtualization.

It ain't sexy, but it will work. (Server 2012 R2, Storage Spaces and Tiering)

If you want to get fancy you can even set it up to dedupe and prevent bit rot.
I think I already use Storage Spaces: I have a storage pool with all disks in it and two virtual disks on top of that in a mirrored layout, and those two are then striped, essentially creating a RAID 10 array.
But I didn't much like how this works; it felt like an ugly hack, and I wouldn't know a better way to set up a 4-drive "RAID" with both extra redundancy and speed.
 

talsit

Member
Aug 8, 2013
112
20
18
Take a look at the thread; PigLover and company go into a lot of detail on how to set things up.

You don't have to hack it; the hard part is finding a place to move your data while you reconfigure. You can set your array up any way you want. I have one small array of 2x1TB drives in RAID 0 for my non-critical VHDs (automatic backup to the backup array nightly), a 4TB array with double parity using MS ReFS for what I consider critical information (family media, backups, etc.), and then a 16TB tiered, single-parity storage array, again using ReFS, for serving media and as a target for non-critical stuff from the family computers.

I have a server full of HBAs (3x M1015 plus an Intel expander, 3x 5-in-3 and 1x 3-in-2 HDD cages, and a 4x1 2.5" cage), but after a lot of issues and having my media server down for a month, I went with SS. For me, in a non-business, not-gonna-cost-me-my-job environment, it works well. Performance is adequate to serve media files to a couple of TVs and tablets without hiccups. I bought an SE3016 JBOD and an LSI 9201-16E, connected it to my server, and started building arrays.

SS did find a failing drive when I first got it going. I added a replacement drive to the array, and it moved the data and repaired the array without a noticeable impact on performance.
 
Last edited:

Sidiox

New Member
Dec 22, 2015
9
0
1
27
Take a look at the thread; PigLover and company go into a lot of detail on how to set things up.

You don't have to hack it; the hard part is finding a place to move your data while you reconfigure. You can set your array up any way you want. I have one small array of 2x1TB drives in RAID 0 for my non-critical VHDs (automatic backup to the backup array nightly), a 4TB array with double parity using MS ReFS for what I consider critical information (family media, backups, etc.), and then a 16TB tiered, single-parity storage array, again using ReFS, for serving media and as a target for non-critical stuff from the family computers.

I have a server full of HBAs (3x M1015 plus an Intel expander, 3x 5-in-3 and 1x 3-in-2 HDD cages, and a 4x1 2.5" cage), but after a lot of issues and having my media server down for a month, I went with SS. For me, in a non-business, not-gonna-cost-me-my-job environment, it works well. Performance is adequate to serve media files to a couple of TVs and tablets without hiccups. I bought an SE3016 JBOD and an LSI 9201-16E, connected it to my server, and started building arrays.

SS did find a failing drive when I first got it going. I added a replacement drive to the array, and it moved the data and repaired the array without a noticeable impact on performance.
Yeah, I looked at the thread; I'll read into it some more. I am currently moving all my data off the array; it's 3TB and I only have a dual-gigabit connection to it, so it'll take a few hours.
Seems like SS is a pretty good solution. I haven't decided on going for Hyper-V or VMware yet, but that isn't a very important issue.
I just hope I can keep proper speeds with SS on this 4x4TB array, but I'll test it some more before fully committing to it.
 

Sidiox

New Member
Dec 22, 2015
9
0
1
27
After some more reading, I've now started with Storage Spaces.
I'm trying to set up a two-way mirror with 2 columns.
However, PowerShell keeps failing (I can't use the GUI wizard because it doesn't have an option for columns).
Code:
New-VirtualDisk -StoragePoolFriendlyName POOL -FriendlyName FileServer -ResiliencySettingName Mirror -UseMaximumSize -ProvisioningType Thin -NumberOfColumns 2
But this gives me the error:
Code:
New-VirtualDisk : Invalid Parameter
At line:1 char:1
+ New-VirtualDisk -StoragePoolFriendlyName POOL -FriendlyName FileServer -Resilien ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (StorageWMI:ROOT/Microsoft/...SFT_StoragePool) [New-VirtualDisk], CimException
    + FullyQualifiedErrorId : StorageWMI 5,New-VirtualDisk
Now, I've worked with Bash and Cmd a bit before, and done some programming, but this error message... I have no idea what is going wrong. I'm following both a TechNet article on this and a blog post (LazyWinAdmin: WS2012 Storage - Creating a Storage Pool and a Storage Space (aka Virtual Disk) using PowerShell New-VirtualDisk).

Can anyone tell me what is going wrong? I know it can find the pool, because when I change the pool name it fails with a different error.
I'm kinda at a loss.
 

Sidiox

New Member
Dec 22, 2015
9
0
1
27
Figured it out. **** PowerShell's error messages. For some reason the -UseMaximumSize flag failed; I had to give it a specific size. Now it all seems to be fine.
 

Chuntzu

Active Member
Jun 30, 2013
383
98
28
Figured it out. **** PowerShell's error messages. For some reason the -UseMaximumSize flag failed; I had to give it a specific size. Now it all seems to be fine.
You can't use -UseMaximumSize on a thin-provisioned virtual disk. If you had changed that to fixed provisioning it would have worked, or, as you found out, you can just set a size, i.e. 60TB or whatever number you want. In Server 2012 R2, if you create a mirror space over 4 drives via the GUI, it will make a two-column mirror space. The GUI only limits you once you get past 8 columns: if you were doing a 20-drive mirror, you would want to use PowerShell to specify the number of columns as 10, or else the GUI will limit you to 8.
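Concretely, either of these variants of the failing command from the earlier post should go through (pool and disk names are copied from that post; the 7TB size is just an example for a thin disk):

```powershell
# Variant 1: keep thin provisioning, but give an explicit size
# instead of -UseMaximumSize
New-VirtualDisk -StoragePoolFriendlyName POOL -FriendlyName FileServer `
    -ResiliencySettingName Mirror -ProvisioningType Thin `
    -Size 7TB -NumberOfColumns 2

# Variant 2: switch to fixed provisioning, where -UseMaximumSize is allowed
New-VirtualDisk -StoragePoolFriendlyName POOL -FriendlyName FileServer `
    -ResiliencySettingName Mirror -ProvisioningType Fixed `
    -UseMaximumSize -NumberOfColumns 2
```

With thin provisioning the stated size can exceed current pool capacity, which is why PowerShell insists you pick a number yourself.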

On a side note, a bit of obscure info on column counts: dual parity maxes out the column count at 17; otherwise I am 90% positive all mirror and simple spaces have unlimited column counts. I have set up 32- and 48-column simple spaces to test absolute performance a few times.
 
  • Like
Reactions: Sidiox