FreeNAS on Hyper-V - will this work?


katit

Member
Mar 18, 2015
I guess that's not really the main question :) I know it will work, and I have articles from Patrick on this. But I wanted to see if it will work WELL given my scenario.

This is for a small office (a couple of users): network shares and backups.
I already have experience with FreeNAS at home, but there I built the hardware from scratch for the purpose.

Right now I have the following:

1. S2600CP-based server with 128GB RAM and tons of resources available. Hyper-V 2012.
2. Wonderful P4000M case with 16x 2.5" drive bays. 8x of them are connected to motherboard SATA ports via reverse breakout cables; 8x are ready to go (2x 4-drive bays can be connected via SAS cables).
3. M1015(?) card sitting on my desk; it can be flashed to IT mode, I guess.

Everything seems just perfect. I install the LSI card and flash it to IT mode. I create a new VM, stored where the other VMs are stored. I pass the LSI card through to the FreeNAS VM and have 8x nice bays to play with. Most likely it will be just a striped mirror; we don't need tons of storage.

I can give FreeNAS 16GB or 32GB - no problem. 4 CPUs - no problem. Plenty of resources and space for drives. It just begs to be done like this.

The question is: will it work? Can I pass a card through like that? (I've never done it before.)

What bothers me is that on the FreeNAS forums the overall feedback is "yeah, it will work, just make sure you pass the storage card through to FreeNAS", and they always talk about ESXi. And it's treated like a last-resort scenario.

To me it seems like what I have right now will work excellently, but I don't want to waste time if someone has already tried it and it doesn't work :)
 

j_h_o

Active Member
Apr 21, 2015
California, US
You can pass DRIVES through with 2012/2012 R2, but you cannot pass the card through to the VM. You mark the disks offline on the physical host, then pass each drive through to the VM. There is a performance penalty.
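Roughly, in PowerShell (the disk number and VM name below are placeholders, so treat this as a sketch rather than a recipe):

# Find the physical disk you want to hand to the VM
Get-Disk

# Take it offline on the host first (assume it showed up as disk 2)
Set-Disk -Number 2 -IsOffline $true

# Attach the raw disk to the VM's SCSI controller
Add-VMHardDiskDrive -VMName "FreeNAS" -ControllerType SCSI -DiskNumber 2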

Quantify "good": what kind of read/write performance are you expecting from the system?
 

katit

Member
Mar 18, 2015
You can pass DRIVES through with 2012/2012 R2, but you cannot pass the card through to the VM. You mark the disks offline on the physical host, then pass each drive through to the VM. There is a performance penalty.

Quantify "good": what kind of read/write performance are you expecting from the system?
See, I didn't know about that (passing drives through).

I have 2x LAN ports onboard. I'd like to dedicate the 2nd port to a separate virtual switch and assign FreeNAS to it. I want FreeNAS to be able to saturate the gigabit network.

I don't need to do iSCSI or run VMs off it. But it would be nice if VMs running on the same physical host could read/write at 300-400MB/s (if I use SSDs for storage). Will it work like this?
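In Hyper-V terms I'm picturing something roughly like this for the dedicated switch (adapter, switch, and VM names are placeholders):

# External switch bound to the second onboard NIC, not shared with the management OS
New-VMSwitch -Name "NAS-Switch" -NetAdapterName "Ethernet 2" -AllowManagementOS $false

# Put the FreeNAS VM's network adapter on it
Connect-VMNetworkAdapter -VMName "FreeNAS" -SwitchName "NAS-Switch"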
 

j_h_o

Active Member
Apr 21, 2015
California, US
I've only run tests with ZFS providing storage for a Windows Server VM; in situations where I'm running 2012, I generally have Windows endpoints, and I prefer Windows Server to do the actual file sharing rather than FreeNAS.

That setup has more overhead than what you're attempting, but when I did it I only managed around 150-200MB/s from the Windows Server VM backed by ZFS.
 

katit

Member
Mar 18, 2015
Not bad. We don't have a domain; FreeNAS will do all I need for sharing. If there's less overhead, might I get speeds >200MB/s from VMs on the same hardware?

When I pass drives to the VM, is it a direct "low level" arrangement, such that FreeNAS will be able to access SMART and so on?

Or, in other words: if I later move those drives into FreeNAS "on metal", will it work?
 

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
You will be able to pass through PCIe devices with Server 2016, if everything I've read comes to pass. That's probably due in September, I think.
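From what's been shown in the previews, the Discrete Device Assignment flow should look something like this - the cmdlets come from preview material and the location path is a placeholder, so treat it as a sketch:

# Find the HBA's PCIe location path (the value below is a placeholder)
$loc = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# Release the device from the host
Dismount-VMHostAssignableDevice -LocationPath $loc -Force

# Hand it to the VM
Add-VMAssignableDevice -LocationPath $loc -VMName "FreeNAS"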

The disks will be marked offline in the host (though the device will still show up in devmgmt.msc), and from what I remember (it's been a long time since I've done it) you should see SMART data etc. I like FreeNAS - but I'm curious - what do you gain with the extra layer of complexity (i.e. FreeNAS)? I'm seeing >300MB/s on non-SR-IOV 10Gbps interfaces, from within VMs on E5600-era kit, and the VM storage is on another host entirely (SMB3 over 10Gbps for storage access). Lots of layers of redirection, but it's easily quick enough.

With direct access (HW host to HW host) I'm seeing over 1GB/s at times doing streaming writes, and >20K IOPS. For reference, that's a 6 x (4D+2P) RAID60 of 3TB SATA drives on an LSI 9271 with SAS expander backplanes, plus a 4D+2P RAID6 SSD cache layer. It's all managed by Storage Spaces.
 

katit

Member
Mar 18, 2015
I have 2012. Good to know about 2016, but I doubt we will purchase licenses.

I wanted FreeNAS because it's free; running another Win Server 2012 VM will cost a license. I don't want to set up storage on the host itself - the machine needs to be easily rebuildable. Also, the size of a Win VM will be larger.

But let's say I try a Win 2012 VM as the storage host. Is it a common config where it's in a VM and I pass disks through?

I also just learned my LSI 9260 can't be flashed to IT mode :(
 

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
But let's say I try a Win 2012 VM as the storage host. Is it a common config where it's in a VM and I pass disks through?
No, I wouldn't say it's common, especially now that a VHDX can be 64TB or thereabouts. There's just not enough reason to "bother": I recall benchmarks showing 99% of bare-metal performance for a VHDX in a VM, assuming you have all the right other bits and bobs like SR-IOV and fast CPUs.

"The Hidden Treasures of Windows Server 2012 R2 Hyper-V" (Channel 9) may be interesting in this context (at 21:14).
 

katit

Member
Mar 18, 2015
We already use Hyper-V, and it's working well for what we do. We've figured out backups, the Replica feature works great... We will stay with this virtualization platform.
 

katit

Member
Mar 18, 2015
No, I wouldn't say it's common, especially now that a VHDX can be 64TB or thereabouts. There's just not enough reason to "bother": I recall benchmarks showing 99% of bare-metal performance for a VHDX in a VM, assuming you have all the right other bits and bobs like SR-IOV and fast CPUs.
I want just 1 big drive for storage. How would you do it?
A. Create a stripe on the HOST, create 1 big VHDX, and pass it to the VM.
B. Create a VHDX on each drive, pass them to the VM, and create the stripe in the VM.

I want just a couple of TB, so most likely it will be 4-5 SSDs.

Most likely I'm not going to build any RAID; this will be all about protection, and backups will be done daily to an external drive and continuously to CrashPlan.
 

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
I want just 1 big drive for storage. How would you do it?
A. Create a stripe on the HOST, create 1 big VHDX, and pass it to the VM.
B. Create a VHDX on each drive, pass them to the VM, and create the stripe in the VM.
A. Definitely A.

If you were already going to go with A, good job.

If you're thinking about B, I'd recommend switching to A.

And for the avoidance of doubt... option A is the right choice.
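A rough sketch of option A with Storage Spaces (pool, VM, and path names are placeholders, and it assumes the host has a single storage subsystem):

# Pool every poolable SSD into one Storage Spaces pool
New-StoragePool -FriendlyName "SSDPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem | Select-Object -First 1).FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Carve a striped (Simple) space from it and bring it up as S:
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "SSDStripe" `
    -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName "SSDStripe" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -DriveLetter S |
    Format-Volume -FileSystem NTFS

# One big dynamic VHDX on that volume, attached to the storage VM
New-VHD -Path "S:\Storage.vhdx" -SizeBytes 2TB -Dynamic
Add-VMHardDiskDrive -VMName "FileServer" -ControllerType SCSI -Path "S:\Storage.vhdx"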
 

katit

Member
Mar 18, 2015
And... I think I know the answer, but to double-check: if I go with SSDs, the speed of one disk is enough for this application. If I want to add more storage as needed, then I should not use a stripe and instead use a SPAN, correct? That way I can put in 2 disks now and, as they fill up, just add more and increase the size of the VHDX, right?
 

j_h_o

Active Member
Apr 21, 2015
California, US
Yes, you should span, not stripe.

But the risk of failure scales as you increase the number of devices involved. Only 1 disk needs to fail for the entire VHDX to be unavailable.

If you're talking about the SSDs for the production workload, I'd start with a hardware RAID5/6, or RAIDZ1/2, depending on the number of drives you can afford to start with. I wouldn't start with a single disk unless you have application/virtualization failover/replication in place.

If this is for your 2nd or 3rd tier backup, span away.
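Growing later is then straightforward; a sketch with placeholder paths and drive letters (note the VHDX resize needs the VM off, unless the disk is SCSI-attached on 2012 R2):

# Grow the host-side volume first (with Storage Spaces: Add-PhysicalDisk, then Resize-VirtualDisk),
# then grow the VHDX itself
Resize-VHD -Path "S:\Storage.vhdx" -SizeBytes 4TB

# Finally extend the volume inside the guest into the new space (Windows guest shown;
# FreeNAS/ZFS would grow the pool instead)
Resize-Partition -DriveLetter D -Size (Get-PartitionSupportedSize -DriveLetter D).SizeMax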
 

Zack Hehmann

Member
Feb 6, 2016
What backup software are you using? Is it doing host backups or guest backups? Are you going to be running VSS snapshots on the host for all of the VMs? Is there a way to exclude the FreeNAS VM from your backup?

We are running Hyper-V 2012 R2 and have a few Linux VMs (Dell KACE appliance, Third Light DAM, TurnKey Linux), and we run host backups. Every time the host ran a backup job, it would make the Linux/BSD VMs blow their brains out; something about the snapshot taken during the backup breaks the storage driver in the VMs.

We have been working with Dell's engineers on fixing the issue from the KACE side. We have had 3 Linux/BSD engineers from Dell and senior people from Microsoft working on it. They were not able to resolve the issue, and Dell ended up sending us an R720 to run KACE bare-metal until they come up with a solution.

Everything was fine when we had KACE on ESXi.

That being said, I would not recommend running FreeNAS on Hyper-V. I feel Microsoft has a long way to go before non-Microsoft VMs run well on Hyper-V.

 

j_h_o

Active Member
Apr 21, 2015
California, US
With drives bypassed/directly connected, there's little or no point to running Hyper-V-level backups. You'd back up the FreeNAS config, then do ZFS replication/syncs to another system.
 

NetWise

Active Member
Jun 29, 2012
Edmonton, AB, Canada
So, other than from a learning/curiosity standpoint - which is why most of us are here - my question would be: why do you want to complicate it so much?

You call out "I don't want to set up storage on the host itself - the machine needs to be easily rebuildable." Being a consultant who regularly has to rebuild these undocumented Frankensteins after the previous guy gets hit by the lottery bus or something, this is just horrible. It's complexity for the sake of complexity. If you want it to be easily rebuildable, build an array on the hardware, present a VHDX to the VM, and back up the VM using a VM-based backup tool (e.g. Veeam or anything else of your preference). That's as "easily rebuildable" as it needs to be, IMHO.
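Even without a third-party tool, the built-in export/import covers the basics; a sketch with placeholder names and paths:

# Export a point-in-time copy of the VM (config + VHDX); 2012 R2 can do this while the VM runs
Export-VM -Name "FileServer" -Path "E:\VMBackups"

# Rebuilding on any host later is just an import:
# Import-VM -Path "E:\VMBackups\FileServer\Virtual Machines\<GUID>.xml" -Copy -GenerateNewId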

"I also just learned my LSI 9260 can't be flashed to IT :(" - which gets you your RAID5/6 or RAID10 array, with BBU and cache. Perfect.

You also say "That way I can put in 2 disks now and, as they fill up, just add more and increase the size of the VHDX, right?". Any good hardware RAID controller can do Online Capacity Expansion, so that's moot.

"I'd like to dedicate 2nd port to separate virtual switch and assign FreeNAS to it. I want FreeNAS to be able to saturate gigabit network." - you'll probably only need a single SSD to achieve that. However, you indicated you had a handful of users, and they're going to be doing what I assume is general office NAS stuff. If so, they're probably never going to make it break a sweat, unless it's during some massive file sync or backup.

"But It would be nice if VM's running on same physical host will be able to R/W 300-400Mb/s (if I use SSDs for storage). Will it work like this?" I don't know with HyperV, but for ESXi to do this, you'd have to be two Port Groups on the same vSwitch, for the transfer to stay in memory. Otherwise, if you're hitting two vSwitches, it would go out to the cable/network to do the transfer.

Overall, my question would be why you're seeking to create complexity for the sake of complexity. What VALUE are you creating by making it so one-off? You're not chasing a particular speed goal for a basic NAS. You're not tiering or presenting storage to other systems.

I would:
* Use hardware RAID with BBU and cache to create a 4-5 disk SSD array, as you mention. Your RAID flavour of choice - 5/6/10, etc. You can expand this later if required.
* Assign what you need as a VHDX to the VM - you can expand this later if required.
* Any OTHER VMs on this host can now leverage this SSD volume, which is sure to improve their performance/usability.
* Now you can utilize CrashPlan _or_ a VM/image-based backup application.
* Restoration and maintenance by someone who isn't you, later, is so simple it hurts. No complexity, no Frankenstein, no crazy hunting for never-completed documentation that wasn't updated through various configurations (yeah, I'm bitter about the environments I get to inherit :)). They just rebuild the host - or any host - attach the backup drive, and restore the VM. Zero hardware dependencies, as it should be.
* You retain the easy ability to do snapshots, replication, etc. Once you start doing things "in-VM", you start restricting the hypervisor layer's ability to easily protect and manage your VM.

That's my $0.02.
 

katit

Member
Mar 18, 2015
372
18
18
53
So, other than from a learning/curiosity standpoint - which is why most of us are here - my question would be: why do you want to complicate it so much?
I'm all for simplicity; I'll explain my thinking below.

You call out "I don't want to set up storage on the host itself - the machine needs to be easily rebuildable." Being a consultant who regularly has to rebuild these undocumented Frankensteins after the previous guy gets hit by the lottery bus or something, this is just horrible. It's complexity for the sake of complexity. If you want it to be easily rebuildable, build an array on the hardware, present a VHDX to the VM, and back up the VM using a VM-based backup tool (e.g. Veeam or anything else of your preference). That's as "easily rebuildable" as it needs to be, IMHO.

"I also just learned my LSI 9260 can't be flashed to IT :(" - which gets you your RAID5/6 or RAID10 array, with BBU and cache. Perfect.
I had this card in an old server and there is no battery. And it runs HOT. I ditched it partly because it is NOT easily rebuildable - more precisely, I don't have a spare card with the same config lying around. I'm more of an OS-controlled-RAID guy now. With SSD reliability, I'm even a "no RAID" guy. I have 3 backups (explained below).

I would:
...
That's my $0.02.
I understand. Here is what we have today:

I have a very basic HOST setup; I can rebuild it very quickly. Basically it's just the Hyper-V role and backup software (cloud backo pro).

All the critical VMs are Linux-based, all CentOS 7, and they all back up just fine via snapshots. There are 3 of them:
Asterisk
SVN
JIRA

They're all <5GB in size.

One other VM is Win 2012 R2; it's our dev server with SQL Server, etc. Not quick to rebuild this one, but it is rebuildable. It's about 150GB now...

Every day I come to work and insert a USB drive into the server. At 11AM a backup of all VMs runs; it takes about 30-40 minutes.

Every day at 10PM another backup of all VMs runs to a separate internal drive in the server.

Every day at 1AM the smaller VMs back up to my home server (FreeNAS) over the internet. The uplink is slow (4Mb/s), so I can't back up the whole thing - only those 3 critical VMs with data.

FreeNAS at home backs up EVERYTHING on it to CrashPlan.

TODAY we expanded a little and need to set up network storage inside the office beyond just source control and timesheets. I don't expect big volume, but we may have up to 1TB of data. The problem with VM backups is that even though I can do incrementals, they're still very big. So having the storage in a VM and backing it up from the HOST would not work very well. And we need very little: just file sharing and permissions (workgroup, no AD).

I thought about FreeNAS because I already have experience with it. I'm OK with something else, but I'm not 100% sure about the backup strategy.

As mentioned above, with FreeNAS I would just back up the configuration, and it's small. With another Windows VM it will be larger, and I would have to add another backup, so it would go like this:

1. HOST backs up the VM (but not the data)
2. VM backs up the data: CrashPlan, rsync to my home FreeNAS, etc. This way I can keep backup traffic to a minimum.

Any suggestions?

At this point I don't see the point in spending money on extra drives, expansion, etc. for RAID.

P.S. And yeah, I forgot: I want to set up another Hyper-V machine at home and use the Replica feature to sync to it. This is supposedly a very efficient way to have a "replica" of our running machine, almost in real time. I haven't done it yet, but that's the plan. The only thing stopping me is that I need another server at home :) I have one, but it's a little overkill for just replicating data (+ the electric bill).
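From what I've read, the Replica setup is roughly this (host and VM names are placeholders; note that over the internet with no domain, certificate auth would be needed instead of Kerberos):

# On the home host: allow it to receive replication
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replicas"

# On the office host: enable replication for the VM and seed it
Enable-VMReplication -VMName "FileServer" -ReplicaServerName "home-host" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "FileServer"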
 

katit

Member
Mar 18, 2015
What backup software are you using? Is it doing host backups or guest backups? Are you going to be running VSS snapshots on the host for all of the VMs? Is there a way to exclude the FreeNAS VM from your backup?
Good point. We do run host backups and back up VMs via VSS snapshots. But it works for us because CentOS is supported by Microsoft: the guest tools are in the CentOS kernel, and it just works.

Yes, we can exclude FreeNAS from the backup, and that's what I would do anyway because of its potential size.
 

ealvar

Member
Mar 4, 2013
I have 2012. Good to know about 2016, but I doubt we will purchase licenses.

I wanted FreeNAS because it's free; running another Win Server 2012 VM will cost a license. I don't want to set up storage on the host itself - the machine needs to be easily rebuildable. Also, the size of a Win VM will be larger.

But let's say I try a Win 2012 VM as the storage host. Is it a common config where it's in a VM and I pass disks through?

I also just learned my LSI 9260 can't be flashed to IT mode :(
A Windows 2012 R2 Standard license grants you the right to run 1 physical and 2 virtual instances of the OS. Essentials is one physical and one virtual.