Hyper-V virtual Xpenology DSM 5.0-4493 Update 3 testing

Marsh

Moderator
Following http://www.xpenology.nl/hyper-v-installatie/ as a guide.

Physical Windows Server 2012 R2 Hyper-V host:
ASUS Z9NA-D6 with dual E5-2430 CPUs, 48GB memory

Xpenology Hyper-V VM:
2 vCPUs, 4GB memory, 1 pass-through PNY 240GB SSD connected to a SATA 2 port, virtual 10Gb network interface

Test client Windows Server 2012 R2 VM:
2 vCPUs, 2GB memory (fixed), virtual 10Gb network interface

This is the baseline for one pass-through SSD drive on a SATA 2 port.
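For anyone following along, the pass-through part of this setup can be sketched in PowerShell on the host roughly as follows. The VM name, switch name, and disk number are placeholders, not the exact values used here:

```powershell
# The physical SSD must be offline on the host before Hyper-V will pass it through
# (check the actual disk number with Get-Disk first; 2 is a placeholder)
Set-Disk -Number 2 -IsOffline $true

# Generation 1 VM with 4GB of memory, attached to the virtual switch
New-VM -Name "Xpenology" -Generation 1 -MemoryStartupBytes 4GB -SwitchName "vSwitch10Gb"
Set-VMProcessor -VMName "Xpenology" -Count 2

# Attach the offline physical disk directly to the VM
Add-VMHardDiskDrive -VMName "Xpenology" -DiskNumber 2
```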

Mapped a network drive over the virtual 10Gb link.
I ran the ATTO disk benchmark because it is easy:
Transfer size 128KB to 8192KB
Total length 2GB (too small a file size to reflect the real world, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with queue depth 10

128KB write 207MB/s, read 262MB/s
256KB write 253MB/s, read 293MB/s
512KB write 236MB/s, read 283MB/s
1024KB write 240MB/s, read 302MB/s
2048KB write 257MB/s, read 278MB/s
4096KB write 231MB/s, read 298MB/s
8192KB write 227MB/s, read 298MB/s

Same mapped network drive over the virtual 10Gb link.
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf:

| Test name | Avg resp. time (ms) | Avg IOs/sec | Avg MB/sec | CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 7.81 | 7624 | 238 | 0% |
| RealLife-60%Rand-65%Read | 8.84 | 6738 | 52 | 0% |
| Max Throughput-50%Read | 6.56 | 9098 | 284 | 0% |
| Random-8k-70%Read | 9.33 | 6388 | 49 | 0% |
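For anyone repeating this, IOMeter can replay a saved configuration unattended from the command line; a minimal invocation (the result file name is my own choice) would be:

```powershell
# Batch mode: /c loads the saved access specifications, /r writes results as CSV
.\IOmeter.exe /c OpenPerformanceTest.icf /r results.csv
```

The four rows above correspond to the four access specifications defined in the .icf file.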
 

Marsh

Moderator
May 12, 2013
2,645
1,496
113
I tried using an SR-IOV virtual switch, but Xpenology doesn't like it. I had to fall back to a basic Hyper-V virtual switch.
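Since SR-IOV can only be enabled when a virtual switch is created, falling back means recreating the switch, roughly like this (switch and adapter names are placeholders):

```powershell
# The SR-IOV variant that DSM rejected; -EnableIov must be set at creation time
# New-VMSwitch -Name "SRIOV-10Gb" -NetAdapterName "Ethernet 10Gb" -EnableIov $true

# Basic external virtual switch that Xpenology is happy with
New-VMSwitch -Name "Basic-10Gb" -NetAdapterName "Ethernet 10Gb" -AllowManagementOS $true
```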
 

Chuntzu

Active Member
These numbers suggest, if I'm not mistaken, that full speed is achieved in Hyper-V in this test (more or less), right? Thanks for testing this out.
 

Marsh

Moderator
Physical Windows Server 2012 R2 Hyper-V host:
ASUS Z9NA-D6 with dual E5-2430 CPUs, 48GB memory

Xpenology Hyper-V VM:
2 vCPUs, 4GB memory, 2 pass-through SSDs (PNY 480GB and Seagate 480GB) connected to SATA 3 ports (RAID 0), virtual 10Gb network interface

Test client Windows Server 2012 R2 VM:
2 vCPUs, 2GB memory (fixed), virtual 10Gb network interface


Mapped a network drive over the virtual 10Gb link.
I ran the ATTO disk benchmark because it is easy:
Transfer size 128KB to 8192KB
Total length 2GB (too small a file size to reflect the real world, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with queue depth 10

128KB write 215MB/s, read 247MB/s
256KB write 253MB/s, read 313MB/s
512KB write 251MB/s, read 338MB/s
1024KB write 320MB/s, read 311MB/s
2048KB write 282MB/s, read 310MB/s
4096KB write 274MB/s, read 339MB/s
8192KB write 302MB/s, read 302MB/s

Same mapped network drive over the virtual 10Gb link.
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf:

| Test name | Avg resp. time (ms) | Avg IOs/sec | Avg MB/sec | CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 7.94 | 7492 | 234 | 0% |
| RealLife-60%Rand-65%Read | 8.12 | 7345 | 57 | 0% |
| Max Throughput-50%Read | 7.37 | 8098 | 253 | 0% |
| Random-8k-70%Read | 8.45 | 7042 | 55 | 0% |

Looks like the Hyper-V virtual network is the bottleneck.
I'll try another set of SSD drives before moving on to a physical test client and a physical 10Gb network.
 

Marsh

Moderator
Changed: enabled 9K jumbo frames on both the Xpenology VM and the WS12R2 VM, and enabled RSS on the WS12R2 VM.
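On the WS12R2 VM side, that change can be made roughly like this (a sketch; the adapter name "Ethernet" is a placeholder for whatever the guest calls its virtual NIC, and the DSM side is changed through its own network settings UI):

```powershell
# Allow 9K jumbo frames on the guest's virtual NIC; 9014 is a common driver value,
# but valid values vary by driver (Get-NetAdapterAdvancedProperty lists them)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -RegistryKeyword "*JumboPacket" -RegistryValue "9014"

# Enable Receive Side Scaling so receive processing can spread across both vCPUs
Enable-NetAdapterRss -Name "Ethernet"
```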

Physical Windows Server 2012 R2 Hyper-V host:
ASUS Z9NA-D6 with dual E5-2430 CPUs, 48GB memory

Xpenology Hyper-V VM:
2 vCPUs, 4GB memory, 2 pass-through SSDs (PNY 480GB and Seagate 480GB) connected to SATA 3 ports (RAID 0), virtual 10Gb network interface

Test client Windows Server 2012 R2 VM:
2 vCPUs, 2GB memory (fixed), virtual 10Gb network interface


Mapped a network drive over the virtual 10Gb link.
I ran the ATTO disk benchmark because it is easy:
Transfer size 128KB to 8192KB
Total length 2GB (too small a file size to reflect the real world, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with queue depth 10

128KB write 198MB/s, read 202MB/s
256KB write 246MB/s, read 240MB/s
512KB write 243MB/s, read 240MB/s
1024KB write 235MB/s, read 238MB/s
2048KB write 303MB/s, read 238MB/s
4096KB write 267MB/s, read 241MB/s
8192KB write 260MB/s, read 238MB/s

Same mapped network drive over the virtual 10Gb link.
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf:

| Test name | Avg resp. time (ms) | Avg IOs/sec | Avg MB/sec | CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 7.72 | 7678 | 239 | 54% |
| RealLife-60%Rand-65%Read | 8.60 | 6938 | 54 | 27% |
| Max Throughput-50%Read | 7.11 | 8435 | 263 | 44% |
| Random-8k-70%Read | 8.47 | 7044 | 55 | 30% |

The ATTO benchmark numbers are lower.
The IOMeter numbers are the same.
I still need to verify that Xpenology is actually using 9K jumbo frames.
For now, I changed back to the defaults to continue testing.
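One way to check, sketched here as an assumption rather than something verified in this test: send a do-not-fragment ping just under the 9K MTU from the WS12R2 VM, and look at the interface MTU on the DSM side over SSH.

```powershell
# -f sets Don't Fragment, -l sets the ICMP payload size.
# 8972 = 9000-byte MTU minus 28 bytes of IP + ICMP headers; the address is a
# placeholder for the DSM VM. If this fails but a small ping works, jumbo
# frames are not active end to end.
ping -f -l 8972 192.168.1.50

# On the Xpenology side, `ifconfig eth0` over SSH should report MTU:9000
# if the setting took effect.
```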
 

Marsh

Moderator
> These numbers suggest, if I'm not mistaken, that full speed is achieved in Hyper-V in this test (more or less), right? Thanks for testing this out.

It is interesting that the network speed is more than 1 gigabit but tops out below roughly 2.5 gigabit.
 

Marsh

Moderator
I like the LGA1356 platform a lot. FYI, I have a few LGA2011 machines as well for comparison.

What I like about the LGA1356 platform is the price and low power.
You can find great deals on eBay and Newegg. I purchased 2 Supermicro dual-CPU boards from eBay for about $60 and $100,
3 ASUS Z9NA boards from Newegg open-box deals ranging from $190 to $240, and 4 Intel dual boards with Mellanox QSFP ranging from $100 to $175.
CPU prices are great compared to the E5-26xx.

I like how the E5-24xx runs cool and quiet; there is no need for high-speed fans to cool the system.
The ASUS Z9NA with dual E5-2430, 1 extra Intel E1G42ET, 1 hard drive, and 3 SSDs, running Windows 2012 R2, idles at around 61W.

2 x E5-2430 with 48GB memory = Cinebench 1478cb

The only downside is the number of RAM slots: 6 to 8 total. Most boards have 6 RAM slots (6 x 8GB) for 48GB of memory. 48GB works out fine for a home lab, not so much for a production system.

Now that I have had a chance to read the main site article regarding LGA1356, I agree wholeheartedly about the positive attributes of the platform. It was an article about the E5-2430L from last year that started me down the road of searching for low-cost parts.
 

Marsh

Moderator
Last one.
The reason for this experiment is that I want to virtualize one of my many Xpenology servers. This particular physical Xpenology box is my iSCSI boot server. It only holds boot OS images for my home lab (Windows OS images and such), not bulk storage, and it is limited to a 1Gb network card that supports iPXE boot.

For the above reason, I see no need to use a pass-through disk, so I'm trying out a Hyper-V dynamically expanding VHD file as the virtual disk.
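A sketch of that disk setup (the path, size, and VM name are illustrative):

```powershell
# Create a dynamically expanding VHDX; host space is only allocated as DSM writes data
New-VHD -Path "D:\VMs\Xpenology\data.vhdx" -SizeBytes 240GB -Dynamic

# Attach it to the Generation 1 VM's IDE controller
Add-VMHardDiskDrive -VMName "Xpenology" -ControllerType IDE -Path "D:\VMs\Xpenology\data.vhdx"
```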
 

Marsh

Moderator
Physical Windows 8.1 Hyper-V host:
Biostar Z77 motherboard with i7-3770K, 16GB memory

Xpenology Hyper-V VM:
2 vCPUs, 4GB memory, 240GB dynamically expanding VHDX virtual hard disk (SATA 3 port), 1Gb network interface

Test client Windows Server 2012 R2:
ASUS Z9NA-D6 with dual E5-2430 CPUs, 48GB memory, 1Gb network interface


Mapped a network drive over the 1Gb link.
I ran the ATTO disk benchmark because it is easy:
Transfer size 128KB to 8192KB
Total length 2GB (too small a file size to reflect the real world, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with queue depth 10

128KB write 117MB/s, read 116MB/s
256KB write 118MB/s, read 118MB/s
512KB write 118MB/s, read 118MB/s
1024KB write 118MB/s, read 118MB/s
2048KB write 118MB/s, read 118MB/s
4096KB write 118MB/s, read 118MB/s
8192KB write 118MB/s, read 118MB/s

Same mapped network drive over the 1Gb link.
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf:

| Test name | Avg resp. time (ms) | Avg IOs/sec | Avg MB/sec | CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 16.73 | 3552 | 111 | 0% |
| RealLife-60%Rand-65%Read | 5.50 | 10714 | 83 | 0% |
| Max Throughput-50%Read | 9.74 | 6088 | 190 | 0% |
| Random-8k-70%Read | 6.03 | 9754 | 76 | 0% |

No problem saturating the 1Gb network.
Off to move my iSCSI LUN over to the virtual Xpenology VM.
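For the move, the Windows-initiator side of pointing a client at the new virtual target looks roughly like this (the portal address is a placeholder, and the iPXE boot chain itself is configured separately):

```powershell
# Register the DSM VM as a target portal, then log in to its target persistently
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```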
 

Pri

Active Member
I've deployed this on my server as a VM. I'm using VMware Workstation 10 at the moment.

I'm quite impressed with its performance. It seems like XPEnology is here to stay. I think I'm most impressed by its low RAM usage.

Thanks for posting these benchmarks; it was due to this thread that I found out about XPEnology and decided to try it.
 

Marsh

Moderator
You are welcome.

I totally agree with you. I have spent thousands of hours over the past few years trying out various storage server software.
My goal is really simple: saturate a single 1Gb network link with decent IOPS.
Nothing managed it until Xpenology (I couldn't really justify a real Synology box). It takes me 5 minutes to install the software, and it runs and saturates a 1Gb link without even trying.
The software will run on any modern hardware, from a lowly $20 Intel Celeron chip on up.

Have fun.
 

Pri

Active Member
Before this I was virtualizing FreeNAS. I use Windows Server 2008 R2 at the moment (about to upgrade to 2012 R2 with new hardware), and my wife and I both use Mac laptops. Of course, Time Machine works best with the AFP protocol, which 2008 R2 doesn't support.

So I used FreeNAS just to get Time Machine compatibility for our notebook backups, but it absolutely gobbles memory. I gave it 2GB (I only have 24GB total in the server at the moment), and it constantly needed to be restarted because ZFS needs much more memory than 2GB.

After reading your thread here, I thought DSM might work better, since Synology boxes usually ship with 1GB to 2GB of memory and are designed to run on very low-performance NAS hardware. That sure is the case; it runs amazingly well. I'm seriously impressed.

I'm actually thinking about buying a Synology NAS now just to support the company. Many years ago I had a Thecus NAS (ARM-based, 512MB RAM, 4 disk drives) and it wasn't anything special; that experience really put me off pre-built NASes. But this Synology DSM operating system is so good that I feel like buying a new one.
 

Marsh

Moderator
I'd like to say a big thank you to all the folks who brought us the Xpenology software.
 

Patrick

Administrator
Imagine if they sold something like a $50 per-server license and you could virtualize it.
 

Pri

Active Member
I think if Synology offered DSM standalone for $200, with free upgrades for, say, three years, I'd buy it.

I suppose they don't want to do it in case it dilutes their hardware sales, but it's so good.
 

Marsh

Moderator
My virtualized Xpenology iSCSI boot server is working well.

Pros:
Backing up the Hyper-V Xpenology VM is nothing more than copying the VHDX files to my backup server, and restoring the entire Xpenology VM is a snap (see the sketch below).
Great for a lab host.
Xpenology iSCSI also has snapshot features.
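A minimal sketch of that backup flow, assuming the VM is shut down first (copying a VHDX out from under a running VM is not consistent; Export-VM is the tidier route since it also captures the VM configuration). Paths and the VM name are placeholders:

```powershell
# Option 1: stop the VM and copy its virtual disks to the backup server
Stop-VM -Name "Xpenology"
Copy-Item "D:\VMs\Xpenology\*.vhdx" -Destination "\\backup\vm-backups\Xpenology\"
Start-VM -Name "Xpenology"

# Option 2: export configuration and disks in one step
Export-VM -Name "Xpenology" -Path "\\backup\vm-backups\"
```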

Cons:
Xpenology iSCSI throughput is slower, around 50MB/s write and 60MB/s read. Xpenology iSCSI is known to be slower than NFS, and this setup also runs a file-based iSCSI LUN on top of a Hyper-V VHDX virtual disk.

Overall, it is really performing well as an iSCSI server.