Hypernator the Everlasting Build


Wixner

Member
Feb 20, 2013
Build’s Name: Hypernator
Operating System/ Storage Platform: Windows Server 2016 TP3 Core
CPU: Intel Xeon E5-2620v3
Motherboard: Supermicro X10SRi-F
Chassis: Supermicro SC216E16-R1200LPB
Drives:
  • 2 × Intel 520 120GB SSD in RAID1 - Hypernator boot
  • (?) 4 × Samsung 840 EVO 250GB in RAID0 - Storage for VM boot volumes ("C:\")
  • (?) 6 × Samsung PM830 120GB in RAID0 - Storage for VM data volumes ("D:\")
  • (?) 1 × OCZ Vertex 3 - ISO storage
RAM: 64GiB Samsung DDR4 ECC 2133MHz
Add-in Cards:
Power Supply: Redundant PWS-1K21P-1R 1200W 80+ Gold Certified
Other Bits: Eaton 5P 850iR UPS

Usage Profile: Heavy Virtualization


Other information…

Backlash #1
I could have sworn that Windows Server 2016 TP3 had the capability to virtualize Windows Server 2012 R2 with Hyper-V, but I was wrong. This is a major deal-breaker, as I need to be able to virtualize an entire Windows Server 2012 R2 environment, including the hypervisors.

Backlash #2
The power/activity LEDs on some of my SSDs are not working properly...

Option #1.1
VMware vSphere - Not my hypervisor of choice, but it is capable of nested Hyper-V and has good passthrough options. Installable on USB. A pain in the nether regions to manage.

Option #1.2
Proxmox
- Good management software out of the box. ZFS support. Not sure about passthrough and nested virtualization, though.

Option #1.3
Windows Server 2016 TP4
- Not that I've got high hopes, but perhaps TP4 has better support for nested virtualization.
 

Wixner

Member
Feb 20, 2013
There's much more to be documented, like the use of Deduplication (not available in Hyper-V Server). Also, despite several tries, I haven't been able to boot anything MS-related from USB.
 

Wixner

Member
Feb 20, 2013
"RAID-0 is probably not for anything serious - this server is purely for testing, I would guess."

"Don't get your hopes up with that RAID card; 'basic card' is the best way to describe it."
Indeed - nothing on this server will be in production.
I'm having second thoughts about the RAID controller as well and might turn to eBay for a 9211-8i or similar and go down the Storage Spaces lane.
 

Wixner

Member
Feb 20, 2013
"What about an IBM RAID 700 or 710? LSI 9260? This is a killer deal for a nice card:
LSI MegaRAID 9261-8i 8-port PCI-E 6Gb/s SATA/SAS RAID Controller Card"
That is (almost) the same RAID controller I'm using at the moment: the Intel RS2BL080 is an LSI 9260-8i, and the difference is the placement of the SAS connectors (top-mounted on the 9260 vs. rear-facing on the 9261).
 

Lost-Benji

Member
Jan 21, 2013
The arse end of the planet
The RAID card is a true RAID card rather than an HBA like some of the Intel cards out there.
Intel® RAID Controller RS2BL080 Specifications
It does an OK job and will be fine for most purposes. They are LSI 2108-based and can be cross-flashed if need be. The small onboard cache may impact things if you plan to flog it, though. These cards also chew through BBUs, and I am sick to death of having to swap them out at only 12 months old.
TRIM is not supported either.
 

Marsh

Moderator
May 12, 2013
My favorite LSI card is the 9217/9207 IT-mode HBA, for Windows Storage Spaces as well as other storage OSes.
LSI SAS2308 chip, PCIe 3.0 bus

I always buy an LSI 9217/9207 card when the price is less than $90, even if I have no immediate use for it at the moment.

I've been looking for an external version, the LSI 9207-8e, for a while. This listing popped up on eBay for $79, so I hit the BIN button.
LSI - SAS9207-8e 8-port 6GB/s SATA+SAS PCI-e - Host Bus Adapter
 

Wixner

Member
Feb 20, 2013
I've had my eye on the 9217 for a while, but the cheapest I can find is $190 from China, and that might as well be a counterfeit product.

In dual-link mode, the 9211-8i can manage (2 links × 4 lanes × 6Gb/s) 48Gbps of theoretical throughput, and its PCIe 2.0 x8 interface has a theoretical throughput of (8 × 500MB/s) 4000MB/s, so there is no real need to step up to the 9217 and its PCIe 3.0 interface.
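For what it's worth, here is a minimal back-of-the-envelope sketch of that comparison in Python. The 6Gb/s per SAS lane and ~500MB/s per PCIe 2.0 lane figures come straight from the numbers above; the only extra factor added is the 8b/10b line coding that SAS 6Gb/s uses on the wire.

```python
# Back-of-the-envelope comparison for an LSI 9211-8i in dual-link mode,
# using the figures from the post above. Theoretical numbers only.

SAS_LANE_GBPS = 6            # SAS2 line rate per lane (Gb/s)
LANES_PER_WIDE_LINK = 4      # each SFF-8087 port is a 4-lane wide link
WIDE_LINKS = 2               # dual-link: both ports in use

sas_raw_gbps = SAS_LANE_GBPS * LANES_PER_WIDE_LINK * WIDE_LINKS    # 48 Gb/s
# SAS 6Gb/s uses 8b/10b line coding: 10 bits on the wire per data byte
sas_usable_mb_s = sas_raw_gbps * 1000 / 10                         # ~4800 MB/s

PCIE2_MB_S_PER_LANE = 500    # usable throughput per PCIe 2.0 lane
pcie_mb_s = PCIE2_MB_S_PER_LANE * 8                                # 4000 MB/s for x8

print(f"SAS side : {sas_raw_gbps} Gb/s raw, ~{sas_usable_mb_s:.0f} MB/s usable")
print(f"PCIe side: {pcie_mb_s} MB/s (PCIe 2.0 x8)")
print("Limiting side:", "PCIe 2.0 x8" if pcie_mb_s < sas_usable_mb_s else "SAS links")
```

Either way, a handful of SATA SSDs won't saturate the PCIe 2.0 x8 slot, which is the point.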
 

Wixner

Member
Feb 20, 2013
Seems like a nice setup - a little bit over the top for me at the moment though.

I have spent way too much money in a short period of time on this build, so I'll get rid of my RS2BL080, settle for a reverse SFF-8087-to-SATA cable and use the internal SATA controller until I can find an HBA at a decent price. With Storage Spaces, moving to an HBA later shouldn't (knock on wood) be any problem.

I wonder if the release of Windows Server 2016 TP4 brings better nested virtualization support...
 

Wixner

Member
Feb 20, 2013
As you can see, this build is somewhat quiet at the moment: I'm still waiting on a reverse SFF-8087 breakout cable, and the delivery from Germany to Sweden seems rather slow.
 

Wixner

Member
Feb 20, 2013
I just received my reverse SFF-8087 breakout cables, but they do not seem to work with SAS expanders - that's another $30 wasted on this build.
 

TuxDude

Well-Known Member
Sep 17, 2011
Breakout cables and SAS expanders work fine together (assuming the correct direction of breakout cable depending on the setup).

SATA controllers (onboard or not) and SAS expanders do not work together.
 

Naeblis

Active Member
Oct 22, 2015
Folsom, CA
Late to join the party

a) Why nested virtualization?
b) I have always thought, and it has been my experience, that RAID 0 does not need a RAID card.
The board has support for 10 SATA drives, and even with 10 SSDs you will not break 6 GB/s (a quick sketch of the numbers follows after this list).
Get a dual-M.2 card that has 2 SATA ports and use that as a Fusion-io-style drive for the storage.

c) I don't agree with the poster above when it comes to SSDs. We have tested 4 different expanders and, even when not expanding, the IO of the SSDs drops by about 25%; it was even more when using multiple SSDs. I went as far as putting 4 ports in, 1 port out, and 1 drive in the cage. Our results were 550 MB/s without the expander and 425 MB/s with the expander. With 12 SSDs (4 ports out) and 4 ports in, it dropped to 325 MB/s* per drive. I will put a big asterisk by those numbers, because this was before we realized that CDM was not cut out to give accurate numbers above 5 GB/s. I will create a thread on this using the 36-port expander we have left.

d) In response to your backlash #2: using the IcyDock 6- or 8-drive cage for SSDs, the lights worked great. In our Intel servers it is hit or miss.
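Here is the quick sketch of the numbers behind points b) and c). The ~550 MB/s per SSD and the measured per-drive figures behind the expander are the ones quoted above; the rest is just the arithmetic.

```python
# Point b): aggregate throughput of SSDs on the board's onboard SATA ports.
# Point c): per-drive drop measured behind a SAS expander (figures from this post).

ONBOARD_SATA_PORTS = 10      # X10SRi-F onboard SATA ports
PER_SSD_MBPS = 550           # typical SATA SSD sequential throughput (from the post)

aggregate_mbps = ONBOARD_SATA_PORTS * PER_SSD_MBPS
print(f"10 SSDs on onboard SATA: ~{aggregate_mbps} MB/s "
      f"({aggregate_mbps / 1000:.1f} GB/s aggregate, before any chipset uplink limits)")

no_expander = 550
behind_expander_single = 425
behind_expander_twelve = 325

for label, value in [("1 SSD behind the expander", behind_expander_single),
                     ("12 SSDs behind the expander", behind_expander_twelve)]:
    drop = (no_expander - value) / no_expander * 100
    print(f"{label}: {value} MB/s per drive (~{drop:.0f}% below the {no_expander} MB/s baseline)")
```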

General comment: I have for years had a Hyper-V host on a laptop, set up with deduplication, running up to 27 VMs on 4 SSDs - 1 for OS boot and programs, with the other 3 disks in RAID 0 - without issues. 4 years ago I used 250GB SSDs in my M6500; in my M6700 I used 500GB SSDs. On the M6500 I did backups to a USB drive, and on the M6700 I dedicated one of the 4 SSDs as storage for my backups: 1 boot, 2 in RAID 0 and 1 for storage.

Frankly, if you are willing to dedup your VHDs, you can get by with much less SSD space than you think. The VHDs of the VMs dedup down massively. Even with 64 GB of RAM and assigning 1-2 GB per VM, I doubt you could even fill up 1 TB of VHDs. That would allow you to use the 6-disk RAID 0 for your backups, programs and file storage.
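As a rough illustration of that estimate: the VM count below follows from 64 GB of host RAM at 1-2 GB per VM, while the per-VM VHD size and the dedup savings ratio are purely assumed for the sake of the arithmetic.

```python
# Rough sizing sketch for the dedup argument above.
# VM count comes from the post (64 GB RAM, 1-2 GB per VM);
# VHD size and dedup savings are illustrative assumptions only.

HOST_RAM_GB = 64
RAM_PER_VM_GB = 2                 # upper end of the 1-2 GB range
vm_count = HOST_RAM_GB // RAM_PER_VM_GB        # ~32 VMs

VHD_SIZE_GB = 40                  # assumed size of each Windows Server VHD (hypothetical)
DEDUP_SAVINGS = 0.8               # assumed 80% savings on near-identical OS VHDs (hypothetical)

logical_tb = vm_count * VHD_SIZE_GB / 1024
physical_tb = logical_tb * (1 - DEDUP_SAVINGS)

print(f"{vm_count} VMs x {VHD_SIZE_GB} GB = {logical_tb:.2f} TB of logical VHD data")
print(f"At ~{DEDUP_SAVINGS:.0%} dedup savings: ~{physical_tb:.2f} TB actually on disk")
```

With numbers anywhere in that ballpark, the 4-drive RAID 0 easily holds all the OS VHDs after dedup.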
 
