Virtualization Infrastructure Rebuild


smokey7722

Member
So I am working on a full rebuild of all of my systems here. Putting aside the main file server (covered in another thread), I am trying to get the virtualization gear replaced now. Below are the three designs I am about 99% settled on at this point. Thankfully Supermicro refreshed their line, and the SC113M looks to be a much better fit for me than other options. I do need to determine how loud the four included 40mm fans are and, if needed, replace them with quieter ones. The chassis can take up to eight 40mm fans, so I could replace the four originals with eight quieter (less powerful) ones.

As the MB10-DS3 has SFP+ and gigabit RJ45 ports, I will connect one SFP+ port directly to the VM_SAN box below for data to ESX_Core, and one RJ45 to my network for management. ESX_Core, however, will have one SFP+ port tied up for the NFS data while the second uplinks to the network (no gigabit RJ45 in use). For now it is a small setup, and with ESX_Core plus one node the direct link to VM_SAN should be fine. At some point when I add a third node, I will go back and redesign the 10GbE connectivity.
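On the ESX_Core side, the vSwitch layout would be roughly the following. This is just a minimal sketch of what I have in mind; the vmnic numbering, the vmk interface, and the 10.10.10.x point-to-point addressing are placeholders, nothing is finalized:

```
# Dedicated vSwitch for NFS over the direct SFP+ link to VM_SAN
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch1

# VMkernel interface for NFS traffic on the point-to-point subnet
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.2 \
    --netmask=255.255.255.0 --type=static
```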

Build’s Name: VM_SAN
Operating System/ Storage Platform: FreeBSD 10.3
CPU: Xeon D-1541
Motherboard: Gigabyte MB10-DS3
Chassis: SC113MFAC2-605CB
Chassis Rails: MCP-290-00056-0N
Chassis Riser Card: RSC-RR1U-E16
Drives: 2x Seagate ST1000NX0323 1TB RAID1 | 4x Seagate ST91000640SS 1TB RAID10 | 1x ST91000640SS 1TB HSP
RAM: 2x M393A2K43BB1-CRC0Q (16GB modules, 32GB total)
Add-in Cards: LSI 9361-4i Raid Controller
Power Supply: Supermicro supplied in chassis

Usage Profile: VM Storage System
Other information: This system will act as the file server for VMs. The RAID1 array will hold the OS installation and the RAID10 will be used for storage. I have six of the ST91000640SS drives here, and while I could create a six-drive RAID10, I would be left with no spare drives. Since they are out of warranty, and since for now I don't need more than ~1.8TB and the performance should be fine for the load I will throw at it, I figured I would go the four-drive route. I will have one additional drive bay left over and could put in the last 1TB drive for a second hot spare, but I see no reason to put hours of use on it for likely very little gain. Other than that, it will run FreeBSD and serve the ESX storage up via NFS.
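The NFS piece should be simple. A rough sketch of what I have in mind, assuming the RAID10 volume ends up mounted at /vmstore and the direct link uses a 10.10.10.0/24 subnet (paths and addresses are placeholders):

```
# /etc/rc.conf on VM_SAN - enable the NFS server
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"

# /etc/exports - export the VM store to the point-to-point subnet only;
# ESX mounts NFSv3 as root, so maproot is needed
/vmstore -alldirs -maproot=root -network 10.10.10.0 -mask 255.255.255.0

# Then start it up:
service nfsd start

# And from the ESX_Core side, mount it as a datastore:
#   esxcli storage nfs add --host=10.10.10.1 --share=/vmstore --volume-name=VM_SAN
```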


Build’s Name: ESX_Core
Operating System/ Storage Platform: ESX 6.0
CPU: Xeon D-1541
Motherboard: Gigabyte MB10-DS3
Chassis: SC113MFAC2-605CB
Chassis Rails: MCP-290-00056-0N
Chassis Riser Card: RSC-RR1U-E16
Drives: 16GB USB Flash Drive | 1x Samsung 850 Pro 1TB | 4x Seagate ST2000NX0243 2TB RAID5 | 2x Samsung 850 Pro 1TB RAID1 | 1x Samsung 850 Pro 256GB
RAM: 2x M393A4K40BB1-CRC0Q (32GB modules, 64GB total for now)
Add-in Cards: LSI 9361-4i Raid Controller
Power Supply: Supermicro supplied in chassis

Usage Profile: Core ESX System
Other information: Primarily this system will run the core VMs. For now it will run all VMs until I build a second ESX chassis (same specs as this machine, just without all of the local hard drives). The local drives are purposed as follows:

16GB USB Flash Drive: ESX OS
1x Samsung 850 Pro 1TB: Passthrough Dedicated Live Surveillance Storage
4x Seagate ST2000NX0243 2TB RAID5: Passthrough Dedicated Archival Surveillance Storage
2x Samsung 850 Pro 1TB RAID1: Local VM Storage
1x Samsung 850 Pro 256GB: Passthrough Dedicated Disk for a VM

The reason for local storage on this box is to keep the hardware required for booting as small as possible. This system will run the backend VMs for core systems (virtualized router and PBX, network services, etc.) and needs to be online regardless of any other systems (obviously if the switches are offline then I am screwed, but if the VM_SAN goes down then whatever). A large number of VMs will be kept on local storage, though the remainder will be stored on the VM_SAN in preparation for adding another ESX chassis and migrating them to it.
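For the drives marked as passthrough above, the plan is to hand each disk whole to its VM as a physical-mode raw device mapping rather than formatting it VMFS. Roughly like this (the naa ID and the datastore/VM paths are placeholders, and I'd need to confirm ESX is happy doing RDM on these local disks):

```
# Find the device ID of the disk to pass through
ls /vmfs/devices/disks/

# Create a physical-mode RDM pointer file on the local VMFS volume, then
# attach the resulting vmdk to the NVR VM as an existing disk
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
    /vmfs/volumes/LocalVM/nvr/nvr-live-rdm.vmdk
```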


Build’s Name: ESX_n+1
Operating System/ Storage Platform: ESX 6.0
CPU: Xeon D-1541
Motherboard: Gigabyte MB10-DS3
Chassis: SC113MFAC2-605CB
Chassis Rails: MCP-290-00056-0N
Chassis Riser Card: RSC-RR1U-E16
Drives: 16GB USB Flash Drive
RAM: 2x M393A4K40BB1-CRC0Q (32GB modules, 64GB total for now)
Add-in Cards: None
Power Supply: Supermicro supplied in chassis

Usage Profile: ESX Node
Other information: All non-core VMs will generally be moved to this chassis, and additional RAM will be installed if needed. As load grows, additional nodes will be built and installed, and vMotion enabled to balance loads better.
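When that happens, tagging a VMkernel interface for vMotion is a one-liner on each host. A sketch; vmk1 (sitting on the 10GbE link) is a placeholder:

```
# Enable vMotion on the VMkernel interface that rides the 10GbE link
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```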
 

T_Minus

Build. Break. Fix. Repeat
Everywhere you have "Samsung 850 Pro", replace it with the new Samsung 863 if you want to stick with Samsung :)
If not, then go to an Intel S3500, S3700, etc... :)
 

j_h_o

Active Member
You're using hardware RAID, correct? Xeon D-1541 is probably overkill for your VM_SAN. Are there cheaper options available?
 

smokey7722

Member
Everywhere you have "Samsung 850 Pro", replace it with the new Samsung 863 if you want to stick with Samsung :)
If not, then go to an Intel S3500, S3700, etc... :)
Thanks, I'll take a look at the 863, though I am not really married to Samsung. I have one 1TB drive here already that I was planning on using, so I just need to purchase a pair, and I can go Intel if the price is comparable.

You're using hardware RAID, correct? Xeon D-1541 is probably overkill for your VM_SAN. Are there cheaper options available?
Well, there are and there aren't. The boards may be cheaper, but once I add in a low-TDP CPU, a cooler for it, and then a dual SFP+ 10GbE NIC, I am pretty much back up to the cost of the Gigabyte. I think it was within $70 when I priced things up that way?
 

T_Minus

Build. Break. Fix. Repeat
Thanks, I'll take a look at the 863, though I am not really married to Samsung. I have one 1TB drive here already that I was planning on using, so I just need to purchase a pair, and I can go Intel if the price is comparable.

Well, there are and there aren't. The boards may be cheaper, but once I add in a low-TDP CPU, a cooler for it, and then a dual SFP+ 10GbE NIC, I am pretty much back up to the cost of the Gigabyte. I think it was within $70 when I priced things up that way?
If you end up with the consumer Samsungs, be sure to over-provision them or the write performance drops significantly.

If you've got time you can snag the Intel S3700 400GB on eBay for $200 or less, and they'll run circles around the Samsungs for VM usage :)
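One way to over-provision on the hardware RAID setup you described is simply to build the virtual drive smaller than the raw array, leaving the rest unallocated as spare area for the SSDs. A rough storcli sketch, assuming the 9361 is controller 0; the enclosure:slot IDs and sizes are placeholders:

```
# Create a RAID1 virtual drive at roughly 80% of the raw capacity of two
# 1TB SSDs; the unallocated remainder acts as extra over-provisioning
storcli /c0 add vd type=raid1 size=800GB drives=252:2,252:3
```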
 

j_h_o

Active Member
Well, there are and there aren't. The boards may be cheaper, but once I add in a low-TDP CPU, a cooler for it, and then a dual SFP+ 10GbE NIC, I am pretty much back up to the cost of the Gigabyte. I think it was within $70 when I priced things up that way?
Really? The X10SDV-4C-7TP4F (for example) gives you more expansion ports and onboard SAS for less. It might be a worthwhile tradeoff for the extra expansion.
 

smokey7722

Member
If you end up with the consumer Samsungs, be sure to over-provision them or the write performance drops significantly.

If you've got time you can snag the Intel S3700 400GB on eBay for $200 or less, and they'll run circles around the Samsungs for VM usage :)
I need roughly 600GB of usable space for the existing VMs that will be stored on those drives, so a 400GB drive definitely won't cut it. It looks like the 800GB S3710 (the current model) is about 2.5 times the cost of those Samsungs, and even that is going to be tight with the existing storage needs in terms of room for growth. I don't mind spending more for the right drives if they will perform better, but going up to the 1.2TB drives would cost about as much as every other bit of hardware in the system to get two of them. I'll start searching, but finding something affordable could be an issue.
 

T_Minus

Build. Break. Fix. Repeat
I need roughly 600GB of usable space for the existing VMs that will be stored on those drives, so a 400GB drive definitely won't cut it. It looks like the 800GB S3710 (the current model) is about 2.5 times the cost of those Samsungs, and even that is going to be tight with the existing storage needs in terms of room for growth. I don't mind spending more for the right drives if they will perform better, but going up to the 1.2TB drives would cost about as much as every other bit of hardware in the system to get two of them. I'll start searching, but finding something affordable could be an issue.
I'm not sure of your number of VMs or your intended usage, but I wouldn't make a single drive for all my VMs the goal. I try to do a minimum of a four-drive RAID10, or two mirrored vdevs if on ZFS. 4x 400GB S3700, or HGST SLC or MLC, could run anywhere from $400 to $800 depending on the deal you find :) for 800GB usable in that case.

All Samsungs before the 863 (the latest) have latency issues IMHO (compared to Intel), and the consumer drives (even the older enterprise ones) have much worse IOPS than the Intel drives. You can find the Intel S3500 300GB for $90; get six of those for 900GB usable and you combine the performance of three drives... not as fast as the S3700 on writes, but much easier to find at cheap prices, and reads are awesome fast :)
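The capacity math there is just (number of drives ÷ 2) × drive size, since RAID10 stripes across mirrored pairs; writes see roughly half the spindles, reads can hit all of them. A quick sanity check in shell:

```
# RAID10 usable capacity = (number of drives / 2) * drive size
echo "6x 300GB: $(( 6 / 2 * 300 ))GB usable"   # 900GB
echo "4x 400GB: $(( 4 / 2 * 400 ))GB usable"   # 800GB
```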
 

j_h_o

Active Member
Agreed. And going down from the D-1540/1541 on the SAN node to a slightly slower CPU would probably get you some budget back towards better flash, which I think is a worthwhile option to consider.
 

smokey7722

Member
Intel DC S3710 1.2TB Internal 2.5" SSD, Enterprise-class SSDSC2BA012T4 would give you 1.2TB for $700. I'd also recommend that over the consumer Samsungs.
Ah nice - I was going by new prices from my distributor, as I never had good luck with warranties on stuff like this when purchased from eBay. How is Intel's warranty handled if there is an issue?

Really? The X10SDV-4C-7TP4F (for example) gives you more expansion ports and onboard SAS for less. It might be a worthwhile tradeoff for the extra expansion.
It's going in the SC113M chassis, so I will only ever have a single expansion slot at my disposal. Onboard SAS doesn't help me as it's SAS2, not SAS3, and I plan on doing hardware RAID, not ZFS. That's why I didn't consider that board (I know it was just an example though). I know the Gigabyte's CPU is overkill; I originally wanted to go with the MB10-DS4 and a D-1521, but my distributor and my Gigabyte partner rep both tell me that board is built to order, and I can't get just one without ordering a large quantity.

I'm not sure of your number of VMs or your intended usage, but I wouldn't make a single drive for all my VMs the goal. I try to do a minimum of a four-drive RAID10, or two mirrored vdevs if on ZFS. 4x 400GB S3700, or HGST SLC or MLC, could run anywhere from $400 to $800 depending on the deal you find :) for 800GB usable in that case.

All Samsungs before the 863 (the latest) have latency issues IMHO (compared to Intel), and the consumer drives (even the older enterprise ones) have much worse IOPS than the Intel drives. You can find the Intel S3500 300GB for $90; get six of those for 900GB usable and you combine the performance of three drives... not as fast as the S3700 on writes, but much easier to find at cheap prices.
I understand. Unfortunately, there are only two drive bays left on that chassis to use. The local VM storage will ultimately hold a virtualized router, a PBX, a backend VM (DHCP, DNS, very small MySQL databases, a Ubiquiti UniFi server, a few other services), a Nagios server, and a Windows VM to run the surveillance NVR (which writes its data to dedicated drives). So it's not a very intensive load for the core systems. There's a bunch more that will temporarily run on ESX_Core, but only for CPU/memory resources; their storage will be on the RAID10 array on the SAN.

I did just go back, and it looks like my estimate of a 600GB requirement was wrong; it's 400GB. So I could get away with 800GB S3700s or S3710s. Unfortunately, I still only have two drive bays to put them in. I could possibly pull the 256GB drive, as that would ultimately get moved to an ESX node anyway, but that would only give me three total drive bays I could use. The surveillance NVR takes up five of the eight possible bays.
 

Patrick

Administrator
Staff member
Are those the SC113s from the X8 deal? If so, I believe the riser slot does not line up with the mITX PCIe slot. I built a few machines with mITX and FlexATX motherboards two weeks ago, but I might be wrong. I can check tomorrow.
 

smokey7722

Member
Are those the SC113s from the X8 deal? If so, I believe the riser slot does not line up with the mITX PCIe slot. I built a few machines with mITX and FlexATX motherboards two weeks ago, but I might be wrong. I can check tomorrow.
I'm not sure what the X8 deal was. I was waiting on my rep to confirm the spacing tomorrow as well, since it looked like it might fit. If it doesn't, I would think a cable-based riser would work instead of the fixed circuit-board riser I have in the design now. I know when I was looking at using an iStarUSA chassis, they had specced one of their cable-based risers for this board, though that was for the M-140-ITX chassis, which was built specifically for mITX.

The DD-666-C5 and DD-666-C7 were the ones iStarUSA said would work with the MB10-DS3 (again, for their chassis). However, I would think the DD-666-C9 with its 9cm cable should be able to reach the chassis slot on the SC113M (assuming you are right and the fixed riser I had specced won't fit due to positioning).
 

smokey7722

Member
So originally I was planning to use a mITX chassis, and that's why I had chosen the MB10-DS3s. In theory, with this Supermicro SC113M chassis I am no longer limited to mITX, so I could go with FlexATX (thanks for pointing that out, everyone!). Assuming a riser and an I/O shield could be found to work, the following boards may fit:

X10SDV-2C-TP4F (D-1508)
X10SDV-4C+-TP4F (D-1518 w/active cooling)
X10SDV-TP8F (D-1518 w/passive cooling and 6 gig nics)

Of course, my distributor doesn't have stock of any of them (they don't have stock of the SC113M either).

Unfortunately for the ESX nodes, I don't see a board with SFP+ and a higher-end CPU like the D-1541 (the X10SDV-7TP4F is the closest, with a D-1537). The X10SDV-7TP4F is cheaper than the Gigabyte, though, so I could go that route for them, assuming the D-1537 performs well enough.
 

smokey7722

Member
MBD-X10SDV-7TP8F-O costs (a lot) more but is the D-1587 in FlexATX and meets your requirements.
Yup, it would; however, at $2,200 it's a bit out of the price range, and a bit overkill for ESX_Core for now, as in the end it will only run a small subset of the VMs here.

Edit: It looks like going with that board for ESX_Core and upgrading to 128GB of RAM would cost about the same as building ESX_Core plus one ESX node. So in theory it may actually be worth looking at going with the D-1587.
 

smokey7722

Member
Build’s Name: VM_SAN
Operating System/ Storage Platform: FreeBSD 10.3
CPU: Xeon D-1508
Motherboard: Supermicro X10SDV-2C-TP4F
Chassis: SC113MFAC2-605CB
Chassis Rails: MCP-290-00056-0N
Chassis Riser Card: must research to find appropriate riser
I/O Shield: must research to see if I need to do anything special
Drives:
2x Seagate ST1000NX0323 1TB RAID1 | 4x Seagate ST91000640SS 1TB RAID10 | 1x ST91000640SS 1TB HSP
RAM: 2x MEM-DR432L-HL01-ER21 (16GB modules, 32GB total) - 32GB is overkill, but it's only a $40 savings to drop to two 8GB modules
Add-in Cards: LSI 9361-4i Raid Controller
Power Supply: Supermicro supplied in chassis
Usage Profile: VM Storage System
  • 2x Seagate ST1000NX0323 1TB RAID1: FreeBSD OS
  • 4x Seagate ST91000640SS 1TB RAID10: VM Storage Array (drives on hand already) - at some point these will get replaced with SSDs, but for now they should work fine for the planned load
  • 1x Seagate ST91000640SS 1TB HSP: VM Storage Array HSP (drives onhand already)


Build’s Name: ESX_Core
Operating System/ Storage Platform: ESX 6.0
CPU: Xeon D-1587
Motherboard: Supermicro X10SDV-7TP8F
Chassis: SC113MFAC2-605CB
Chassis Rails: MCP-290-00056-0N
Chassis Riser Card: must research to find appropriate riser
I/O Shield: must research to see if I need to do anything special
Drives:
16GB USB Flash Drive | 1x Samsung 850 Pro 1TB | 4x Seagate ST2000NX0243 2TB RAID5 | 2x Intel S3710 1.2TB RAID1
RAM: 4x MEM-DR416L-HL02-ER21 (32GB modules, 128GB total for now)
Add-in Cards: LSI 9361-4i Raid Controller
Power Supply: Supermicro supplied in chassis
Usage Profile: Core ESX System
  • 16GB USB Flash Drive: ESX OS
  • 1x Samsung 850 Pro 1TB (this board supports M.2, so if I can find a decent 800GB-1TB M.2 drive I will swap this): Passthrough Dedicated Live Surveillance Storage
  • 4x Seagate ST2000NX0243 2TB RAID5: Passthrough Dedicated Archival Surveillance Storage
  • 2x Intel S3710 1.2TB RAID1: Local VM Storage

Based on a bunch of the discussion, here is a slight rework of the systems... OK, a big rework :) For now I am dropping the design of the ESX nodes, as ESX_Core has been upgraded enough that I won't need to worry about expansion for a bit. There are still a few points I need to figure out:
  1. What riser card is needed for these FlexATX boards in the SC113M chassis
  2. Do I need to do anything for the I/O shield?
  3. Need to find a good 800GB-1TB M.2 drive to be passed into a single VM. If I can do this, I free up another drive bay and could in theory create a RAID10 array instead of a mirror for the local VM storage (that would require purchasing two more drives, though I could also decrease their sizes to save money, as I only need about 800GB usable). That 1.2TB S3710 auction, though, is priced lower than the 800GB drives I see, so I might still end up going with 1.2TB drives if it comes to that. *Update: Going to forget M.2 and, when needed, will put an S3710 1.2TB drive inside the chassis on a motherboard SATA port.
  4. Ingram Micro (my distributor) has basically no stock of any of the Supermicro equipment. They might be able to drop ship from Supermicro, but when I spoke with them they insisted I pay for the drop shipping even though the order is well above their free-shipping threshold. I did look at WiredZone, and they seem to have some of this stuff in stock - are there any other vendors people have been using that are priced reasonably? I have other distributors, but they either don't carry Supermicro or their prices are so outrageous I would not even consider them.
 

smokey7722

Member
Regarding the M.2, I think I am just going to secure another Intel 1.2TB inside the case and use a motherboard port when needed, so that knocks #3 off the list of concerns. I just need to figure out the riser card and I/O shield now.

@Patrick, any chance you have experience with the SC113M chassis and this board yet? I saw the review of the board on STH but haven't seen anything on the chassis. I'm going to send an email off to my Supermicro rep in hopes they can provide information, but in my experience that usually takes a few days to get a response.
 

smokey7722

Member
Spoke to my rep today and sent the details of the builds over to him to see if he can provide additional information. I'll keep this thread updated as I hear back.

Update 7/21: I spoke with my rep on Monday and with an engineer yesterday. They are sending my chassis/motherboard combination up to a team to validate it and provide the relevant requirements (I/O shield, riser, cooling changes, etc.) shortly.
 