Newb Build


Gadgetguru

Member
Dec 17, 2018
Build’s Name: Newb Build
Operating System/Storage Platform: Windows Server 2019
CPU: 2x Xeon E5520 (upgraded to 2x Xeon X5680)
Motherboard: SuperMicro X8DTI-F (replaced with SuperMicro X8DTH-6F)
Chassis: Thermaltake Core x9
Drives: 2x Micron S630DC 800GB, 2x WD 10TB, 1x Seagate 5TB, 2x 3TB, 1x 2TB
RAM: 48GB (later 96GB) Hynix PC3-10600 DDR3-1333MHz ECC Registered
Add-in Cards: Intel RS3DC040 (LSI 3108) RAID Controller 12 Gb/s SAS/SATA PCIe x8 Gen3
Lenovo (Avago) 530-81 (LSI SAS3408) RAID Controller 12 Gb/s in IT mode
Power Supply: Seasonic Prime Ultra Titanium 750W
Other Bits: 2x Deepcool Gammaxx 400 CPU coolers w/ 120mm fans (added 2x of the Arctic F12 fans for a push/pull config)
2x Arctic P12 PWM PST 120mm pressure fans (intake)
4x Arctic F12 PWM PST 120mm airflow fans (exhaust)
2x Arctic P14 PWM PST 140mm pressure fans (intake)
2x Thermaltake Pure 20 200mm airflow fans (intake)

Usage Profile: Plex, VMs, Oracle/Apex, ???

Other information… I just discovered this site about a week ago and have been reading a lot. I'm not an IT professional, just a computer geek since I got my Commodore 16. So please be gentle, I'm definitely a newb to the server world and trying to learn!

I have been running a Plex Media Server for quite some time. I decided to build a dedicated computer for it and install Windows Server, just for fun and to learn. I used old parts I had lying around, including an AMD A10 and 8GB of RAM. Hard drives installed are 1x 5TB, 2x 3TB, and 1x 2TB. I had quite a bit to learn and a lot of trial and error with Windows Server. I also set it up to back up all of the other computers in the house, created a shared drive, and ran some VMs.

While that system has been good, it's not great. I also do not have any redundancy or backups of anything. So I've decided to build a "real" server for my home use, mostly just because I want to try it, tinker with it, and, again, see what I can learn. I've started to piece it together and have already pulled the trigger on the main components. Then I found this site and thought I could get some advice before I go any further. I was going to do it as cheaply as I could but knew I wouldn't be completely happy with that, which means I'll probably end up spreading the purchases over the course of a month or two.

I purchased the motherboard, RAM, and processors with heatsinks together. I think I'm probably going to swap out the E5520s for two X5690s if/when I can find a good deal on them. The heatsinks are SuperMicro passive units and I'm looking for a solution to add fans. Maybe just screw some 60mm fans onto them? I might go nuts too and add another 48GB of RAM. UPDATE: Decided to go with X5680s; the difference in performance between the X5690s and X5680s wasn't enough to justify the $100 price difference. Found a deal on CPU coolers, four-heatpipe units with 120mm PWM fans and a clip to add a second 120mm for push/pull. Added another 48GB of RAM.

At first, I was going to stick with the 6x SATA II ports on the board and the built-in RAID but changed my mind. I looked at 3.5" SAS HDDs and realized they weren't any more expensive than the SATA drives, so I decided I wanted to try SAS. I looked at both 6Gb/s and 12Gb/s controller cards and ended up buying a new Intel RS3DC040 with the LSI 3108 chip. I did a lot of research but honestly don't know if that was a good choice. Hopefully it was! Guess I'll still need cables and a battery. UPDATE: Got in late on the deal for Micron SSDs but picked up two of the 800GB SAS drives. Looking at getting a 3008-based HBA and selling the 3108 RAID card. Picked up an Amphenol cable.

As far as a case, I might use the Fractal Design Define XL R2 that currently houses my main system (i7-5820K, 16GB, GTX 970, 1x 512GB 850 Pro NVMe, 3x SSDs, AIO water cooling), or I have been looking at the Thermaltake Core X9. Both can fit the 12" x 13" motherboard. UPDATE: Got the Thermaltake Core X9 after the price dropped $20. It's a massive and highly modular case.

Power supply, I haven't even begun to look at. UPDATE: Chose a Seasonic Titanium 750W fully modular PSU.

Drives, I'm not sure about. I was looking at 4x 6TB Seagate Enterprise 12Gb/s SAS drives (ST6000NM0034) or the 256MB-cache equivalent. I know in that case I would be running RAID 1 and not RAID 5, though; but that would give me 12TB of backed-up data (assuming that is possible with the controller card; I have to learn how to use it and build arrays). Maybe throw my current drives in as well, but I'm not sure if that would just end up slowing things down overall. Possibly an SSD boot drive? UPDATE: Picked up the two aforementioned 800GB Micron SSDs and then bought and shucked two WD 10TB Easystores.
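
The rough usable-capacity math I keep running for the different RAID levels looks something like this (just a quick sketch in Python; the drive count and size are whatever I actually end up buying):

Code:
# Quick sketch: usable capacity for the RAID levels I'm weighing.
def usable_tb(level, drives, size_tb):
    if level == "RAID 0":                 # striping, no redundancy
        return drives * size_tb
    if level in ("RAID 1", "RAID 10"):    # mirrored, half the raw space
        return drives * size_tb / 2
    if level == "RAID 5":                 # one drive's worth of parity
        return (drives - 1) * size_tb
    raise ValueError("unknown level: " + level)

for level in ("RAID 10", "RAID 5"):
    print(level, usable_tb(level, drives=4, size_tb=6), "TB usable")
# RAID 10 -> 12.0 TB usable, RAID 5 -> 18.0 TB usable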

So other than a Plex server, I'm really not quite sure what I'll be using this system for. I know it is overkill, but I'm OK with that. I'll probably set it up to do backups again. I ran some VMs on my current "server" (one at a time) to play with Linux distros, and they were pretty slow with only the A10 and 8GB to work with. I need the ability to run Windows programs on it, but I've just discovered Unraid. However, since I bought the controller card, I don't think I'll be going that route! Also interested in multiseat configuration, which I just started reading about. UPDATE: Still unsure about this. Leaning towards Windows Server 2019, SnapRAID, and VMs. Maybe play with Oracle and APEX, of course Plex, and client backups.

Sorry for the long read. Questions, comments, advice are welcome!
 

Gadgetguru

Member
Dec 17, 2018
Updated original post. Fired it up for a quick test of the MB, RAM, and CPUs. Slowly getting the last of my parts and hopefully will find some time in the next week or two to get it all assembled.
 

BennyT

Active Member
Dec 1, 2018
Very cool plans. I'm getting into APEX development too after I get my system built. I want to design a few custom screens for Oracle EBS 12.2.7.
 

Gadgetguru

Member
Dec 17, 2018
Very cool plans. I'm getting into APEX development too after I get my system built. I want to design a few custom screens for Oracle EBS 12.2.7.
I don't know much about it. I was sent to a class for APEX and even though it was way above my head, I enjoyed it. We recently hired a programmer, which has renewed my interest in it. I've been working with him to improve some applications that I am the end user for.
 

Gadgetguru

Member
Dec 17, 2018
Made some progress today with the system. Installed the X5680s and new heatsinks, only to find that the system would not POST. After swapping processors a few times with no joy, I ended up making a bootable DOS thumb drive and flashing the BIOS. It was on 2009 firmware and the newest version is from July 2018. Once I did that, the X5680s booted right up. The new heatsinks are amazing! With the passive SM heatsinks and the E5520s I could have fried bacon on them five minutes after booting up. The new ones with the X5680s don't even get warm; it's crazy.

I also put in the additional RAM I purchased, but it looks like I have two bad sticks somewhere. Now to figure that out and determine if I got screwed on that eBay deal.

Still trying to get a deal on a 3008 card and looking at a 10GbE NIC and an SFP+ cable. I don't think the CFO is gonna go for that as I am already waaaaay over budget. :) Can't help myself, love this stuff.
 

Gadgetguru

Member
Dec 17, 2018
I don't think the RAM issue is the RAM itself. It's showing 80GB instead of 96GB, so maybe two bad slots (12x 8GB). Started the process of elimination; will check motherboard standoffs, CPUs, Memtest, etc.
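
Since the Linux live USB boots fine, my plan is to check which DIMM slots the board actually detects from there, with something like this (just a sketch; it assumes dmidecode is on the live stick and that I run it as root):

Code:
# Sketch: list which DIMM slots report a module, using dmidecode type 17.
# Assumes this runs as root on the Linux live USB with dmidecode installed.
import re
import subprocess

out = subprocess.run(["dmidecode", "--type", "17"],
                     capture_output=True, text=True, check=True).stdout

total_gb = 0.0
for block in out.split("\n\n"):
    locator = re.search(r"^\s*Locator:\s*(.+)$", block, re.M)
    size = re.search(r"^\s*Size:\s*(.+)$", block, re.M)
    if not (locator and size):
        continue
    print(locator.group(1), "->", size.group(1))
    m = re.match(r"(\d+)\s*(MB|GB)", size.group(1))
    if m:
        total_gb += int(m.group(1)) / (1024 if m.group(2) == "MB" else 1)

print("Total detected: %.0f GB" % total_gb)

If only ten of the twelve slots show a module, that would point me at the slots (or socket) rather than the sticks themselves.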

I can boot Linux off a live USB stick no problem. When I try to boot Windows off an HDD that has it installed, it gets to the Windows logo and freezes. Plugged in a USB stick to install Windows on a bare drive and got the same thing. Maybe it's related to the RAM issue and Windows is pickier than Linux?
 

Gadgetguru

Member
Dec 17, 2018
It appears there were one or two bent pins on the CPU1 socket. I can't confirm if it was like that or if I did it; 90% chance it was my fault. Learned some valuable lessons about testing everything (how I should test), step by step, one thing at a time, as well as about LGA sockets. I did mess around a bit trying to fix them and just made it worse. Used my phone camera and zoom, which helped a lot, but I need a magnifying glass to be hands-free and try to fix the pins. Sigh. I accepted my error :( and ordered a "new" SuperMicro X8DTH-6F. It has two SAS2 ports on board, which is a plus, so I also ordered an LSI/3Ware 8087 to 8042 cable.
Running a single CPU and six sticks of RAM, I was able to get the USB thumb drive with the Windows Server 2019 installer to work. It sees my RAID 0 800GB Micron SSDs and RAID 1 10TB HDDs, which I was happy about. Now just waiting for a delivery...
 

Gadgetguru

Member
Dec 17, 2018
SITREP

Fast forward through all the testing... Got it all put together (again) with the "new" motherboard and it is running well and showing 96GB of RAM. :) Installed Server 2019 on the 800GB SSDs in a RAID 0. No issues, everything went smoothly. Also set up NIC teaming and have two Ethernet cables running to it since my router does port aggregation.

Waiting on the 9440-8i/Lenovo 530-8i from the Great Deals post to arrive later this week. Then I'll decide how I want to configure the drives, whether I'm going to stick with Windows Server, etc. Right now I have the two SSDs and two 10TBs on the RAID (4i) card and the other four HDDs connected to one of the onboard SAS2 connections (the mobo has two SAS2 and six SATA2).

Until then, I'm just going to mess around with it a bit. Ran CrystalDiskInfo but it doesn't see the SSDs, presumably because of the RAID card. Guessing if I flash IT mode on the 9440-8i it will see them? That's going to be something new for me as well, but I see there is a thread on doing that. Gonna try the Storage Executive software next.
 

Gadgetguru

Member
Dec 17, 2018
Got the 530-8i, and after a lot of research and hours of trial and error, I was able to get it flashed to IT mode; it is showing up as an HBA 9400-8i in LSI Storage Authority.

I still can't decide how to configure this server. Initially, I was just going to leave it with my two Micron SSDs in a RAID 0 and the two 10TB in a RAID 1. I'd install Windows Server on the SSDs. I was going to use the Intel RAID card.

Then, I saw that a lot of people don't bother with a RAID controller anymore and flash the cards to IT mode. So I got the 530-8i to flash to IT mode and to be able to use an NVMe drive should I want to at some point.

Looked at ZFS. I don't think that is the way I want to go. After reading some things, it doesn't seem like it is for me.
What is ZFS? Why are People Crazy About it?
The 'hidden' cost of using ZFS for your home NAS

Looked at Unraid. It appears drives need to already be formatted in ReiserFS, XFS, or Btrfs. I suppose I could get around this. It doesn't provide any integrity checksumming without using a third-party utility.

SnapRAID looks promising. It can use already-filled NTFS drives and provides checksumming.
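
If I do go the SnapRAID route, I'm picturing a small nightly task along these lines (just a sketch; it assumes snapraid is installed and on the PATH, and that snapraid.conf already points at the data and parity drives):

Code:
# Sketch of a nightly SnapRAID job: sync new files into parity, then scrub a
# small slice of the array to catch silent corruption over time.
# Assumes snapraid is installed and snapraid.conf is already configured.
import subprocess
import sys

def run(args):
    print("running:", " ".join(args))
    return subprocess.run(args).returncode

if run(["snapraid", "sync"]) != 0:
    sys.exit("sync failed; skipping scrub")

run(["snapraid", "scrub", "-p", "5"])  # scrub ~5% of the array each run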

Other than those two differences, am I correct that Unraid is an operating system whereas SnapRAID is a program?

It's all still a bit confusing to me.
 

Gadgetguru

Member
Dec 17, 2018
Thanks to @BennyT's thread on his server build https://forums.servethehome.com/ind...build-to-host-my-oracle-apps-databases.22870/ I'm trying out ESXi on my server.

In the past, I've installed Windows and then used VirtualBox to create VMs. Utilizing ESXi to create VMs and remotely manage everything is pretty cool and seems like it might be a better way to go. I'm still playing around with it and learning how it all works.
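
On the remote-management side, I found you can even poke at the host from Python with the pyVmomi library; something like this looks like it would list the VMs and their power state (sketch only; the host name and credentials are placeholders for my setup):

Code:
# Sketch: list VMs on the ESXi host with pyVmomi (pip install pyvmomi).
# Host, user, and password are placeholders for my lab box.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home ESXi box has a self-signed cert
si = SmartConnect(host="esxi.home.lan", user="root", pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)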

I don't want to muck up Benny's thread anymore, so I'll continue my thoughts here...

I'm trying to figure out how to access other physical drives inside a VM. With VirtualBox on a Windows host, I would share folders to be able to access files on both the host and guest, and that was enough for what I needed. Due to the limited hardware resources I had, I only ran one VM at a time. Now I should be able to run more than one. Thanks to @Rand__ for letting me know I won't be able to have full access from multiple VMs to a physical disk at the same time; however, I could "attach it to one VM and then share out via (esxi internal) network (smb or whatever you prefer)". I'm thinking I should be able to work with this limitation, especially if there is a way to easily mount and unmount a physical drive within a VM. I don't know if that is possible. It doesn't appear accessing a physical drive is easy at all: How to passthrough SATA drives directly on VMWare ESXI 6.5 as RDMs

I'm considering using a 64GB SSD to install ESXi on, creating a RAID 0 with the two Micron SSDs for VMs, and creating a RAID 1 with the two 10TB drives for media. That still leaves me with one 5TB, two 3TBs, and one 2TB. Additionally, I have a 1TB external drive. I'd run one Windows VM with Plex with access to the 10TB RAID, as long as my devices (TV) can see and access it. Then, say, a second VM with access to the 5TB. If I could then unmount the 5TB from VM2 and mount it in VM1 to access the files on it, that would work. Maybe wishful thinking?

Still bouncing around different ways to set this server up. Not sure what I'll end up with, but it's giving me the opportunity to try new things and learn, which is what I built this thing for. Thanks for everyone's help!
 

Rand__

Well-Known Member
Mar 6, 2014
Well, you will have to add the drives to ESXi, then create a datastore on top, and then add a virtual disk (vmdk) to a VM.
You can attach the existing vmdk to another VM also.
Potentially there is also a way to grant read access to a vmdk that another VM has write access to, but I've honestly never looked into that.

You also could set up a NAS VM with unRaid or similar (which can work with different-sized drives afaik) and then hand that out to VM1/2... in the end it totally depends on what you want to do (e.g. why two VMs with separate disks - just because you don't know better, because that's the way you always did it, or maybe there is a very valid reason behind it - we don't know ;))

Edit: oops, I only saw the very last post where you pinged me, so I did not read all the info above it, but I see unRaid was mentioned before ;)

Edit2: After reading the thread I am still unclear what the result is expected to look like with the ESXi setup and Plex...
 

pricklypunter

Well-Known Member
Nov 10, 2015
@Gadgetguru You need to step back, you are getting over excited :D

Break it down into manageable, logical blocks, and focus on each one at a time...

You need storage for your VM's and a boot method. Sort that out first :)

Lots of folks just boot off a small SD card or USB thumb drive, but you can, if you wish, also install ESXi onto a dedicated datastore disk and boot from there. It's also perfectly possible to actually use your storage solution to provide your datastore space. I personally opted for the USB stick method, with a separate SSD used for my datastore to hold my VM's, this is only one example though, research what method works best for you and go with that.

Once you have ESXi up and singing, then you will be looking at creating a storage method that you can easily share out among your various VM's etc. There are a bazillion ways to achieve this as well, but again, break it down into easily digestible chunks...

Settle on an OS that will be handling your storage, then look at methods to provide some redundancy within your chosen OS. Remember, redundancy is not a backup solution, it just minimizes downtime until you can fix what's broken. This can be something already built in, or a third party solution. Then once you have that figured out, look at what method you will use to easily share that storage out in such a way that each of your VM's can access it and use it. You'll want to be flexible here, often the simplest solutions are the most complex to get your head round, but don't be put off, keep plugging away at it, you will get that "aha" moment...

I have deliberately not mentioned any particular method, or OS etc, because you need to do your own research and decide what works best for you, anything I say might colour your judgement.
 

Gadgetguru

Member
Dec 17, 2018
Well, you will have to add the drives to ESXi, then create a datastore on top, and then add a virtual disk (vmdk) to a VM.
You can attach the existing vmdk to another VM also.
Potentially there is also a way to grant read access to a vmdk that another VM has write access to, but I've honestly never looked into that.

You also could set up a NAS VM with unRaid or similar (which can work with different-sized drives afaik) and then hand that out to VM1/2... in the end it totally depends on what you want to do (e.g. why two VMs with separate disks - just because you don't know better, because that's the way you always did it, or maybe there is a very valid reason behind it - we don't know ;))

Edit: oops, I only saw the very last post where you pinged me, so I did not read all the info above it, but I see unRaid was mentioned before ;)

Edit2: After reading the thread I am still unclear what the result is expected to look like with the ESXi setup and Plex...
All my physical drives show up in ESXi. So you are saying I have to make each physical drive a datastore? I read in the documentation that if I make a whole drive a datastore, it erases everything on the drive.

I guess my thought is to have a VM (a NAS VM) running Plex (regardless of OS) and just leave that alone to run (with the media on the 10TB array). Then create other VMs to use or play with (a Windows VM, a Linux VM, etc.). The other VMs don't necessarily need access to the 10TB. But say I'm using the Linux VM and download a photo and I want to be able to view it on my TV via Plex. I need to get that photo from the Linux VM to the 10TB drive where all my photos are stored.

This setup may be adding a layer of complexity I don't need. I don't know. I'm not set on using ESXi, just investigating options.

Before, I had Windows running Plex, with access to all drives of course. Then I ran a VM in VirtualBox and used shared folders to transfer files if needed. I used Windows Remote Desktop to access the Windows machine and then the VM through that. It seemed like that wasn't a very efficient way to do things (both the Windows machine and the VM were very slow (AMD A10, 8GB RAM, all spinny disks)). And now with better hardware, I can run a couple of VMs.

I'm sure it is because I don't know better! LOL
 

Rand__

Well-Known Member
Mar 6, 2014
If you attach the drives to an HBA, then you can pass them through directly to a (single) VM; if you only have them in ESXi, you can't.
Then you'd have to have a datastore on the drive to be able to use it for VMs.

Your use case seems simple though - just add an SMB share to your NAS (or a share VM, whatever you prefer), or get something like ownCloud up and running to sync files. The share can then be used to copy files over to the Plex box and share that out to the TV (either directly or after importing into Plex [manually or automated via a scheduled job]).
It's similar to the shared folder approach, just that you share via the (internal) network.
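
From the other VM that could be as simple as something like this (just a sketch - the share mount point, the Plex library section id and the token are placeholders, and it assumes the share is already mounted in the guest):

Code:
# Sketch: drop a file onto the mounted media share, then ask Plex to rescan
# that library section. Paths, section id and token are placeholders.
import shutil
import urllib.request

SHARE = "/mnt/media/photos"          # SMB share from the NAS VM, mounted locally
PLEX = "http://plex.home.lan:32400"  # Plex server address
SECTION = "3"                        # library section id for the photos library
TOKEN = "XXXX"                       # X-Plex-Token

shutil.copy("/home/me/Downloads/holiday.jpg", SHARE)

# Trigger a scan of just that section so the new file shows up on the TV.
url = PLEX + "/library/sections/" + SECTION + "/refresh?X-Plex-Token=" + TOKEN
urllib.request.urlopen(url).close()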
 

Gadgetguru

Member
Dec 17, 2018
@Gadgetguru You need to step back, you are getting over excited :D

Break it down into manageable, logical blocks, and focus on each one at a time...

You need storage for your VM's and a boot method. Sort that out first :)

Lots of folks just boot off a small SD card or USB thumb drive, but you can, if you wish, also install ESXi onto a dedicated datastore disk and boot from there. It's also perfectly possible to actually use your storage solution to provide your datastore space. I personally opted for the USB stick method, with a separate SSD used for my datastore to hold my VM's, this is only one example though, research what method works best for you and go with that.

Once you have ESXi up and singing, then you will be looking at creating a storage method that you can easily share out among your various VM's etc. There are a bazillion ways to achieve this as well, but again, break it down into easily digestible chunks...

Settle on an OS that will be handling your storage, then look at methods to provide some redundancy within your chosen OS. Remember, redundancy is not a backup solution, it just minimizes downtime until you can fix what's broken. This can be something already built in, or a third party solution. Then once you have that figured out, look at what method you will use to easily share that storage out in such a way that each of your VM's can access it and use it. You'll want to be flexible here, often the simplest solutions are the most complex to get your head round, but don't be put off, keep plugging away at it, you will get that "aha" moment...

I have deliberately not mentioned any particular method, or OS etc, because you need to do your own research and decide what works best for you, anything I say might colour your judgement.
LOL :)

I like the way you laid things out, this is helpful.

"look at what method you will use to easily share that storage out in such a way that each of your VM's can access it and use it." I'm stuck on this one. More research is required.
 

Gadgetguru

Member
Dec 17, 2018
If you attach the drives to an HBA, then you can pass them through directly to a (single) VM; if you only have them in ESXi, you can't.
Then you'd have to have a datastore on the drive to be able to use it for VMs.

Your use case seems simple though - just add an SMB share to your NAS (or a share VM, whatever you prefer), or get something like ownCloud up and running to sync files. The share can then be used to copy files over to the Plex box and share that out to the TV (either directly or after importing into Plex [manually or automated via a scheduled job]).
It's similar to the shared folder approach, just that you share via the (internal) network.
A little bit over my head at this point, but I'm working on it. Hmm, ownCloud, kinda like Tonido? I'm looking at it, giving me ideas. :)
 

Marsh

Moderator
May 12, 2013
Before, I had Windows running Plex, with access to all drives of course. Then I ran a VM in VirtualBox and used shared folders to transfer files if needed.
If you have Windows running already, just enable the Hyper-V feature.
Then you have both a virtual machine host and a file server host.

both the Windows machine and the VM were very slow (AMD A10, 8GB RAM, all spinny disks)
The reason is that the Windows host and the VMs need IOPS. An SSD would solve the slowness issue.
 

BennyT

Active Member
Dec 1, 2018
If you like ESXi and you are not in a commercial/production environment, you can get the full/enterprise editions of just about all VMware products, including ESXi, vCenter Server Appliance, Workstation Pro, etc., by subscribing to VMUG Advantage. The only limitation is keeping to 6 sockets or fewer, but there are no RAM limitations. I subscribed for 3 years, which brought the yearly subscription rate down from $200 to $170 per year: $510 USD for 3 years. I applied a 10% off coupon to that, which brought it down to $459 for three years. It's a nice package deal, and Workstation Pro can be used as a remote console to the ESXi guest VMs; it works almost like VNC.

MS Hyper-V Server (the standalone edition), on the other hand, is completely free. I've not tried it, but it sounds like it installs outside of Windows (Windows Server is not needed for standalone Hyper-V). Or if you have Windows Server 2016, I think you can simply enable Hyper-V inside of Server 2016 and bam, you are ready to virtualize.
 