New here and needing some direction... Haswell build... mobo?


ZeroOne

Member
Sep 13, 2013
52
6
8
New over here, and would appreciate someone looking over my shoulder on something. Long-time gamer / builder, but stepping away from watercooling and overclocking to the server side.

It's about time to do this server rebuild right and go the route of a true 24/7 setup. I really want ECC memory and a solid motherboard / CPU foundation. I'm also not sure about mixing RAID cards in the same system. I'll try to keep this short...

Home server currently:
P8Z68M-Pro
2600K
16GB Vengeance DDR3
Samsung 830, 60GB, Win7 Pro
Aquaero 5 managing cooling, keeping drives at exactly 37-38C

Internal (6 drives):
3xRAID 1 arrays on a HighPoint 2720SGL
WD RE3 WD5002ABYS - important data (with offsite backups)
WD Blue WD6400AAKS - unimportant local data / linux distros / iso storage
WD Green WD20EARS - backup destination for CrashPlan, all server drives and inbound from 4 other systems. Weekly OS images.

External:
4x 2.5" hot swappable, iStar bay device
4x 3.5" hot swappable, iStar bay device
Samsung HD204UI in hot swap, via Intel onboard, receiving syncback copies from all data drives.


Basically, I want to move to a decent server board, CPU, and ECC memory with Intel LAN, keep the internal basic RAID 1 arrays, have easy-to-swap front bays, and be able to power them all (I'm out of motherboard ports for those hot-swap bays). I'm also confused about how video is implemented on some of these boards vs. "workstation" boards that take full advantage of the Haswell GPU.

Here's the quirks of it:
- Don't "NEED" 8 drive hot-swap up front... but since the chassis has that, I'd like it hooked up.
- It would be nice if the front hot-swap bays did NOT require going into RAID controller software before disconnecting a drive. (I do use HotSwap! in the system tray.) The HighPoint beeps insanely if you don't alert it to a disconnect first via its web interface. I want something that behaves like a motherboard port, but I can't find a decent Haswell board with more than 6x SATA3 (C226).
- Worried about mixing the HighPoint card in with other RAID cards like LSI. Are conflicts common where you can't mix and match in one system? Maybe moving to the LSI card is better anyway, replacing the HighPoint?
- I typically keep 2 slots free on the RAID card and only buy drives in pairs, because of the timing of upgrades and a reluctance to commit to one model for fear of simultaneous failure or a bad model. I don't have a ton of data, 500-600GB max, really.


I'm thinking of the following and wanted to get a second opinion as to whether this would work well together or if there are better parts for the $$.

Motherboard: ASUS P9D-E/4L - Newegg.com - ASUS P9D-E/4L ATX Server Motherboard LGA 1150 DDR3 1600/1333
CPU: Xeon E3-1245V3 - Newegg.com - Intel Intel Xeon E3-1245V3 Haswell 3.4GHz 8MB L3 Cache LGA 1150 84W Quad-Core Server Processor BX80646E31245V3
2nd storage card: LSI 9207-8i - LSI LSI00301 (9207-8i) PCI-Express 3.0 x8 Low Profile SATA / SAS Host Controller Card - Newegg.com
Keeping 1st storage card: HighPoint 2720SGL - HighPoint RocketRAID 2720SGL Controller Card - Newegg.com

Saw the ASRock board but wasn't sure of the quality on this. Has some nice "desktop" features: Newegg.com - ASRock C226 WS ATX Server Motherboard LGA 1150 Intel C226 DDR3 1600/1333

Thank you for reading.... it wasn't that short. Sorry!! Any input is very much appreciated!!
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
you only use TLER drives with raid controllers unless you are doing ZFS type raid. I'd suggest using the SAS variants. Ditch the junky highpoint as one LSI card will handle 8 drives plenty. You can still boot off sata for the ssd.

You will want to use ECC ram - otherwise you'll never get the reliability of 24x7 due to random bit flips.

What's the endgame here? Gaming on a server is a sure way to destabilize a server.

Cad? pro AV? Audio?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,516
5,809
113
If you are going to use a C224 board, you cannot use the onboard Haswell GPU. Knowing that, you would want an E3-1240 V3 instead of the E3-1245 V3. The ASrock would allow you to use the E3-1245 V3 and the onboard GPU. It does not have IPMI so that is a big difference. Probably want to figure out if you want IPMI or onboard video.

Also, if you wanted to get a LSI SAS 2308 based card, maybe just get a motherboard like this one: http://www.servethehome.com/Server-...erboard-review-haswell-intel-xeon-e3-1200-v3/ ? You lose two NICs but gain the controller and save a few dollars.
 

33_viper_33

Member
Aug 3, 2013
204
3
18
As a former gamer (8+ years ago…) you are speaking near and dear to my heart. Your description is exactly how I started off, building powerful machines for gaming. The one reason I was less successful in my early server builds was thinking like a gamer when building a server. I wanted a server that would do it all, including gaming and specialized tasks that used weird drivers. I want to re-emphasize mrkrad's point: in a 24/7 server, you are looking for stability. Repeatedly installing and uninstalling games, drivers, and other programs decreases your server's stability and increases your data's vulnerability. If you are looking for a server that does many functions, similar to the all-in-one boxes that are commonly discussed here, look into virtualization. This will segregate your data and tasks between VMs, increasing your safety. If you mess up an OS, just reinstall the VM. Your data and critical tasks should have safety built in. Virtualization is your friend in this case.

Another mistake I made was using sub-par hardware. I used old gaming rigs for my server. Gaming hardware often uses crap controllers, HighPoint included. Look into either software RAID, such as the various ZFS installs, or a good RAID controller like the forum's favorite LSI or my favorite Areca. If you use ZFS, you still want a good HBA, such as the various LSIs covered thoroughly on this forum.

Use the KISS principle. Water cooling and overclocking generally are not your friends in this world. Simplicity and stability are. If you don’t need a piece of hardware/driver, get rid of it for stability reasons.

For video, Patrick is absolutely correct as always. You need to analyze your intent with this server. Generally the onboard Haswell GPU is overkill for server duties, but it can be nice for a media database if you are like me and have digitized your movie collection. I originally did my ripping on my server. Over time, I discovered and started to live by the advice outlined above, and now do all my rips on my media PC, laptop, or desktop, as all have good CPUs and GPUs and that is where the task belongs. Which one I choose comes down to convenience, and fast network cards become your friend in that case. With this in mind, I find IPMI much more useful and convenient than fast graphics. Remember, it's a server… good graphics are generally not important. There are exceptions to this rule depending on your intended use.

Think long term! Unlike gaming rigs, server hardware doesn't become obsolete or incapable of accomplishing your intent nearly as quickly. Note that hardware continues to advance quickly and has left software in its dust in recent years; virtualization is an exception here. Once you start down the server road, it becomes addictive. You realize just how much can be done by a dedicated server, and the tendency is to want more. INVEST in good hardware, both for stability and for future expandability. You may decide to do MyMovies or another form of movie database in the future; if so, you will want a high-capacity HDD case and good controllers. If you want to run professional-grade, open-source router software like pfSense or Smoothwall, think hardware (motherboard and CPU) that supports virtualization (VT-x, VT-d) to run the all-in-one concept. High-speed networking is my most recent evolution, which I'm still learning. I'm finding GbE too slow and am experimenting with 10GbE and InfiniBand now. This becomes useful when virtualizing with datastores on centralized storage, or when backing up and transferring large amounts of data.

With stability in mind, I don't like to hot-plug and pull drives on my production server unless it's to replace a failed drive or for backup purposes. Also, I try to keep the "spare" ports off my array's controller, in favor of the motherboard's controller. This may be a bit overly proactive/anal, but it's a safety thing in my mind. On a similar note, I once had a drive get pushed in too hard on a backplane, which disconnected the adjacent drive. Granted, this was a junky backplane, but it degraded my array, gave me a good scare, and pissed me off.

And for goodness sakes, your production backup/data server is not the place for experimentation. Get a C6100 or utilize old hardware. I've definitely violated this once or twice and nearly had a catastrophe to pay for it.

Sorry for the long-winded post. Just some lessons learned over the years. I learned on my own at a time when home servers were uncommon, and thus made many mistakes. Do your research and read through applicable threads on this forum. There are a lot of experts here; pick their brains. The home-use cases are now well documented, and the community is large and growing.
 
Last edited:

ZeroOne

Member
Sep 13, 2013
52
6
8
Thanks everyone for the replies! Really appreciate the input...

you only use TLER drives with raid controllers unless you are doing ZFS type raid. I'd suggest using the SAS variants. Ditch the junky highpoint as one LSI card will handle 8 drives plenty. You can still boot off sata for the ssd.

You will want to use ECC ram - otherwise you'll never get the reliability of 24x7 due to random bit flips.

What's the endgame here? Gaming on a server is a sure way to destabilize a server.

Cad? pro AV? Audio?
The goal here is a storage server with RAID arrays that are simple and not all based on one brand or type of drive. I have no bandwidth need for anything beyond RAID 1 and never buy enough drives at a time to do RAID 5 or 6. With a 5-6 year upgrade cycle, I also worry that with something like RAID 5 or 6, by the time I need a "spare" drive there would be none of that model available.

No gaming or anything like that. I thought with the iGPU on Haswell, I could get better display performance vs. some of these onboard devices, but not for anything specific. Most software will run from windows, so Win8 Pro will be the OS. CrashPlan, Tonido, PogoPlugPC, Hamachi, file shares.

Never had a drop-out with the Green or Blue drives, though I realize this isn't ideal. They will eventually be replaced with drives that have TLER enabled. The RE3 drives have TLER, and house the important data. The Green drives were modified to reduce head parking with wd-idle. Been running for around 1.5yrs with no issue. The Blues were left over from another machine and added in as extra storage for less than important files.

I REALLY like the idea of ZFS with the way it can heal and prevent bit-rot or whatever you guys usually call it. (Been googling on silent data corruption.) Most of what I do revolves around Windows and wanted to keep this server on Windows though, and I'm not sure that's compatible.

I just use Acronis to do weekly snapshots of the OS, and didn't really want to go the ESXi route as this is a single-role server. I wasn't sure how it would affect some of my programs that create virtual adapters under Windows, and was afraid of complicating things by adding that extra layer when I might not be the best candidate to use it. Just thinking out loud, and still getting used to that idea though.


If you are going to use a C224 board, you cannot use the onboard Haswell GPU. Knowing that, you would want an E3-1240 V3 instead of the E3-1245 V3. The ASrock would allow you to use the E3-1245 V3 and the onboard GPU. It does not have IPMI so that is a big difference. Probably want to figure out if you want IPMI or onboard video.

Also, if you wanted to get a LSI SAS 2308 based card, maybe just get a motherboard like this one: http://www.servethehome.com/Server-...erboard-review-haswell-intel-xeon-e3-1200-v3/ ? You lose two NICs but gain the controller and save a few dollars.
I didn't realize C224 didn't support the GPU on Haswell. Thanks for the heads up! I don't want to install any video cards and waste PCIe slots/lanes, and just need simple VGA I guess. Completely unfamiliar with how these boards implement video, but still reading. I thought having the integrated GPU on the CPU would take care of all that. Two Intel NICs would be fine. I've never had a board with IPMI but like the idea of true server management features over audio / video. I see the Asus board has an add-on you have to get for management.

As for controllers... the goal is to have one storage card run the internal RAID drives and one run the front hot-swap bays, keeping the two cards separate. I'm not sure if two different cards conflict, though, or if they should at least be the same brand. The original idea was to have the RAID card for the internal drives and run all the front hot-swap bays from the motherboard, but no motherboard seems to have 8 SATA3 ports AND all of the server features, hence the second storage card. I was afraid committing to a board with a built-in controller could limit which add-on cards could be used (again, compatibility?)... so I was nervous about selecting something that couldn't be "removed," if that makes sense. At this point I'm leaning toward swapping everything over to LSI cards / chipsets. It would be great to have a card that handles hot swapping as easily as motherboard headers do; I'm not sure how that works on LSI cards. I also thought I read somewhere that if there are two LSI cards in a system, they both have to be in the same "mode." I need one card doing RAID and the other doing hot-plug.


As a former gamer (8+ years ago…) you are speaking near and dear to my heart. Your description is exactly how I started off... <snip>
Thanks for sharing your experience moving over to the server side! I definitely hear you... not wanting to blend the two systems at all. I have a de-lidded Ivy @ 4.8 on water here that's rocking along... but stable as it is (LinX 20-run), it's no place for data. The server's task is to be a tank, not a sport bike. I definitely want to use quality components this time around. Using old gaming hardware is like a stepping stone to realizing you need a server in the first place. Now all my save games (along with the rest of my digital life) automatically back up to this "server," and it needs to be pretty solid. Starting with the motherboard, CPU, memory, and now most likely LSI cards, as the HighPoints don't seem to inspire much confidence. Later will come some upgraded hard drives.

Thanks again for all of the replies! Your forum seems to have a lot more going on in this department with some very knowledgeable people.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
LSI controllers support 4K sector drives; I have one attached to mine.

I think 2x 10-drive RAID volumes is the max, but I haven't really delved into this much.
HBAs are much better off in IT mode, with the OS doing all the striping (i.e. ZFS).
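To make the IT-mode point concrete, here is a minimal sketch of what it looks like when ZFS owns the disks behind an IT-mode HBA (illumos/OmniOS command line; the pool name and disk device names below are only placeholders):
Code:
# List the disks the HBA exposes to the OS (illumos-style device names)
format < /dev/null

# Create a mirrored pool named "tank" from two whole disks -- ZFS handles the redundancy, not the card
zpool create tank mirror c2t0d0 c2t1d0

# Check layout and health
zpool status tank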
 

33_viper_33

Member
Aug 3, 2013
204
3
18
ZFS is not compatible with Windows in that it will not run under Windows. This is why virtualization becomes awesome: run a ZFS VM (VMware ESXi is free, or use Hyper-V if you have a capable OS) and share it over to your Windows VM via iSCSI or a share. One more lesson I learned: be careful with hardware RAID. I love the simplicity of it, which is why I started out that way. After having a card fail (repaired under an almost-expired warranty), I started realizing that having a single point of failure is a very bad idea. Using some of the older cards helps in that replacements are cheap, but I'm starting to like the idea of having arrays that are not hardware-dependent at all (software RAID). If your server dies, take the drives out, slap them into any other machine running that version of RAID software, rebuild, be happy. While my card was out for repair, I had no access to the data on that array. If I were to do hardware RAID again, I would make sure I had a spare card lying around just to be on the safe side.
Don't be intimidated by ESXi. It takes a couple of days of playing to understand it, but it is very easy once you get the feel. It adds an enormous amount of flexibility and possibility. Another option is to use VMware Server under Windows. I played with this for a while under 2011 Server Essentials since that OS didn't have Hyper-V. It worked, but ESXi is so much more powerful. Snapshots and the creation of virtual appliances are awesome for backup purposes.
 

ZeroOne

Member
Sep 13, 2013
52
6
8
ZFS is not compatible with Windows in that it will not run under Windows. This is why virtualization becomes awesome... <snip>
Checking out ESXi now.

Never used software RAID before. Always configured a hardware card. Software was once looked down upon, or so I thought, but seems to make more sense. Are you just using Windows to create mirrored volumes from disk mgmt?
 

33_viper_33

Member
Aug 3, 2013
204
3
18
Checking out ESXi now.

Never used software RAID before. Always configured a hardware card. Software was once looked down upon, or so I thought, but seems to make more sense. Are you just using Windows to create mirrored volumes from disk mgmt?

Software RAID has matured a lot over the past few years. I was under the same impression when I purchased my Areca controller. After my scare, I started doing more research and found that Linux and ZFS RAID are in many ways better than hardware RAID. Your entire server can get fried, but as long as your disks are intact, plop them into any other computer running the same software and import the array. No hardware dependencies! One big drawback to ZFS is the lack of online drive migration, i.e. you can't add a single drive and expand the RAID set.
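To illustrate that limitation: you cannot grow an existing raidz vdev one disk at a time, but you can grow the pool by adding a whole new vdev. A rough sketch with placeholder device names:
Code:
# Not possible (in this era of ZFS): adding a single disk to an existing raidz vdev
# Possible: adding another complete vdev (e.g. a mirror pair) to the pool
zpool add tank mirror c3t0d0 c3t1d0

# The pool grows, and ZFS stripes new writes across both vdevs
zpool list tank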


I use OpenIndiana for ZFS and a package called napp-it to manage it. Napp-it gives you a web-based GUI instead of having to use the command line. Gea just created a virtual appliance of OmniOS with everything pre-installed, which makes life easier: just import it into ESXi, customize your password and settings, and you are up and running.

You will want to use ZFS for the RAID. ZFS does protect against bit rot. You have all the same RAID levels in ZFS, but you will find that RAID 5 = raidz1 and RAID 6 = raidz2.
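As a rough illustration of that mapping (pool and device names are placeholders; pick one layout or the other):
Code:
# RAID 5 equivalent: single-parity raidz1 across four disks
zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# RAID 6 equivalent: double-parity raidz2 across six disks
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Scrub periodically so ZFS can detect bit rot and repair it from the redundant copies
zpool scrub tank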

Once you have your drives in a pool and your RAID levels set up, you can share it out via a Windows (SMB) share, NFS, iSCSI, or FCoE. If you only want one machine to have access to it, iSCSI is my preferred choice: an iSCSI LUN appears as a local HDD to the OS and can be managed accordingly through Windows.
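For the iSCSI option, napp-it wraps this in its GUI, but underneath it is roughly the following COMSTAR commands on OmniOS/OpenIndiana. This is only a sketch; the zvol name, size, and GUID are placeholders:
Code:
# Create a zvol (block device) on the pool to export to Windows
zfs create -V 500G tank/win8disk

# Enable the COMSTAR framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# Register the zvol as a logical unit and make it visible to initiators
stmfadm create-lu /dev/zvol/rdsk/tank/win8disk
stmfadm add-view <LU-GUID-printed-by-create-lu>

# Create the iSCSI target the Windows initiator will log in to
itadm create-target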
 

ZeroOne

Member
Sep 13, 2013
52
6
8
Software RAID has matured a lot over the past few years. I was under the same impression when I purchased my Areca controller... <snip>
Thank you! Starting to make sense now. Coming over from the gaming side, I didn't realize how much I was missing, treating this home server like any other Windows machine and not taking advantage of virtualization. Thanks for taking the time to explain it. It really only needs to be one machine, and not multiple virtual machines, but using this method to get ZFS available to Windows seems worth it.

Pretty familiar with Linux itself (Ubuntu mostly, day-to-day use and a webserver with Apache/MySQL) but wasn't quite sure about iSCSI, ESXi, or setting up ZFS. Some of the software required on the Windows side cannot work with network-mapped drives, so these have to appear as physical disks to the OS, which it sounds like iSCSI can handle. Prior to venturing down this rabbit hole, everything was going to be installed under Windows 8 Pro. If I'm honest... this new way of setting up the server sounds more risky for the data, in that it requires more OSes, custom configuration between the two, more to manage, etc. Shutting down now involves shutting down the Windows VM, then the Linux / OmniOS VM, then powering down the hardware. (This only happens once or twice a year, during violent weather when the UPS is low, which I guess is another concern: having an auto shutdown via the UPS software.) I'm also concerned with how some of the software (Hamachi and Pogoplug, for example) that relies on creating virtual network adapters will work. Not sure if the fact that it's in a VM will mess that up.

I'm sure my lack of knowledge on the subject is the reason for these questions, but it's definitely different than just having a pc with data on it and maybe a RAID card. Better to rely on the new OS / ZFS setup than a cheap (or even expensive) raid card though, I get that.

I think I have a lot more reading (and eventual testing) to do. A couple of other questions come to mind, like the network security of the ZFS storage if the Windows VM can just connect straight to it over the network... and also whether iSCSI adds a bottleneck (the network adapter) to a machine that could have had all of this connected directly in hardware, if it weren't separated into two virtual machines and an iSCSI share.

Looks like the following will be possible for this build, in a nutshell:

-new server grade hardware
-ESXi
-OmniOS with ZFS volumes
-Windows 8 Pro VM with "hard drives" mounted via iSCSI from the OmniOS VM (see the sketch after this list for the Windows-side connection)
-Continue on as normal configuring the Windows 8 VM with all apps and software from before, which should have no problem accessing the disks as local hard drives for sharing out, offsite backups, inbound backups, VPN virtual network connections, etc.
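For the Windows side of the iSCSI bullet above, the built-in Microsoft initiator can attach the OmniOS zvol with a few commands (the GUI iSCSI Initiator control panel does the same thing). This is a sketch only; the IP address and IQN are placeholders:
Code:
rem Make sure the Microsoft iSCSI Initiator service is running
sc config MSiSCSI start= auto
net start MSiSCSI

rem Point the initiator at the OmniOS VM and log in to the exported target
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2010-09.org.example:target0

rem The LUN then shows up in Disk Management as a blank local disk:
rem online it, initialize it, and format it like any other drive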

Time to do some ordering and testing.... this was originally going to be a mobo/cpu/mem swap and OS reinstall, to get ECC memory and better 24/7 hardware. Ha!
 

33_viper_33

Member
Aug 3, 2013
204
3
18
Time to do some ordering and testing.... this was originally going to be a mobo/cpu/mem swap and OS reinstall, to get ECC memory and better 24/7 hardware. Ha!
Welcome to the dark side!

As far as risk to the data goes, enterprises have been doing iSCSI to servers for years. ZFS was originally developed by Sun for Solaris, which has a very positive track record. My university used to run a mixture of Linux and Solaris RAID with SAN storage, and that was 7 years ago. iSCSI is quite mature, with security built in.

For speed, the internal vSwitch is extremely fast. I can easily max out my SATA II bus between VMs, pushing around 400MB/s. I've been meaning to set up RAM disks to test true throughput. If you are looking to move data between multiple hosts quickly, 10GbE or InfiniBand is the answer.

For OS shutdown order, you are absolutely correct. However, ESXi can do an orderly shutdown and startup of VMs, waiting for one VM to complete before moving on to the next. You will need to install VMware Tools in order for the host to shut the guests down cleanly. I have been using my UPS's scheduled shutdown and startup to shut down and boot my home ESXi host as a power-saving measure. It takes a bit of work to get going, but it is awesome. The host starts about 30 minutes before I wake up or come home, and shuts down 30 minutes after I leave for work or go to bed. It usually only runs for about 6 hours during the week and 18 hours on the weekends.
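For reference, the shutdown side of this can also be driven from the ESXi shell. A sketch, assuming VMware Tools is installed in the guests; the VM ID below is an example taken from the first command's output:
Code:
# List registered VMs and their IDs
vim-cmd vmsvc/getallvms

# Ask a guest (via VMware Tools) to shut down cleanly; 10 is the VM ID from the listing above
vim-cmd vmsvc/power.shutdown 10

# Check power state before moving on to the next VM or powering off the host
vim-cmd vmsvc/power.getstate 10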

For VPN, I don’t see any reason it would be any different than direct on host. I’ve never used those programs before, so I can’t speak with experience. However, the host gets its IP from the router. The physical NIC appears to be a switch to the VMs and network. Therefore, there is no effective difference than directly connecting the windows box to the network. If it does prove to be a problem, you can always use pass through to give one NIC to the gust OS. The guest then owns that NIC which can’t be used by any other VM.

Just so we are clear, are you planning on using a separate machine for ZFS? If so, you will likely want high-speed (10GbE+) network cards, which are expensive. I would virtualize both ZFS and Windows on the same machine so you can take advantage of the virtual switch's speed.
 
Last edited:

ZeroOne

Member
Sep 13, 2013
52
6
8
Welcome to the dark side!

.....

Just so we are clear, are you planning on using a separate machine for ZFS? If so, you will likely want high-speed (10GbE+) network cards, which are expensive. I would virtualize both ZFS and Windows on the same machine so you can take advantage of the virtual switch's speed.
This will all be in the same machine... ZFS VM and Windows 8 VM. Main goal is to have Windows 8 Pro hosting file shares, backup software, remote access, etc., with the added benefit of ZFS to protect the data... In one machine, as it is now.

Just wanted to say thank you again for helping a fellow gamer out, in designing a proper server setup! This has taken quite the turn from just selecting a Haswell board.