Home Server Vacillation: L5520 vs E3-1231 v3


HeBeCB

New Member
Dec 5, 2014
After thinking about this for a few months, I'm still a bit torn about which of the above platforms I should base my new build on (replacing a ProLiant I've outgrown, speed- and size-wise).

I've seen some SuperMicro systems with L5520s installed (largely thanks to tips from this forum... thanks!) that are pretty appealing price-wise. I imagine they will perform adequately for my use cases (see below) and will be relatively modest in terms of power consumption. I am slightly worried about how long such a box would last given that it's likely close to 4 years old already.

Prior to reading the threads in this forum, I was all but ready to pull the trigger on an E3-1231 v3 build very much like the one mentioned in this thread: low-power-e5-2609-v3-esxi-build. I had planned to use a case and PSU I already have. The obvious advantages here are the higher core speed and faster memory, as well as the newer, presumably more efficient Haswell CPU.

Right now I'm thinking I'll be running a main VM (or host) with WHS/FlexRAID (I've considered other options but this feels right for my needs... I'm gonna take up this conversation separately) along with JRiver and/or MB3. I *might* run MB3 in a VM. Also thinking about pfSense, but for now I'm sticking with my router/firewall.

Oh yeah... I *might* set up a MineCraft server for my kid :)

I'm not super price conscious but I would like to keep the costs under control. Reliability (and sufficient speed) is more important to me. Thoughts/Advice/Therapy?
 

BlueLineSwinger

Active Member
Mar 11, 2013
Unless you're really budget-constrained, go with the new Haswell-based system. Alternatively, a mildly used Sandy/Ivy Bridge system could also be good. For basic home VM hosting/NAS usage you should be able to get many years out of such a system.

The old L5520 (and L5530, L5639, etc.) systems are OK. They can often be found cheap, but I really think they represent false economy:
  • They're nearly five/six years old at this point. There's no telling how much more life you'll be able to get out of one.
  • They're slower (The current Atoms roughly match them on many benchmarks).
  • They typically require more power. The TDP may be a bit lower, but in reality they'll probably have to run above idle more often than a Haswell would, upping overall consumption. Also, the support components (e.g., south bridge, RAM) typically run hotter and draw more power relative to modern counterparts.
  • They're going to be dropping off of HCLs soon, if they haven't already.
 

TuxDude

Well-Known Member
Sep 17, 2011
There's nothing wrong with old parts, unless you are in an area where power is very expensive. I'm just in the process now of upgrading my old NAS to a dual-L5520 based system, from a Netburst-era 32-bit dual-socket single-core system. And the old system still had plenty of CPU power to run a NAS - the I/O bus bandwidth was the thing that made it too slow for me. I would expect there are still many years of life left in an L5520, and plenty of performance for the use cases you listed here; it just won't be the most power-efficient way to get it done.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Oh yeah... I *might* set up a MineCraft server for my kid :)
Generally with MineCraft servers you want high single-thread performance. That would push me towards the E3 setup.

What you lose in terms of expandability may be made up for by giving your kid bragging rights that they have an MC server at home.
 

HellDiverUK

Active Member
Jul 16, 2014
I love this forum: one minute we're talking about HBAs, chipset bandwidth, and architecture, and the next post we're discussing MineCraft. :)
 

HeBeCB

New Member
Dec 5, 2014
There's nothing wrong with old parts, unless you are in an area where power is very expensive. I'm just in the process now of upgrading my old NAS to a dual-L5520 based system, from a Netburst-era 32-bit dual-socket single-core system. And the old system still had plenty of CPU power to run a NAS - the I/O bus bandwidth was the thing that made it too slow for me. I would expect there are still many years of life left in an L5520, and plenty of performance for the use cases you listed here; it just won't be the most power-efficient way to get it done.
I finally got off my butt and pulled the trigger. FWIW, our electricity (SF Bay Area) is about 50% higher than the national average. In the end I decided to go with the following:
  1. E3-1241 v3 (only a few bucks more than the 1231 at The Egg)
  2. X10SL7
  3. 16GB (2x8GB, CT2KIT102472BD160B)
  4. For the time being I'm going to use my previous desktop case (big-arsed Thermaltake) and PSU.
I'm mildly concerned about the number of PCIe lanes the E3 platform can handle, but I will be fine with the dozen drive capacity the board provides (really 14 IIRC, but I'll use one for optical and one for an SSD for VM images) plus the two extra PCIe slots (maybe one for an additional HBA and the other for a NIC if I decide to play with pfSense). I'm not gonna go nuts with VMs on this box... it's mainly for media serving and a couple of dedicated apps/servers. I've been happily using my desktop for VMs... perfectly adequate for the tinkering I do (mostly playing with various Apache tech).

My only regret (so far ;-) ) is that I didn't do this a couple of weeks ago... I probably could have saved about $50 or so with various discounts. Ah well... at least I know what I'll be doing during my time off!
 

TuxDude

Well-Known Member
Sep 17, 2011
... but I will be fine with the dozen drive capacity the board provides ...
Nothing stopping you from using a reverse-breakout cable or two to wire the onboard SAS ports into a SAS expander - if you chain a few expanders together you could have a couple hundred drives connected without using any of your PCIe slots.
 

HeBeCB

New Member
Dec 5, 2014
I guess I have a fundamental misunderstanding about expanders. I didn't realize you could chain them together. I thought that after I used all 8 onboard SAS ports I would have to put a PCIe card in to gain additional SFF-8644 ports, and those would hook up to N (on the order of 16) additional drives. Now I see cards are capable of accessing a couple hundred drives (presumably by daisy-chaining). Thanks for setting me back on track.

Here's a question, then... say I start with what the board offers now by directly wiring drives to those ports. If I want to add capacity, what happens to the original drives? For snapshot-based systems I can use the already-existing data (and regenerate the parity when appropriate). What about systems (e.g. ZFS) that don't support already-populated drives? Is in-place expansion possible?
 

TuxDude

Well-Known Member
Sep 17, 2011
Assuming that you are using some type of software on the drives (whether MS Storage Spaces, ZFS, Linux md-raid, SnapRAID, etc.), you just turn everything off, unplug the drives from the onboard SAS ports, connect the expander to the SAS ports, and then reconnect the drives to the expander. When you turn it back on, everything will be just like it was before you added the expander, except you will also have ports to add a bunch more drives to the system. If you need even more, either run a second expander off the other 4 SAS ports from the MB, or start daisy-chaining expanders.

Once you go SAS you can pretty much just stop worrying about maximum drive counts (except with hardware RAID cards, which bring their own limits). In the case of your onboard LSI controller, you have 8 x 6Gb/s = 48Gb/s of SAS bandwidth to connect as many drives as you want. Only add more controllers if you want more bandwidth (or IOPS, but chances are you won't need that for home use).
 

HeBeCB

New Member
Dec 5, 2014
Thanks TD

Assuming that you are using some type of software on the drives (whether MS Storage Spaces, ZFS, Linux md-raid, SnapRAID, etc.), you just turn everything off, unplug the drives from the onboard SAS ports, connect the expander to the SAS ports, and then reconnect the drives to the expander. When you turn it back on, everything will be just like it was before you added the expander, except you will also have ports to add a bunch more drives to the system. If you need even more, either run a second expander off the other 4 SAS ports from the MB, or start daisy-chaining expanders.
I had assumed it would assign different devices (e.g. /dev/sda, /dev/sdb, etc.). My (possibly faulty) recollection from Windows is that if I plug a drive into a different port, it's a different device from its perspective. Does Linux 'stick' a raw device to a particular drive ID rather than to the physical connection? Just trying to get a more concrete sense of how that works.

Once you go SAS you can pretty much just stop worrying about maximum drive counts (except with hardware RAID cards, which bring their own limits). In the case of your onboard LSI controller, you have 8 x 6Gb/s = 48Gb/s of SAS bandwidth to connect as many drives as you want. Only add more controllers if you want more bandwidth (or IOPS, but chances are you won't need that for home use).
Thanks for setting another misconception to rest. My guess is the biggest bandwidth sucker will be the on-machine snapshot or parity generator/validator. I shouldn't be pulling from many drives at once except for those kinds of activities, I'm guessing. Backups will also take up some of that bandwidth, but they would obviously be constrained by the NIC's bandwidth.

I'll be starting to play around with different storage solutions and drive topologies over the next few weeks. I'll stop back for further advice (though it seems like there are other forums on STH more suited to those sorts of questions).
 

flecom

New Member
Oct 28, 2014
Unless you're really budget-constrained, go with the new Haswell-based system. Alternatively, a mildly used Sandy/Ivy Bridge system could also be good. For basic home VM hosting/NAS usage you should be able to get many years out of such a system.

The old L5520 (and L5530, L5639, etc.) systems are OK. They can often be found cheap, but I really think they represent false economy:
  • They're nearly five/six years old at this point. There's no telling how much more life you'll be able to get out of one.
  • They're slower (The current Atoms roughly match them on many benchmarks).
  • They typically require more power. The TDP may be a bit lower, but in reality they'll probably have to run above idle more often than a Haswell would, upping overall consumption. Also, the support components (e.g., south bridge, RAM) typically run hotter and draw more power relative to modern counterparts.
  • They're going to be dropping off of HCLs soon, if they haven't already.
I wish I could live in your reality... we still have customers running Netburst Xeons in our DC :(

Also, I have a workstation with a pair of X5675s that beats most OC'd modern i7s on Frybench, so don't dismiss Socket 1366 just yet.
 

TuxDude

Well-Known Member
Sep 17, 2011
I had assumed it would assign different devices (e.g. /dev/sda, /dev/sdb, etc.). My (possibly faulty) recollection from Windows is that if I plug a drive into a different port, it's a different device from its perspective. Does Linux 'stick' a raw device to a particular drive ID rather than to the physical connection? Just trying to get a more concrete sense of how that works.
It's very possible that after re-arranging things a drive will end up with a different device name - that is why you should always try to refer to devices by names that won't change. On Linux, use /dev/disk/by-id/xxxxx instead of /dev/sdx. On Windows the device identifier doesn't matter - the preferred drive letter is saved in the NTFS filesystem. I'm not very familiar with the Solaris-y type systems, but I'm pretty sure there's a set of names to use there too that will not change after re-arranging devices.
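Roughly what that looks like on a Linux box (the drive model and serial in the example symlink are made up, not from a real system):

  # kernel-order names - these can shuffle around after adding an HBA or expander
  ls -l /dev/sd*
  # persistent names built from model + serial - these survive re-cabling
  ls -l /dev/disk/by-id/
  # e.g. ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E0000000 -> ../../sdc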

And it's not just expanders that can cause all your device names to get re-arranged. Adding a PCIe HBA to your system could easily have the same effect - it might be scanned first on the PCIe bus, and all of its drives would then be sda, sdb, etc., while the onboard drives get pushed back to sdg, sdh, etc.

Personally, I refer to things (drives, partitions, filesystems, etc.) by UUID in my config files wherever possible. My mdadm.conf file is all built from array UUIDs, and my fstab is all filesystem UUIDs. For identifying physical devices I use the serial number - from software I can read them with a few different tools (smartctl, hdparm), the drives have them printed on the label at the factory, and I printed labels that I stuck on the drive sleds when I installed them. I stopped caring what /dev/sdx name a device gets assigned a long time ago.
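As a rough sketch of what that ends up looking like (the UUIDs, serial, and mount point here are placeholders, not real values):

  # /etc/fstab - mount by filesystem UUID instead of /dev/sdx
  UUID=3f1c2a9e-0000-4d5e-8f00-aabbccddeeff  /srv/media  ext4  defaults  0 2

  # /etc/mdadm.conf - identify the array by its UUID
  ARRAY /dev/md0 UUID=a1b2c3d4:e5f60718:293a4b5c:6d7e8f90

  # read a drive's serial number from software
  smartctl -i /dev/sda | grep -i serial
  hdparm -I /dev/sda | grep -i serial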
 

Mike

Member
May 29, 2012
It's very possible that after re-arranging things a drive will end up with a different device name - that is why you should always try to refer to devices by names that won't change. On Linux, use /dev/disk/by-id/xxxxx instead of /dev/sdx. On Windows the device identifier doesn't matter - the preferred drive letter is saved in the NTFS filesystem. I'm not very familiar with the Solaris-y type systems, but I'm pretty sure there's a set of names to use there too that will not change after re-arranging devices.

And it's not just expanders that can cause all your device names to get re-arranged. Adding a PCIe HBA to your system could easily have the same effect - it might be scanned first on the PCIe bus, and all of its drives would then be sda, sdb, etc., while the onboard drives get pushed back to sdg, sdh, etc.

Personally, I refer to things (drives, partitions, filesystems, etc.) by UUID in my config files wherever possible. My mdadm.conf file is all built from array UUIDs, and my fstab is all filesystem UUIDs. For identifying physical devices I use the serial number - from software I can read them with a few different tools (smartctl, hdparm), the drives have them printed on the label at the factory, and I printed labels that I stuck on the drive sleds when I installed them. I stopped caring what /dev/sdx name a device gets assigned a long time ago.
Alternatively, you could rely on udev to name devices according to their IDs. In some cases this may come in handy.
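For anyone curious, a minimal udev rule along these lines (the file name, serial number, and symlink name are just placeholders) gives a specific drive a fixed alias under /dev:

  # /etc/udev/rules.d/60-persistent-disk-names.rules (example only)
  # match the whole disk by its serial and add a stable symlink, e.g. /dev/disk-media01
  KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="WD-WCC4E0000000", SYMLINK+="disk-media01"

After adding the rule, 'udevadm control --reload' followed by 'udevadm trigger' (or a re-plug) picks it up.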
 

CreoleLakerFan

Active Member
Oct 29, 2013
Right now I'm thinking I'll be running a main VM (or host) with WHS/FlexRAID (I've considered other options but this feels right for my needs... I'm gonna take up this conversation separately) along with JRiver and/or MB3. I *might* run MB3 in a VM. Also thinking about pfSense, but for now I'm sticking with my router/firewall.
Where are you on the JRiver v. MB3 decision? I'm in the planning/design phases... For the longest time, I was sure I was going to go with JRiver, because the idea of using the HTPC as a preprocessor and outputting directly to power amps is such an awesome project. But I want 3D, and maybe Atmos (I have speakers in the ceiling already), and will likely want a game console for entertaining, and JRiver + switching HDMI inputs gets really messy and expensive, if not impossible.

Currently leaning toward MB3 with the ServerWMC plugin. ServerWMC can change channels on my DirecTV boxen via HTTP, which is a lot cleaner than IR blasters everywhere. I also find the media scraper and Theater View in JRiver lag behind MB3. I have to complete my storage backend and ventilate my media closet first, so I have a couple of months of vacillation left, myself.
 

HeBeCB

New Member
Dec 5, 2014
Where are you on the JRiver v. MB3 decision? I'm in the planning/design phases... For the longest time, I was sure I was going to go with JRiver, because the idea of using the HTPC as a preprocessor and outputting directly to power amps is such an awesome project. But I want 3D, and maybe Atmos (I have speakers in the ceiling already), and will likely want a game console for entertaining, and JRiver + switching HDMI inputs gets really messy and expensive, if not impossible.

Currently leaning toward MB3 with the ServerWMC plugin. ServerWMC can change channels on my DirecTV boxen via HTTP, which is a lot cleaner than IR blasters everywhere. I also find the media scraper and Theater View in JRiver lag behind MB3. I have to complete my storage backend and ventilate my media closet first, so I have a couple of months of vacillation left, myself.
My primary reason for going with JRiver was the size of my music library (about 50k tracks). JRiver is (or at least was) the only real option for dealing with such large libraries. The Play Doctor is also a fun touch. The DSP side of JRiver is compelling... I've heard a lot of folks praising that in JRiver, but obviously you have to spend some time getting it right.

Not sure how relevant 3D is to the discussion. There are limited options for free 3D players (look on AVS Forums/HTPC... I can't recall, but I think there may only be one stereoscopic viewer). When I want to use 3D, I end up using TMT or PowerDVD... I've been able to play ripped ISOs with both.

Reading about the state of MB3 on AVS Forums... it seems like the video management has come a long way in the past couple of quarters. Given the pace of development, I'd guess the catalog management and eye candy of MB3 has to get the nod. The built-in madVR [in JRiver] is certainly a nice touch, but I'm not sure how much I care about that at this point... I've got the new Oppo in the chain with a Roku... I may just end up using the Roku as the client and render through the Oppo. The more I think about what I want my system to be, the more I'm liking the Roku (especially given the ability to navigate stuff on my tablet and "cast" to the Roku).

I didn't know about ServerWMC being able to control my DirecTV box... that's a nice tip and I will look into it.

Lastly, regarding complexity... more and more I've been wondering whether I need my AVR... I have a nice integrated amp for my front two, and I can drive that directly from the Oppo since it has a decent pre-pro (and a very nice DAC built in). I could replace the HDMI switching with a dedicated switch (and you can always swap that out for an HDMI 2.0/Atmos version when you feel confident, at a fraction of the price of a new AVR!). That only leaves the other channels and zone 2 (the patio).

These are just some random thoughts I've had about this topic. Happy to discuss more in this thread or via PM
 

HellDiverUK

Active Member
Jul 16, 2014
Plex Media Server works well for me, even with a lot of music. We have about 16k music files and just over 2.5TB of video files (BRRips and TV shows). Plex handles it all very well - I have the Plex library (thumbnails, etc.) on an SSD, and the files themselves on a DrivePool. Browsing in Plex Home Theater on the HTPCs is pretty much instantaneous. That's when Plex is running on the Xeon server.

It even works well on a Synology DS214Play - again, the Plex library is on an SSD and the media files are on an HDD RAID 1 in a DX513. Browsing is very, very marginally slower, maybe a half-second pause when opening the Music library.

The power consumption of the Xeon is about 22W when the HDDs are spun down, and 36-38W when they're running. The DS214Play uses less when the HDDs are down, but more when they're up due to the power consumption of the DX513 itself.
 