Home Server Build

Discussion in 'FreeBSD and FreeNAS' started by Visseroth, Feb 21, 2018.

  1. Visseroth

    Visseroth Member

    Joined:
    Jan 23, 2016
    Messages:
    55
    Likes Received:
    1
    So my X8DTH motherboard is dying, slowly it seems, but it's now to the point that it reboots anywhere from daily to every six days, sometimes a couple of times in a day, and I don't have anything to swap it out with at the moment. So I'm looking to rebuild it.
    I'm thinking of re-using my SC847 case; I don't want to waste a good 36-bay case. I'm thinking of swapping the backplanes out for a BPN-SAS2-847EL (I don't know the model number for the rear backplane yet), or a SAS3 version if funds permit.

    The motherboards I'm currently looking at are the A2SDi-H-TP4F and the X10SRH-CF.
    My server is mostly used for storing files. I sometimes do VERY large file transfers, anything from 500GB to 4 or 6TB when backing up a machine. I stream Plex, and I like to keep a Linux virtual machine on the server for remote access so I don't have to leave another machine on. At some point I'd also like to set up a syslog server, and that would be hosted on there as well. But that's about the extent of it. The plan was to have 64GB of RAM. Currently I have 72TB of storage: one pool at 24TB total for offsite backups, the other at 48TB for everything else.

    I plan to re-use or get another 10Gb Chelsio fiber NIC, so obviously I'd like to get as close to 10Gb as possible.

    Another question I have is whether I can connect this board to both backplanes.

    My goal is also to keep power utilization as low as possible, within reason. Currently my server pulls about 300 to 350W, and I'd like to cut that roughly in half, which is why I'm considering the Atom board. I think it'll give me the performance I need without over- or under-doing it.

    Anyone have any input, thoughts, or suggestions? I'm open to constructive criticism. If you think my idea is dumb and have a better one, I'm all ears. I'm pretty much figuring on spending about $2.5k.
     
    #1
    Last edited: Feb 21, 2018
  2. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,235
    Likes Received:
    264
    1. Why do you need 6x 10GbE? (4 ports are already onboard on the A2 board.)
    2. You have 2 SAS HD connectors - if you get 2 SAS expander backplanes (SAS2/SAS3) then yes, you can connect one port to each, but you might be limited to 4 lanes of total bandwidth to each.

    3. The X10SRH of course has significantly more expansion options, so it should cover most aspects, depending on CPU choice as well.

    4. Can you evaluate the current power usage of your drives? That might be a significant part of your system's draw and not easily remediated.
    I.e., if half of your power usage comes from drives, then a board/CPU upgrade alone will never be able to cut power in half...
     
    #2
  3. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    823
    Likes Received:
    82
    The 847 rear backplanes are the same SKU as the 826, so a BPN-SAS2-826EL1 or BPN-SAS3-826EL1 would fit.
    If you have an expander backplane in the front, you can chain from the front to the rear backplane, where the total bandwidth for both will be limited by the front/first backplane in the chain. So if you have SAS2 in front and SAS3 in the rear, it might be clever to chain from rear to front instead (which is also possible) and put SAS3 drives/SSDs (if there are any) on the first/faster backplane. Or just use a separate HBA for each backplane.
     
    #3
  4. Visseroth

    Visseroth Member

    1. I don't need or really want six 10GbE ports. Keeping the Chelsio was an idea for the X10, not for the A2SDi.

    2. That was my thought as well. I just wanted to be sure that I could expand from the SAS3 connector on the board and chain them together. Maybe it might be worthwhile to go SAS3 to eliminate the bottleneck? I've never chained backplanes together, so this is new to me.

    3. I'm not sure what each drive pulls. Currently I have 12 2TB Samsung drives, which are quite old and strictly for backup for now, and 12 4TB HGST NAS drives. But I agree; I just want to make it as efficient as possible. And you're probably correct: if I figure an average of 8W each, that comes to 192W, which leaves about 158W consumed by the board, and that sounds about right. So the lowest I can likely get is about 250W. Obviously I don't need a 200W CPU. I figured since the Atom CPU is 2GHz with 16 cores and a 16M cache, it should do the job quite well. Most of the time my server seems to be idle, minus spikes when using Plex or when the drives are scrubbing.
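    A quick back-of-envelope version of that estimate (the ~8W-per-drive average is my assumption for idling 3.5" spinners, not a measured figure):

    ```python
    # Rough power budget: 24 spinners (12x 2TB Samsung + 12x 4TB HGST)
    # at an assumed ~8 W average each, against the measured wall draw.
    drives = 12 + 12
    watts_per_drive = 8                       # assumed average per spinner
    drive_power = drives * watts_per_drive    # 192 W
    wall_draw = 350                           # W, upper end of measured usage
    rest_of_system = wall_draw - drive_power  # ~158 W for board, CPU, fans, PSU losses
    print(drive_power, rest_of_system)        # 192 158
    ```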

    But at the same time, if the X10 can offer more performance without a LOT more power consumption, that would give me more headroom if I needed it - such as if I decided to do something crazy like use the server for mining or prime finding.

    _alex: I vaguely remember reading something like that. Good to know; I will for sure start digging around to see if I can find one. I do agree with your thought process: SAS3 in the back for speed, as that expander would cost less due to its size, and SAS2 in the front for less-used drives.

    BTW, thanks a TON for the feedback guys. I appreciate this forum because of all the great feedback I see on here.

    So, thoughts?
     
    #4
  5. PigLover

    PigLover Moderator

    Joined:
    Jan 26, 2011
    Messages:
    2,639
    Likes Received:
    1,032
    Just for reference - the X10SRH-CF with a modestly powerful CPU (E5-2630 v3) and 128GB of RAM will draw ~45-55W at idle, rising rapidly as it does work. Just thought I'd share since I have one and have recently measured it. Note that it doesn't really go down all that much when idling with a v4 CPU.
     
    #5
  6. _alex

    _alex Active Member

    I'd really prefer an X10 over the Atom; if you ever want to extend with 40GbE, a GPU, or the like, you'll be out of luck. Also, as I understand it, the Atom board doesn't have SAS at all (only SATA on Mini-SAS HD connectors), so your only choice to drive the backplanes would be via SATA and an HBA in the x4 slot.

    Given the number of 3.5" spinners and all the fans in the 847, you really won't notice the difference at the plug.
     
    #6
  7. _alex

    _alex Active Member

    I could measure my X10SRH-CF tomorrow if you're interested: 2683 v3, 4x DIMMs, 8x (or is it 12?) 2.5" 500GB HDDs, 3x mixed SSDs (SATA, SAS2, and SAS3), an additional 8i 2308-based HBA, I think currently two ConnectX-3s, and a 900p, all in an SC213. I'd guess it should be ca. 100W idle.
    Not finished with the setup - it should become my general-purpose Proxmox node, but I've been (ab)using this box for storage experiments for a while now, as it's fast/easy to swap disks around :D
     
    #7
  8. Visseroth

    Visseroth Member

    Yea, if you could, I'm interested! And you all make very valid points.
    I was hoping SuperMicro had an X10 with an SFP+ socket as well, but it is what it is.
    Knowing that the X10 only draws ~50W at idle is pretty impressive. I've seen servers idle down to 60 or 80W but have yet to see 50.
    So you've convinced me: it's a bit more expensive, but there are more perks. Now to build a parts list, as I plan to start piecing this together over the next few months or so; I don't have the cash to drop on it right now.
     
    #8
  9. Visseroth

    Visseroth Member

    So here's what I'm thinking for a parts list...

    X10SRH-CF
    M386A4K40BB0-CRC4Q x2 for 64GB of RAM
    BPN-SAS2-846EL1
    BPN-SAS3-826EL1
    Intel Xeon E5-2690 v2
    SAS HD to SAS 8087 cables x2 (one connecting to the rear backplane, the other to the front backplane)
    Intel LGA-2011 Heatsink w/fan

    Suggestions or thoughts?
     
    #9
  10. PigLover

    PigLover Moderator

    Thoughts:

    A v2 CPU won't work on that MB - you need an X9-series board for v1/v2 CPUs.
    E5-2600 CPUs use a 4-channel memory architecture. You won't get full performance with only 2 DIMMs installed; you need all four channels populated or your RAM bandwidth will be cut in half.
    If you go from the motherboard to the rear (SAS3) backplane and then from backplane to backplane, you need 1x SAS-HD to SAS-HD and 1x SAS-HD to 8087. The SAS3 backplane will have SAS-HD connectors.
     
    #10
  11. Visseroth

    Visseroth Member

    Good catch. I overlooked that detail until you pointed it out. And I didn't realize the v3 and v4 CPUs were quad-channel; I likely wouldn't have noticed if someone hadn't said anything.

    I take it I can only plug one backplane into the board, not both? I'll basically need to chain them together to get them to the board?
     
    #11
  12. PigLover

    PigLover Moderator

    You can plug each backplane in separately. There are two SAS-HD connectors on the MB you are looking at; you can run one to the front backplane and one to the rear. You do not have to daisy-chain the backplanes.

     
    #12
  13. _alex

    _alex Active Member

    Yes, running the backplanes independently is for sure easier to cable, but it only gives 4 lanes of bandwidth to each.
    Daisy-chaining would give the bandwidth of 8 lanes to the first backplane, so this really depends on the disks and the bandwidth requirement.
    Yesterday I ran badblocks on a bunch of SAS HDDs via an external SAS3 x4 link to an 846 JBOD with a SAS2 expander and got around 2GB/s writes - so x4 should be enough in most cases, but it's a different game when there are SSDs on the backplane.
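    For reference, that observation lines up with the link math - a quick sketch, using the 600MB/s-per-SAS2-lane figure and rounding the observed rate to 2000MB/s:

    ```python
    # Sanity check: ~2 GB/s observed badblocks writes vs. a 4-lane SAS2 link ceiling
    lane_mbps = 600                # SAS2: 600 MB/s per lane
    link_mbps = 4 * lane_mbps      # 2400 MB/s for a single x4 SAS2 link
    observed_mbps = 2000           # rounded figure from the test above
    utilization = observed_mbps / link_mbps
    print(f"{utilization:.0%} of a single x4 link")  # ~83%, near the ceiling
    ```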
     
    #13
  14. pc-tecky

    pc-tecky Member

    Joined:
    May 1, 2013
    Messages:
    154
    Likes Received:
    21
    Consider the age of the chassis and the form factor of the motherboard. I've had more than one occasion where something wasn't right between the motherboard and case/chassis, whether it was soldered components of an ECC RAM module touching the fixed metal drive cage (limiting ease of service), or outright misalignment of standoffs (affixed and unmovable, or simply non-existent).
     
    #14
  15. Visseroth

    Visseroth Member

    I know the EL2s support dual connections, thereby doubling your lanes, but I wouldn't think that having only one cable connected to the board vs. two would cause a bottleneck.
    Are you saying that if I chain them together I'm likely to get more bandwidth?
    Is it worth getting the EL2 for the extra lanes?

    And I agree, pc-tecky. I'm just trying to use what I already have, and if it doesn't fit I'll mod what I have to, or just get a new case if need be, but I'd like to not have to spend money on something that should work. I'll just have to be careful to double-check that everything aligns and no joints are touching the casing.
     
    #15
  16. Visseroth

    Visseroth Member

    Joined:
    Jan 23, 2016
    Messages:
    55
    Likes Received:
    1
    Any answers on the lanes and bandwidth questions?
     
    #16
  17. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,178
    Likes Received:
    267
    I think some things were mixed up.

    EL1 = one expander
    EL2 = two expanders

    Two expanders are for high-availability systems where you need redundant paths; for other purposes, one expander is good enough.

    @_alex & @PigLover are talking about dual linking. That's when you use two SAS multilane ports (~SFF-8087 ports) between the HBA/RAID controller and the backplane to get more bandwidth.
    With SAS2, one such port supports 4x 6Gbit/s (24Gbit/s), and with dual link, 48Gbit/s. That's more than enough for HDDs.
     
    #17
  18. Visseroth

    Visseroth Member

    I'd say. But I guess I'm asking how to do it. Do I connect the two ports on the board to the backplane - basically both connectors, two cables to the one backplane?
    With all that available bandwidth, chaining them together wouldn't be an issue.
     
    #18
  19. PigLover

    PigLover Moderator

    On the SuperMicro SAS/SAS2/SAS3 backplanes with a single expander (EL1) you will find 3 SAS multi-lane connectors.

    While these are loosely called "input 1", "input 2", and "output", they actually just present SAS lanes from the expander and can be used for any legitimate purpose in a SAS topology - sorta like the ports on an Ethernet switch labelled "uplink", which can generally be used as just another port.

    So - you can connect a 4-lane connector from your HBA to any of these three ports. If you do this you will get 4 lanes of "capacity" to distribute to/from your drives. Or you can connect two 4-lane ports from your HBA to two of these ports and you will get 8 lanes of "capacity" (this is what people mean when they say "dual link" to the backplane). In fact, you could connect three 4-lane connectors from a 12-port HBA if you really wanted to - though this would be silly, because there is no reasonable expectation that you could saturate more than an 8-lane connection.

    You can also connect other Backplanes to these connections in a daisy-chain configuration. So you might connect one 4-lane cable from your HBA to one backplane, and then connect one connector from that backplane to another one. Or you might "dual link" to the backplane and then connect a single 4-lane connection to the next backplane. Or you might single-link from your HBA and then connect to two other backplanes using a single 4-lane connection to each.

    In fact, because the SAS connectors on the backplane are just SAS lanes on the expander, you could even connect SAS forward breakout cables to them and just connect 4 drives stashed somewhere in your chassis if you wanted to... though this gets pretty ghetto pretty fast :)

    On the EL2 backplanes you actually have two sets of three 4-lane connectors and a whole separate expander (so a total of 6 SAS 4-lane connectors on the rear of the backplane). This is only useful when your drives are actually SAS - and dual-ported SAS capable. The "second" expander is connected to the second SAS port on the drives and is used for more sophisticated enterprise configurations. In fact, if you are using SATA drives in the chassis, the second expander's presence is completely moot because you can't use it.

    As for the question "should you dual link", it's just simple math. Let's assume SAS2:
    - Each "lane" can transmit 600MB/s
    - A four-lane group can transmit 2400MB/s
    - Each of your SATA-III drives can (theoretically) transmit 600MB/s
    - However, realistic limits for spinning drives are 100-180MB/s depending on the drive
    - So, as long as you have RAID/RAIDZ groups of 12 really fast spinning drives or fewer, you won't saturate the 4 lanes from expander to HBA
    - If you have SSDs in the backplane in RAID/RAIDZ groups of more than 4 drives, then dual-linking might be helpful (and even that is speculative)

    It's easier with pictures, but I hope that helps.
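    That saturation math can be sketched as a quick check (the 180MB/s per-drive rate is the generous real-world figure above, not a spec value):

    ```python
    # Will N spinners saturate a single 4-lane SAS2 link to the expander?
    LANE_MBPS = 600               # SAS2: 600 MB/s per lane
    LINK_MBPS = 4 * LANE_MBPS     # 2400 MB/s for one 4-lane connection

    def saturates(n_drives, per_drive_mbps=180):
        """True if n drives at a generous sustained rate exceed the link."""
        return n_drives * per_drive_mbps > LINK_MBPS

    print(saturates(12))  # False - 12 fast spinners = 2160 MB/s, under 2400
    print(saturates(14))  # True - more drives than one x4 link can carry
    ```

    The same function shows why SSDs change the picture: at ~600MB/s each, just 5 drives already exceed a single link.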
     
    #19
    Last edited: Feb 24, 2018
  20. Rand__

    Rand__ Well-Known Member

    Shouldn't that be 'other backplanes in a daisy-chain'? ;)
     
    #20