Greetings,
I have been reading the site/forums for a few years now, but am finally posting!
Linux sysadmin by trade, my first NAS was a Thecus N7700. After outgrowing it, I decided to do a 15-drive ZFS build (#1), which a friend copied with a different motherboard. My wife said it would be full by the time I finished it, so I built a 20-disk unit (#2). That filled up too, so I started using the N7700 again, and I am finally deploying a 36-drive Supermicro build (#3) now! Between builds 2 and 3, I almost deployed an inexpensive 8-11 drive unit (#4) and convinced a friend with 2 Drobos to consolidate onto it.
All of these have the same usage profile, so I'm listing it first:
Usage Profile: General Samba share for media storage and streaming, plus assorted PC backups. Squeezebox Server eventually (that is still on the Thecus, but I need to docker-ize it or something).
Build’s Name: #1 - 15-drive custom trayless hotswap tower
Operating System/ Storage Platform: NAS4Free 9, since FreeNAS was only at 8.x at the time and my friend didn't want to go with ZFS on Linux (ZoL) back then
CPU: Xeon e3-1220L 20W 2C/4T
Motherboard: Supermicro X9SCM-F for me, an Asus workstation board for my friend
Chassis: AZZA Solano 1000 ATX Full Tower Computer Case CSAZ-1000R, 10x 5.25in Bays fitted with 5x iStarUSA 2x5.25in to 3x3.5in SAS / SATA Trayless Hot-Swap Cage, Model: BPN-DE230SS-RED
Drives: 15x 5400RPM Hitachi 4TB plus an SSD for the OS
RAM: 32GB ECC unbuffered DDR3 1333
Add-in Cards: 2x IBM M1015 IT mode
Power Supply: Seasonic S12-330 originally, but I switched to a modular Kingwin LZP-550 when I was considering adding this unit to the current 20-drive box as a JBOD for a 35-drive backup array. The Kingwin is amazing - it shaves about 6W off the base AC draw of the already great Seasonic!
Other Bits:
Great quiet case, and it looks great with the red interior and red aluminum trayless hotswap fronts! I routed the power cables under the motherboard tray, and I use the case cover fan to blow on the LSI cards since they don't come with fans. With the drive cage fans set to low, this thing runs cool using just those fans, the big top fan, the big side fan, and one of the exhaust fans. On my friend's unit the top bay or two pop open easily (though they don't disconnect the drives in them); mine are fine.
Build’s Name: #2 - 20-drive custom trayless hotswap tower
Operating System/ Storage Platform: ZOL prerelease on Centos 6
CPU: Xeon e3-1260L 45W 4C/8T
Motherboard: Supermicro X9SCM-F
Chassis: Antec 1200 (since it has 12x 5.25" bays) and 4x iStarUSA BPN-DE350SS-RED trayless hotswap 3x 5.25 -> 5x 3.5 cages
Drives: 20x 5400RPM Hitachi 4TB plus an SSD for the OS
RAM: 32GB ECC unbuffered DDR3 1333
Add-in Cards: LSI 9201-8i single linked to Intel RES2SV240 SAS expander
Power Supply: Kingwin AP-550
Other Bits:
I don't like the Antec case quite as much as the AZZA, but I needed twelve 5.25" bays for the trayless cages. I had to smash down 2 of the 3 5.25" drive-support notches so the cages would fit - a large C-clamp and 2 blocks of wood came in very handy for this (I suspect any 5x3.5" in 3x5.25" cage is too tight to clear the notches). Roughly 160W total idle draw with disks spinning.
Build’s Name: #3 - 36-drive Supermicro SC847 expander
Operating System/ Storage Platform: ZOL current on Centos 7
CPU: Xeon e3-1265L v2 45W 4C/8T
Motherboard: Supermicro X9SCM-F
Chassis: Supermicro SC847 36-bay 4U with 2x sas expander backplanes
Drives: 36x 5400RPM Hitachi 4TB plus an SSD for the OS
RAM: 32GB ECC unbuffered DDR3 1600
Add-in Cards: LSI 9207-8i see bandwidth notes, may change card
Power Supply: Supermicro PWS-920P-1R, also tested with PWS-920P-SQ
Other Bits: generic Renesas USB3 card
Great case, and the expander backplanes make cabling super easy. I am only using 4 of the 7 possible fans in the middle of the case; they are connected to the motherboard and run quietly for a server after the initial whoosh, but nowhere near as quietly as the towers above. Trays are a PITA compared to trayless, but cheaper and higher density. See below for bandwidth notes on these builds. Roughly 250W idle draw with disks spinning.
Build’s Name: #4 - 8-11 drive cheap NAS build
Operating System/ Storage Platform: FreeNAS 9.3
CPU: Pentium G620T 35W 2C/2T
Motherboard: Supermicro X9SCL-F
Chassis: NZXT Source 210 ELITE Midtower - 8x tool-free internal 3.5" bays, 3x 5.25" bays, under $50 shipped after tax!
Drives: 10x HGST 5400RPM 4TB (started with old Seagate 3TB disasters)
RAM: 8GB ECC DDR3 1066
Add-in Cards: IBM M1015 IT mode
Power Supply: Seasonic S12-330 or an Antec Earthwatts 380, I forget which
Other Bits:
Moved an exhaust fan to the case side to give the LSI card some airflow. The drives are a tight slide out past the CPU cooler, but it's a great value overall.
General notes:
Looks: Builds #1 and #2 look great, and are very quiet (*not* silent though.) Build #4 is also very quiet, and super-cheap to put together.
LSI Firmware versions: Build #4 was throwing read CRC errors on many disks and reading incredibly slowly with the now-infamous P20 Avago/LSI firmware - at first I thought it was the junk Seagate 3TB drives, but it turned out to be the controller firmware. If you hunt around, FreeNAS suggests P16 firmware since it ships the P16 drivers, and some have reported issues in FreeBSD when running P19 firmware under the P16 drivers. The P16 drivers are also in CentOS 6 and 7, so I have flashed P16 on all of my cards with no issues yet.
Bandwidth: All of these get close to 10GbE speeds even with Z3! Build #2 gets almost 1.7GB/s writing a single file to a ZFS stripe, 1.4GB/s to a Z1, 1.2GB/s to a Z2, and just under 1GB/s to a Z3. For #3, the 9207-8i card is connected to the 24-drive backplane, which daisy-chains to the 12-drive backplane. Single-linked, it gets very similar numbers to #2, although as much as 5% lower in some cases, probably due to inefficiencies of daisy-chaining backplanes. Double-cabled, it gets 2.78GB/s writing onto a pure ZFS stripe, but that drops to numbers only 5-10% above #2's bandwidth at Z1/Z2/Z3, because of my next topic.
CPU: A single 36-drive Z3 pool in #3 is CPU-bound on writes at ~1.15GB/s with a 4C/8T low-power Intel Xeon. A stripe of 2x 18-drive Z2 vdevs brought the bandwidth back up to roughly 2.2GB/s write, and 4x 9-drive Z1 vdevs was maybe 0.1GB/s faster with only about half the overall CPU use. When I built #2 years ago, I tested it with a split 2x 10-drive Z2 but saw no performance increase vs a single-Z2-vdev zpool. Other than backups onto a DAS JBOD, this is still ridiculous bandwidth for home use, and although I have not thoroughly tested #4, it definitely gets at least 400MB/s write on a 10-drive Z2 pool with its 2C/2T CPU. CPU use seems most dependent on the number of disks per vdev, with a small penalty for double or triple parity. Maybe hyperthreading should be off.
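The layout tradeoff above mostly comes down to parity drives given up vs. disks per vdev. A quick sketch of the capacity side for the three 36-drive layouts I tried (raw capacity only - it ignores ZFS metadata and slop space, so real usable space is a bit lower):

```python
# Raw usable capacity vs. parity overhead for 36x 4TB drives
# arranged as the three RAIDZ layouts discussed above.
DRIVE_TB = 4

def usable_tb(vdevs, disks_per_vdev, parity):
    """Raw usable TB for a pool of identical RAIDZ vdevs."""
    return vdevs * (disks_per_vdev - parity) * DRIVE_TB

layouts = [
    ("1x 36-drive Z3", 1, 36, 3),  # ~1.15 GB/s write, CPU bound
    ("2x 18-drive Z2", 2, 18, 2),  # ~2.2 GB/s write
    ("4x  9-drive Z1", 4,  9, 1),  # slightly faster, ~half the CPU
]
for name, v, d, p in layouts:
    print(f"{name}: {usable_tb(v, d, p)} TB raw usable, {v * p} parity drives")
```

The interesting part is that going from 1x Z3 to 2x Z2 costs only one extra parity drive (132TB vs 128TB raw) while nearly doubling write throughput on this CPU.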
RAM use: In my typical basic usage on #2 (at most crawling through the array double-checksumming files), ZFS eats up about 18-19GB of the 32GB available. A full scrub increased that to about 21GB used. So far in my limited testing, #3 does not use much more than 22GB under light use; I will report back as I use it more.
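If the ARC growth bothers anyone, ZoL lets you cap it with the zfs_arc_max module parameter (value in bytes). A sketch of the config - the 24GiB figure is just an example value, not what I actually run:

```
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 24 GiB (example value)
# 24 * 1024^3 = 25769803776 bytes
options zfs zfs_arc_max=25769803776
```

The parameter can also be changed live via /sys/module/zfs/parameters/zfs_arc_max on ZoL, though the modprobe config is what makes it stick across reboots.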
AMD: I need to benchmark these with large arrays when I have time. I also have a Fujitsu microserver, a couple of HP MicroServers, and an Asus desktop board with unbuffered DDR3 ECC RAM support. Even though I strongly dislike Intel, I did not end up going with any of the AMD options, for several reasons:
- The Fujitsu microserver has a great AM3 board and is a well-designed server, but the PSU is proprietary and the board seems to run mostly off its many 12V lines, so it is not easy to transplant into another case. I have a bunch of low-power AM3 CPUs I want to benchmark in it, attached to an external JBOD, for comparison purposes.
- The HP MicroServer's board is great, but it also has proprietary mounting - if it just had standard mounting holes I would have used it in builds #1 and #2 above. If I only needed 6 drives max, I would just use an HP MicroServer as-is.
- My Asus micro-ATX desktop AM3 board is only rated for up to 16GB of RAM (I still need to test it with 4x 8GB) and would be hard to replace if it fails.
- I really wanted an AMD Supermicro board, but the only socket AM3 one is impossible to find and larger than micro ATX. AMD went mid-to-high-power massively multicore for its main server line and has no competitor to the low-power, small-board Xeon E3 and Atom C2000 series. A 16-core+ Opteron (or four) would be great for ZFS, but I leave this on 24/7...
More musings / overall conclusions:
I have also tried some higher-power Intel dual-processor boards that can take a lot more RAM, but it looks like more than 32GB of RAM is not needed, so I'll stick with the X9SC* series.
ZFS on Linux does all I need it to do, but FreeNAS has gotten very good recently, so I am recommending it to less technical friends who just need a reliable NAS. If you install the current FreeNAS, be SURE to dd /dev/zero over your target flash drive before installing onto it - we had lots of mysterious GPT-related bootloader failures until we figured that out!
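The zeroing step is just a plain dd over the stick. Something like the following, where /dev/da0 is an example device name - double-check yours first (camcontrol devlist on FreeBSD, lsblk on Linux), because dd will happily destroy the wrong disk:

```shell
# WARNING: destroys all data on the target device -- verify the
# device name before running! Zeroing the whole stick wipes both
# the primary GPT at the start and the backup GPT at the end,
# which is the leftover metadata that confuses the installer.
dd if=/dev/zero of=/dev/da0 bs=1M
sync
```

Zeroing the entire drive takes a few minutes on a USB stick, but it is the only way to be sure the backup GPT header at the end of the disk is gone too.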