Dual Norco 4224 (re) build


MER

New Member
Nov 3, 2019
Sydney
Hi all,
First up: thanks so much for all the great work you all do here. Some of the information available is invaluable, especially for an ex-IT person like myself who doesn't have the time these days to keep his finger on the pulse of everything that's going on.

I was hoping for a quick sanity check.
I'm about to start the process of consolidating two servers. One of them is getting long in the tooth, and I just don't need two servers anymore; I would prefer to have one system.

I currently have in operation:
Server1 (Server 2013)
Norco 4224
Gigabyte P55A-UD4P
16GB RAM
Intel Core i7-860
Areca ARC-1261ML (running 16 SATA drives)
Onboard SATA w/ reverse breakout (running 8 drives)
124TB Storage
SSD Boot / SSD Swap Drive and Media Cache
Storage managed via StableBit DrivePool
Plex Server (only used for Music and Photos)

Server2 (Windows 10 64bit Pro)
Norco 4224
Gigabyte GA-Z170X-Gaming 7
32GB RAM
Intel Core i7-6700 @ 4 GHz
Areca ARC-1261ML (running 16 SATA drives)
Onboard SATA w/ reverse breakout (running 8 drives)
80TB Storage
SSD Boot / SSD Swap Drive and Media Cache
Storage managed via StableBit DrivePool
2 x VMs (production)
Emby Server w/ Hardware Decoding - Large Library

What I would like to do is consolidate Server1 into Server2.
I've been happy for some time running Windows 10 as the platform, as I just need it to be easy to maintain and to let me run whatever additional apps I want without any issue.

I've heard that Windows 10 Pro can only support 24 drives per controller, which is OK.

So here's my design (inspired by information I have found on this forum).
If someone could be so kind as to sanity check it: I know a few of the articles and reviews here are old, but I'm sure a lot of them are still relevant for home usage.

Server1
Remove the CPU / CPU fan (?) and all RAID cards from Server1, and use the chassis / motherboard just to power the HDDs and the PCIe slot
Install an HP SAS Expander (36 channels) - connect 24 channels to the Norco backplane
Install a CableDeconn dual Mini SAS SFF-8088 to SAS36P SFF-8087 adapter in a PCI bracket (https://www.amazon.com/CableDeconn-SFF-8088-SFF-8087-Adapter-Bracket/dp/B00PRXOQFA) and connect it to the HP SAS expander

Server2
Install an LSI SAS 9211-8i into the spare PCIe x8 slot - dual-link to Server1 using 2 x SFF-8088 connectors
Retain the Areca 1261ML
Retain the breakout connection

Does this appear to be OK?
I think I have covered most things, but as I said, I'm not really across things as much as I used to be.

The idea is to end up with a server where I can easily swap drives when they're faulty, or when I need to upgrade. DrivePool gives me the advantage of multiple copies stored on multiple disks without using parity, BUT with the disadvantage of using LARGE amounts of space in the process.
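
As a quick back-of-the-envelope on that space cost (just a sketch; it assumes a uniform 2x duplication level across the combined pool, whereas DrivePool actually lets you set this per folder):

# Usable capacity under whole-file duplication (no parity).
# Assumption: every file is kept as 2 copies somewhere in the pool.
raw_tb = 124 + 80                 # Server1 + Server2 raw pool capacity, in TB
duplication = 2                   # copies kept of each file
usable_tb = raw_tb / duplication
print(f"{raw_tb} TB raw -> roughly {usable_tb:.0f} TB of unique data")
# Output: 204 TB raw -> roughly 102 TB of unique data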

I would love your feedback

Thank you!

Matt
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
I don't know your motherboard in Server1, but I think you will need the CPU (and fan) in there for power to be maintained when you hit the on switch. The question is whether that burns too much power for you. It is handy using the motherboard to power the HP SAS expander; do update its firmware to the latest, though.

If you have SATA drives you'll get a max of SATA2 speed through the expander; if you have SAS drives you should be able to get SAS2 speeds. The SAS expander needs to be the latest hardware revision and updated to the latest code (IIRC 2.10 or something); this is well documented here and in other places. IIRC, avoid the yellow-colored HP SAS expanders.

If the motherboard is too much of a draw with the CPU and CPU fan, consider a low-power CPU that works in that motherboard, *or* use a JBOD power board, which will work with the front switch, *or* use a power supply tester plug (cheap) and the on/off switch on the power supply itself.
With the JBOD board or power supply tester you'll need a single-slot mining adapter and a power cable from the PSU to power up the SAS expander. That also means you may have to drill/tap holes to mount the mining adapter securely to the chassis, but that should be pretty easy in that chassis (I have a 4220 and looked at doing just that).

On the windoze side (and I'm not a Storage Spaces kind of person; I do most of my storage in dedicated VMs or on bare metal):
I *thought* the number of drive letters was the real limitation? That said, you can use mount points etc., so maybe the real answer is 24 drive letters, but you can mount further drives into folders on those? Idle speculation on my part, so someone more informed please help me learn too!

Uhmmm, are you using the 9211 in JBOD mode or RAID mode? My Dell H310 can do RAID 5 with the Dell RAID firmware, but I think the number of drives per volume is limited. Are you using the Areca as a RAID controller (maybe one big volume) or as JBOD? If you are using RAID mode all around, I think the number of volumes you have is the real question.
 

MER

New Member
Nov 3, 2019
Sydney
Thanks so much for your response, itronin!

If the CPU and fan need to stay in motherboard 1, I can live with that. I would prefer to reduce power, but I've been running this configuration for so long now that I'm used to it. I'm lucky enough to have a 10 kW solar system with 2 x Tesla Powerwall batteries, so most of my power is green and free these days.

Thanks for the advice on the firmware and yellow cards. I will ensure I follow that.
All drives are SATA. That should be OK, though, as I don't have huge amounts of data moving at once; maybe only a 4K stream or three at any given time.
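
For my own peace of mind, a rough sanity check on those link speeds (a sketch using theoretical line rates; real-world throughput lands lower):

# Dual-link uplink (2 x SFF-8088 = 8 lanes of SAS2) vs SATA2 per drive.
# SAS/SATA use 8b/10b encoding, so 10 bits travel per byte of payload.
lanes = 8
sas2_gbps = 6.0                    # HBA <-> expander, per lane
sata2_gbps = 3.0                   # expander <-> each SATA drive
uplink_mb_s = lanes * sas2_gbps * 1000 / 10     # ~4800 MB/s aggregate
per_drive_mb_s = sata2_gbps * 1000 / 10         # ~300 MB/s per drive
print(f"uplink ~{uplink_mb_s:.0f} MB/s, per drive ~{per_drive_mb_s:.0f} MB/s")
# A 4K remux stream is roughly 50-100 Mb/s (6-12 MB/s), so even several
# simultaneous streams come nowhere near either limit.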

I will definitely investigate the JBOD power board; I didn't realise those existed. That would probably be my preferred method, to keep things simple and the heat load as low as possible.

As far as Windows goes, from the limited information I can find, I believe there MAY be a limit of 128 drives, but I can't see that stated clearly anywhere.
You are right about the 24 drives, but that only applies when drive letters are utilised.
As I only mount the drives into folders and let StableBit DrivePool take care of the lettering, I may end up with only 3 of the 24 drive letters occupied.
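
(For anyone curious how those folder mounts work under the hood: pointing a volume at an empty NTFS folder instead of a drive letter is a single documented Win32 call, SetVolumeMountPointW. A minimal Python sketch via ctypes follows; the GUID and folder path are made-up examples, you can list real volume GUIDs with mountvol, and it needs admin rights.)

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.SetVolumeMountPointW.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR]
kernel32.SetVolumeMountPointW.restype = wintypes.BOOL

def mount_volume_into_folder(volume_guid_path, folder):
    # Both paths need a trailing backslash; the folder must be an existing,
    # empty directory on an NTFS volume.
    if not kernel32.SetVolumeMountPointW(folder, volume_guid_path):
        raise ctypes.WinError(ctypes.get_last_error())

# Example call (GUID is illustrative only):
# mount_volume_into_folder("\\\\?\\Volume{0a1b2c3d-1111-2222-3333-444455556666}\\",
#                          "C:\\Mounts\\Disk01\\")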

As I'm using DrivePool, I'm avoiding RAID and Windows Storage Spaces altogether. I want to be able to expand storage easily (preferably hot-swappable), and DrivePool lets me do that with no issues. So in answer to your question, both the motherboard ports and the Areca RAID cards will all be running in JBOD mode.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
SM JBOD power board on the bay

That's the version 2 board. I might have one lying about; I'll know when I get home tomorrow. I might be able to help you out with that if you go that route, but you will most assuredly need to drill/tap if you want to use standoffs to mount it - or, I suppose, duct tape! ;)
 

MER

New Member
Nov 3, 2019
Sydney
SM JBOD power board on the bay

That's the version 2 board. I might have one lying about; I'll know when I get home tomorrow. I might be able to help you out with that if you go that route, but you will most assuredly need to drill/tap if you want to use standoffs to mount it - or, I suppose, duct tape! ;)
Thanks again.
One question, though: how will I power the SAS expander? I believe the HP doesn't have a Molex connector; it can only take power via PCIe (?)
 

Dave Corder

Active Member
Dec 21, 2015
Thanks again.
One question, though: how will I power the SAS expander? I believe the HP doesn't have a Molex connector; it can only take power via PCIe (?)
Use a 1x to 16x (mining) adapter, like this: https://www.amazon.com/HEYFIT-Powered-Extension-Graphics-Mining-Ethereum/dp/B07FN9WHWN/ (just an example; you can easily find them on Amazon and the bay with SATA, GPU, or Molex power inputs). Works great (source: I used them myself before I switched expanders).

Edit: you don't actually need to plug the 1x end into a motherboard, either. Just the 16x end and power.
 

MER

New Member
Nov 3, 2019
Sydney
This was an interesting post I found:


Can anyone confirm if this is the case?
If so, it may save the expense of a JBOD PSU and mining adapter, as the motherboard is already installed.
If the CPU and memory can be removed, I wouldn't imagine much power would be drawn.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
I'd say give it a go; it will either work or it won't. If it does work, the worst I can think of is a speaker buzzing at you momentarily at power-on, meaning no memory / no CPU found. It will certainly do no harm to the HP SAS expander. I'm actually really curious to try this on an SM IPMI board with no CPU and a 4GB stick, to see if I can induce the board to power on via IPMI and stay on.