How are these specs?


smccloud

Member
Jun 4, 2013
325
12
18
I just proposed this to IT at work after they proposed a $100k storage upgrade. I mainly want to make sure I have enough RAM. I'm currently thinking OpenIndiana + napp-it, but I can't remember how to figure out how much RAM an L2ARC needs so it doesn't starve your ARC (my rough attempt at the math is below the parts list). It will be set up as a ZFS pool of two 9-disk RAIDZ3 vdevs shared out via NFS to ESXi (unless nine mirrors would be faster?).

  • Norco RPC-4224
  • Super Micro X9DR3-F-O EATX Motherboard (I've read elsewhere the RPC-4224 will fit an EATX motherboard even though it says it won't)
  • 2 Intel Xeon E5-2620 Hex Core CPUs
  • 16 Kingston 16GB ECC Registered DDR3 DIMMs (256GB total)
  • Athena Power 800W Mini-Redundant PSU
  • 3 Quad molex to molex power splitters
  • 3 Adaptec 7805H SAS HBAs
  • 6 Adaptec HD mini-SAS to mini-SAS cables
  • 18 Western Digital RE 4TB HDDs
  • 2 80GB Intel 335 Series SSDs (OS Mirror)
  • 2 240GB Intel 520 Series SSDs (ZIL)
  • 2 240GB Intel 520 Series SSDs (L2ARC)
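
My rough attempt at the L2ARC math, for checking: the rule of thumb I half-remember is that every block cached in L2ARC keeps a header in ARC, at roughly 180 bytes per record. Treat that constant and the record sizes as assumptions:

Code:
# Rough ARC overhead for 480GB of L2ARC (all constants are assumptions).
L2ARC_BYTES = 2 * 240 * 1024**3    # two 240GB Intel 520s
HEADER_BYTES = 180                 # assumed ARC header cost per L2ARC record

for recordsize_kb in (8, 32, 128):
    records = L2ARC_BYTES // (recordsize_kb * 1024)
    overhead_gb = records * HEADER_BYTES / 1024**3
    print(f"{recordsize_kb:>4} KB records: ~{overhead_gb:.1f} GB of ARC headers")

If that's right, even 8 KB records (the worst case for VM-style I/O) only cost ~10 GB of headers, so 480 GB of L2ARC shouldn't starve a 256 GB ARC. Corrections welcome.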
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Personally, I don't think the Norco chassis and Athena power supply are well built enough for non-home use. I'd step up to a Supermicro chassis that has been designed for that motherboard. Also, if this is a storage server and you don't plan to run something on it that requires tons of processing power, you don't need two of those CPUs.
 

smccloud

Member
Jun 4, 2013
325
12
18
So could I step down to a single-CPU motherboard and a single CPU, or are you saying two lower-power CPUs so I can still have 16 DIMMs? And sadly, the Norco case is probably better built than the last HP MSA60 we got in.
 

smccloud

Member
Jun 4, 2013
325
12
18
Updated case is the Supermicro SC846A-R1200B and the CPUs are Intel Xeon E5-2603s.

I would like to be able to enable deduplication, so I'm guessing dual quad-cores should be fine for that.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Updated case is the Supermicro SC846A-R1200B and the CPUs are Intel Xeon E5-2603s.

I would like to be able to enable deduplication, so I'm guessing dual quad-cores should be fine for that.
Since you want 16 DIMMs, I now see that you do need dual CPUs, if only for the memory slots. As for CPU, de-duplication does count as a workload that uses lots of processing power, so I'd spend the extra $400 and stick with the 2620s. You get more GHz, more cores, way more threads, and more memory bandwidth for the extra money. By the way, I do not have a good mathematical model of exactly how much CPU ZFS uses for de-dupe, so I'm being conservative in wanting perhaps more than is necessary.

Also, we all know the story: "I bought chassis X, but I really wish that I had bought chassis Y with more drive slots." Given that, have you seen the SC847 series of chassis? I do worry about vibration when putting too many drives into one box, but most of these storage servers don't end up being ultra high performance anyway. Even in the worst case, I don't see the 36-drive chassis vibrating itself into being slower than the 24-drive chassis.
 
Last edited:

smccloud

Member
Jun 4, 2013
325
12
18
Since you want 16 DIMMs, I now see that you do need dual CPUs, if only for the memory slots. As for CPU, de-duplication does count as a workload that uses lots of processing power, so I'd spend the extra $400 and stick with the 2620s. You get more GHz, more cores, way more threads, and more memory bandwidth for the extra money.

Also, we all know the story: "I bought chassis X, but I really wish that I had bought chassis Y with more drive slots." Given that, have you seen the SC847 series of chassis? I do worry about vibration when putting too many drives into one box, but most of these storage servers don't end up being ultra high performance anyway.
The extra bays aren't worth the cost right now. Hell, it's also just an attempt to get VMs off the slow-ass OpenFiler system they're on now, and I'm not sure IT will go for it - I'm putting together an "in case they do" spec right now. It also looks like my HBAs aren't a good choice for OpenIndiana (or ZFSguru, etc.), so what is a good supported 2-port HBA? How about the LSI 9207-8i?
 
Last edited:

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
For ZFS and that motherboard/chassis, the LSI 9207-8i. Really, even a 9211-8i is fast enough for 18 spinny drives, but the 9207 is a far better card, the price difference isn't huge, and with enough SSDs you'd see the performance difference.

Also, for VMs - if you have more than one or two - you could consider a separate all-SSD disk group. Has anyone done this with de-duplication enabled? How is ZFS de-dupe performance with VM workloads?
 
Last edited:

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Newegg doesn't list the 9200-8i, so the 9207-8i it is.

Here is the link to the Newegg wishlist, easier than redoing all the items manually. Is 256GB of RAM enough for 480GB of L2ARC?

Newegg.com - Once You Know, You Newegg
ZFS will use all of the RAM you throw at it, but it does not "require" that much at all... until you turn on de-dupe, of course. See: How To Size Main Memory for ZFS Deduplication

The Newegg list looks pretty good to me. Two things:

1) How long will those 20 drives last you given your anticipated data growth rate?
2) All that data and just two gigabit Ethernet links? Is that enough network bandwidth for you?

Excessively long data mining run complete - 6TB - getting back to work now. Good luck!
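
A minimal sketch of the arithmetic from that article, applied to this build - the ~320 bytes of RAM per unique block is the commonly cited DDT figure, and the pool size and average block sizes are assumptions on my part:

Code:
# Rough dedup-table (DDT) RAM estimate for the proposed pool.
# Two 9-disk RAIDZ3 vdevs of 4TB drives -> 2 * (9 - 3) * 4 = 48TB of data space.
POOL_DATA_TB = 2 * (9 - 3) * 4
DDT_ENTRY_BYTES = 320              # commonly cited RAM cost per unique block

for avg_block_kb in (8, 64, 128):
    blocks = POOL_DATA_TB * 1024**4 // (avg_block_kb * 1024)
    ddt_gb = blocks * DDT_ENTRY_BYTES / 1024**3
    print(f"{avg_block_kb:>4} KB avg block: ~{ddt_gb:,.0f} GB of DDT")

That's roughly 120 GB of DDT at 128 KB average blocks and ~1.9 TB at 8 KB, so with VM-sized records 256 GB of RAM is marginal for dedup on a full pool. The 480 GB of L2ARC itself is the cheap part - its ARC headers only cost a few GB.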
 
Last edited:

smccloud

Member
Jun 4, 2013
325
12
18
Hmm, looks like I would need more than 256GB of RAM, but I'm not sure how much more yet. I guess dedup could be left off. Gzip (zlib) compression would be on of course, although given that most files would be VMDKs I doubt much compression would occur.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I agree, dba. If this is commercial and you're spending $100k, why skimp on the chassis/PSU? Go with quality here - something like the Supermicro SC84x-series chassis. You might double the cost of the chassis/PSU, but in the long run you will be MUCH happier with it (redundant PSUs actually purpose-built for your motherboard, LEDs that work for years, backplanes that don't break just because, better fan/cooling integration with the motherboard, etc.).

Norco is awesome for home users where cost is king. For commercial use, your time fixing it later is worth MUCH more than the few $$$ you save up front.
 

smccloud

Member
Jun 4, 2013
325
12
18
I agree, dba. If this is commercial and you're spending $100k, why skimp on the chassis/PSU? Go with quality here - something like the Supermicro SC84x-series chassis. You might double the cost of the chassis/PSU, but in the long run you will be MUCH happier with it (redundant PSUs actually purpose-built for your motherboard, LEDs that work for years, backplanes that don't break just because, better fan/cooling integration with the motherboard, etc.).

Norco is awesome for home users where cost is king. For commercial use, your time fixing it later is worth MUCH more than the few $$$ you save up front.
Well, IT basically said "good thinking outside the box," but they want more NetApp units. Naturally, I suggested a Synology RS10613xs+ :D
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Well, IT basically said good thinking outside the box but they want more NetApp units. Naturally, I suggested a Synology RS10613xs+ :D
You need OnCommand :) Become more box-logo indifferent whilst giving IT the NetApp they want.
 

smccloud

Member
Jun 4, 2013
325
12
18
You need OnCommand :) Become more box-logo indifferent whilst giving IT the NetApp they want.
I just want more money in the software development budget to get us new computers. Our first-gen Core i5s are starting to show their age.
 

Aluminum

Active Member
Sep 7, 2012
431
46
28
Random comments - don't cost-cut too deep:

I just proposed this to IT at work after they proposed a $100k storage upgrade. I mainly want to make sure I have enough RAM. I'm currently thinking OpenIndiana + napp-it, but I can't remember how to figure out how much RAM an L2ARC needs so it doesn't starve your ARC. It will be set up as a ZFS pool of two 9-disk RAIDZ3 vdevs shared out via NFS to ESXi (unless nine mirrors would be faster?).

  • Norco RPC-4224
  • Super Micro X9DR3-F-O EATX Motherboard (I've read elsewhere the RPC-4224 will fit an EATX motherboard even though it says it won't)
  • 2 Intel Xeon E5-2620 Hex Core CPUs
  • 16 Kingston 16GB ECC Registered DDR3 DIMMs (256GB total)
  • Athena Power 800W Mini-Redundant PSU
  • 3 Quad molex to molex power splitters
  • 3 Adaptec 7805H SAS HBAs
  • 6 Adaptec HD mini-SAS to mini-SAS cables
  • 18 Western Digital RE 4TB HDDs
  • 2 80GB Intel 335 Series SSDs (OS Mirror)
  • 2 240GB Intel 520 Series SSDs (ZIL)
  • 2 240GB Intel 520 Series SSDs (L2ARC)
From my experience, if you underbid too much it won't be taken seriously, even if it would be a better choice. It's some kind of mental block people have.

If this is a work machine, they can afford a real Supermicro chassis; trying to save a few hundred there isn't worth the quality drop.
Norcos are great when you are paying out of your own hobby budget - beyond that, no way.

For pure HBAs, why the Adaptec? Look at LSI SAS2308-based ones; the features should be the same as that model.

If they were prepared to spend $100k, use true SAS hard drives (similar to your model: WD or Seagate).

Use SSDs with capacitors for the ZIL (e.g. the Intel 320 series) and under-provision them on purpose. If your ZIL goes toast, you go toast.
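
A slog only has to absorb what lands between transaction-group commits, which is why heavy under-provisioning costs you nothing. A minimal sketch, assuming the illumos-era 5-second txg interval and two saturated GbE links as the worst-case ingest:

Code:
# Why a 240GB SSD can be massively under-provisioned as a slog.
TXG_SECONDS = 5            # assumed default txg sync interval
INGEST_GB_S = 2 * 0.125    # assumed worst case: 2x saturated GbE, in GB/s

slog_gb = 2 * TXG_SECONDS * INGEST_GB_S   # ~2 txg's worth kept in flight
print(f"~{slog_gb:.1f} GB of slog is ever actually used")   # ~2.5 GB

The rest of the drive becomes spare area, which keeps write latency consistent and stretches the SSD's endurance.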

I would look at Samsung 840 Pros for L2ARC if going for bang/buck, or maybe step up to Intel's DC series.