Norco vs. other solutions


jcl333

Active Member
May 28, 2011
I just realized you can buy the 6-series now. May end up picking up a card in the near future. Thanks for the heads up.
Yeah, they look really interesting.

One thing to remember on the PSUs and those enclosures is that ideally you have a single-rail PSU. I once tried a sixteen-drive, Core 2 Duo system with a 1000W multi-rail PSU and had huge issues starting the thing due to splitting the 4-pin Molex connectors too many times into 5-in-3 enclosures. Swapping to single rail meant I powered the same system easily on a 750W PSU.
Yes, this is what I was saying up above but could not remember the exact issue.
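
For what it's worth, the back-of-the-envelope numbers line up (assuming a fairly typical ~2A draw at 12V per drive during spin-up; actual figures vary by model):

Code:
16 drives x ~2A at 12V = ~32A (~384W) just at spin-up
multi-rail 1000W PSU, ~18-20A per 12V rail: trips overcurrent protection if the Molex chains share one rail
single-rail 750W PSU, ~60A on one 12V rail: starts fine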

-JCL
 

jcl333

Active Member
May 28, 2011
If you need to replace the PSU, it is nice to have it be fully modular, that way you do not have to unbundle/untie/untangle any cables going to the motherboard.
Right, again, it's not THAT much trouble if you only have to do the ATX power connector, don't you think?

As for backing up my movies on my fileserver, the optical discs themselves are the backup. You can fit a whole lot of them in a fireproof safe if you take them out of the jewel cases.
Well, most of the stuff I have on ISO is from Netflix, so I would have to re-get all of it, and even if that were not the case, that is a lot of time to re-rip, especially if you have several hundred. I am not so much worried about a DR solution as about losing the array...

Also, if you use SnapRAID or FlexRAID to provide redundancy, rather than a striped RAID, then even if you lose more drives than can be recovered by parity data, you still have most of your movies intact. You only lose the ones on the failed drives.
Now see, this is why these forums are good. I was sort of looking for this solution but did not know what the product was called. Thanks for suggesting this. I was wondering what the people running large JBOD on Windows were actually doing; this is probably it. Do you use one of these? There are a lot of people using standard RAID; I wonder how many are doing JBOD and using one of these instead?

-JCL
 

PigLover

Moderator
Jan 26, 2011
Well, most of the stuff I have on ISO is from Netflix, so I would have to re-get all of it, and even if that were not the case, that is a lot of time to re-rip, especially if you have several hundred. I am not so much worried about a DR solution as about losing the array...

-JCL
Poor form there. DMCA might suggest that all rips are illegal, but most of us here have no problem ripping/storing DVD/BR that we own. Ripping and storing things we have rented or borrowed is still pretty much frowned upon...even by us 'scofflaws'.
 

jcl333

Active Member
May 28, 2011
Poor form there. DMCA might suggest that all rips are illegal, but most of us here have no problem ripping/storing DVD/BR that we own. Ripping and storing things we have rented or borrowed is still pretty much frowned upon...even by us 'scofflaws'.
Fair enough. I own 200+ Blu-rays and 1000+ DVDs, and I was running out of places to put them all, although initially I was doing it to avoid spending $20-30 per Blu-ray for something that is either bad or that I would probably only watch once (you know it is bad if you won't even waste the space for the ISO). The DMCA definitely frowns on ripping; I still don't know how AnyDVD hasn't been shut down, even being in another country and all.

-JCL
 

jcl333

Active Member
May 28, 2011
Also, if you use SnapRAID or FlexRAID to provide redundancy, rather than a striped RAID, then even if you lose more drives than can be recovered by parity data, you still have most of your movies intact. You only lose the ones on the failed drives.
OK, I have been reading up on FlexRAID and SnapRAID and related products. It seems like an interesting solution, but I have to say I am not impressed with the websites; it seems like a very non-mainstream solution to me. I don't know of any companies using anything like this in production; most are using traditional backups. The forums on the topic here don't seem too active to me.
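
The basic setup at least looks simple from the docs; a minimal snapraid.conf seems to be just a handful of lines (the paths here are made up for illustration):

Code:
# one dedicated parity drive, at least as large as the biggest data drive
parity /mnt/parity/snapraid.parity
# content files hold the checksums/metadata; keep copies on more than one drive
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
# each data drive is listed individually
disk d1 /mnt/disk1/
disk d2 /mnt/disk2/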

I see people mentioning using JBOD with non-hardware RAID controllers, presumably to go this route. It seems you would need to decide at the start whether you are going this way or not.

At this moment I am inclined to go with hardware RAID... but of course that still leaves the backup issue and the silent corruption issue. I don't know how big of a problem silent corruption is, but it seems worth thinking about.
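
One point in SnapRAID's favor on the silent corruption front, if I am reading the docs right: it keeps a checksum of every file, so a periodic check should catch bit rot (commands per the docs; I have not tried this myself):

Code:
snapraid sync    # update parity and the per-file checksums
snapraid check   # re-read the array and verify everything against the checksums
snapraid fix     # rebuild anything that came back damaged, using parity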

I was thinking of just building two arrays on two different RAID controllers, and then backing up the first one to the second, say on a weekly basis. With 24 slots I could easily do this, or put it in an external box or something. And if I only needed it weekly, I could leave it off or in a low-power/sleep state in the meantime.
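
The weekly copy itself should be trivial; on Windows, say, something like this scheduled task would do it (drive letters, paths, and times are just placeholders):

Code:
:: mirror array 1 to array 2 (note: /MIR deletes files on the target that no longer exist on the source)
robocopy D:\ E:\ /MIR /R:1 /W:5 /LOG:C:\logs\array-backup.log
:: run it every Sunday at 3 AM
schtasks /create /tn "WeeklyArrayBackup" /tr "robocopy D:\ E:\ /MIR /R:1 /W:5" /sc weekly /d SUN /st 03:00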

Patrick - I guess I am starting to get off topic here since I have decided to go ahead and try the Norco case, so I can move this to the appropriate forum if you like.

-JCL
 

john4200

New Member
Jan 1, 2011
SnapRAID is new and written by a non-native English speaker, so the website is not spectacular.

FlexRAID has been around a while, but the author seems to like to force users to be beta testers, and has never been really conscientious about documentation. But he does reply to questions in his forums, at least when he is not on an extended sabbatical from FlexRAID.

Obviously, they are both niche products; people with terabytes of static data are not a large percentage of the population. I prefer SnapRAID, since it is open source. FlexRAID is more mature, but it also depends on the author, who has been erratic in the past. If the SnapRAID author disappeared, it is likely some other developer would take over, since it is open source. With FlexRAID, the author says he would release the code if he abandoned it, but I take that with a grain of salt since he has disappeared for months with no word in the past.

I am running a Linux server, so I did not have to choose between hardware RAID and snapshot RAID at the beginning. Linux has good software RAID with mdadm, and that can exist alongside snapshot RAID; all I needed was HBAs with enough ports for 24 hot-swap bays. With Windows, you may need to decide earlier, since Windows software RAID is not very good. That said, most hardware RAID cards can be configured to pass through drives, so you could change your mind later, albeit at the expense of wasting money on hardware RAID cards when you only needed HBAs.
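
For anyone curious, the mdadm side is only a couple of commands; the device names below are examples, so adjust for your drives:

Code:
# create an 8-drive RAID6 array from the drives on the HBAs
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
mkfs.ext4 /dev/md0
# record the array so it assembles automatically at boot (config path varies by distro)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf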

I agree that this conversation should probably be moved to a different thread and/or group.
 

shanester

New Member
May 18, 2011
This is my first build of this type of complexity, so call me a noob. I have been working on the specs for my All-in-One storage server, and although the Norco 4220 is overkill for me at the moment (initially using 6-8 drives for data - RAIDZ), it does give me future expandability and the overall cost is manageable. My big concerns are noise and power; I don't want any more jet engines in my basement. I am leaning towards the SM X9SCM-F with ESXi 4.1 and OpenIndiana/napp-it for the storage OS. I am going to replace the rear 80mm fans and use three 120mm quiet PWM fans in the middle (not sure which brand yet and I am open to suggestions). Here are my noob questions of the day (no drinking while reading, I am not going to clean your screen :))
1. The MB supports 5 PWM fan headers. The 4220 has 5 fans (two 80mm and three 120mm) and the CPU fan would require one header, so that is 6 in total. Would I need to purchase a "Y" adapter cable for two of the fans?
2. How are the fan speeds controlled? Automatically? Through software?
3. I purchased an IBM M5015 SAS card. It is my understanding that I would use 2 SFF-8087 mini-SAS reverse breakout cables to support the 8 drives that are connected to the backplane. Is this correct?

TIA
 

Patrick

Administrator
Staff member
Dec 21, 2010
1. IIRC the RPC-4220 has Molex power connectors, so you probably want to add a fan controller anyway (the 80mm rear fans are loud). I have the RPC-4020 and RPC-4224, so I am guessing it is similar.
2. The 4-pin PWM fans connected to the motherboard basically use ambient temps to manage fan speeds, if you enable the option on a server motherboard. This is not ultra granular, so you may still end up with a fan controller.
3. You should be able to use SFF-8087 to SFF-8087 standard cables. Both card and backplane use those connectors.
 

apnar

Member
Mar 5, 2011
3. I purchased a IBM M5015 SAS card. It is my understanding that I would use 2 SFF-8087 Mini SAS Reverse breakout cables to support the 8 drives that are connected to the backplane. Is this correct?
Assuming you mean the IBM M1015, I concur with Patrick that you just need normal SFF-8087 to SFF-8087 cables (you would need the breakout cable if you went with the Norco 40xx, as opposed to the 42xx, which uses SFF-8087). One thing to note is that on the IBM card the SAS ports point upward, and they are near the back of the case. In combination with the 3x120 fan plate in the Norco (which moves the gap for cables off center), you will likely need cables slightly longer than the seemingly standard 0.5m. I found that the Supermicro CBL-0281L cables, which break out into 8 smaller cables between the ends, allowed me to make the needed bends and were long enough. They aren't the best-looking cables, though.
 

shanester

New Member
May 18, 2011
1. IIRC the RPC-4220 has Molex power connectors, so you probably want to add a fan controller anyway (the 80mm rear fans are loud). I have the RPC-4020 and RPC-4224, so I am guessing it is similar.
2. The 4-pin PWM fans connected to the motherboard basically use ambient temps to manage fan speeds, if you enable the option on a server motherboard. This is not ultra granular, so you may still end up with a fan controller.
3. You should be able to use SFF-8087 to SFF-8087 standard cables. Both card and backplane use those connectors.
Thanks Patrick. Do you have any recommendations for a fan controller that would work for my config?



Assuming you mean the IBM M1015, I concur with Patrick that you just need normal SFF-8087 to SFF-8087 cables (you would need the breakout cable if you went with the Norco 40xx, as opposed to the 42xx, which uses SFF-8087). One thing to note is that on the IBM card the SAS ports point upward, and they are near the back of the case. In combination with the 3x120 fan plate in the Norco (which moves the gap for cables off center), you will likely need cables slightly longer than the seemingly standard 0.5m. I found that the Supermicro CBL-0281L cables, which break out into 8 smaller cables between the ends, allowed me to make the needed bends and were long enough. They aren't the best-looking cables, though.
Apnar, you are correct, I meant the M1015. I will take a look at the Supermicro cables.
 

Patrick

Administrator
Staff member
Dec 21, 2010
So I have had really poor luck with fan controllers. You can see an example of the fan controller(s) I use here, and you can read more about them in the update. I haven't found one in the last 2 years of use that has been super reliable.

On the M1015 to Norco cables: these are the Molex brand ones from iPCDirect (Norco) on Amazon.com. You do not need anything fancy, and at ~$25 for two it is not overly expensive if you decide you want shorter/longer cables.