NetApp Storage Array Help and Server OS Recommendations


t0ny84

New Member
Hi,

Firstly to Moderators - apologies if this is located in the wrong section of the forum. Happy for it to be moved (or I can re-add it into the correct section).

I am new to the world of storage arrays and am looking to buy a NetApp Class 3650 Model 0834 Storage Array and use it with an IBM System X QLogic QLE2562-IBMX 8Gb 2-port PCIe HBA (00Y5629) card.

I am hoping someone might be able to answer the following, or point me to where I can find the answers:
1) Will this NetApp Storage Array and IBM HBA card work together without issues (or with only minor ones)?

2) Does anyone know where to locate datasheets / manuals for either of these items? I haven't been able to find any documentation, setup manual, etc. for the Storage Array.

3) The back of the NetApp Storage Array appears to have dual NetApp Drive Module I/F-4 100120-113 (FRU PN 45822-00) units and dual power supplies.
a) Do I need to plug both of these units into the HBA card, or is that only required to add a level of redundancy?
b) Same question for the power supplies?

4) Any recommendations for software to run on my server?
I currently have Proxmox installed on my server just to play around with; I'm not really using it for anything apart from a few Docker images (nginx, Portainer). If I get this Storage Array I am considering swapping to either TrueNAS / FreeNAS or OpenMediaVault. Any feedback or experience would be great.

Thanks in advance!
t0ny84
 


Chriggel

Member
1) I've googled it and it seems there are FC or SAS/FC combo controllers available for this array, but what's shown in your images is a SAS-only controller. In that case it will not work with the FC HBA. You need a SAS HBA to connect to these ports; the connector is SFF-8088.

2) This is enterprise equipment and fairly old. It could be that public documentation was never available (it sits behind a support paywall), or that it was taken down because the product is obsolete.

3) It's for redundancy. I'm not specifically familiar with this device, but SAS supports dual pathing, and arrays like this one are meant to be stacked and cabled so that the host can reach every drive over two different paths. You'll most likely be able to see all drives on both controllers, though that can depend on the controller configuration. Split configurations are also a thing, where specific drives are mapped to specific controllers; that may reduce redundancy but allows other use cases.

Same for the power supplies: they're for redundancy. The unit will work with one, but that will trigger a fault condition, possibly an audible alarm, and may affect fan speeds.
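Once it's cabled up, one quick way to see the dual-path behaviour from the host side is to group block devices by WWN: a drive reachable over both controllers will typically show up as two /dev/sdX entries with the same WWN (dm-multipath is what you'd normally use to collapse them into one device). A minimal sketch, assuming a Linux host with lsblk available:

```python
# Rough sketch: list disks and group them by WWN so that the same physical
# drive showing up twice (once per controller path) is easy to spot.
# Assumes a Linux host with util-linux's lsblk installed.
import subprocess
from collections import defaultdict

out = subprocess.run(
    ["lsblk", "-d", "-n", "-o", "NAME,WWN"],
    capture_output=True, text=True, check=True,
).stdout

by_wwn = defaultdict(list)
for line in out.splitlines():
    fields = line.split()
    if len(fields) == 2:          # devices without a WWN are skipped
        name, wwn = fields
        by_wwn[wwn].append(name)

for wwn, names in sorted(by_wwn.items()):
    note = "  <- same disk, two paths" if len(names) > 1 else ""
    print(f"{wwn}: {', '.join(names)}{note}")
```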

4) Honestly, it's a SAS array. This doesn't affect the choice of the host OS at all. It will present its drives to the host and that's it.

Overall, my recommendation would be to really think through whether you want or need such a device. There are much simpler and more elegant ways to connect 12 x 3.5" HDDs. These devices are meant for enterprise operation, probably together with a NetApp service contract, so some things may be paywalled. If you don't have that, and don't even plan to use the redundancy features it offers when connected to other arrays of its kind (or to two hosts, or to two controllers of the same host), I'd probably advise against it.

In my opinion, such an array is basically scrap at this point.

For a 12 x 3.5" 19" rack solution, you might want to look at servers with an appropriate chassis configuration, which completely eliminates the need for external SAS connections and doesn't throw you in at the deep end with NetApp and the like. If you want to keep your server and only need a way to connect more drives, you could think about moving it into a new case and doing it DIY, or building your own DIY JBOD to add external drives to your system.

All of the above will be a much better experience for someone who's new to this field.
 

t0ny84

New Member
Hey Chriggel,

Thanks so much for your input and advice.

I ended up buying the unit and am waiting for it to be delivered. The main reason I bought it was the 13 x 4TB Seagate EXOS drives (SATA) that come with it; for the price it was too hard to pass up!

It's insane that companies still hide information or keep paywalls in place for redundant / grandfathered hardware.

Now that I'm aware it needs a SAS HBA with SFF-8088 connectors, I will probably end up buying one once the unit arrives.

As the hard drives are SATA, I have been considering removing them, selling the enclosure and HBA, and using a SATA backplane to connect all the drives in a smaller, home-friendly case, but that will definitely be a future endeavour. I would love either a SAS-to-Raspberry-Pi or SATA-backplane-to-Raspberry-Pi style setup, but I don't think that will be an option until Pis become reasonably priced again and a PCIe setup exists.

Or even something like this 20 Port PCIE Expansion Card PCIe SATA 3.0 Controller Adapter - eBay Link

I understand that the unit doesn't affect the choice of host OS etc.; I am just trying to weigh up possible options. At the moment I am using Proxmox but only have one container with Docker / nginx installed on it. I am leaning towards changing to either OpenMediaVault or TrueNAS, as I am not doing anything with Proxmox that one of those two cannot do. Alternatively, if / when it becomes possible, I'd move to the Raspberry Pi style setup above.

Thanks again,
t0ny84
 

Chriggel

Member
These Seagate drives may come with custom NetApp firmware, in which case they won't work on another controller, so be prepared for that. There might be a way around it, but I'm not familiar with the process. Custom firmware on drives for storage arrays isn't unusual though.
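If you want to check what actually arrives before planning around the drives, a quick way is to dump each disk's model, firmware revision and sector size; vendor firmware strings and non-512-byte sector formats (which some arrays use) are both easy to spot that way. A minimal sketch, assuming smartmontools is installed and using placeholder device names:

```python
# Rough sketch: report model, firmware revision and sector size per drive
# using smartctl (smartmontools). The device list is a placeholder.
import subprocess

devices = ["/dev/sda", "/dev/sdb"]  # hypothetical examples, adjust to suit

for dev in devices:
    out = subprocess.run(
        ["smartctl", "-i", dev],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith(("Device Model", "Firmware Version", "Sector Size")):
            print(f"{dev}: {line.strip()}")
```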

But the really troubling thing here is the 20-port SATA controller. Don't do it. Just don't. This is NOT a 20-port SATA controller. It uses JMB575 port multipliers that are either connected through a PCIe switch or even cascaded. JMicron says they support cascading for up to 15 drives, so that probably means three chips, but their website isn't clear on that. The manufacturer of this card either disregarded the specs, used a 4-to-1 PCIe switch, or used a 2-to-1 PCIe switch and cascaded three of the JMB575s while one is standalone.

Whatever it is they did, this is crap. When I see this, I think crappy performance and unstable operation. Exactly how you want your storage arrays to be: crappy, slow and unstable. Also, why would anyone want 20 individual SATA ports on something like this? That means 20 individual cables, or reverse breakout cables. Multilane connections exist for a reason.
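To put rough numbers on the performance point, here's some back-of-the-envelope arithmetic; the uplink and fan-out figures are assumptions for illustration, not the card's actual spec:

```python
# Back-of-the-envelope: bandwidth per drive behind shared uplinks.
# All figures below are assumptions for illustration, not measured values.
host_uplink_mb_s = 2000       # assume ~2 GB/s usable from the PCIe slot
sata_port_mb_s = 600          # one SATA 3.0 link feeding each multiplier
drives = 20
drives_per_multiplier = 5     # JMB575-style 1:5 fan-out

# If every drive streams at once, the host link is split 20 ways...
per_drive_host = host_uplink_mb_s / drives
# ...and each group of 5 drives also shares its multiplier's single uplink.
per_drive_mux = sata_port_mb_s / drives_per_multiplier

print(f"host-link limited:       {per_drive_host:.0f} MB/s per drive")
print(f"multiplier-link limited: {per_drive_mux:.0f} MB/s per drive")
# Either bound lands around 100-120 MB/s per drive, well under the
# ~250 MB/s a modern 4 TB+ HDD can sustain sequentially.
```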

This product was clearly designed by someone with no knowledge of what they're doing and no common sense. Or they knew exactly what they were doing and that would be even worse.
 

t0ny84

New Member
Thanks again Chriggel for your knowledge.
After doing some more reading on the 20-port SATA card and how it works, I have definitely ruled it out.
If I end up going down the internal path it would be with a backplane and an internal SAS HBA. I am in Australia, and internal HBAs seem to be a lot more wallet-friendly than external ones.

Thanks so much again!
t0ny84