HBA for 10TB and larger hard disks


Sixthofmay

New Member
Jan 15, 2019
There seems to be an 8TB barrier with every HBA and RAID card I've tried. Anyone found one that works with the 10TB to 14TB drives? I plan to use it with the 20 to 40TB MAMR or HAMR drives when they come out.

I would like to use 4 of these cards in one box. My motherboard has four 8x slots. 60 to 72 drives total.

Staggered spinup is mandatory.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I've never run into any size limitations with my HBAs (aside from the ye olde 2TB limitation of the pre-SATA2 days). What HBA/RAID card(s) are you using?

I'm currently using an old M1015 with 10TB drives without issue.
 

Aestr

Well-Known Member
Oct 22, 2014
Seattle

Sixthofmay

New Member
Jan 15, 2019
It's not the 3.3v issue. The 10TB drives I have are all first-gen HGST He10, which don't have the hard reset feature. They do spin up, but the size is not correctly detected.

I have an old M1015 flashed to LSI 9211-8i IT mode. I use a Chenbro 32-port SAS expander and use 24 of the ports with SFF-8087 SATA breakout cables. The LSI firmware is old P11 but works fine with 4TB and 8TB drives (the 8TB drives did come out of Easystores but don't have the 3.3v issue).
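For what it's worth, this is roughly how I check what an IT-flashed card is actually running (a sketch only; sas2flash has DOS/EFI/Linux/Windows builds and the flags are the same, output format varies a bit by version):
Code:
# List every LSI SAS2 controller found, with firmware and BIOS versions
sas2flash -listall

# Detailed info (firmware revision, NVDATA, board assembly) for controller 0
sas2flash -c 0 -list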

The 10TB drive is detected as a 1.4 TB drive. Doesn't matter if the Chenbro is used or not (drive directly connected to controller). The latest firmware does not fix the issues.

I primarily use the 3Ware 9650SE-12ML in several of my computers for RAID1. Excellent, highly reliable card, and its 3DM2 web management is light years ahead of LSI's tools. The 3Ware cards have prevented major downtime due to drive failure about 15 times so far, plus I've used their autorebuild feature many times to migrate 3.5" to 2.5" drives painlessly. LSI is garbage. It doesn't tell you a drive has failed and has no autorebuild... Staggered spinup only works after reboots, not on power up when it's useful. 3Ware's staggered spinup works properly on power up. Anyway:

With the .027 firmware, the 3Ware can see and use 4TB and 8TB drives just fine. It cannot use the He10 drives; it says they're 7.27TB in size and not available to be used. Only 1 of my 20 or so computers can see the correct size in the BIOS. The drives work fine in a good USB 3.0 enclosure (Startech).

If you Google LSI 10TB or 3Ware 10TB, you'll find many folks have issues besides the 3.3v issue.

I'm trying to find a controller that has staggered spinup, at least 16 ports per card, works with 40TB drives, and comes in under $5 per port. JBOD. The 9650SE-16ML meets all requirements except the 40TB drive support. I know about the Highpoint Rocket 750, but it's about $15 per port (I'm including cabling cost).

Case and backplane are irrelevant as I build whatever I need to mount everything and wire direct.

This is for my next HTPC media server. Initial cost must be low: basically just 3 of the 40TB drives, the PC and HBA, yet expandable to at least 60 drives.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I hate to be the "have you tried upgrading the firmware?" guy, but P11 for the 9211-8i is pretty ancient. You say you've upgraded to the latest firmware, but the 9211-8i is still running P11? Broadcom only lists P17 through P20 on their website (so there's seemingly no complete changelog), and P17 is from 2013, so P11 pre-dates the existence of 10TB drives (which started to emerge around 2014).

Regardless, if you get the same problem with either the 3ware or the LSI in IT mode I'd take a guess your problem is elsewhere. Are these all going through the same backplane? Are you able to try connecting one of the drives to the HBA without any intervening hardware (e.g. an 8087 -> 4xSATA cable)?
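If you've got a Linux box handy, it's also worth seeing what the drive itself reports when attached directly, independent of any RAID BIOS; something along these lines (a sketch, with /dev/sdX being whatever the drive enumerates as):
Code:
# Drive's own identify data: model, firmware, user capacity,
# and logical/physical sector sizes
smartctl -i /dev/sdX

# What the kernel thinks the capacity is, in bytes
lsblk -b -o NAME,SIZE,MODEL /dev/sdX
If both of those show the full ~10TB, the drive and cabling are fine and the truncation is happening on the controller side.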

LSI is garbage. It doesn't tell you a drive has failed and has no autorebuild
AFAICR that's only if you're using them in IR, not IT, but I've not used them in IR for a long time.

This is for my next HTPC media server. Initial cost must be low: basically just 3 of the 40TB drives, the PC and HBA, yet expandable to at least 60 drives.
Low cost and 40TB drives...? You're aware they're unlikely to exist for quite some time, and will cost at least a monk's kidney if and when they do appear...? Not to mention that a case or cases to house ~60 drives certainly won't be cheap.
 

Sixthofmay

New Member
Jan 15, 2019
I don't use any backplane; everything is direct breakout cables from the Chenbro CK23601 (SFF-8087 to SATA breakout cable, Molex 79576-3003). I even tried removing the Chenbro from the equation (just unplugged the connecting cable) and plugged a 79576-3003 cable directly into the M1015. No change.

I did try the P20 firmware on an M1015 controller in a PC I have for testing stuff, but it made no difference. I have one LSI-flashed M1015 in production at P11, plus a few on a shelf unused. In every test on hardware it should work on, the 8TB drives work fine, yet the 10TB drives won't. That's why I suggested there seems to be a barrier above 8TB.
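The flash itself was just the usual sas2flash routine, for anyone who wants to repeat the test (rough sketch from memory; the file names are the ones shipped in the 9211-8i P20 IT package, and the boot ROM is only needed if you want the option ROM):
Code:
# Flash the P20 IT-mode firmware and the boot ROM onto controller 0
sas2flash -o -c 0 -f 2118it.bin -b mptsas2.rom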

Oh well, I don't care about the 10TB issue as I absolutely need working staggered spinup on power up if I expand like I plan to. It doesn't work on power up as the LSI BIOS is not loaded yet (dumb design or dumb me... I dunno). Works fine on a reboot since it's in memory and running, but of course it isn't needed on reboot because all the drives are already spinning... duh who designed this thing?

I worked around the spinup issue with some Meanwell power supplies just for the drives: 5V at 45A and 12V at 50A (it's a fire hazard). I made a custom star-topology power harness to hook it all up (I do copper and fiber cable design for a living).

Star topology was probably overkill as SATA uses differential signaling. It was mandatory with my first HTPC server: twelve 200GB PATA IDE drives (not differential), a 3Ware 7500-12, and 4 more IDE drives for boot and various duties. I had 20 PATA IDE drives on one OCZ 700 watt single-rail ATX PS, modded with all new cabling and star topology (yes, I did try a bazillion Ys first; it was a disaster, killed like 4 drives... and much smoke was let out). All for 2TB on one drive letter. That was in 2002. That power supply and motherboard still work and are in my last Win2000 box (now my jukebox PC with four 2TB 2.5" Seagates in RAID1).

The lack of working staggered spinup makes LSI useless for large arrays. Maybe their cards are good in a datacenter with the right hardware, but for a home PC geek, they're dog dung. All moot as the next box will be quite different.

I'm aware the MAMR and/or HAMR drives will cost a small fortune at first, but prices will come down (possibly way down).

I need a server that doesn't rely on parity for data protection and is scalable. I've considered ZFS for years, but the lack of data recovery options and having to buy a bunch of storage upfront are deal killers. Yes I do have 2 to 4 other backups (not in the house) of the data on my servers, but it takes major time to rebuild anything, and time is one thing I don't have a lot of.

The new drive technology makes the cloud data replication model realistic for home users. It's scalable on an as-needed basis. Stablebit Drivepool makes it easy to implement (anyone know of other similar software for Windows or Linux?).

I've used Drivepool since 2014 to combine my storage drives into one drive letter. I almost forgot it has automatic replication (adjustable) and placement of data, which I have disabled as I place data via AutoIt scripts and Windows Explorer and use SnapRAID parity drives instead.
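For anyone who hasn't seen SnapRAID, the whole setup is just a small config file plus a scheduled sync; roughly like this (an illustrative sketch only, the drive letters and paths are placeholders, not my actual layout):
Code:
# snapraid.conf (example paths)
# one dedicated parity drive, content files on at least two drives
parity P:\snapraid.parity
content C:\snapraid\snapraid.content
content E:\snapraid\snapraid.content
# the pooled data drives
data d1 E:\
data d2 F:\
exclude *.tmp
Then a scheduled "snapraid sync" keeps parity current and an occasional "snapraid scrub" verifies the data.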

I would set Drivepool to fully automatic with double replication (3 copies) if I had lots of cheap expandable storage available.

Stablebit also has a nifty program, Scanner, which continuously reads your data, forcing the drives to handle weak sectors. I need to research Hard Disk Sentinel more to see if it has auto scanning, or better yet, auto regeneration of the surface (sector read-write-read). I use both in manual mode on 5 of my PCs currently.
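The closest Linux equivalent I know of to that read-write-read surface refresh is badblocks in non-destructive mode (a sketch; the drive must not be mounted, and it is slow on 10TB drives):
Code:
# Non-destructive surface refresh: reads each block, writes test
# patterns, then restores the original data; -b 4096 keeps the block
# count within badblocks' limits on big drives
badblocks -nsv -b 4096 /dev/sdX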

I may go with the Rocket 750 after some more research. It may be the lowest cost solution for large drives and big arrays. Backblaze thinks so and that counts a lot in my book.
 

Sixthofmay

New Member
Jan 15, 2019
After more research, it seems most HBA and RAID card manufacturers have a compatibility chart for large drives. Why?

Isn't there a detailed specification for hard drive and controller manufacturers to follow, making compatibility charts unnecessary?

It would be dumb if every new drive had to be hardcoded into a controller's firmware.
 

Aestr

Well-Known Member
Oct 22, 2014
Seattle
First off, there are standards and they are generally followed. Vendors release lists of qualified compatible drives because, at the end of the day, you can't assume everyone else will play nice with the standards. A list of compatible drives just provides known-good models to those who want to limit the risk of issues.

These compatible drives are not hard-coded into firmware, and in many if not most cases drives that aren't on the list will work just fine. All of the cards you have are getting pretty old, so the qualification lists likely stopped before 10TB drives were showing up. LSI (you hate them, I know), for example, has a large number of 10TB drives listed as compatible for their 9300 series HBAs, which, while not quite at your price point, are very inexpensive. They even include a number of He10 drives.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
Something seems really off. I've used 10TB Easystore drives taken out of the USB case on LSI SAS2008 and SAS2308 HBAs without issue. I ran a full burn-in test 4x across the entire 10TB on 12 of these drives simultaneously; it took about 6 days to complete the testing.

So, I'm inclined to conclude your problem is not with the HBA, but something might be up with your HDDs. Does it show as 10TB on anything? How about a simple USB->SATA cable, does it show up as 10TB then?
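If anyone wants to run a similar burn-in, a destructive badblocks pass plus a long SMART self-test is one way to do it (a sketch only; this erases everything on the drive, and the 4K block size avoids badblocks' block-count limit at 10TB):
Code:
# Write four patterns across the entire drive and read each back (destructive!)
badblocks -wsv -b 4096 /dev/sdX

# Then run a long SMART self-test and review the results
smartctl -t long /dev/sdX
smartctl -a /dev/sdX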
 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada
I can't remember ever seeing the venerable M1015 have issues with a properly formatted disk when in IT mode (P20 firmware). I'm rocking a mix of 8TB & 10TB HGST disks right now on LSI 2008-based controllers, with no issues at all. I would conclude that this has to be something to do with either the other hardware, like a BIOS limitation, or the disks themselves having odd firmware or something :)
 

Sixthofmay

New Member
Jan 15, 2019
The HGST He10 works fine in my Startech USB 3.0 external cases. It also works fine in one of my Win7 boxes direct to a SATA port. That box does not have a UEFI BIOS. It has an i5 CPU.

I have about 8 other Win7 boxes with Core 2 Duo/Quad, i3, and i5 CPUs. When the drive is directly hooked to a SATA port, none of them correctly report the size.

I just bought on eBay an Adaptec ASR-52445. 24 internal ports, staggered spinup, JBOD, and 10TB+ support (rev C1 or higher board and latest firmware). If it tests good, I'm ordering a few more.
 

TRACKER

Active Member
Jan 14, 2019
I use a 10-year-old Asus desktop mobo (P5QL-EM with a Core2Duo E8400 and ICH10) and six 10TB IronWolf drives, and everything works without any issues (I use Solaris 11.3).
 

Sixthofmay

New Member
Jan 15, 2019
The Adaptec ASR-52445 (latest firmware b18948) says the HGST 10TB is 1.13TB.

Adaptec Support said the drive is likely at fault (they were correct). They also confirmed TCA-00296-05-C controllers are rev C1 PCB (what the eBay seller has) and support large drives. Note this controller is non-UEFI (some of the latest motherboards will not work with it).

Over a period of 2 years I purchased 3 different HGST 0F27502 HUH721010ALN600 drives. I found out this model does indeed have the reset feature when 3.3 volts is applied to the power pins (confirmed by the 0F27502 code in the HGST feature PDF). The Startech case obviously is not powering the 3.3V line. My internal setups use a Y splitter that omits the 3.3V wires (I use these splitters in almost every PC), so I would not be aware of the issue in normal use.

I confirmed by purchasing ten Western Digital 10TB Easystore drives (BB sale $200ea) and shucking the cases to find inside:
WD100EMAX
256MB cache
model:WD100EMAZ-00WJTA0
part num:2W10228
5400 RPM according to my web searches, which is awesome. I have no use for hot, power-wasting 7200 RPM drives. I use SSDs when I need speed.

It is a PMR drive, I presume helium, 180MB/s read/write when not full, about 110MB/s when near full (on the USB 3.0 interface). It has the reset feature. My Y splitters prevent me from having any issue (I can't believe folks use tape on the pins when these splitters are available from multiple sellers on Amazon and eBay).

This drive works fine everywhere except with the 3Ware 9650SE-12ML (latest firmware v4.10.00.027). That controller does detect the space correctly (9.09TB), but says there's an error with the drive (there is not, drives test fine with HD Sentinel).

I tried wdidle3.exe /D on the WD100EMAX, communicates fine with the drive, but it does not recognize the command (nor is it needed).

So there is an 8TB barrier with the 3Ware 9650SE-12ML RAID controller on (presumably) any larger drives.

There is an 8TB barrier with the HGST 0F27502 HUH721010ALN600 drives with some hardware. In my unlucky case, with almost every PC, RAID and HBA controllers, external cases and docking stations I own...

The Adaptec ASR-52445 works fine with the 10TB WD100EMAX, properly detecting 9.09TB per drive. The SATA breakout cables cost about as much as the controller (if you use all 24 internal ports). The controller is about $65 on eBay, 3Gbps per port, 8-lane PCIe. I bought 6 of them. It was originally about $1600. While slow by today's standards, it's way faster than a 1Gbps LAN port and a single user (me) can make use of.

The Adaptec Windows user interface is not great, only slightly better than the horrid LSI interface (I'm using 3Ware's nicely designed WebUI for reference). However, with enough right clicking, most of the important stuff is there. How many drives spin up at a time is adjustable; I set mine at 1 drive. There does not seem to be a way to set the stagger delay, but the finger method tells me it's 2 seconds per drive (group). I have auto power-down disabled (very bad for most drives due to head parking), plus I don't want to wait for the staggered spinup. Most RAID levels are there, JBOD too. I need to see if the CLI offers any more options (I would prefer 4 seconds for the stagger delay). The BIOS setup could be better; some of the options shown there can't be changed from the BIOS and have to be set from the Windows interface.
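For the CLI side, Adaptec's arcconf utility at least dumps the current controller settings, which is where I'd start looking for the stagger options (a sketch; I haven't confirmed whether the spin-up delay itself is exposed there):
Code:
# Adapter-level settings only (cache, power management, etc.)
arcconf getconfig 1 AD

# Full dump: adapter, logical drives and physical drives
arcconf getconfig 1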

So summarizing the Adaptec ASR-52445:
We have here a decent RAID or HBA controller, not mentioned on any HTPC server forum until now, that handles 24 drives of 10TB and larger for about $4.50 per drive (I included 6 SFF-8087 to SATA forward breakout cables in the cost) AND it has staggered spinup. All without an expensive backplane.

With the ASR-52445 and 8 of the shucked Easystores, I created an 8-drive RAID6 array (54.5TB formatted) this weekend. I have them in a Windows 7 HP DC7900 mid-tower PC ($90 via eBay) with a 365 watt HP power supply, 12 amps at 12V for the drives. With staggered spinup it works fine.

Cooling in this case was a challenge and took some head scratching. The ASR-52445 needs a lot of airflow, so I have a 92mm fan mounted 1/2" above the controller blowing down onto the heatsinks. The drives run cool and don't need much cooling. I added a 120mm fan on the back blowing in, which provides plenty of airflow; most heat exits through the front and some through the power supply.

I have 2 tiny 2.5in 2TB Seagates also hooked to the Adaptec in RAID1 (for Acronis images and program installers). I love these little drives: $58 shucked and they use very little power (RAID1 is a must due to the data density). There's a 512GB SSD for the boot drive. This is the first time I've used anything other than RAID1 or SnapRAID in almost 18 years, but everything on the array will be copied to a few 20TB drives soon, making a rapid recovery possible.

We have a paradigm shift happening soon with very large inexpensive per GB drives. Folks will no longer gamble with striped servers or suffer with FlexRAID/SnapRAID servers. We'll have MASSIVE amounts of cheap space to build our servers with. No one will care that replication might only give you 1/3rd of the space you purchased when a 100TB drive is $200. Yes they will be that inexpensive or less.

Next HTPC server I'm planning for 2023-2025:
48 of the 100TB drives (4800TB raw space), 2 of the ASR-52445 controllers, staggered spinup, single 850 watt ATX power supply (single 12V rail). Double replication (3 copies) using Drivepool, so the 4800TB raw works out to roughly 1600TB usable, or about 1400TB formatted. Inefficient use of space, but not a tragedy if something fails, and you add drives when you need more space. No huge upfront cost involved. I dislike RAID5 and 6, ZFS, anything striped, because any failure can have you lose the whole array. Thus my current 96TB server uses SnapRAID. Everything is backed up (3 to 4 copies not in the house), but the time and effort necessary to recover is extreme.
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
It is a PMR drive, I presume helium, 180MB/s read/write when not full, about 110MB/s when near full (on the USB 3.0 interface). It has the reset feature. My Y splitters prevent me from having any issue (I can't believe folks use tape on the pins when these splitters are available from multiple sellers on Amazon and eBay).
Some people have hot-swap backplanes. Since that pin used to carry 3.3V, it is often routed on an inner layer of the backplane PCB and thus not amenable to being cut on the backplane.
This drive works fine everywhere except with the 3Ware 9650SE-12ML (latest firmware v4.10.00.027). That controller does detect the space correctly (9.09TB), but says there's an error with the drive (there is not, drives test fine with HD Sentinel).

So there is an 8TB barrier with the 3Ware 9650SE-12ML RAID controller on (presumably) any larger drives.
That is a long-obsolete controller.
I tried wdidle3.exe /D on the WD100EMAX, communicates fine with the drive, but it does not recognize the command (nor is it needed).
The SAS versions of the HGST He drives have complex settings for idle - everything from "do nothing different" to "unload the heads but keep spinning at 7200 RPM" to "unload heads and spin down":
Code:
(0:1) rz1:/sysprog/terry# camcontrol modepage /dev/da15 -m 0x1a
PM_BG_PRECEDENCE:  0
STANDBY_Y:  0
IDLE_C:  0
IDLE_B:  0
IDLE_A:  0
STANDBY_Z:  0
IDLE_A Condition Timer:  20
STANDBY_Z Condition Timer:  0
IDLE_B Condition Timer:  6000
IDLE_C Condition Timer:  0
STANDBY_Y Condition Timer:  0
CCF Idle:  1
CCF Standby:  1
CCF Stopped:  2
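(If you want to change any of those timers, camcontrol can edit the mode page in place; a quick sketch, and of course make sure you know what you're changing before saving:)
Code:
# Opens mode page 0x1a in $EDITOR; edit a timer value, save, and
# camcontrol writes the updated page back to the drive
camcontrol modepage /dev/da15 -m 0x1a -e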
There is an 8TB barrier with the HGST 0F27502 HUH721010ALN600 drives with some hardware. In my unlucky case, with almost every PC, RAID and HBA controllers, external cases and docking stations I own...
It would be interesting to see if that is also true for SAS drives on a SAS controller, or a SATA or SAS drive that is 4Kn rather than 512e (since it will have 1/8 the number of LBNs due to the larger sector size). Of course, 4Kn is unlikely to work as a boot device unless you are using a UEFI controller in a UEFI system.
We have a paradigm shift happening soon with very large inexpensive per GB drives. Folks will no longer gamble with striped servers or suffer with FlexRAID/SnapRAID servers. We'll have MASSIVE amounts of cheap space to build our servers with. No one will care that replication might only give you 1/3rd of the space you purchased when a 100TB drive is $200. Yes they will be that inexpensive or less.
Have you seen industry data predicting that capacity and price? I'm less optimistic about that, because overall shipping numbers in the HDD segment are shrinking due to the small (< 2TB) capacity segment having mostly changed over to SSD. Giant SSDs are not yet cost-competitive with a similar capacity built from HDDs, but I suspect that is coming (128-layer QLC, anyone?). And the larger capacities require more and more esoteric designs - "back when", Helium-filled drives were esoteric. Now we're talking about laser-assisted HAMR and other such things. Esoteric technologies and lower production quantities generally mean higher prices.

Having said that, giant disk pools are fun: :cool:
Code:
(0:2) rz1:/sysprog/terry# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storage   109T  21.9T  86.9T        -         -     0%    20%  1.00x  ONLINE  -
That pool does around 950MB/sec serving Windows clients with SAMBA on 10GbE. And that's with write caching disabled - I could probably get it up to wire speed if I enabled cache and disabled synchronous SMB, but it isn't worth the risk for the small additional speed gain.