The Adaptec ASR-52445 (latest firmware b18948) says the HGST 10TB is 1.13TB.
Adaptec Support said the drive is likely at fault (they were correct). They also confirmed that TCA-00296-05-C controllers are rev C1 PCB (what the eBay seller has) and support large drives. Note this controller is non-UEFI (some of the latest motherboards will not work with it).
Over a period of 2 years I purchased 3 different HGST 0F27502 HUH721010ALN600 drives. I found out it does indeed have the reset feature (confirmed by the 0F27502 code in the HGST feature PDF): the drive shuts down when 3.3 volts is applied to the power pins (the SATA 3.3 Power Disable pin). The Startech case obviously is not supplying the 3.3v. My internal wiring uses a Y splitter that omits the 3.3v wires (I use these splitters in almost every PC), so I would not be aware of the issue in normal use.
I confirmed by purchasing ten Western Digital 10TB Easystore drives (BB sale $200ea) and shucking the cases to find inside:
WD100EMAX
256MB cache
model:WD100EMAZ-00WJTA0
part num:2W10228
5400RPM (according to my web searches), which is awesome. I have no use for hot, power-wasting 7200RPM drives. I use SSDs when I need speed.
It is a PMR drive, presumably helium-filled, doing about 180MB/s read/write when not full and about 110MB/s when near full (on the USB 3.0 interface). It has the reset feature. My Y splitters prevent me from having any issue (I can't believe folks use tape on the pins when these splitters are available from multiple sellers on Amazon and eBay).
This drive works fine everywhere except with the 3Ware 9650SE-12ML (latest firmware v4.10.00.027). That controller does detect the capacity correctly (9.09TB), but reports an error with the drive (there is none; the drives test fine with HD Sentinel).
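By the way, 9.09TB is the correct number for a 10TB drive, not lost capacity: manufacturers count in decimal terabytes while most controllers and OS tools report binary units and still label them "TB". Quick sanity check (Python, just my illustration):

```python
# Manufacturers label drives in decimal terabytes (10^12 bytes);
# controllers/OSes usually report binary tebibytes (2^40 bytes) as "TB".
raw_bytes = 10 * 10**12                # a "10TB" drive
print(f"{raw_bytes / 2**40:.2f} TB")   # -> 9.09 TB, matching the controller
```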
I tried wdidle3.exe /D on the WD100EMAX; it communicates fine with the drive, but the drive does not recognize the command (nor is it needed).
So there appears to be an 8TB barrier with the 3Ware 9650SE-12ML RAID controller, presumably affecting any larger drive.
There is an 8TB barrier with the HGST 0F27502 HUH721010ALN600 drives on some hardware. In my unlucky case, that means almost every PC, RAID and HBA controller, external case and docking station I own...
The Adaptec ASR-52445 works fine with the 10TB WD100EMAX, properly detecting 9.09TB per drive. The SATA breakout cables cost about as much as the controller (if you use all 24 internal ports). The controller is about $65 on eBay, 3Gbps per port, 8-lane PCIe. I bought 6 of them. It was originally about $1600. While slow by today's standards, it's way faster than a 1Gbps LAN port and a single user (me) can use.
The Adaptec Windows user interface is not great, only slightly better than the horrid LSI interface (I'm using 3Ware's nicely designed WebUI for reference). However, with enough right clicking, most of the important stuff is there. How many drives spin up at a time is adjustable; I set mine to 1 drive. There does not seem to be a way to set the stagger delay, but the finger method tells me it's 2 seconds per drive (group). I have auto power-down disabled (very bad for most drives due to head parking), plus I don't want to wait for the staggered spinup. Most RAID levels are there, JBOD too. I need to see if the CLI offers any more options (I would prefer 4 seconds for the stagger delay). The BIOS setup could be better, as some of the options shown there can't be changed there; it has to be done from the Windows interface.
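For what it's worth, here's the cold-boot math on the stagger delay (using my finger-measured 2 seconds, which is an estimate, not a documented spec):

```python
drives = 24
for delay_s in (2, 4):   # ~2s is what I observed; 4s is what I'd prefer
    # One drive per group, so the last drive starts drives * delay seconds in
    print(f"{delay_s}s per drive -> {drives * delay_s}s until all spun up")
```

At 2 seconds that's a 48-second wait with all 24 ports populated, which is why I leave auto power-down off.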
So summarizing the Adaptec ASR-52445:
We have here a decent RAID or HBA controller, not mentioned on any HTPC server forum until now, that handles 24 drives of 10TB and up for about $4.50 per drive (I included 6 SFF-8087 to SATA forward breakout cables in the cost) AND it has staggered spinup. All without an expensive backplane.
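If you're wondering how the ~$4.50 per drive works out: the cable total below is back-solved from that figure, so treat it as my assumption, not a quote:

```python
controller = 65   # approximate eBay price for the ASR-52445
cables = 43       # ASSUMED total for 6 breakout cables (back-solved from $4.50/drive)
ports = 24
print(f"${(controller + cables) / ports:.2f} per drive")   # -> $4.50 per drive
```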
With the ASR-52445 and 8 of the shucked Easystores, I created an 8-drive RAID6 array (54.5TB formatted) this weekend. I have them in a Windows 7 HP DC7900 mid tower PC ($90 via eBay) with a 365-watt HP power supply, 12 amps at 12v for the drives. With staggered spinup it works fine. Cooling in this case was a challenge and took some head scratching. The ASR-52445 needs a lot of airflow; I have a 92mm fan mounted 1/2" above the controller blowing down onto the heatsinks. The drives run cool and don't need much cooling. I added a 120mm fan on the back blowing in, which provides plenty of airflow. Most heat exits through the front and some through the power supply. I have 2 tiny 2.5in 2TB Seagates also hooked to the Adaptec in RAID1 (for Acronis images and program installers). I love these little drives: $58 shucked, and they use very little power (RAID1 is a must due to the data density). There's a 512GB SSD for the boot drive. It's the first time I've used anything other than RAID1 or SnapRAID in almost 18 years, but everything on the array will be copied to a few 20TB drives soon, making a rapid recovery possible.
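The 54.5TB formatted figure checks out: RAID6 gives up two drives' worth of capacity to parity, and Windows reports binary units. Quick check (Python):

```python
drives, parity = 8, 2
per_drive = 10 * 10**12                       # bytes in a "10TB" drive
usable_tib = (drives - parity) * per_drive / 2**40
print(f"{usable_tib:.2f}")                    # -> 54.57, shown as ~54.5TB
```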
We have a paradigm shift happening soon with very large drives that are inexpensive per GB. Folks will no longer gamble with striped servers or suffer with FlexRAID/SnapRAID servers. We'll have MASSIVE amounts of cheap space to build our servers with. No one will care that replication might only give you 1/3rd of the space you purchased when a 100TB drive is $200. Yes, they will be that inexpensive or less.
Next HTPC server I'm planning for 2023-2025:
48 of the 100TB drives (4800TB raw space), 2 of the ASR-52445 controllers, staggered spinup, single 850-watt ATX power supply (single 12v rail). Double replication using DrivePool gives about 1400TB formatted. Inefficient use of space, but not a tragedy if something fails, and you add drives when you need more space. No huge upfront cost involved. I dislike RAID5 and 6, ZFS, and anything striped, because any failure can have you lose the whole array. Thus my current 96TB server uses SnapRAID. Everything is backed up (3 to 4 copies not in the house), but the time and effort necessary to recover is extreme.
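The ~1400TB figure works out if "double replication" means three total copies of every file (my interpretation, consistent with the 1/3rd-of-space remark above):

```python
drives, per_drive_tb = 48, 100
copies = 3                                    # my reading of "double replication"
raw_bytes = drives * per_drive_tb * 10**12    # 4800TB raw, decimal bytes
usable_tib = raw_bytes / copies / 2**40       # binary units, as Windows reports
print(f"{usable_tib:.0f}")                    # -> 1455, i.e. "about 1400TB"
```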