Asus Hyper M.2 x16


Savant

New Member
Mar 15, 2017
Has anyone used an Asus Hyper M.2 x16 card and been able to bifurcate all 4 drive ports? I am running into a weird issue with both of my SMC servers. They both have the option to change the PCIe port from x16 to x4x4x4x4, but no matter what I do I cannot see more than 2 of the 4 drives installed on the card.

I am really impressed with the build quality of the card, and the fact that my double-sided NVMe drives actually fit under the heatsink. It seems like the perfect solution, if only I could see all the drives.

It's almost like Asus did something weird with the wiring of the card: when I set the port to x4x4x4x4, I see drives 1 and 3. And with the card in an x8 slot set to x4x4, I can only see 1 drive no matter what.
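For anyone who wants to reproduce the test, this is roughly the check I run from a Linux boot to count what actually enumerates. A rough sketch only; it just wraps lspci and assumes Python is available on the box:

Code:
import subprocess

# List all PCI devices; each NVMe drive that enumerated behind the
# bifurcated slot shows up as its own "Non-Volatile memory controller".
out = subprocess.run(["lspci"], capture_output=True, text=True, check=True)

nvme = [line for line in out.stdout.splitlines()
        if "Non-Volatile memory controller" in line]

# With x4x4x4x4 working and four drives installed, this should print 4.
print(f"{len(nvme)} NVMe controller(s) visible:")
for line in nvme:
    print(" ", line)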

I'd love to find someone local with an X10 or X11 SMC system to see if the newer platform makes a difference.

Anyone have any ideas?
 

am4593

Active Member
Feb 20, 2017
Would love to get my hands on one of these cards. Where'd you acquire yours? I've only seen pre-orders thus far.
 

ewillis1

New Member
Jul 29, 2016
@Savant Do you know if this is able to run as a bootable RAID array on any Ryzen motherboards (especially with the new update that allows bootable NVMe RAID)? And what does Linux support look like? Details are hard to find on the internet, and it's not for lack of trying!
 

Savant

New Member
Mar 15, 2017
I honestly couldn't say. I don't know of anyone with a Ryzen setup to try it on. If anyone in the Denver area would like to give it a go, I am up for meeting up and testing it out.

Linux support shouldn't be an issue at all. As long as your board has the PCIe bifurcation settings to set an x16 slot to x4x4x4x4, it should basically work like an HBA.
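As a rough illustration of what I mean (a sketch, assuming a Linux host where the drives have bound to the kernel's nvme driver): each drive that enumerates appears as its own independent controller, one per bifurcated x4 link, rather than as ports behind a single HBA.

Code:
import os

# Each NVMe drive the kernel has bound shows up as its own controller
# (nvme0, nvme1, ...) under /sys/class/nvme.
base = "/sys/class/nvme"
if not os.path.isdir(base):
    raise SystemExit("no NVMe controllers visible")

for ctrl in sorted(os.listdir(base)):
    # The 'device' symlink resolves to the controller's PCI device path;
    # the last component is its bus address (e.g. 0000:03:00.0).
    pci = os.path.realpath(os.path.join(base, ctrl, "device"))
    print(ctrl, "->", os.path.basename(pci))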

After quite a bit of messing around with my motherboard I finally got 2 drives (1 and 3) to show up in the system. Add two more PCIe-to-M.2 adapters, and I'm currently running 4 x 960GB Hynix NVMe drives in a raidz pool on a Proxmox host.

I boot the host off some SATA SSDs and keep the NVMe drives in their own pool. Speeds are very satisfying now :)
Running a Server 2016 guest, CrystalDiskMark comes in at 9487/478 reads and 8497/378.8 writes. I am more than satisfied with the setup. It's currently in a single-E5 system; I'm going to move it over to my dual-E5 system and see if there is anything more to be gained from the additional power.
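For reference, the pool was created along these lines. A sketch only: the pool name ("flash") and the /dev/nvme* device names here are just examples and will differ per system.

Code:
import subprocess

# Hypothetical device names; check yours with `ls /dev/nvme*n1` first.
devices = [f"/dev/nvme{i}n1" for i in range(4)]

# One raidz vdev across the four drives ("flash" is an example pool name).
subprocess.run(["zpool", "create", "flash", "raidz", *devices], check=True)

# Verify the layout and health.
subprocess.run(["zpool", "status", "flash"], check=True)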
 

alex1002

Member
Apr 9, 2013
Anyone know if these will work with non-Asus boards or server boards?

 

Savant

New Member
Mar 15, 2017
Hey Alex,
They do not work properly in X9-series SMC boards. The best I could get is two drives accessible, in slot 1 and slot 3; I am unable to get any more to show. I don't have any newer consumer-series hardware to test on. As I stated before, it should work in the latest-series boards that have PCIe bifurcation settings to set an x16 port to x4x4x4x4.

If someone in the Denver metro area has some newer hardware, I am happy to meet up with the Asus adapter and 4 NVMe drives to see what we can find for support. If I can't find someone by Christmas, I'll order some hardware for testing.
 


frogtech

Well-Known Member
Jan 4, 2016
Any reason this wouldn't work with an X9SRW-F? It has support for x4x4x4x4 in the BIOS...

@Savant any update on your end on this?
 

Savant

New Member
Mar 15, 2017
Any reason this wouldn't work with an X9SRW-F? It has support for x4x4x4x4 in the BIOS...

@Savant any update on your end on this?
That is actually one of the boards I was unable to get it working in. Setting the BIOS to x4x4x4x4 only allowed 2 of the 4 drives to be detected. I have since installed an AMD S7150 into the same board without issues, so it seems to be something specific to this card.

I would have to pull drives from my co-lo in order to do any further testing.
 

frogtech

Well-Known Member
Jan 4, 2016
That is actually one of the boards I was unable to get it working in. Setting the BIOS to x4x4x4x4 only allowed 2 of the 4 drives to be detected. I have since installed an AMD S7150 into the same board without issues, so it seems to be something specific to this card.

I would have to pull drives from my co-lo in order to do any further testing.
Ugh... now my plans for my project are totally bummed out. I was hoping to skip physical drives and just do everything with add-in cards.
 

Savant

New Member
Mar 15, 2017
Ugh... now my plans for my project are totally bummed out. I was hoping to skip physical drives and just do everything with add-in cards.
Same here. I ended up using a 2P Intel server and put each drive in an x4 PCIe-to-M.2 adapter. I am running 3 x ~1TB M.2 drives in raidz, and things have been up and stable for over a year now. Between them and an 800GB PCIe flash drive I have a very fast flash tier of storage, and I use the 24 x 2.5" bays for mass storage on 15k SAS.
 

jahsoul

Active Member
Dec 13, 2013
War Eagle Country
I have one of these, but Supermicro freaking crippled the motherboard, so I can't make much use of it. They want to tell me it was reserved only for their WIO boards, yet their freaking workstation motherboard allows bifurcation. :mad:

Moving on over to Asrock Rack now.
 

Myth

Member
Feb 27, 2018
Los Angeles
The HighPoint NVMe controller card will work in the X9. It's got its own driver for RAID 1. Works great. You actually leave the BIOS at defaults, and it uses its onboard PLX switch chip to do the bifurcation itself.

The X10 and X11 boards work fine with the ASUS Hyper M.2 x16, but I too have issues getting the X9 to see all drives. I would double-check the seating of each M.2 before you give up, though. Sometimes all I had to do was reseat an M.2 drive and then it would read.
 

Perry

Member
Sep 22, 2016
We have an X10DAi/C motherboard and I'm trying to get one of the ASUS Hyper M.2 x16 v2 cards to work. Inside are two Sabrent 1TB Rocket NVMe PCIe M.2 2280 drives. In the BIOS (latest version, just updated it) I have PCIe bifurcation in the North Bridge -> IIO0 and IIO1 sections set to x4x4x4x4 for that slot (Slot 3).

In IIO0 I see that slots 3A/3B are showing as unlinked, but 3C/3D are showing as x4 (they're set to auto, so they seem to be seeing the NVMe drives).

In IIO1, 3A/3B are unlinked, 3C is showing as linked and x4, and 3D is showing as not linked.

In the OS (Windows 7 Ultimate) there are two PCI devices showing up as needing drivers in Device Manager. No 1TB drives show up in Device Manager, or in the Disk Management tool.

Any ideas? I'm pretty sure I've set everything correctly, but I'm stumped.

Thanks!
 

Netwerkz101

Active Member
Dec 27, 2015
In the OS (Windows 7 Ultimate) there are two PCI devices showing up as needing drivers in Device Manager. No 1TB drives show up in Device Manager, or in the Disk Management tool.
The quote above is probably the telltale for the issue you describe.
Find drivers for Windows 7, or try Windows 10.

It does sound like everything is set up and ready to go, hardware/BIOS-wise.
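If it helps, something along these lines will dump the hardware IDs of the devices that are missing drivers so you can look them up. Just a sketch, assuming Python is installed on the box (wmic itself ships with Windows 7):

Code:
import subprocess

# Query WMI for PnP devices with a non-zero problem code; code 28 means
# "drivers are not installed". The HardwareID values (PCI\VEN_xxxx&DEV_xxxx)
# can be searched online to identify the exact device.
out = subprocess.run(
    ["wmic", "path", "Win32_PnPEntity",
     "where", "ConfigManagerErrorCode<>0",
     "get", "Name,HardwareID,ConfigManagerErrorCode"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)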
 

Perry

Member
Sep 22, 2016
The quote above is probably the telltale for the issue you describe.
Find drivers for Windows 7, or try Windows 10.

It does sound like everything is set up and ready to go, hardware/BIOS-wise.
There are no drivers. That said, from what I've read, others who use this card also see the missing-driver errors.

...unless the drivers are for the NVMe drives themselves and not the adapter? Do NVMe drives typically require a driver? I was under the impression these would show up as just another unformatted disk.