NVMe: 2.5" SFF drives working in a normal desktop


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I have one of the two AOC-SLG3-2E4 cards in right now. It will be interesting to see what happens. I got the second Intel 750 400GB, so I now have two cables to test with.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
OK, both drives are in. I set up a Windows RAID 0 array.
[attached benchmark screenshot]

This is pretty ridiculous: a Xeon D-1540, 128GB of RAM, and a 4.2GB/s two-disk storage array, all running off a 120W PicoPSU.

@dba - how much would that much throughput cost just a few years back?

AOC-SLG3-2E4 seems to be working.
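
As a quick sanity check on that 4.2GB/s figure: the Intel 750 400GB is rated at roughly 2.2GB/s sequential read, so two drives striped should top out near 4.4GB/s, well below the combined PCIe 3.0 x4 link bandwidth. A rough sketch of the arithmetic (rated figures assumed from the spec sheet, not measured on this build):

```python
# Back-of-envelope check on the 4.2GB/s result. Rated specs are assumptions
# from Intel's data sheet, not measurements from this system.
PCIE3_LANE_GBPS = 0.985   # usable PCIe 3.0 bandwidth per lane, GB/s
RATED_READ_GBPS = 2.2     # Intel 750 400GB rated sequential read, GB/s

drives = 2
lanes_per_drive = 4

link_ceiling = drives * lanes_per_drive * PCIE3_LANE_GBPS  # ~7.9 GB/s
drive_ceiling = drives * RATED_READ_GBPS                   # ~4.4 GB/s

print(f"PCIe link ceiling:  {link_ceiling:.1f} GB/s")
print(f"Drive spec ceiling: {drive_ceiling:.1f} GB/s")
# The measured 4.2GB/s sits just under the 4.4GB/s drive-spec ceiling, so
# the AOC-SLG3-2E4 appears to pass nearly full x4 bandwidth to each drive.
```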
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I suspect that the Intel A2U44X25NVMEDK can accept up to 4 NVMe drives and/or 8 SAS drives, so long as the combination stays within 8 drives total. The dual SFF-8643 connectors give it enough SAS lanes to run one to all 8 bays (unless they are run dual-path to 4 bays), and there are no pin conflicts: a single bay could be wired for both SAS and PCIe at the same time. (Image borrowed from Google image-search results.) If I had the PCB in front of me, I would try to follow the traces from the SFF-8643 connectors and see how many bays are wired for SAS.
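
To make the lane math concrete, here is a tiny sketch of the drive-mix budget described above; the limits are a suspicion based on the connector count, not confirmed specs:

```python
# Illustrative model of the suspected A2U44X25NVMEDK bay budget: two
# SFF-8643 connectors supply 8 SAS lanes, 4 of the bays carry PCIe x4 for
# NVMe, and any mix works as long as no more than 8 drives are installed.

def valid_mix(nvme: int, sas: int) -> bool:
    """Check a drive mix against the suspected backplane limits."""
    total_bays = 8       # 8 physical 2.5" bays
    nvme_capable = 4     # only 4 bays assumed wired for PCIe
    sas_lanes = 8        # 2x SFF-8643 = 8 SAS lanes (if single-pathed)
    return (nvme <= nvme_capable and sas <= sas_lanes
            and nvme + sas <= total_bays)

for mix in [(4, 4), (0, 8), (4, 8), (2, 6)]:
    print(mix, "->", "OK" if valid_mix(*mix) else "over budget")
```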

 
  • Like
Reactions: Chuntzu

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Intel's support site clearly states support for 4 NVMe + 4 SAS/SATA (or 8 SAS/SATA). For the price, it's quite a deal.

It's obviously form-factored for Intel's 2U chassis only, though. Not sure you'd be able to put it into a more "normal" chassis or somebody else's 2U without doing some metalwork.

Oh - and those results on the D-1540 board - AMAZING.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
@TuxDude you are totally right. I wish there was one with a dual-port SAS option for the other 4 drives!

@PigLover despite the JD/MBA, I stopped reading at "4x NVMe card." Hopefully I can get to the Fremont DC this weekend and install it in an Intel 2U. I actually think you could retrofit it into some normal cases without too much trouble. The two sides are smooth. The top and bottom do have some metal sticking out, I think to hook into the Intel 2U chassis, but it would be extremely easy to dremel off. The backplane detaches with one screw, so you could do your dremeling away from the electronic components.

Airflow would be the issue though.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
@ehorn - if I can get power to that 2U Intel backplane... I can probably beat that 10GB/s number fairly easily at this point. I have a box of much higher-end 2.5" SSDs here and can always use the AIC form factor as well.
 

ehorn

Active Member
Jun 21, 2012
342
52
28
I'm not sure what the 4-pin adapter looks like, but I see lots of options when I browse Google Images for "4 pin to molex".

Do any of those look like your ticket?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
If only it were that easy. Searching "intel backplane 4 pin power" just turns up the 2600GZ platform's SAS backplane power connector in image search.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Awesome. Glad to see the new cards working :)

Can't wait to test mine. I got it in my hands today!! Tomorrow we test again.
 

neo

Well-Known Member
Mar 18, 2015
672
363
63
I have all the tools to create custom Molex-style power cables/adapters.
 

neo

Well-Known Member
Mar 18, 2015
672
363
63
> Exactly that one.
>
> @neo what do you think?

Yep, it looks like the 4-pin ATX CPU/add-on card power cable. If the above one isn't compatible, let me know and I can customize it to almost any end connector.

The two yellow wires are +12V and the black ones are grounds.
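
For reference, a sketch of the usual 4-pin ATX/EPS-style pinout implied above (two +12V, two grounds). This is an assumption based on the standard ATX12V layout; verify against the actual backplane header before crimping anything:

```python
# Assumed standard ATX12V 4-pin pinout: pins 1-2 ground (black),
# pins 3-4 +12V (yellow). Illustrative only; always confirm the
# backplane's silkscreen or Intel's technical spec before wiring.
PINOUT = {
    1: ("black", "GND"),
    2: ("black", "GND"),
    3: ("yellow", "+12V"),
    4: ("yellow", "+12V"),
}
for pin, (color, rail) in PINOUT.items():
    print(f"pin {pin}: {color:6s} {rail}")
```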
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
OK bought one... should be interesting to see what happens.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Oh well. Likely a wasted $3, but I actually had one from another Intel backplane kit in the lab. Now I just need time to test (and a bigger PSU).
 
  • Like
Reactions: ehorn and neo

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
The 2x P3700 400GB drives seem to be unhappy with the block size in RAID 0. I changed it to 8K and that was better. They run at 3.9GB/s at the 1024KB transfer size in ATTO but drop off after that. Writes are at 2.2GB/s though.
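
For anyone wanting to reproduce a sweep like this outside ATTO, here is a minimal sketch of a sequential-read block-size sweep. The file path is illustrative, and real runs would need unbuffered/direct I/O (as ATTO's Direct I/O option uses) to keep the OS page cache from inflating the numbers:

```python
# Rough block-size sweep in the spirit of ATTO: sequential reads of a
# pre-created large test file at increasing transfer sizes.
import os, time

PATH = "testfile.bin"   # illustrative: a large file on the RAID 0 volume
READ_TOTAL = 1 << 30    # read 1 GiB per block size

for kib in (8, 64, 256, 1024, 4096, 8192):
    bs = kib * 1024
    fd = os.open(PATH, os.O_RDONLY)
    t0 = time.perf_counter()
    done = 0
    while done < READ_TOTAL:
        chunk = os.read(fd, bs)
        if not chunk:
            os.lseek(fd, 0, os.SEEK_SET)   # wrap around on EOF
            continue
        done += len(chunk)
    os.close(fd)
    secs = time.perf_counter() - t0
    print(f"{kib:>5} KiB: {done / secs / 1e9:.2f} GB/s")
```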
 
  • Like
Reactions: Chuntzu

Chuntzu

Active Member
Jun 30, 2013
383
98
28
I have been trying to figure out the bottlenecks with Storage Spaces and NVMe, and as confirmed by both Patrick's results and the linked article, there are two. First, when creating storage spaces, achieving higher IOPS requires reducing the interleave setting to a smaller block size (32 is the smallest, if I remember correctly); the default is 256. Second, it looks like I am CPU-bound with a single E5-1620 at about 1/4-1/3 of the IOPS that the linked PCPer article had, which makes sense based on processor speed and threads available. With that said, I have not installed the drives in my dual E5-2680 V2 rig, but I believe 2 million IOPS from it is more than achievable with NVMe drives since I have done it with SATA SSDs before.

@Patrick sorry for the lack of follow-through on the Intel chassis; my wife's family stopped by with very little warning, so my one day off in 10 days was spent cleaning :-(. Good thing her family's fantastic!
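
A toy model of the interleave effect described above, assuming interleave is the per-column stripe unit (so a smaller interleave spreads one large IO across more drives). Storage Spaces exposes this as the Interleave setting when creating a virtual disk; the numbers below are illustrative:

```python
# Toy model of the Storage Spaces interleave trade-off: the interleave is
# the stripe unit per column (drive), so it determines how many drives a
# single contiguous IO touches. Small random IOs never split; large IOs
# split across more drives as the interleave shrinks.
import math

def drives_touched(io_kib: int, interleave_kib: int, columns: int) -> int:
    """How many columns one contiguous IO spans (alignment ignored)."""
    return min(columns, math.ceil(io_kib / interleave_kib))

for interleave in (32, 64, 256):   # 256 KiB default; 32 recalled as the floor
    for io in (4, 64, 1024):
        n = drives_touched(io, interleave, columns=4)
        print(f"interleave {interleave:>3} KiB, {io:>4} KiB IO -> {n} drive(s)")
```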
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
@Chuntzu no worries. Making progress here. Hoping I can hit the datacenter this weekend. Now just trying to figure out if I can fit more than 4 of these drives into a chassis.