AM5 build, PCI-E expansion solutions. Help needed


Jantee

New Member
Jan 1, 2024
Hi, happy new year!

I'm building an AM5 workstation/server/LAN rig. The main use of the build will be a hardware RAID 6 array comprised of 16x 16TB drives on an Adaptec 3254-16i SAS controller, plus a graphics card with HDMI 2.1 and some ECC memory. On Windows :p

AM5 is undeniably fast: strong multi-core and single-core performance, and on dual-CCD CPUs the ability to approach 100,000 MB/s of DDR5 bandwidth. It's an extremely competitive solution while also being cost effective versus something like a 5955WX Threadripper CPU, once you factor in what it costs for memory, motherboards, etc.
Anyway, my main caveat with AM5 is the limited number of full- or even half-speed PCIe slots. The two main reasons for going with AM5 are the future upgradeability to new CPUs and its gaming performance against the Threadripper/HEDT platforms, further breathing life, value and usability into the workstation.

The main configuration I stated above is just the beginning of a future RAID expansion. The case all of this is housed in is the Thermaltake WP200, with the bottom cavity (P200) kitted out to house the first 16-drive "transplantable" array, and the rest of the case fitted to accommodate something like 60 drives. Now, these being spinning mechanical drives, I don't need the 2M IOPS and 32 Gb/s of PCIe x8 bandwidth the SAS controller is capable of; I could get away with less. Where I'm going with this is: when I choose to add further RAID cards to the AM5 platform, I'm going to have to get creative with adding PCIe slots, and this is where I need some guided experience with how I'm going to do it.

I plan to use an NVMe (M.2) to PCIe x4 riser/converter cable from the first PCIe 5.0 M.2 slot on AM5 for the 16-drive array, and then a riser cable coming from the chipset PCIe 4.0 x4 slot (or get a new motherboard and use the second 5.0 x8 slot) for the second, stationary array that's going into the primary part of the case (W200).
I am wary that using converter cables or even riser cables might cause errors in the disk array; I'm thinking janky riser and converter cable setups are unreliable. I'm kind of hoping someone here has used, or is using, riser cables to connect RAID controllers, or has used an NVMe-to-PCIe slot converter before and can vouch for its reliability. I kind of need to use riser cables for the 16-drive array to make it "transplantable" and portable, so that you can just plug it into a separate system via the riser cable and off you go using the array.

I know what I'm asking is a stretch, but I don't want to buy an extremely expensive and long riser cable, or NVMe converter cable, just to test this situation when it's possible someone on the forums has experience with it. And if there is a problem with the proposed setup, I'll have to buy a different motherboard and ditch the riser cable ideas.

I also have some queries about software RAID that I don't think there's a solution for. Software RAID offers better data integrity, since it can check data in real time as it moves toward a RAID. If I create a temporary part-collection RAID 0 disk in software, via the motherboard SATA ports, am I correct in assuming I'd be better off, for data integrity, using RAID software? Or does RAID 0 off the motherboard across 2-4 SSDs offer data integrity too? Once the data on this RAID 0 has finished collecting its parts, it will be moved automatically to the RAID 6 array, hence why I'm worried about data integrity on the temporary RAID 0 array; I need it to be accurate before it goes into static storage. But I also need a minimum of around 8TB+ moving at a minimum rate of 1,000 MB/s, hence the temporary RAID 0 disk... Does software RAID 0 offer the same performance as the motherboard controller would when striping 4x 4TB SSDs, with throughput past 1,000 MB/s?
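For what it's worth, once it's built I figure I can sanity-check that throughput with a rough Python sketch like the one below. The drive letter, test size and chunk size are just placeholders, and the read number can be inflated by the OS cache if the test file isn't bigger than RAM.

```python
# Rough sequential write/read throughput check for the temporary RAID 0 volume.
# Placeholders: point TEST_FILE at the striped volume's drive letter and make
# TEST_SIZE_GB large enough that the read pass isn't served entirely from cache.
import os
import time

TEST_FILE = r"R:\throughput_test.bin"   # hypothetical mount point of the RAID 0 volume
TEST_SIZE_GB = 16
CHUNK_MB = 64

def run_test():
    chunk = os.urandom(CHUNK_MB * 1024 * 1024)
    n_chunks = TEST_SIZE_GB * 1024 // CHUNK_MB

    # Sequential write
    start = time.perf_counter()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(n_chunks):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())            # make sure the data actually hit the disks
    write_secs = time.perf_counter() - start

    # Sequential read (may be optimistic if the file still fits in the OS cache)
    start = time.perf_counter()
    with open(TEST_FILE, "rb", buffering=0) as f:
        while f.read(CHUNK_MB * 1024 * 1024):
            pass
    read_secs = time.perf_counter() - start

    total_mb = TEST_SIZE_GB * 1024
    print(f"write: {total_mb / write_secs:,.0f} MB/s")
    print(f"read:  {total_mb / read_secs:,.0f} MB/s")
    os.remove(TEST_FILE)

if __name__ == "__main__":
    run_test()
```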
Since I have no experience with Unix/Linux OSes or software that creates RAID, like ZFS or whatever, I have another question regarding data integrity. If I create a hardware RAID 6 array, is it possible to catalogue the array via third-party RAID software (in Windows; I'm using Windows) and then auto-repair any corruption, kind of like a patrol read but done by something third-party to the controller? Using software to do a disk-wide or dataset-wide checksum would, I believe, be too much of a resource hog and take too much time on a mechanical 200TB RAID, hence that third-party catalogue idea.
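To make the catalogue idea a bit more concrete, this is roughly what I'm picturing: a minimal Python sketch (the paths and manifest name are made up) that records a SHA-256 hash per file into a manifest and can later re-scan the array and flag anything that no longer matches. It only detects corruption; any repair would still have to come from the controller or from backups.

```python
# Minimal checksum-catalogue sketch: build a manifest of SHA-256 hashes for every
# file on the array, then verify against it later. Paths are placeholders, and a
# full re-verify of ~200TB of spinners will still take a very long time.
import hashlib
import json
import sys
from pathlib import Path

ARRAY_ROOT = Path(r"D:\archive")            # hypothetical mount point of the RAID 6 volume
MANIFEST = Path(r"D:\archive_manifest.json")

def sha256_of(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest() -> None:
    catalogue = {str(p.relative_to(ARRAY_ROOT)): sha256_of(p)
                 for p in ARRAY_ROOT.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(catalogue, indent=2))
    print(f"catalogued {len(catalogue)} files")

def verify_manifest() -> None:
    catalogue = json.loads(MANIFEST.read_text())
    bad = [name for name, digest in catalogue.items()
           if sha256_of(ARRAY_ROOT / name) != digest]
    print("all files match" if not bad else f"mismatched files: {bad}")

if __name__ == "__main__":
    build_manifest() if "--build" in sys.argv else verify_manifest()
```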

Hey, Happy new year. And thanks for all the help!
 

Jantee

New Member
Jan 1, 2024
7900X3D on an X670E Strix. Currently getting by on the Asus TUF X670E.

The TUF is OK; it has two PCIe 4.0 x4 slots. The Strix has an extra 5.0 x8 slot plus a 4.0 x4 slot. So if these NVMe adapter shenanigans can't be achieved, I'll have to buy the Strix and sell the TUF.

If you don't need the memory bandwidth or gaming performance, you can do this with a 7600X and whatever two- or three-slot X670E board you can find for cheap.

There is something I would like to point out, though. Just prior to the 2023 Threadripper release, most of the X670E lineup went EOL, and the X670E lineup has poor PCIe expandability anyway. Now that the 2023 Threadripper is out, we may see a new consumer AM5 chipset, or just new X670E boards with more PCIe offerings. It looks like that's going to be the case anyway.
 
Last edited:

Jantee

New Member
Jan 1, 2024
Doing some more diligent research, it seems Micro SATA Cables offers the most reliable-looking solution for this NVMe-to-PCIe case.

[Link]

I'm going to email them in the next week or two and ask for a complete, reliable solution. It seems OCuLink is the best way to go for reliability, and the product in the link above seems to tick all the boxes for PCIe technologies and protocols. It is expensive, though. But it seems it is possible.
 

Blue4130

New Member
Jan 14, 2023
Adaptec 3254-16i SAS controller.

The main configuration I stated above is just the beginning of a future RAID expansion. The case all of this is housed in is the Thermaltake WP200, with the bottom cavity (P200) kitted out to house the first 16-drive "transplantable" array, and the rest of the case fitted to accommodate something like 60 drives. Now, these being spinning mechanical drives, I don't need the 2M IOPS and 32 Gb/s of PCIe x8 bandwidth the SAS controller is capable of; I could get away with less. Where I'm going with this is: when I choose to add further RAID cards to the AM5 platform, I'm going to have to get creative with adding PCIe slots, and this is where I need some guided experience with how I'm going to do it.
You know that the RAID card that you are looking at supports 256 drives, right? No need for a second card, just get SAS expanders.
 

Jantee

New Member
Jan 1, 2024
I do know that, but I can't find an Adaptec PCIe 4.0, 24Gbps SAS expander. Will any brand negotiate with it?
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Yes.
But the Adaptec AEC-82885T will serve your current & future needs the best. AND there is presently a glut of them available at very low price. E.g. [Link]
Use this cable [Link] to connect 2 expanders to your RAID card.
 

nexox

Well-Known Member
May 3, 2023
I can't find an Adaptec PCIe 4.0
Expanders only use the PCIe slot for power, so there's no 4.0 option; all the data travels over SAS cables to your RAID controller. Speaking of cables, it won't be fun at all to wire 60+ SAS drives into controllers or expanders without a backplane, which is why many people opt for external hot-swap disk shelves.
 

Jantee

New Member
Jan 1, 2024
OK, I'm learning new things here. After I connect two SAS expanders I only have 20 internal drive connections left on those cards. But I'm pretty sure I can chain SAS expanders off each other? Or am I wrong in assuming this?
 

CyklonDX

Well-Known Member
Nov 8, 2022
A single 9300-16i supports up to 1,024 non-RAID devices (while limited in bandwidth to 8 GB/s).
You will obviously need 1-4 backplanes supporting that many.
 

Jantee

New Member
Jan 1, 2024
OK, I think I've worked out a solution. My controller is ~16 GB/s (PCIe 4.0 x8). I'll use two of those cables you linked above and connect to two SAS expanders.
Then via the external ports on those SAS expanders I'll route to another two SAS expanders in the top half of the chassis to spread the lanes, and via the internal daisy-chain mini-SAS plugs I'll give one x4 mini-SAS link to a fifth expander to spread the throughput out evenly.

In the bottom case, where the controller and two SAS expanders will be, I'll connect eight drives per expander to keep the bandwidth per cable down for my 16-drive array. Then in the top half of the chassis I'll spread the drives evenly across the two expanders with full x8 inputs, and put half that number of drives on the fifth expander.

This is the only way I can get to 60 drives with an even distribution of mini-SAS lanes. That's bang on a theoretical 15,000 MB/s, which is my limit on the PCIe interface for the card, although after overhead I'm sure it will look more like 10,000-12,000 MB/s (rough numbers sketched below).

Edit: the Adaptec 82885T documentation states the external and last two internal ports are for daisy-chaining only. Still cool.

Edit 2: this all assumes that even bandwidth distribution comes down to the capacity of each mini-SAS cable and therefore requires a spread like this. Unless I'm incorrect and the controller has magic ✨ inside.
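Back-of-the-envelope numbers behind that 15,000 MB/s figure, with everything assumed: ~250 MB/s sequential per spinner, 12 Gb/s raw per SAS3 lane, ~15,750 MB/s for PCIe 4.0 x8, and my guess at how the 60 drives split across the five expanders.

```python
# Rough sanity check of the planned layout. All figures are assumptions, not measurements.
MB_PER_DRIVE = 250                   # ~sequential throughput per spinner
LANE_MBPS = 12_000 / 8               # one SAS3 lane, raw line rate, in MB/s
PCIE_LIMIT = 15_750                  # PCIe 4.0 x8, theoretical

# (drives, uplink lanes) per expander: two in the bottom case, three in the top,
# matching the split described above -- the exact drive counts are my guess.
expanders = [(8, 4), (8, 4), (18, 8), (18, 8), (8, 4)]

for i, (drives, lanes) in enumerate(expanders, start=1):
    need = drives * MB_PER_DRIVE
    uplink = lanes * LANE_MBPS
    print(f"expander {i}: {drives} drives need ~{need:,} MB/s, uplink ~{uplink:,.0f} MB/s")

total_drives = sum(d for d, _ in expanders)
total_need = total_drives * MB_PER_DRIVE
print(f"total: {total_drives} drives, ~{total_need:,} MB/s vs ~{PCIE_LIMIT:,} MB/s at the controller")
```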
 
Last edited:

Jantee

New Member
Jan 1, 2024
Yeah, so I think I'm gonna scrap the multi-controller-card idea and run it off PCIe 4.0 riser cables from the PCIe 4.0 x8 slot, and then power some expanders wherever I need them. That solves heaps of headaches I was trying to figure out by redirecting PCIe lanes from NVMe, etc.

You guys have helped me a lot today and I appreciate it.

This is gonna cost more in cables than it is in expanders lol.

I'm gonna fasten/mount the mating part of the PCIe riser cable to each part of the respective chassis, and then have a third riser, 500mm or whatever, between them. Same with the 24-pin PSU power.
 
Last edited:

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
seanho.com
I tried but wasn't able to make sense of your last two messages, sorry! I'm not clear on how many chassis you have and how many drives per chassis, but you only need one expander per chassis. Those AEC-82885 are so named because they can do 28 drives in the internal ports (7x 8643, 4 drives each) plus 8 on external (2x 8644). All ports are symmetric; there's no difference between "daisy chain" and "regular" ports. It's like a network switch but for SAS.

To get from the main system to the DAS chassis, you'd ideally use 8644-8644 cables. There are brackets to adapt 8643-8644 passively, though they're probably not much cheaper than a whole expander.
 

Jantee

New Member
Jan 1, 2024
14
0
1
Four of the mini-SAS connections on the 82885T are not for hard drives, as I wrote above. According to the manual, ports 1-5 are for HDDs only; the rest are for linkage. I have two cases/chassis. One of them will be capped at 16 drives.

To get as many mini-SAS lanes as I can from the 16-drive case to the other, I will run two SAS expanders in the first case to two (maybe three) SAS expanders in the second case.

I'll do it like this because those ports are only x4 capable, and putting 20-40 drives on an expander connected via one x4 mini-SAS link sounds like a bottleneck.

Sorry man. I hope this clears it up.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
seanho.com
And the second chassis holds 40 drives? Is this like a 45drives Storinator, or a SSG-5049P, or perhaps a 36-bay CSE-847? If the second chassis has its own backplanes, that makes a difference.

Spinners at roughly 250MB/s sequential would saturate 4 lanes of 12Gbps SAS3 at about 24 drives.
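Rough arithmetic behind that figure, treating 12 Gb/s per lane as raw line rate and assuming ~250 MB/s per drive (usable throughput after SAS protocol overhead is somewhat lower, so the real number is a few drives fewer):

```python
# How many ~250 MB/s spinners it takes to fill a 4-lane SAS3 link (raw line rate).
lane_gbps = 12                      # SAS3, per lane
lanes = 4
drive_mbps = 250                    # assumed sequential rate for a large spinner

link_mbps = lane_gbps * lanes * 1000 / 8     # ~6,000 MB/s raw for an x4 link
print(f"x4 SAS3 link: ~{link_mbps:,.0f} MB/s")
print(f"drives to saturate it: ~{link_mbps / drive_mbps:.0f}")   # ~24
```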
 

Jantee

New Member
Jan 1, 2024
14
0
1
The case is a Thermaltake WP200. The top part, with a motherboard installed on one side, has a capacity of 70 drives (5x 14-drive sleds). The bottom has capacity for only 20 drives with a PSU in there with them; without the PSU it's 24, but I'm putting the controller and a standalone PSU in the bottom half.
 

Jantee

New Member
Jan 1, 2024
14
0
1
I know everyone says it's gonna suck to wire it all, but I don't believe so. I have ideas for the wiring, labeling and stuff like that, and I think it's gonna be fun to know the build back to front myself.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
The case is a Thermaltake WP200. The top part, with a motherboard installed on one side, has a capacity of 70 drives (5x 14-drive sleds). The bottom has capacity for only 20 drives with a PSU in there with them; without the PSU it's 24, but I'm putting the controller and a standalone PSU in the bottom half.
WHY ???
(from earlier: )
Yeah, so I think I'm gonna scrap the multi-controller-card idea and run it off PCIe 4.0 riser cables from the PCIe 4.0 x8 slot.
(from even earlier [#2 & #3]:
Question: What motherboard/processors are you looking at?
Your answer: 7900X3D on an X670E Strix. Currently getting by on the Asus TUF X670E.
)
Since the TUF motherboard doesn't have an x8 slot, have you committed to the Strix? (If so, you will be gaming with a video card on x4 PCIe lanes.)
The case all of this is housed in is the Thermaltake WP200, with the bottom cavity (P200) kitted out to house the first 16-drive "transplantable" array.
Have you now abandoned this plan for "transplantable" arrays?
And, are you using SATA or SAS HDDs? (or a mix? [what ratio?])
 

nexox

Well-Known Member
May 3, 2023
I know everyone says it's gonna suck to wire it all, but I don't believe so. I have ideas for the wiring, labeling and stuff like that, and I think it's gonna be fun to know the build back to front myself.
Labeling isn't very important, especially once you have SAS expanders in the mix; the real issue is just the quantity of wire. The four-lane breakout cables take up a fair amount of space, and then you also need one SATA or Molex power connector per drive, everything is a different length, and SAS cables often have a minimum bend radius, so they're difficult to wrap up neatly. I would definitely set a machine on fire and toss it out a window if I attempted more than 12 or 16 SAS drives with that kind of cabling in one chassis. SATA is slightly better because power connects directly to the drive rather than to a second connector on the breakout cable, but still, it would probably end up out the window.