Hi, happy New Year!
I'm building an AM5 workstation/server/LAN rig. The main use of the build will be a hardware RAID 6 array made up of 16× 16TB drives on an Adaptec 3254-16i SAS controller, plus a graphics card with HDMI 2.1 and some ECC memory, all running Windows.
AM5 is undeniably fast: strong multi-core and single-core performance, and on dual-CCD CPUs the DDR5 bandwidth can approach 100 GB/s. It's an extremely competitive solution, and cost-effective compared to something like a Threadripper PRO 5955WX once you factor in the price of memory, motherboards, etc.
Anyway, my main caveat with AM5 is the limited number of full- or even half-speed PCIe slots. The two main reasons for going with AM5 are future upgradeability to new CPUs, and its gaming performance versus Threadripper/HEDT platforms, which breathes extra life, value and usability into the workstation.
The configuration above is just the beginning of a future RAID expansion. Everything is housed in a Thermaltake WP200, with the bottom cavity (the P200 section) kitted out to hold the first 16-drive "transplantable" array, and the rest of the case fitted to accommodate something like 60 drives. Since these are spinning mechanical drives, I don't need the 2 million IOPS and full PCIe x8 bandwidth the SAS controller is capable of; I could get away with less. Where I'm going with this is: when I choose to add further RAID cards to the AM5 platform, I'm going to have to get creative with adding PCIe slots, and this is where I need some guided experience on how to do it.
I plan to use an M.2 NVMe to PCIe x4 riser/adapter cable from the first 5.0 M.2 slot on AM5 for the 16-drive array, and then a riser cable from the chipset's PCIe 4.0 x4 slot (or get a new motherboard and use a second 5.0 x8 slot) for the second, stationary array that's going into the primary part of the case (the W200 section).
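Just to sanity-check that dropping the controller down to an x4 link is fine for spinning drives, here's a rough back-of-the-envelope calc. The per-drive and per-lane figures are assumptions on my part, not measurements from this exact build:

```python
# Rough sequential-throughput sanity check for running the SAS controller
# on a PCIe 4.0 x4 link instead of x8. All figures are assumptions.

drives = 16
hdd_seq_mb_s = 270        # assumed sustained sequential rate per 16TB HDD
pcie4_lane_gb_s = 1.97    # ~1.97 GB/s usable per PCIe 4.0 lane (after 128b/130b encoding)

array_peak_gb_s = drives * hdd_seq_mb_s / 1000
x4_link_gb_s = 4 * pcie4_lane_gb_s

print(f"Array peak sequential: ~{array_peak_gb_s:.1f} GB/s")  # ~4.3 GB/s
print(f"PCIe 4.0 x4 link:      ~{x4_link_gb_s:.1f} GB/s")      # ~7.9 GB/s
# -> even all 16 HDDs streaming flat out sit well under an x4 link
```

So on paper the x4 adapter route shouldn't bottleneck the mechanical array; my worry is purely about signal integrity over the cables, which is the next point.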
I am wary that using converter cables or even riser cables might cause errors in the disk array; janky riser and adapter setups have a reputation for being unreliable. I'm hoping someone here has used, or is using, riser cables to connect RAID controllers, or has used an M.2-to-PCIe-slot adapter before and can vouch for its reliability. I pretty much need to use riser cables for the 16-drive array to keep it "transplantable" and portable, so it can just be plugged into a separate system via the riser cable and used straight away.
I know what I'm asking is a stretch, but I don't want to buy an extremely expensive, long riser cable or M.2 adapter cable just to test this when it's possible someone on the forums already has experience with this situation. If there is a problem with the proposed setup, I'll have to buy a different motherboard and ditch the riser cable ideas.
I also have some queries about software RAID that I don't think have a clean answer. Software RAID is supposed to have better data integrity because it can check data in real time as it moves onto the array. If I create a temporary "parts collection" RAID 0 volume in software over the motherboard SATA ports, am I right in assuming I'd be better off for data integrity using RAID software, or does motherboard RAID 0 across 2-4 SSDs offer comparable integrity? Once the data on this RAID 0 volume has finished collecting its parts, it will automatically be moved to the RAID 6 array, which is why I'm worried about integrity on the temporary RAID 0: it needs to be accurate before it goes into static storage. I also need around 8TB+ moving at a minimum of about 1,000 MB/s, which is why I want the RAID 0 temp disk in the first place. Does software RAID 0 offer the same performance as the motherboard controller would when striping four 4TB SSDs, i.e. throughput past 1,000 MB/s?
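Since what I really care about is the data being intact by the time it lands on the RAID 6 array, one thought (independent of whether the temp stripe is software or motherboard RAID) is to verify checksums as part of the move. A minimal Python sketch of what I mean, with hypothetical drive letters for the staging and archive volumes:

```python
import hashlib
import shutil
from pathlib import Path

STAGING = Path(r"D:\staging")   # hypothetical RAID 0 temp volume
ARCHIVE = Path(r"E:\archive")   # hypothetical RAID 6 volume

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

for src in STAGING.rglob("*"):
    if not src.is_file():
        continue
    dst = ARCHIVE / src.relative_to(STAGING)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    # only delete the staged copy once source and destination hashes match
    if sha256(src) == sha256(dst):
        src.unlink()
    else:
        print(f"HASH MISMATCH, kept staged copy: {src}")
```

That at least catches anything the temp array (or the cables) mangles before the staged copy is thrown away.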
Since I have no experience with Unix/Linux OSes or RAID-like software such as ZFS, I have another question about data integrity. If I create a hardware RAID 6 array, is it possible to catalogue the array via third-party software (in Windows; I'm using Windows) and then automatically repair any corruption, kind of like a patrol read but done outside the controller? I believe doing a disk-wide or dataset-wide checksum in software would be too much of a resource hog and take too long on a mechanical 200TB RAID, hence the third-party catalogue idea.
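For what it's worth, the "catalogue" half of this can be done with a plain file-level checksum database and periodic verification runs; that only detects corruption rather than repairing it (repair still means restoring from a backup or parity files like PAR2, since the checksums carry no redundancy), and a full pass over ~200TB of spinning disks will indeed take a long while, though it can be run incrementally or per data set. A rough Python sketch of the idea, with hypothetical paths:

```python
import hashlib
import json
from pathlib import Path

ARRAY_ROOT = Path(r"E:\archive")              # hypothetical RAID 6 volume
CATALOGUE  = Path(r"C:\raid_catalogue.json")  # hypothetical catalogue file

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_catalogue() -> None:
    """Record a checksum for every file currently on the array."""
    cat = {str(p.relative_to(ARRAY_ROOT)): sha256(p)
           for p in ARRAY_ROOT.rglob("*") if p.is_file()}
    CATALOGUE.write_text(json.dumps(cat, indent=2))

def verify_catalogue() -> None:
    """Re-hash files and report anything that changed or went missing."""
    cat = json.loads(CATALOGUE.read_text())
    for rel, expected in cat.items():
        p = ARRAY_ROOT / rel
        if not p.exists():
            print(f"MISSING: {rel}")
        elif sha256(p) != expected:
            print(f"CORRUPT/CHANGED: {rel}")

if __name__ == "__main__":
    build_catalogue()  # run once after data lands; re-run verify_catalogue() periodically
```

So the question is really whether anyone knows of Windows software that does this sort of thing properly on top of a hardware controller, ideally with some way to repair as well as detect.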
Hey, happy New Year, and thanks for all the help!