apart from asus/asrock there's also the supermicro w790 boards ( X13SWA-TF and X13SRA-TF ) - we purchased one of their workstations based around it at the start of the year for a project that had some budget constraints so we...
most of the in-wall cabinets I've seen just use the round holes - seems to be fairly 'standard', with lots of modules from different vendors that work with them ( they mount via push-pins, same as on that Legrand HTE-1012 shown in your picture )
square holes look way too small for proper rack nuts (...
I've used small neodymium super magnets to do this with similar small metal units like that ( netgear 5 port switches and edgerouter er-x ) to 'stick' them directly to the metal cabinet - works really well, if anything almost TOO well ( don't go for big magnets - it becomes very hard to remove the...
you can see how the BBU connects to the 9260-16i here: https://i.ebayimg.com/images/g/QFUAAOSweEFcrtiy/s-l1200.webp - they're remote-mounted on the 16i as the bigger heatsink ( compared to the 8i ) means there's no space to mount the BBU directly on the card
do you have the BBU installed? Those cards have cache - but will only automatically enable write cache if the BBU is present ( you can force it on - but of course without the BBU that's risky )
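if you want to check or force it, a rough sketch with storcli - the /c0/v0 controller/volume numbers here are just placeholders for whatever yours enumerate as, and the older MegaCli tool has equivalent commands for these 2108-based cards:

# show the current cache policy on controller 0 / virtual drive 0
storcli64 /c0/v0 show all

# write-back only while the BBU is healthy ( the safe setting )
storcli64 /c0/v0 set wrcache=wb

# force write-back regardless of BBU state - fast, but risks losing in-flight writes on power loss
storcli64 /c0/v0 set wrcache=awb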
I ran into similarly poor write perf on a 9440-8i - and it was due to having no write cache (it's a...
did you use the front or rear USB ports? We've had issues on that generation of HPs where even a physical KVM doesn't work inside the BIOS when plugged into the rear USBs - we had to plug a keyboard directly into the front USB ports to get into the BIOS/boot menu
the version of esxi doesn't matter as it's all handled in the VM itself - esxi just passes the raw pcie device through to the vm
the limit was the chassis - it only HAS 4 u.2 bays. The md driver is extremely flexible and feature-rich ( qnap and synology NAS units all use md under the hood as their...
we're running esxi 7 on our system - the nvme drives are pcie passed-through to the VM running rhel8 ( this is on a supermicro 740gp-tnrt which has 4 bays of u.2 direct off the motherboard )
what OS are you running? We've got 4 x u.2 in raid0 on one cpu/gpu server using the linux kernel soft-raid ( md ) driver - we're getting the performance we were after ( this is local high-speed scratch storage for image analysis ) and the cpu impact is minimal.
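for reference the setup is roughly the below - a sketch only, the device names and mount point are just examples ( check what your u.2 drives actually enumerate as with nvme list first ):

# create a 4-drive raid0 md array across the u.2 nvme drives
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# filesystem + mount point for the scratch area
mkfs.xfs /dev/md0
mkdir -p /scratch
mount /dev/md0 /scratch

# record the array so it assembles on boot ( rhel path )
mdadm --detail --scan >> /etc/mdadm.conf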
there's a note on the tech specs page for that old z370 board about the bottom slot ( https://rog.asus.com/motherboards/rog-maximus/rog-maximus-x-hero-model/spec/ )
1 x PCIe 3.0 x16 (x4 mode) *1
Note
*1 The PCIe x4_3 slot shares bandwidth with PCIex1_3. The PCIe x4_3 runs x2 mode by default...
if you're happy to ship across the ditch then PBtech carries samsung enterprise u.2 drives for ok prices:
https://www.pbtech.com/au/category/components/ssdrives/ssd-u2-nvme
they're a BIG operation - branches up and down the country and active in both the consumer and enterprise space ( they're also...
in that case just move it to an x1 slot - it'll down-shift just fine ( I've got a qnap ts-453d with an aqc107 in it - the expansion slot on that is pcie 2.0 x2 and it likewise works fine, just bandwidth-limited - doesn't matter for that system as the array is the bottleneck, not the nic )
the b550-f has one x16 slot running at x16 ( where your gpu is ), one x16 slot running at x4 ( presumably that's what your current 10gbe card is in ) and three x1 slots
you don't say what your current 10gbe card is but I'm guessing it's something old ( x520/x540 etc are pcie2x8 but work fine in an x4 slot - you just can't...
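if you want to see what a card has actually negotiated once it's in a given slot, lspci shows both what it's capable of and what it trained at - a sketch, the 02:00.0 address is just an example:

# find the pcie address of the nic
lspci | grep -i ethernet

# LnkCap = what the card supports, LnkSta = what it actually negotiated in that slot
sudo lspci -s 02:00.0 -vv | grep -E 'LnkCap:|LnkSta:'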
SMB multichannel has nothing to do with link aggregation - indeed they're mutually incompatible
SMB-MC is no longer 'beta/experimental' - it's available in dsm 7.1.1 onwards
it actually stopped being 'experimental' in samba 4.15.0, released in sept 2021 - it's just taken qnap/synology ages...
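on stock samba it's a one-line smb.conf setting - shown here purely to illustrate the upstream knob, since dsm/qts wrap it in their own UI/config:

[global]
    # SMB3 multichannel - stopped being experimental in samba 4.15
    server multi channel support = yes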
the client does not need to support rdma
from the ms docs:
At least one of the following configurations:
Multiple network adapters
One or more network adapters that support Receive Side Scaling (RSS)
Multiple network adapters that are teamed (see NIC teaming)
One or more network adapters that support...
yes - link aggregation for SMB only helps the server ( when you have multiple clients ) - for a single client LAGG will do nothing
and yes - to set up LAGG you have to have a managed switch - the switch has to be configured to support it, and there's no way to configure a 'dumb' switch
SMB multichannel...
also presumably you HAVE verified that the setup on your ds1821 can actually fully saturate a single 10gbe link ( you'd either need to be using nvme caching, ssd arrays or at least 4 hdds in raid0 )
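an easy way to sanity-check that before blaming the network is a quick fio run on the NAS itself - a sketch, assuming you can get fio onto the box; the path and size are just examples ( 10gbe tops out around 1.1-1.2 GB/s in practice ):

# sequential read straight off the array, bypassing the page cache
fio --name=seqread --filename=/volume1/fio-test.bin --rw=read --bs=1M --size=8G --direct=1

# sequential write
fio --name=seqwrite --filename=/volume1/fio-test.bin --rw=write --bs=1M --size=8G --direct=1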
if your aim with link aggregation is to get better speed from the clients to the NAS then you can skip it ( and hence avoid the need for a managed switch ) - DSM 7.2 supports smb multichannel, as do recent versions of macOS ( see Apple's 'Configure SMB Multichannel behavior' support page ) - and smb-mc will give you the...