did you use the front or rear USB ports? We've had issues on that generation of HPs where even a physical KVM doesn't work inside the BIOS when plugged into the rear USBs - we had to plug a keyboard directly into the front USB ports to get into BIOS/boot menu
version of esxi doesn't matter as it's all handled in the VM itself - esxi just passes thru the raw pcie device to the vm
the limit was the chassis - it only HAS 4 u.2 bays, the md driver is extremely flexible and feature rich ( qnap and synology NAS units all use md under the hood as their...
we're running esxi 7 on our system - the nvme are pcie pass-thru'd to the VM running rhel8 ( this is on a supermicro 740gp-tnrt which has 4 bays of u.2 direct off the motherboard )
what OS are you running? We've got 4 * u.2 in raid0 on one cpu/gpu server using the linux kernel soft-raid ( md ) driver - getting the performance we were after ( this is local highspeed scratch storage for image analysis ) and cpu impact is minimal.
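for anyone wanting to replicate: a minimal sketch of the md raid0 setup - the device names (nvme0n1..nvme3n1), chunk size and mount point are assumptions, adjust for your box:

```shell
# create a 4-way raid0 scratch array with md (sketch - run as root on the target box):
#
#   mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=512K \
#       /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
#   mkfs.xfs /dev/md0
#   mount /dev/md0 /scratch
#   cat /proc/mdstat        # verify the array came up
#
# back-of-envelope: raid0 sequential throughput scales roughly with device count
PER_DEV_MBS=3000   # assumed per-drive sequential read, MB/s
NDEV=4
echo "theoretical aggregate: $((PER_DEV_MBS * NDEV)) MB/s"
```

raid0 obviously has zero redundancy - fine for scratch data you can regenerate, not for anything you care about.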
there's a note on the tech specs page for that old z370 board about the bottom slot ( https://rog.asus.com/motherboards/rog-maximus/rog-maximus-x-hero-model/spec/ )
1 x PCIe 3.0 x16 (x4 mode) *1
Note
*1 The PCIe x4_3 slot shares bandwidth with PCIex1_3. The PCIe x4_3 runs x2 mode by default...
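that x2-by-default note matters more than it looks - rough numbers (a pcie 3.0 lane is ~985 MB/s usable after 128b/130b encoding):

```shell
# approximate usable bandwidth of that bottom slot in each mode
PER_LANE_MBS=985   # pcie 3.0, ~985 MB/s usable per lane
echo "x4 mode: $((PER_LANE_MBS * 4)) MB/s"
echo "x2 mode: $((PER_LANE_MBS * 2)) MB/s"   # roughly half of what an nvme drive expects
```

so leaving it at the default x2 will cap a gen3 nvme drive at around 2 GB/s even before protocol overhead.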
if you're happy to ship across the ditch then PBtech carries samsung enterprise u.2 for ok prices:
https://www.pbtech.com/au/category/components/ssdrives/ssd-u2-nvme
they're a BIG operation - branches up and down the country and active in both the consumer and enterprise space ( they're also...
in that case just move it to an X1 slot - it'll down-shift just fine ( I've got a qnap ts453d with an aqc107 in it - the expansion slot on this is pcie2x2 and it again works just fine but is similarly bandwidth limited - doesn't matter for that system as the array is the bottleneck not the nic )
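you can confirm what the card actually negotiated from the OS - a sketch (the pci address 01:00.0 is an example, find yours with plain lspci first):

```shell
# check negotiated link width/speed vs what the card is capable of:
#
#   lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
#
# e.g. an aqc107 (pcie3 x4 capable) in a pcie2 x2 slot reports LnkSta 5GT/s x2.
# rough math on why that's still mostly fine for 10gbe:
PCIE2_LANE_MBS=500   # ~500 MB/s usable per pcie 2.0 lane
TENGBE_MBS=1250      # 10gbe line rate in MB/s
echo "pcie2 x2: ~$((PCIE2_LANE_MBS * 2)) MB/s vs $TENGBE_MBS MB/s for full 10gbe"
```

so a pcie2 x2 link caps you a bit under line rate - which, as above, doesn't matter when the array is the bottleneck anyway.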
the b550-f has one x16/16 slot ( where your gpu is ) one x16/4 slot ( presumably that's what your current 10gbe card is in ) and 3 x1 slots
you don't say what your current 10gbe card is but I'm guessing it's something old ( x520/x540 etc are pcie2x8 but work fine in an x4 slot - you just can't...
SMB multichannel has nothing to do with link aggregation - indeed they're mutually incompatible
SMB-MC is no longer 'beta/experimental' - it's available in dsm7.1.1 onwards
it actually stopped being 'experimental' in samba 4.15.0, released in sept 2021 - it's just taken qnap/synology ages...
client does not need to support rdma
from ms docs
At least one of the following configurations:
Multiple network adapters
One or more network adapters that support Receive Side Scaling (RSS)
Multiple network adapters that are teamed (see NIC teaming)
One or more network adapters that support...
yes - link aggregation for SMB only helps the server ( when you have multiple clients ) - for a single client LAGG will do nothing
and yes - to set up LAGG you have to have a managed switch - the switch has to be configured to support it and there's no way to do that on a 'dumb' switch
SMB multichannel...
also presumably you HAVE verified the setup on your ds1821 can actually fully saturate a single 10gbe link ( you'd either need to be using nvme caching, ssd arrays or at least 4 hdds in raid0 )
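a quick way to check that, separating network from disk - a sketch (hostnames/paths are examples, adjust for your setup):

```shell
# 1. raw network throughput, NAS <-> client (needs iperf3 on both ends):
#
#   iperf3 -s                       # on the NAS
#   iperf3 -c nas.local -P 4        # on the client, 4 parallel streams
#
# 2. array throughput on the NAS itself, bypassing the page cache:
#
#   fio --name=seqread --filename=/volume1/testfile --rw=read \
#       --bs=1M --size=10G --direct=1
#
# for reference, saturating 10gbe needs sustained reads of about:
echo "$((10000 / 8)) MB/s"
```

if the fio number is well under that, multichannel/LAGG won't buy you anything - the disks are the limit.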
if your aim with link aggregation is to get better speed for the clients to the NAS then you can skip it ( and hence avoid the need for a managed switch ) - DSM7.2 supports smb multichannel, as do recent versions of macOS ( Configure SMB Multichannel behavior ) - and smb-mc will give you the...
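for a plain samba box ( rather than DSM's GUI toggle ) enabling it is a one-line smb.conf setting - minimal sketch; on recent samba builds it may already be on by default:

```
[global]
    # SMB3 multichannel - no longer experimental since samba 4.15
    server multi channel support = yes
```

clients then just need either multiple NICs or a single RSS-capable NIC, per the ms docs quoted above.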
building an opnsense/pfsense/tnsr appliance capable of routing 10gbps is quite do-able - however the NIC requirements plus the CPU requirements ( you'll need both decent single core speed AND a reasonable core count ) means it's hard to achieve in a small/low power/shelf-mountable system like...
also since the 5009 only has a single 10g sfp+ port you'll need to send the ont's 10g to your switch and use vlans/trunking to run the 5009 in a 'router on a stick' topology ( no problem doing this since the internet service is only 5gbps )
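a very rough routeros sketch of that router-on-a-stick setup - the vlan ids, names and address here are made up, and you'd tag both vlans on the switch port facing the 5009:

```
/interface vlan add name=vlan-wan vlan-id=10 interface=sfp-sfpplus1
/interface vlan add name=vlan-lan vlan-id=20 interface=sfp-sfpplus1
/ip address add address=192.168.88.1/24 interface=vlan-lan
```

the ont's 10g then lands on the switch in the wan vlan, and all routing between the two vlans happens over the single sfp+ trunk.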
something x86 based, able to route that kind of load and small/quiet/shelf mountable is going to be very hard - maybe have a look at something like the mikrotik rb5009 instead
supermicro do a pile of GPU oriented systems - starting with the 7049GP and 740GP 'desktop' systems ( which support 4 enterprise grade gpus ) going up to monster systems
https://www.supermicro.com/en/products/gpu
just be aware that even the 7049GP/740GP are very loud - as they have to push...
boot linux off a pendrive and see what is visible - this sounds like a freeBSD driver issue
If it works on Linux you may have to switch from TrueNAS core (freeBSD based) to TrueNAS scale (linux based)
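from the live-linux pendrive, a quick sketch of what to check ( nothing here is system specific beyond the grep patterns ):

```shell
# is the controller even visible on the bus?
#
#   lspci -nn | grep -i -E 'sata|sas|nvme|raid'
#
# do the disks themselves show up?
#
#   lsblk -o NAME,SIZE,MODEL
#
# any driver errors during probe?
#
#   dmesg | grep -i -E 'ahci|nvme|mpt'
#
uname -s   # sanity check - should print Linux when booted from the pendrive
```

if lspci sees the controller and lsblk sees the disks under linux but TrueNAS core doesn't, that points squarely at a freeBSD driver gap.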
I've got the base-T version in use here ( aoc-stgn-i2t - x540 equiv ) under esxi, one port to a 10gbe port on the switch and one to a 1gbe (running opnsense virtualised - the 1gbe port is to connect to my fibre ONT) - no problems at all negotiating the lower speed
what you're seeing could be a...