As far as "supported" by Windows ... in this case, I think that's just based on whether you're limited to using it as a PCIe device that you provide control fo and use it as external HDD (SSD) as an HBA ... or if they provide the Software to configure it as RAID 0/1/10, etc.
I've spoken to...
Trying to figure out what hardware leads to the best value in an NVMe // SSD array's performance:
Please provide a hardware overview of either an SSD or NVMe array.
Approximate performance (best recollection) is totally adequate.
System (if OEM, e.g. Dell, HPE, etc.)
Array config (RAIDzX, etc.)
HBA...
Can you remind me of the context of what that's answering..?
I HAVE I HAVE I HAVE. lol. But I certainly don't blame you for not reading all this crap.
Still ... I TESTED ZFS with 1x NVMe ... (with FIO and real-world transfers).
Testing 1 drive, 3 drives in RAIDz1, and 8 in RAIDz2 told you "nothing", apparently ...
NICE! Were you able to use SMB Multichannel in TrueNAS!??
What are you using to get that ..?
I'm assuming you're using 2x SFP+ cables to get that 1.5x ... yes?
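(Back-of-the-envelope math on that, assuming these are 10GbE SFP+ links; the overhead figure is my guess:)

```python
# Rough ceiling for 2x 10GbE SFP+ links under SMB Multichannel.
# Assumptions (mine): 10 Gb/s line rate per link, ~10% protocol overhead.
link_gbps = 10
links = 2
overhead = 0.90
per_link = link_gbps / 8 * overhead      # ~1.13 GB/s usable per link
aggregate = per_link * links             # ~2.25 GB/s theoretical ceiling
print(f"per link ~{per_link:.2f} GB/s, aggregate ~{aggregate:.2f} GB/s")
# Seeing ~1.5x a single link is plausible; multichannel rarely hits a clean 2x.
```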
Oh, remember, it does for me ALSO ... in Ubuntu ... UNTIL I configure a RAID-5 set ... at which point it craps the bed.
:'( ...
HEY!!!! :) Thank you. To spare you having to read all the crap I posted here ... the person from whom I bought either the unit I sent you or the one I have (can't recall which) said he got ~3GB/s (with much slower NVMe drives than those I have) in a RAIDz2 config ... but even using SATA drives produced about...
I think this is the best index ... https://dell.to/44w3XRU
And this DEFINITELY looks like far from 96 lanes provisioned ... rather, only 32 lanes.
But even still ... with just 16 lanes ... that should provide adequate lanes for 4 drives (if separated into each of those...
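(Quick lane math on that, assuming each NVMe drive gets its own PCIe 3.0 x4 link, which is the usual layout:)

```python
# Lane budget: how many x4 NVMe drives fit in 16 PCIe 3.0 lanes,
# and roughly what each link can carry (~985 MB/s usable per 3.0 lane).
lanes_available = 16
lanes_per_drive = 4
mbs_per_lane = 985
drives = lanes_available // lanes_per_drive          # -> 4 drives
per_drive = lanes_per_drive * mbs_per_lane / 1000    # -> ~3.9 GB/s per drive
print(f"{drives} drives at ~{per_drive:.1f} GB/s each")
```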
I re-wrote your premise to make sure I understood it ... yes?
I also can't think of the name for the schematic that shows the board's electrical layout (lanes per PCIe slot, etc.)
-and-
I also searched for "schedul*" (R7415 + scheduler // schedule // etc.) and got no results relevant to...
It doesn't use any cards; they all plug into the motherboard. And I wouldn't even know how to route them to a card since, as you know, those cables are cut to length for their exact connectors.
With regards to lanes "arriving" ... any search terms you can suggest for me to research that ..?
Or is the fact...
But doesn't Epyc provide 128 PCIe 3.0 lanes..?
(Rhetorical) ... How do you get LESS than a single drive's performance just bc several are working at once?
I THINK ... the 24 slots are in banks of 8 ... so far I've been using the third bank (bc I added an HBA330 to 0-7) ... think it'll make any difference to try slots 8-15..?
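(The reason I'm wondering: if only 32 lanes feed 24 bays, the banks can't all be direct-attached. A sketch of that arithmetic, based on my reading of the spec, not confirmed:)

```python
# If the backplane needs 4 lanes per bay but only 32 lanes arrive,
# the 24 bays must sit behind PCIe switches (oversubscribed uplinks).
bays = 24
lanes_per_bay = 4
lanes_needed = bays * lanes_per_bay        # 96 lanes for full direct-attach
lanes_provisioned = 32
print(f"{lanes_needed} needed vs {lanes_provisioned} provisioned "
      f"-> {lanes_needed / lanes_provisioned:.0f}x oversubscribed")
# If banks of 8 share an uplink, moving drives to slots 8-15 could change contention.
```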
DAMMIT!! Why do you guys have to always be right?? :'(
Tested the stupid thing w/ RAID in Ubuntu; same shit!!
W: 730MB/s (RAID-5)
R: 850MB/s (RAID-5)
W: 2GB/s (1 Drive)
R: 3GB/s (1 Drive)
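(For anyone who wants to reproduce this: roughly the kind of sequential test I ran, sketched in Python. The mount path and 1M block size are my choices, and it shells out to fio, which has to be installed:)

```python
# Sketch: sequential 1M-block read test via fio, returning bandwidth in MB/s.
# Assumptions: fio is installed; /mnt/md0 is where the RAID-5 array is mounted.
import json
import subprocess

def seq_read_mbs(path: str, size: str = "10G") -> float:
    out = subprocess.run(
        ["fio", "--name=seqread", f"--filename={path}/fio.test",
         "--rw=read", "--bs=1M", "--direct=1", "--ioengine=libaio",
         f"--size={size}", "--output-format=json"],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)["jobs"][0]["read"]["bw"] / 1024  # fio reports KiB/s

print(f"{seq_read_mbs('/mnt/md0'):.0f} MB/s")
```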
I don't understand this ... but, something makes this machine "defecate the bed" in...
Well, I'm testing with large video files, as I mentioned. I did perform tests with FIO ... but ultimately, it doesn't matter what I get with synthetic tests (although they're about the same / slightly worse than what I get with video). Why..? Bc it doesn't matter if I get 50 GB/s on synthetic...
Agreed:
I tested the individual drive type of which the (below) RAID-5 is composed...
And then tested a 3-drive RAID-5 array in Ubuntu (made with the above drive type)...
Both are shown in the pictures below, & I got the same crap perf as ZFS in Ubuntu.
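(The array itself was built the standard mdadm way; a from-memory sketch, with placeholder device names:)

```python
# Sketch: creating the 3-drive md RAID-5 on Ubuntu. Device names are placeholders.
import subprocess

devices = ["/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]
subprocess.run(["sudo", "mdadm", "--create", "/dev/md0", "--level=5",
                f"--raid-devices={len(devices)}", *devices], check=True)
subprocess.run(["sudo", "mkfs.ext4", "/dev/md0"], check=True)  # then mount and test
```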
The machine (as I should've mentioned that...
I'm no longer testing via FIO. But when we did, contrary to a point you made (and I do trust you) ...
When I tested a RAIDz1 of 4 NVMe drives, I got about 125MB/s according to ZFS's I/O performance reporting.
When I tested a RAIDz2 of 8 NVMe drives, I got about 87MB/s according to ZFS's I/O performance...
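(Those numbers came from ZFS's own reporting; this is essentially what I was watching, with 'tank' standing in for the actual pool name:)

```python
# Sketch: watch per-vdev bandwidth while the test runs. 'tank' is a placeholder.
import subprocess
subprocess.run(["zpool", "iostat", "-v", "tank", "5", "3"])  # 3 samples, 5s apart
```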
Now this is a comment I'm in COMPLETE agreement with. Thank you. :)
I have ... it's not much better. Rand_ and I discussed that (maybe page 2 or 3 of this thread) ...
When I started, I'd made a mistake in not selecting the location of the array, but after that..? Same results as these.
With a...
I've since purchased an R7415 (Epyc server) & several NVMe drives.
While the drive speeds in Ubuntu & Windows (synthetic benchmarks) are good:
Micron 7300 Pro:
Write: 2GB/s
Read: 3GB/s
Micron 9300 Pro:
Write: 3GB/s
Read: 3GB/s
In TrueNAS (where I'd expect to get 2-3GB/s), they don't...
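(For scale, the naive expectation, assuming RAIDz sequential throughput scales with the data drives, which it roughly should for streaming writes:)

```python
# Naive RAIDz2 streaming-write ceiling: data drives x per-drive write speed.
# Per-drive figure is from my single-drive tests above (Micron 7300 Pro).
drives, parity = 8, 2
per_drive_write_gbs = 2.0
ceiling = (drives - parity) * per_drive_write_gbs   # -> 12 GB/s
print(f"naive ceiling ~{ceiling:.0f} GB/s; even my hoped-for 2-3 GB/s is well below it")
```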
Sorry it's taken me a while to reply...
Yup, that's what I thought: 2GB/s – 3GB/s
My CPU and temp stats:
Max CPU utilization: 7% (at up to 800MB/s)
Maximum system load: 3%
CPU temp (°C):
min: 45.45
avg: 46.11
max: 47.95
But the CPU stayed at 3% during the transfer, and only hit 7% for a...
Am I wrong in thinking the XL710 comes in both 10GbE and SFP28..?
Do you know how many NVMe drives that system would support..? (I'm assuming I could swap the drive config to SFF ...)