Hear Me Out: A USB-A to USB-A 3.0+ Data Transfer Cable (aka USB "Bridge"/"Networking"/"Crossover" Cable) That Actually Works at USB 3.0+ Speeds

Dec 4, 2024
I have been researching USB-A to USB-A 3.0+ data transfer cables / USB bridge cables / USB networking cables / USB crossover cables (not to be confused with USB NIC cables), and I haven't found a good solution that seems to work for Linux and Windows (throw Mac in there too, why not).

A "USB Host to Host Connection" is what I'm after.

I'm sure I could probably "figure something out" by using one USB NIC at each end with a CAT5/6 cable in between, but that's more complex, and you're limited to 1Gbps (less the performance overhead), or to USB 2.0 speeds if either end only has USB 2.0 ports, so it's a bit wasteful and probably somewhat more expensive. The upsides to that approach are that the same "trick" can be reused for other devices in the future, and you can make/crimp/buy an extremely long CAT5/6 run, so you get more flexibility and range.

Fair points.

But IMO things change a bit with USB 3.0, 3.1 and 3.2 (and beyond).

I've tried to find at least a pair of USB-A to USB-A Networking / Bridge Cables that are 3.0 (preferably even 3.1 or 3.2, but I'll take 3.0!).

There don't seem to be many actual devices for sale out there that you can actually buy and that have great reviews or performance. As in, even though a cable is "marketed" as USB 3.0, it's limited to around 1-2Gbps MAX (most seem even slower in real-world use, judging by what few users share any benchmarks...). That's a far cry from the theoretical 5Gbps maximum of USB 3.0, though...

That seems like a market opportunity, but perhaps there is a reason?

From what I gather, the few USB 3.0 cables that are out there market themselves as supporting either ONLY Windows or ONLY Mac, and they use some kind of "special app" that needs to be installed to "make it all work".

While I'm not at all a fan of such apps / bloatware, if one was known to work well at speeds closer to 5Gbps for data transfers between Windows and Linux (and obviously Windows-to-Windows and Linux-to-Linux), then I'd jump at that option.


So where is that option?


I see this post from way back in the day - How do I connect two computers using USB 3.0? - In the comments, someone has supposedly written a patch to get that particular cable working with Linux. Cool... but it feels like "it's going to be janky..." before it even arrives in your mailbox.

So, why do I want this? Why would I prefer a USB-to-USB Connection?

It's kind of complicated, but not really. For the particular use case I have in mind, I have an aging NAS box with 1Gbps ethernet and 2x USB 3.0 ports on 2 separate USB3.0 channels. This box has no video out, and it can be modded to run a headless Linux OS. The stock firmware is pretty bad compared to what can be done with a lightweight GUI or even web management, but it does have "barely useful" features like SFTP, SMB and NFS. The 4x drives installed are SATA III (SATA 600), and after an upgrade to better HDDs, the performance of a single HDD is actually WAY higher than what the 1Gbps NIC is able to provide (~125MB/s is as good as it gets). So while there are RAID options like R0/1/5/10, the only real gains come from redundancy (mirrors) rather than stripes, since striping can't get past the NIC bottleneck anyway. (ZFS is probably out of the question since the RAM is EXTREMELY limited and soldered on.)
- There is also the issue that the NAS box uses a pretty old ARM chip that "doesn't do well" with volumes/partitions over 16TB, so that's another caveat, but not a show stopper. The "best workaround idea" I've heard thus far is using iSCSI/multipath from remote PCs/Servers and partitioning however I want that way, via ZFS or otherwise (rough sketch below). But for that to really make sense in terms of performance, I need to find a way to get closer-to-USB3.0 speeds, somehow.
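For context, my rough mental model of that iSCSI route is something like the below on the NAS side, plus a login from the remote box. This is a sketch only; the device path, IQNs and IP address are made-up placeholders, not an actual config.

# On the NAS (Linux, LIO target via targetcli) - export one raw drive as a LUN:
sudo targetcli /backstores/block create name=nas_disk1 dev=/dev/sda
sudo targetcli /iscsi create iqn.2024-12.local.nas:disk1
sudo targetcli /iscsi/iqn.2024-12.local.nas:disk1/tpg1/luns create /backstores/block/nas_disk1
sudo targetcli /iscsi/iqn.2024-12.local.nas:disk1/tpg1/acls create iqn.2024-12.local.pc:initiator1

# On the remote PC/server - discover, log in, then build ZFS (or whatever) on top:
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
sudo iscsiadm -m node -T iqn.2024-12.local.nas:disk1 -p 192.168.1.50 --login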

So, assuming Linux is running on this old NAS, and the drives are configured as some kind of 2x2 mirror or a RAID5/10 set of 4, the performance is forever limited by the 1Gbps NIC. I could add some more USB NICs and play with LACP/teaming, which is likely what I'm going to end up doing if I don't find a better solution. But wouldn't it be great to find a USB3.0 cable that could attach directly to a PC/Server (~300+MB/s) in addition to the 1Gbps network connection (~125MB/s)?

I think so.

Has anyone had any luck getting at least ~3Gbps data transfer speeds out of a USB3.0 "bridge" cable? I did a few basic searches here on STH and didn't find much on the topic, nor did I see much (worth sharing here) on the internet at large.

Or maybe there is an even better solution that someone could share?

I have used CAT5 crossover cables before (back in the day when you had to crimp them yourself) and those are cool for one-off situations. I like the idea of not having to keep a NAS *and* a PC/Server "always on", and of skipping some extra error-prone storage server configuration just to share a few drives, but I'd be willing to entertain creative solutions.

I'm also not sure if doing something wild and crazy like LACP/port trunking over 2x (or more) CAT5/6 crossover cables would work (in a consistent and reliable way), but I'd love to hear from anyone who might have tried something like this before.

Here is a terse summary of what I've found (mostly coming up empty) on this topic:
- There is a company with a name that sounds like it came straight out of the Office Space movie that seems to have moved away from offering a "magic port" with a "special cable" on a USB hub that was capable of USB 2.0 data transfer speeds. I did a quick review of the current product offerings and there doesn't seem to be any kind of "magic port" option anymore on its USB3.2 products.

- The few offerings I could find on "the big A" were "markety-markety-marketing" USB 3.0, but actual transfer speeds were atrociously slow. The best I could confirm was about 40MB/s from the very limited options available.

- Basically: Same for "the Egg"

- I see limited stuff on fleaBay, too, but without better reviews and specifics on data transfer rates and/or "it took X amount of time to transfer X amount of data" type comments, it's impossible to know whether it would be a better solution than using something like a couple of 1Gbe USB NICs or a 2.5Gbe USB NIC instead. (Isn't it weird that NO ONE seems to have included a CMD benchmark screenshot for ANY of these products in over a decade??)

I understand that my use case is somewhat of an edge case; usually you wouldn't want/need to keep a USB bridge cable "forever connected", you would probably use such a cable for a one-off situation or a "quick transfer". But I assume the reason these cables are not more prolific in the marketplace is that, with speeds THAT slow, it's faster to physically remove a drive and hook it up to the target machine directly, and/or use something like a SATA/NVMe-to-USB3.0 adapter that could get the job done 5 to 8 times faster. So what's the point? Still, it seems like I might be missing something here, or I've identified a gap in the marketplace (albeit a very niche and narrow one).


Anyway, I'd love to hear from the community about:

- Experience using iSCSI (with or without ZFS) over 1Gbe USB and / or 2.5Gbe USB 3.0 NICs
- Experience using iSCSI (with or without ZFS) over LACP/Port Trunking/Teaming NICs

- Experience using iSCSI (with or without ZFS) over (*NON* USB) CAT5/6 Crossover Cables
- Experience using iSCSI (with or without ZFS) over (*NON* USB) CAT5/6 Crossover Cables with LACP/Port Trunking/Teaming NICs

- Experience using USB Networking between Linux machines (a sketch of what I assume that looks like is just after this list)
- Experience using USB Networking between Linux and Windows machines

- Any other ideas for squeezing performance from a NAS device that has better performance capabilities than its NICs can provide to the network
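On the USB networking items above: my assumption is that a working host-to-host cable would just show up as a network interface on each box (usb0 or similar via the usbnet/NCM drivers), and from there it's the same drill as any point-to-point link. A rough sketch of what I mean, with the interface name and addresses as placeholders:

# Box A (assuming the bridge cable enumerates as a usbnet/NCM-style interface, e.g. usb0):
sudo ip link set usb0 up
sudo ip addr add 10.99.0.1/30 dev usb0

# Box B:
sudo ip link set usb0 up
sudo ip addr add 10.99.0.2/30 dev usb0

# Quick sanity check of the raw link speed:
iperf3 -s              # run on box B
iperf3 -c 10.99.0.2    # run on box A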
 

Tech Junky

Active Member
Oct 26, 2023
USB is not worth the time and headaches for the speed you get. However, you can still get a USB NIC that does 5Gbps if your boxes pick up the driver and enable it. I have a Sabrent one I was using for sync purposes on my laptop for a while before going P2P instead. My P2P over TB links at 20Gbps with a ~10ft cable and, IIRC from the last time I used it, hits about 1.5-1.7GB/s. Just have to plug in the cable and set a static IP on both sides and leave the GW blank. There's probably some sort of script to set the IP when the connection comes up, but it's so infrequent that I need to use it that I haven't bothered finding one.
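If anyone wants to skip the script hunt, a systemd-networkd drop-in that matches the interface would probably do it. Just a sketch; the thunderbolt0 name and the addresses are assumptions, not my exact setup:

# Give the TB link a persistent static IP (no gateway) whenever the interface appears.
sudo tee /etc/systemd/network/50-tb-p2p.network <<'EOF'
[Match]
Name=thunderbolt0

[Network]
Address=10.10.10.1/30
EOF
sudo systemctl restart systemd-networkd
# The other box gets 10.10.10.2/30; leaving Gateway out keeps normal traffic on the LAN.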

As for the NAS, you're limited to the HW and it sounds a bit ancient. I would take the drives and put them into a PC case instead, and either put in a NIC that meets your speed desires or put the drives into a DAS box with 10Gbps USB. Both would probably run ~$250. Going the PC route, though, offers more options to play around and see what your options are.
 
Dec 4, 2024
USB is not worth the time and headaches for the speed you get.
I've come to the same realization, I just didn't want to admit it, I guess.

I did actually have that same idea for my new-to-me 10TB drives, as the old NAS box setup with 2TB drives was "adequate" but not awesome.

But I've recently added a 10Gbe enterprise switch into the homelab and it's got significantly higher power draw, and I don't want to add a whole new PC into the mix (again). But I agree with you: that would totally solve the configuration dilemma.

EDIT:
What kind of NIC/port interface are you using for your P2P? Is that a 20Gbe NIC at both ends? Or an LACP/teamed RJ45/SFP+ connection? Super curious about what hardware/interface/cabling you're using for your P2P to hit that level of performance, too!
Ahh, I missed that it was TB in your reply - I WISH (SO HARD) this NAS had TB :p

And yes, the NAS is pretty ancient. I spent too much on the networking and drives to budget for a new NAS. I am actually trying to slim down an already overly-complicated homelab situation now, too. I kept slowly collecting "this and that" - and I had fun and learned a lot doing it, but I'm WAAAAY overdue for a purge and reduction to simplify things.

The prices of the newer NASes are pretty absurd though, IMO (as is my power bill...).

For my use case, all I really want is a dumb box with some SATA ports attached to an IP address with decent networking speeds. I appreciate all the cool features, HDMI/video options, etc, but that's not what I "need". So I'm just trying to squeeze a little more life from this old NAS that only has a 1Gbe NIC and USB 3.0 options. (That new uNAS product line looks pretty sweet, though - maybe next time I have the budget for it.)

From what I've read, using the USB3.0 port with a 2.5Gbe USB NIC caps out at around 1.7-1.9Gbps on this device with Linux installed, because of the older PCIe / overhead. But this might also be down to the particular brand/chipset of USB NIC the reviewer/poster was using. Not sure yet.

Also, supposedly (and I'm somewhat skeptical of this, but can't find docs/schematics to prove/disprove it) there are 2 separate USB3.0 channels for the 2x USB ports on the NAS, so having a decent USB3.0 option might work out, or at least give me options to weigh (hence the post).

I actually already have 3 USB NICs which I could re-purpose for this, and maybe keep the cheap 1Gbe managed switch in play just for the NAS with a bunch of LACP-ed 1Gbe Ethernet connections. Not ideal, but it might be the cheap way forward until I get a better NAS situation. I really only need it as a backup target (plus some ISOs), so I'm not in another "all my backups are on the storage of the machine that needs reformatted/repaired" type of situation. My main concern is how problematic some of the LACP stuff I've done in the past can be, and how much of a "queue" might build up for the larger/longer backups I have in mind for it. If each backup takes 5 hours and there are 6 machines I want daily backups of, that's not going to go well for me... So I'm still exploring options.
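If I do go that route, the NAS side would be roughly this. An iproute2 sketch only; the interface names and the address are placeholders, and the switch ports would need a matching 802.3ad LAG configured:

# Bond the onboard NIC plus the USB NICs into one 802.3ad (LACP) link.
sudo ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
sudo ip link set eth0 down && sudo ip link set eth0 master bond0
sudo ip link set eth1 down && sudo ip link set eth1 master bond0
sudo ip link set eth2 down && sudo ip link set eth2 master bond0
sudo ip link set bond0 up
sudo ip addr add 192.168.1.50/24 dev bond0
# Caveat: any single stream still tops out at ~1Gbps; LACP only helps when
# several clients/streams hit the NAS at once.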
 

Tech Junky

Active Member
Oct 26, 2023
NAS = waste of money if you know it's just a basic file server in a cute box

TB for me was a cheap way to get 10Gbps+ performance, since one device is a laptop and a 10Gbps dongle is easily $150, and then you add a NIC on the other side for $100 for a Gen4 x1. It all adds up if you don't keep an eye on things.

For me, in your situation, since you're already upgrading capacity, at this point you might as well take the leap and put the drives inside a power-efficient PC and slap Linux on it. IRL, 4 drives doing R10 won't even max out a 5GbE link. I think I was able to hit 400-425MB/s, which was more than sufficient for the bulk storage approach. Then I got the itch for all-flash and of course went the M.2 route initially, before discovering/stumbling onto U.x drives which have the same performance but go beyond 8TB in size which brings down the cost per TB. 4x M.2 @ 4TB = ~$850 vs a 15.36TB U.x @ $1200. Less BS with a single drive to deal with. However, it took me 3 drives to get a working one, as the first two lost data in under a week for some reason.

I ran the spinners for 8 years w/o any hiccups. I do tend to not sit on tech for too long, though, when it comes to other parts. The box started with an 8700K > 12700K > 7900X, with multiple other things in/out over time as well.

I would probably relegate the NAS to monthly backups that run in the background and make a new box for daily use. For the TB I use a $60 card and a $20 cable to get 10x the 2.5Gbps you list above. If you have multiple boxes in close proximity for the lab, it's a cheaper option if you don't want to get into the Ethernet cards and modules you'd otherwise have to deal with.

LACP won't do much unless you generate multiple data streams to make use of it. It's good, though, for active/backup setups, so you reduce downtime if a cable or port craps the bed. I did have it set up on a cable modem a while back and it did unlock the extra bandwidth, boosting beyond the 1Gbps plan speed. I got sick of their prices and antics and cut the cord to go 5G instead, then took that one step further to get into a group 5G plan, and now pay under $40/mo for unlimited data that I can literally take anywhere since it's using a phone SIM.

daily backups
I use rsync and it just backs up files that have changed; after the initial run it literally takes less than a minute. Unless you have a bunch of huge files it really shouldn't take that long.
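Something along these lines, for example (the paths and host are placeholders, not my actual setup):

# -a keeps perms/times, --delete mirrors removals, so after the first full run
# only changed files get copied.
rsync -aHv --delete --partial /data/ backupbox:/backups/data/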
 
Dec 4, 2024

Lots of good info here!

So yeah, I'm in pretty deep already. Some gear dies on its own; ironically it's usually never the stuff I wish died first, though.

I just got into the server chassis realm a few years ago, and slowly tinkered and deal-hunted until I upgraded about as much as I could. Then the weather events wiped out a lot of stuff over the last year+ (multiple events...), so now that I'm sifting through what's actually left and still worth saving, I'm also taking the opportunity to try to simplify. My HL is WAY too complicated. I kept buying a little thing here and there to solve some problem or another, and then when the dookie goes kabookee I'm having to cobble together, reroute, test, test, test, jiggle and jangle EVERY little part/system. I'm pretty over it. But I also "eat the garnish", too (I don't like things to go to waste).

NAS = waste of money if you know it's just a basic file server in a cute box
I feel that way too, given the INSANE prices on most NAS boxes nowadays. This old NAS wasn't even close to $500 when I bought it, and keeping it in use is somewhat of a compromise. (And the box isn't even that cute.) It's more the portability and power savings I'm after. The stock firmware for most NASes tends to be complete doodoo butter, too. But I've found it's nice to have a "data island". I was running into situations where I was filling up drives / partitions and it was becoming a giant ordeal to figure out how to free up space and still keep my organization. For a while, the NAS solved more problems than it created. The old drives I had in it before were "pretty good", and I knew I could get more performance, but the loss wasn't all that severe. Now it kind of is. Sucks to know I could be getting 3x or 4x faster transfers but I'm just missing one decent port and/or cable to make it happen.

I'd really rather have a separate box with high-capacity 16+ spinners and a few SSDs and NVMes for boot disks, ZILs, SLOGs and L2ARC, with all my network+storage services (DNS, NFS, iSCSI, etc.), and just leave it up and running. Which I still may eventually do. But the HL layout is pretty scattered across 3 different spaces, which, to your point, is kind of an issue (a lot more than I wish it were). It's kind of broken up like:
1) Work; The work area is key. It's "complex but manageable"
2) Play; I've been "trimming the fat" on the "Play" area, which was pretty easy
3) Hot+Noisy; Now that I have the 10Gbe switch here, I don't need 509,732+ different switches, cables, routers, LACPs, dongles, UPSs, power strips, whatever for the "hot and noisy" area. But I'm making some tough calls on what to keep or not. (re: "Nice to have" vs "Do I *REALLY* need it?")

I'm trying to consolidate down to 3 or 4 PCs, all in. I'm finding that there are limits to what I'm able to stuff into ports, cases and chassis (particularly hard drives and storage controllers). But I'm also not above taking "creative risks". I'm already making things better, but it's the "what if another weather event happens" storage I'm trying to find a better solution for (the use case of the NAS) - it's mainly for getting back up and running by grabbing images off the network and being able to restore them. That includes about 100 to 150 VMs, depending on the day. Some are tiny (8 to 10 GB) and some are pretty huge (~300+GB).

The hypervisor I'm using has a pretty great backup system, and I usually use dd for (re)creating the boot drives as bootable images. Though I've been moving towards using ZFS snapshots, not every system is running a hypervisor, and one system is still stuck with a RAID-only (IR) controller, so I might have a use case for rsync as you described in there somewhere. I hadn't thought to use rsync for those edge situations where I don't care about versioning, so I might check that out. Hm. Thanks for that; I'm getting some new ideas.
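For reference, the sort of thing I mean by the dd and ZFS snapshot routes (the device, dataset names and paths below are just examples, not my actual layout):

# Boot-drive image with dd:
sudo dd if=/dev/sdX of=/mnt/nas/images/bootdrive.img bs=4M status=progress conv=fsync

# ZFS route: snapshot, then ship it incrementally to the backup box:
sudo zfs snapshot tank/vmstore@nightly-2024-12-04
sudo zfs send -i tank/vmstore@nightly-2024-12-03 tank/vmstore@nightly-2024-12-04 \
  | ssh backupbox zfs receive backup/vmstore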

But some (most, if I'm being real) don't need backed up daily. So my hypothetical from before was more about leaving the capacity for it, in case I did ever need (or want) that.

But maybe I kind of do already want/need something "close" to that, after how much effort everything has been to restore over the last year. The NAS will probably be fine for those tasks, but I might run into problems when I want to spin up a new VM and the backup job queue is hogging all the limited bandwidth/performance of the NAS, so the ISO I'm trying to netboot from gets slow AF, acts weird, or in some cases the connection drops / times out, etc. It doesn't happen that often, so not a big deal, just super annoying when it does, since if I'm firing up a fresh VM, or restoring from a backup, or doing something "heavy", I'm probably "in a hurry" to get something done.

This NAS networking issue seems like a solvable problem with "all the stuff I already have". And it might be. But of the dozen or so situations I've used LACP/port trunking in, I've had issues in about 20% of those cases, which equates to fixing the same troublesome setup over and over again until something works or I give up and go a different route.

Maybe I could use a fresh perspective though, before I try to start selling off my older PCs / motherboards:

Do you ever run into issues where the separate PC you're using for your "NAS" is down and you can't access stuff you need elsewhere? How would you handle that?

Also, what's the power consumption like? At one point I had 6 PCs and 2 Server Chassis going 24/7 and the power bill was a wake-up call :eek: But how power-efficient is the "driver PC" for all that storage?



Linux on it, IRL 4 drives doing R10 won't even max out a 5GbE link. I think I was able to hit 400-425MB/s, which was more than sufficient for the bulk storage approach.


- I'm getting slightly better than that: an average of about 500-550MB/s with RAID10 (with peaks closer to 600ish), which actually shocked me a bit. They are enterprise drives, though, so I'm feeling slightly better about the spend (the biggest part of the budget). But that was testing outside of the NAS... which sucks, since inside it it's limited/crippled by the lack of connection types. I'd feel even better if I could find a working solution for 4 or 5Gbps. 3x USB NICs + 1 onboard might do it. But I'd rather find a better way to leverage those USB ports, even if it's across multiple channels. I'll mull on it some more before I start loading it up with more data than I can easily swap on/off.
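(For clarity, by "testing" I mean simple sequential runs, roughly along these lines; the mountpoint and sizes are placeholders, not my exact commands:)

fio --name=seqwrite --directory=/mnt/raid10 --rw=write --bs=1M --size=8G \
    --ioengine=libaio --direct=1 --numjobs=1 --group_reporting
fio --name=seqread --directory=/mnt/raid10 --rw=read --bs=1M --size=8G \
    --ioengine=libaio --direct=1 --numjobs=1 --group_reporting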


U.x drives which have the same performance but go beyond 8TB in size which brings down the cost per TB.


- U.2: YES, that's the next frontier I've been wanting to explore now that I have 10Gbe. I'd like to get all the "smaller and important" VMs on NVMe. I see solid deals on used enterprise U.2 NVMe's on the Bay more and more often. I've been super tempted to go that route, but on the machines that would benefit most, I don't want to sacrifice any of the devices already in the PCIe slots, or the physical space is just too limited... except for one... super tempted to go that route on it, but I don't really "need" it.

I have done well to find some cheap used/clearance/bulk NVMe sticks over the last year, so I'll burn those up first while hoping the U.2/U.3 NVMe drive prices keep dropping in the meanwhile.

But what's your take on U.x thus far? You mentioned you're getting the same level of performance from the U.2s, but the older/smaller NVMe sticks I have sometimes top out at <2000MB/s, whereas the newer/larger stuff is ~6000+MB/s. With such a big swing in "stick NVMe" performance, I'm curious what real-world speeds you're seeing on U.2/3's.
And are they more TBW/Durable than the sticks, as advertised? That's the main reason I want them. But I've got a pretty good TB-to-$ ratio with the sticks at the moment; I've been picking up the smaller ones from refurbed/returned systems in lots to use as boot drives and for a few ZFS experiments, and it's working out thus far. I'm sketched out about using them for ZIL or L2ARC, though. I burnt up a few health % points in a matter of 2-3 hours just testing ZIL and L2ARC performance.
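For clarity, the ZIL/L2ARC testing I mean is just the usual zpool add/remove dance (pool and device names below are examples only, not my real pool):

# Hang a mirrored SLOG and an L2ARC cache device off an existing pool:
sudo zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
sudo zpool add tank cache /dev/nvme2n1

# And pull them back out afterwards (the mirror-N name comes from zpool status):
sudo zpool remove tank mirror-1
sudo zpool remove tank /dev/nvme2n1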
 

Tech Junky

Active Member
Oct 26, 2023
Do you ever run into issues where the separate PC you're using for your "NAS" is down and you can't access stuff you need elsewhere? How would you handle that?

Also, what's the power consumption like? At one point I had 6 PCs and 2 Server Chassis going 24/7 and the power bill was a wake-up call :eek: But how power-efficient is the "driver PC" for all that storage?
Since the primary box is my router as well, downtime is something I like to avoid; it usually only happens on a borked kernel update / reboot, and I have a process to revert within a couple of minutes. My idea ~10 years ago was to collapse several devices into a single box, though, and I managed to kill off 5-6 devices in the process.

For a while I had a locked-in price per kWh that was quite cheap compared to now. It's still manageable but not cheap. I could probably tweak some things to drop the usage but don't want to break what's not broken.


It's going to bite you in the @$$ long term.

PCIe slots
I use an M.2/OCuLink cable for mine and get full speed from it. The challenge with those will be airflow and lanes. What's nice, though, is that if you want to go to 30/60TB it still only consumes x4 lanes per drive, whereas with M.2 you have to RAID since they top out at 8TB/Gen3.

real-world speeds you're seeing on U.2/3's.
When I bench it from "disks" it hits the advertised speeds @ 6.5GB/s. It's a Kioxia CD8 drive and usually hovers at whatever the system temps are, vs running hot like Microns do.

And are they more TBW/Durable than the sticks, as advertised?
There's no comparison, as these are enterprise/data center drives, and the 2.5" format makes a difference in keeping them cooler. It's a good move if you want to take the next leap, and you'll want a 100GbE switch to max out the transfers. It's a different mindset to get into when you stop messing with consumer gear, for the sake of things just working and keeping life simple.

I put this into service 09/2023
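(The dump below is smartmontools output, presumably from something along the lines of the following; the device node is whatever the drive shows up as:)

sudo smartctl -a /dev/nvme0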
=== START OF INFORMATION SECTION ===
Model Number: KIOXIA KCD8XRUG15T3
Serial Number:
Firmware Version: 0103
PCI Vendor/Subsystem ID: 0x1e0f
IEEE OUI Identifier: 0x8ce38e
Total NVM Capacity: 15,360,950,534,144 [15.3 TB]
Unallocated NVM Capacity: 0
Controller ID: 1
NVMe Version: 1.4
Number of Namespaces: 64
Namespace 1 Size/Capacity: 15,360,950,534,144 [15.3 TB]
Namespace 1 Utilization: 2,951,729,319,936 [2.95 TB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 8ce38e e30048e8c7
Local Time is: Wed Feb 5 00:09:32 2025 CST
Firmware Updates (0x16): 3 Slots, no Reset required
Optional Admin Commands (0x025f): Security Format Frmw_DL NS_Mngmt Self_Test MI_Snd/Rec Get_LBA_Sts
Optional NVM Commands (0x00ff): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Resv Timestmp Verify
Log Page Attributes (0x1e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg Pers_Ev_Lg
Maximum Data Transfer Size: 1024 Pages
Warning Comp. Temp. Threshold: 77 Celsius
Critical Comp. Temp. Threshold: 85 Celsius
Namespace 1 Features (0x14): Dea/Unw_Error NP_Fields

Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 25.00W 25.00W - 0 0 0 0 500000 500000
1 + 12.00W 12.00W - 1 1 1 1 500000 500000

Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0
1 - 512 8 0
2 - 4096 0 0
3 - 4096 8 0
4 - 4096 64 0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 48 Celsius
Available Spare: 100%
Available Spare Threshold: 7%
Percentage Used: 0%
Data Units Read: 94,842,974 [48.5 TB]
Data Units Written: 42,534,027 [21.7 TB]
Host Read Commands: 457,684,492
Host Write Commands: 619,624,254
Controller Busy Time: 999
Power Cycles: 236
Power On Hours: 12,089
Unsafe Shutdowns: 149
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0

Error Information (NVMe Log 0x01, 16 of 256 entries)
No Errors Logged

Self-test Log (NVMe Log 0x06)
Self-test status: No self-test in progress
No Self-tests Logged