SAS 4.0, PCI-E 4.0, Upcoming 24Gbps, New HBA’s and RAID cards, SlimSAS, My New “Cables” and the new SFF Connector: The Future Is Here, Bois.

Sleyk

Your Friendly Knowledgable Helper and Techlover!
Mar 25, 2016
1,361
707
113
Stamford, CT
Disclaimer: Please be prepared, this write-up is kinda long, but it’s not super long (well, it kinda is, lol). This whole thing was written with good info and clarity in mind, I promise! This writeup was also written with the “not as up to date” person in mind, explaining concepts and things that many tech-savvy people will already know about. For those who already know, you can skim through at your leisure. For everyone else, just grab a small drink, sit back, and have a good read!


Primer

This write-up actually took a span of one month, as I ended up having to wait for proper cables to test, and I refused to pay exorbitant fees for a test cable, so I had to wait to publish the full thing to my blog. Further, I was essentially “forced” to wait, as I couldn’t use the aforementioned expensive cables anyway, even if I did purchase them. You will see why later. Ultimately, it worked out quite well, and as such, I am able to bring this write-up to you. In my writing, I tend to make some slight jokes here and there. I don’t intend to offend anyone, and my jokes are silly and harmless at best. I also include my own pictures, as well as easily found pics from the internet here and there. I don’t claim origination or copyright for some of the pics you may come across, besides maybe one or two. However, as far as I’m concerned, you can copy and use most of the pics I included in the writeup without my permission, but if you end up wanting to use this writeup somewhere, please link to it and give me some credit, ok? :.)


You like my futuristic-retro spacey “Let’s go!” banner I created? I made it in Canva. It’s groovy :.)


Introduction

eBay, as many of us know, is like a haven for many tech dudes, computer and server lovers, deal finders, and nerds. We can always find gems that people don’t want, or even know they have; listing massively expensive items for dirt cheap, in an effort to make a few bucks. I personally live for listings like this, and happily snap up the gold bar, often mistakenly sold as “copper” by unsuspecting listers, looking to just sell what they lacked to appropriately appraise. Upon such dealings, many people often score monumental deals.


The biggest deals. The best deals. No other deals come close to those types of deals.”


Now, as you also know, the tech community is an amazing community of people who all share one thing: their immense love for technology and computers. I am part of this excellent tech community, who through tinkering and messing around, find methods to tinker and play around with current and older tech, such as flashing OEM HBA and RAID cards to different firmwares, among other tech-related things. I have contributed to the flashing or conversion of the 3rd gen LSI SAS3008 Dell OEM card, the Perc RAID H330, which is a PCI-E 3.0, 12Gb/s RAID card, to a fully working HBA330, suitable for IT mode and ZFS, as pass-through for your drives, without proprietary firmware, just to mark my small contribution to the community. Some folks I know have also done similar work in figuring out how to flash previously “unflashable” cards, like the Dell H310 Mini Mono and the Dell H710P Mini Mono.


Now, notwithstanding the immense work many people, who know way more, and have contributed way, way more, into flashing and converting these cards, so many people still need and rely on help, and we gladly give it. This is always a cool thing to do; as a member of the tech community, many people seek us out for information on how to best go about getting better functionality on their cards for IT mode, and ZFS, among other popular OS NAS tangibles. Some of us even sell on eBay, making a nice little pocket change and serving the community for those who would rather just spend a few bucks, than tinker with cards and flashing themselves. Thus we, the tech community guys, scour eBay, always in search for a deal on a piece of older or newer technology, looking to score a deal for ourselves or to make a small profit where we can.


Upon my recent eBay roamings and lurkings, I stumbled upon a listing for a new card that Dell recently manufactured, now, only about a year or so ago. To my surprise, the seller had the listing at an extremely low price, and I at first stopped to look at it, wondering if it was a scam, as almost always it ends up being. Upon reading the full listing, I realized that the seller may just have the card as listed, and I resolved myself to take a chance, (albeit a slim one, as eBay money-back guarantee will cover me in the event of fraud) and I went ahead and purchased the card.


I can tell you now, It fully paid off and then some. The card came exactly as advertised and described.


It was then that I celebrated, for I had scored essentially, a new card that goes for hundreds of dollars more, for a tiny fraction of the cost. New tech for a sweet freaking deal! However, let me not get too ahead of myself. Before I go into my new card adventures, let me give a little primer on what the new technology is, and then we can go from there.


SAS 4.0, PCI-E 4.0, Upcoming 24Gbps, SlimSAS and the new SFF Connector


So, you already know that we are currently in the 12Gbps generation, that is, PCI-E 3.0 and third-gen Broadcom/Avago SAS chipsets. The current-gen, that is, the gen most currently in widespread use, is great and allows excellent connectivity of storage at the 12Gbps SAS standard. Some SAS Hard drives can also connect on port speed, but of course, they will never transfer at the theoretical limits of the current standard. Thus, most people use the 3rd gen cards and adapters for SSD’s, but many are also still using this current-gen for Hard drives as well. They do it just to connect at the latest standard, which is fine.


For the newer standard, that is, SAS 4.0, we will see 24Gbps of (theoretical/proposed) speed, that is, for the next generation of NVME/M.2 SSD’s. HBA and RAID cards will still connect at the current speed of 12Gb/s, as there would be no point for this speed for ANY Hard Drive, and even current SSD’s as they stand. 24Gbps will be insane, which of course is double that of the current-gen. Can we even come close to the limits of such speeds? Probably not. However, the prospects of such advanced speeds are awesome! Bring it on, I say. Let’s keep going, with PCI-E 4.0 now starting to go mainstream and 5.0 and heck, even 6.0 looming, we need the protocols and speeds to match these super-fast standards.


As for now, just know that the standard is changing, as such, we ought to be prepared and know what the next-gen will bring. This is where the new SFF connector/SlimSAS comes in.


The New Card


As I mentioned earlier, I stumbled upon an eBay listing where a guy was selling a card he clearly had NO idea about. And honestly, if he did, he probably was hawking it online for a quick few bucks for the holidays. I get that. Times have been rough recently. The whole of 2020 has been tough. Poor chap. However, I say this again, people need to be more aware of what they sling and hawk on eBay. Remember, your loss is my gain. :.)


Now, this card’s ports was very strange to me, that is, the actual physical appearance. Up until getting the card itself, I had never gotten a close look at the new SFF connector and standard, and I mention this, as strangely enough, there isn’t a lot of pictures or info about these newer cards online. There’s a few pics here and there that I found, but even the card’s maker skimps on the details (that is, except for the simple stuff they include in the docs on their website). Well, at least I should say that it’s not abundantly apparent unless you do a semi-surface-dive as I did. Lucky for you my friends, I like reading and researching and getting to the bottom of things, and I found it all out for you and wrote it up here, so you won’t have to do all that digging yourself. :.)


Ladies and Gentlemen, I introduce to you, the OEM Dell HBA345 Card:


The Dell HBA345 Adapter (Notice the new vertical standing ports)


And here’s a pic with the card and the port covers on:


Dell HBA345 12Gbps Adapter with new low profile SFF connectors covered


The Dell HBA345 card has a newer LSI/Avago SAS 3416 “Tri-Mode” chipset, allowing for a total of 16 ports in the new SFF connectors (As mentioned before: 2 mystery connectors, for a total of 8 drives per connector) (Numbers 2 and 3 in the first pic – more on this in a bit).


This card is awesome. You can connect 16 drives to this thing. All in 2 distinct ports/connections.


Now the Dell HBA345 card is named along the same naming scheme as other Dell HBA/RAID cards from previous generations, with the first (or second, if you will) being the Dell H310, (2nd Gen LSI SAS 2008), then the Dell H330/HBA330 (3rd Gen LSI SAS3008) and now the Dell H345/HBA345 (3rd Gen (3.5) LSI SAS 34XX). Finally, there is one more that comes AFTER that, but we’ll get to that later.


The New Connections: The new SFF 8654 Port


As you can see from the above picture, it is abundantly clear that the SAS connector on the card is NOT a standard 3rd gen SFF port/connection. It looks like it could be similar to a 2nd gen SFF connector, the SFF 8087, but if you look closely, it is much too large to be an SFF 8087 port. First of all, it sits vertically and is clearly different from the current-gen square black plastic SFF 8643 internal connectors, which, in all honesty, I have come to very much like over the older SFF 8087 connectors.


So now this brings me to my research. Upon doing some moderate google-fu, I quickly realized that this connector was a whole new connector, that is, a new SFF standard. But what could this mysterious port/plug be? Upon searching a little deeper, I found it: The all-new SAS 4.0 SFF 8654 connector.


Or as we will be soon calling it: SlimSAS.


SlimSAS (the new standard’s nickname) is the new plug/port that will be used on the new generation of HBA and RAID cards. It is primarily engineered for the 24Gbps SAS 4.0 protocol, and will be mainstream with PCI 4.0 HBA and RAID adapters and controllers. So what is this new SlimSAS/SFF 8654 connector?


Some more google-ken showed me. It is a port, made by TE Connectivity, (TE) and it was chosen by the Storage Industry as the new standard for connectors for SAS 4.0 and 24Gbps.


In short, it’s the new kid on the block. If you want to do a little side reading, here’s a link with the brochure for ya:

TE Connectivity Document


The Next Gen Slim (Mini) SAS: Some ado about 8 “Lanes” and 4 “Lanes”


Now, I also wanted to take the time to explain a bit more on the new SFF standard itself. So, upon my research, that is, from what little info there was out there, I found that the new standard is very simple, and non-complicated, but you must be able to differentiate between the types of cables, as there is more than one.


Under the new SFF 8654 SAS 4.0 standard, you have 2 types of INTERNAL cables, The first is an 8 Lane connector, which is larger and longer in size. This first type of SFF 8654 connector is called SFF 8654-8i, or in other words, SFF 8654 “Eight Lane” Internal connector. It will connect to 8 hard drives. The word “Lane” in this case is not to be confused with 8 PCI-E lanes. Well, technically it is, and it does ring true and make sense to say it this way. I know that might sound confusing, but I will explain in a little bit below. For now, think of it as “ports”. Think of a typical cable with this connector as an SFF 8654 8i to 2 x SFF 8643 connectors. Where each SFF 8643 connector can connect to 4 x hard drives on a backplane. Thus, a single SFF 8654-8i connector can connect to 8 x SATA ports/drives.


Here is a pic of the new SFF 8654-8i connector: (Male Connector. The Female connector is as seen on the HBA345 card)



SFF 8654-8i Full Height Connector (Male). All I gotta say is homeboy is wide AF.


And here’s a pic of the 90 Degree Angle Male SFF 8654-8i connector:


SFF 8654-8i 90 Degree Full Height Connector (Male to Male). This pic is so clean and nice. Damn, that’s a gorgeous cable.


The second type of SFF 8654 connector is called SFF 8654-4i, or in other words, SFF 8654 “Four Lane” internal connector. It will connect to 4 hard drive ports. Again, this is not to be confused with connecting 4 x PCI-E lanes when referring to Hard Drive connections. “Lane” in this case only means ports. For this type of connector/cable, think of it as an SFF 8654-4i to 4 x SATA ports. (Where the SFF 8654-4i connector can connect to 4 hard drives.)


Here is a pic of the new SFF 8654-4i connector: (Male Connector, and notice the shorter width and darker yellow color)


SFF 8654-4i Connector (Male) Aw. He ain’t so bad. His big brudda is wide, but he is smaller and cute. :.)


Other important types of cables for this new standard would be a SFF 8654-8i to 2 x SFF 8654-4i connector cable as shown below:


SFF 8654-8i to 2 x SFF 8654-4i “Full Height” Male to Male Connectors. (Two little one’s are just as strong as a big one.)


And as mentioned earlier, a SFF 8654-8i to 2 x SFF 8643 cable:


SFF 8654-8i “Full Height” male connector to 2 x SFF 8643 Male Connectors. (Oh my, what a nice looking, useful cable!)


(If you notice, the SFF 8654-4i target connectors and the current-gen SFF 8643 target connectors are interchangeable at the other end of a master or primary host SFF 8654-8i connector, that is, they can both connect 4 x SATA/SAS hard drives; with the older SFF 8643 connector being SAS 3.0 and the newer SFF 8654-4i connector being SAS 4.0)


And here is a pic of the SFF 8654-8i “Low Profile” or LP Male connector. Same as a regular SFF 8654-8i male connector, just low profile. This will fit the low profile ports on the Dell HBA345:


The “Low Profile” SFF 8654-8i Male Connector. Also called “SlimSAS LP”. This male connector is the same as the regular “full height” SFF 8654-8i Male connector, it is just low profile. I like it. It looks…”efficient”.


Also, in addition to these connections, there can be a SFF 8654-8i to 2 x SFF 8639 NVME connectors (Now referred to as U.2) where you can connect 2 separate NVME/M.2 Drives, like this: (Note: These are U.2/NVME connectors/ports, NOT SAS (SFF 8482) ports)


Dual U.2 (Formerly SFF 8639) Connectors to SFF 8654-8i. These are NOT SAS (SFF 8482) ports! (SFF 8654-8i Connector not shown)



Or, If you only need to connect one NVME drive, there is an SFF 8654-4i to 1 x SFF 8639 (U.2) NVME connector cable, where you can connect 1 separate M.2/NVME Drive: (There will also be the new upcoming SFF-TA-10001 Specification connector, which will be called U.3)


Single U.2 Connector to SFF 8654-4i Connector (for 1 x NVME/M.2 Drive)


There is also SFF 8654-4i to SFF 8087 cables to connect to your older 2nd Gen SAS backplanes:


SFF 8654-4i to SFF 8087 Cable (From Host cards or motherboards to Backplanes)



There are many other types as well. Right Angle, Left Angle, Up-Angle, Down Angle, all sorts. Here is a pic of the different types that will soon be available:



These are all “full height” male connectors. The same applies for the low profile connectors not shown in this pic.


In addition to the ones above, there are also “full height” SFF 8654-8i male connectors, as well as “low profile” SFF 8654-8i male connectors. You would need either one depending on your card. The Dell card referenced above requires “low profile” male connectors to connect to the card. Normal size, or “full height” or “full size SFF 8654-8i male connectors won’t fit that particular card, whereas, other OEM card makers will use full height or full size SFF 8654-8i female connectors on their cards, thus, you can use the standard size or full size male connectors as shown above.


Now, some of these cables are actually available right now. Thus, when you get a clear idea and understanding of the standard, you can appreciate the structuring and backwards provisioning it allows for.


(I should just mention, however, that many of these newer cables are for the most part, prohibitively and excessively expensive, and not feasible for the majority of people. Not yet. Not all of them, but most of them are. Until the prices come down, I wouldn’t recommend going out to purchase anything REMOTELY close to SAS 4.0 and next-gen, including cards or cables. Seriously. It’s not really worth it yet. No solid-state drives, including the newer 4.0 SSD’s (NVME or otherwise) can even approach the real-world speeds of these newer connections, much less the theoretical. If you can afford it, then sure, but for the common everyday joe, I suggest waiting for this all to become more mainstream, and frankly, cheaper.)


Now, as far as NVME drives go, coming back to the use of the word “Lane” when referring to the new SFF 8654 connection, this is where the word is actually used properly and correctly for the new standard. Due to the “tri-mode” features of the newer LSI/Avago chipsets, you can connect Hard Drives, SSD’s and NVME drives. As the name implies, it can operate in three different modes, or “tri-mode”. A quick brochure with more reading on that here:


https://gzhls.at/blob/ldb/0/6/9/1/39e8a4cbf013a304eaa629af7c08672c40f5.pdf


Thus, when connecting NVME/M.2 drives, the 8 “lane” connector (the SFF 8654-8i) is appropriate for 2 x NVME/M.2 Drives, which of course, must be connected to a PCI-E x8 slot for proper bandwidth. The 4 “lane” connector, (SFF 8654-4i) is appropriate for only 1 x NVME/M.2 Drive, at PCI-E x4 slot size wiring.


I hope this section helps explain a little bit more about these new connectors and how to go about connecting your [future] drives and peripherals! :.)


My Initial Observations and the Beginning of my Cable Troubles


Now back to the card. As I was now ready to test my shiny new card, I placed the card in my test system, and it booted right away to a new “Broadcom/Avago” MPT Bios.


I quickly noticed that the new Dell HBA345 was categorized as a SAS3.5 device, whereas, the current-gen chipsets (SAS 30XX/31XX chipsets) are classified as SAS3 devices. Here are sample screenshot pics of an older HBA330 MPT Bios boot screen vs the new HBA345 MPT Bios boot screen:


(The SAS3 MPT3 Bios (HBA330/LSI 9300-8i) Screen)


Avago SAS3 MPT BIos (HBA330 /LSI 9300 MPT Bios)


(The SAS3.5 MPT3.5 Bios (HBA345) Screen)


Avago SAS3.5 MPT Bios (HBA345/LSI 9400 MPT Bios)


Now, upon trying to test my shiny new card, that is, to actually CONNECT drives to the thing, I realized immediately that I did not have the proper cables to connect to it. How then can I do any drive testing with the card, if I can’t connect any drives? In fact, would there even be ANY new SlimSAS (SAS 4.0) cables to even buy? (Remember, I hadn’t researched the new SAS 4.0 cables as yet, as shown above, and it led me down a new beneficial path, as you will read below).


A super quick search showed me that there are indeed many sellers selling the new cable/connection standard.


Yes my friends. You can go to eBay right now, and find new SFF 8654-4i to 4 x SATA “SAS 4.0” cables, lol. :.)


I searched “SFF 8654 to SATA” and many sellers (mostly Chinese-made, I believe) were selling the cables I needed. A search for “SlimSAS cable” may also yield similar results.


There were a few PCI-E 3.0 x4 to SFF 8654-4i interface cards, and SFF 8654-4i to SFF 8643 cables, and SFF 8654-4i to SFF 8087 cables, etc and so on. For me, I just wanted to test some drives on my new card, so I sprung for one and got myself an SFF 8654-4i to 4 x SATA cable.


It was $15 bucks.


Once my cable showed up, I was happy, but my happiness was short-lived. Remember, I previously mentioned that the new standard allowed for 2 types of connectors, that is, the SFF 8654-8i and the SFF 8654-4i connector. Well, I had mistakenly gotten the wrong connector. As it turns out, the cable I received was an SFF 8654-4i cable to 4 x SATA ports/ends, not the desired SFF 8654-8i to 4 x SATA ports I thought I had gotten. This was a slight oversight on my part, as I didn’t fully pay attention to the smaller connector at first, that is, the SFF 8654-4i connector. I just figured it would be the right connector for the card. This was silly of me to think, of course, as I had already discovered that the new card uses 2 x SFF 8654-8i connectors, of which the 8654-8i Male connector is almost TWICE the size of an 8654-4i Male end connector. Thus, I would have to wait a bit more and order the right cable, as I had no way to test the card with drives.


For a quick reference and size comparison of an SFF 8654-4i plug to the standard SATA female connector, check out a pic of the SFF 8654-4i to 4 x SATA cable that I found on eBay below:


SFF 8654-4i Male to 1 x SATA Female End. Notice the ACTUAL SIZE of the new SFF 8654-4i connector. It’s as small as a SATA Female plug end!


Upon realizing that I had purchased the wrong cable for my card, I was then forced to go back to eBay, where I searched for the correct size connector (SFF 8654-8i) cable. I did after a while find the correct connector, and it was literally ONE person selling it. It appeared to have come from a current Dell system that supports the new card, so I knew it was legit. It had a specific part number that was correct. However, to my dismay, the cable was immensely expensive! The seller wanted close to $200 for the damn…er…”desired” cable.


Hell. Freaking. No. There was absolutely no way I was ever gonna pay that much for a cable. Not even if SSSniperWolf licked it and promised to french me afterwards. (Yeah, she’s kinda cute. There, I said it.)


Further, to make matters worse, the cable that this master gouger was selling was actually NOT what I needed. Specifically, it was an SFF 8654-8i Male to 2 x SFF 8643 MALE connectors. This cable arrangement was not gonna work for me unless I had an actual SFF 8643 backplane to connect to from the host (my new card), which I didn’t have. Thus, I needed an SFF 8654-8i to 4x or 8x SATA ports/ends.


I began to search eBay and all over the internet, just to find, to my dismay, that somehow, this type of cable has NEVER been manufactured or made. That is, not yet I suppose. Or, in the spirit of being fair, perhaps not that “I” have found or seen. How then, could I test some simple SATA or SAS drives without the proper connection? I was at a loss.


And this, this one single problem, led me to something I have never done before.


I had an idea to CREATE my own cables. (Well, have someone else create it for me, lol)


The New SFF 8654-8i to 8 x SATA Cable


I contacted some good connections in China I had developed, who make wholesale and custom cables, and related to them my problem. I was super glad I had already cultivated a good relationship with them, as another unknown contact might not have been willing to create a previously non-existent cable for my simple testing. At least, not in numbers less than their normal quotas, which number by the 100’s or even 1000’s. They actually understood, and after I sent them a few mockups of the cable I needed, that is, for the new modification to the standard, I explained that I needed a new cable with SFF 8654-8i to 2 x SFF 8643 FEMALE connectors, so I could connect a regular SFF 8643 Male to 4 x SATA cable to that, and also very importantly, an SFF 8654-8i connector to 8x SATA Breakout cable or with SAS (8482) ports/ends. I also had an idea for a small 2-inch SFF 8643 FEMALE TO FEMALE END adapter, because, why not? :.)


In any case, they got to work, but it wasn’t easy. They had no previously existing wiring diagram to engineer such a cable, and their wire engineers initially said it wasn’t possible. I didn’t give up though. I knew it HAD to be possible, as if they can make an SFF 8654-4i to 4 x SATA cable, they can make the same, just with double the connections. My contact was nice to me, but they were at a loss at what to do.


So I took to Microsoft paint, and grabbed a pic from the web, and created a super spectacular masterpiece, akin to “The Mona Lisa” levels of work. I am very proud of my work. It took me like 4 days to make this quality sketch. Nah, just playing. It took less than 2 minutes. See my heart wrenching, tear-inducing art below:


My Spectacular award-winning Drawing Concept of How to make a SFF 8654-8i to 8 x SATA cable.


I explained that since we could already make an SFF 8654-8i to 2 x SFF 8643 Male Y splitter cable, (as shown above) then all you would need to do is remove the SFF 8643 connectors, and engineer 4 x SATA ports to each SFF 8643 end. That is, to take each individual pair of blue wires (each SFF 8643 connector has 8 wires total) and attach a SATA/SAS end to it. This, I thought was reasonable, and wondered why the new standard and protocol hadn’t ALREADY come up with this obviously important design. But then I feared that perhaps there was some hidden reason that wire engineers had not come up with such a design as yet, something that was maybe beyond my unpracticed, rank-novice minded reckoning. :.)


After all, these were wire engineers, right? Of course they know what they’re doing, and have way more experience than me in coming up with these types of cables.


Yet, I persisted. I kept explaining that we already use each separate pair of lines coming off an SFF 8564-4i connection for an individual SATA end, to which they agreed. Then I encouraged them to think along the same lines as connecting 4 x SATA ports to an SFF 8643 port like normal.


Tis’ the same thing, no? Surely this was possible.


Then, still not sure of their true power and potential as maniacal-diabolical, evil-genius engineers, and still doubtful that they could create such a “frankenstein” of a cable, I got a bit more technical and urged them to look at existing wire diagrams and pinouts. In the pic below for the same above shown cable, the wiring diagram shows (obviously) that it allows for several sets of send and receive signals as normal, just the same way, going to the SFF 8643 Male port. I then explained that you can take those same “Send” and “Receive” pins/wires, and create a separate SATA port for each set, which if you pay close attention to the wiring diagram, allows for exactly 4 sets of “send/transfer” and “receive” per side.


Check it out here:

Wiring/Pinout Diagram for an SFF 8654-8i to 2 x SFF 8643 Splitter Cable

Wiring/Pinout diagram for an SFF 8654-8i to 2 x SFF 8643 Y Splitter Cable. (Notice the 4 sets of “Transfer” and “Receive” signals on the “P0” part of each side. The “P0” part, in this case, being the SFF 8654-8i Male Connector. Thus, it is entirely possible to engineer SATA ports to each set or pair of “Transfer” and “Receive” signals) Go on, connect those transfers and receives. All you gotta do is just believes! (I know. I know. I get it, no one says that.)


Of note, even a common SATA end plug is not as simple as just connecting ends to a set of wires (Well, it is, but still). There are a few other signal wires that must be connected, along with proper signal testing to stay within the confines of the defined standard, and as such, I was told by my Chinese friends, that they needed to consider it, and the wire engineers would make a final decision. So I once again had to wait. Finally, after another day or two, the engineers told my contact that they could attempt to engineer the new SFF 8654-8i to 8 x SATA cable, along with a SAS hard drive (SFF 8482) variant, seeing that it was indeed feasible.


Victory! Lol.


The other cable I had in mind was a fail-safe in the case or event that the SFF 8654-8i to 8 x SATA plug cable didn’t work out. My idea being, to create the same cable, again, just as shown in the picture above, but to create it with FEMALE SFF 8643 ends. Bwahahahaaa! I’m a monster! :.)


Yup, I figured, all we need to do, is just create an SFF 8654-8i to SFF 8643 FEMALE Y split cable, then connect a regular SFF 8643 Male to 4 x SATA cable to that. Simple right? I mean, in theory, it should work, as the FEMALE end would be the “pass-through” section of the cable, passing the signal along to the regular SFF 8643 MALE end of the cable, like an extension cord, and then sending it in the direction towards the end SATA ports.


My contact wasn’t sure if it could be done, as they mostly use SFF 8643 Male end connectors for cables, as seen in most common SFF 8643 cables we use now. They only use a FEMALE end to test or connect the cables, like as on a motherboard Host port or on an HBA or RAID adapter.


But this should work right? After all, you normally connect a Male 8643 connector to a Female 8643 connector on the 3rd gen LSI HBA and RAID cards and on some server motherboards, so why can’t that same Female port be used as a pass-through for the same signals. It would marry perfectly together as husband and wife.


My contact once again, went back to their wire engineers. It was here, however, that they saw a problem, as the selected Female 8643 port, while perfectly orientated from front to back, with the pinouts opposite to the interface pins, may not allow for a small enough PCB board to pass along the signal. They worried that they would need to generate an all-new tiny PCB board to properly connect the pins outwards to a cable than from what they were normally doing. I understood that right away because usually the female 8643 port connector is soldered to the board or PCB of a given HBA or RAID card. Never has a female 8643 port been used to connect to an opposite-facing lead cable. At least, to my knowledge. I could be wrong though.


In any case, I told them that they should still attempt it, as if possible, they would have created something the industry didn’t make or manufacture prior. Whether because it wasn’t previously feasible, or if it was just too much of a bother, or perhaps they knew it might just plug in and blow up all their shit. (Kapow!) I do not know. I do know, however, that I wasn’t giving up, as it appeared, at least in theory, that this was highly possible. Or at least probable…


This was the piece I selected for them to use to create an SFF 8654-8i to 2 x FEMALE SFF 8643 Connector Y Splitter cable:

Single SFF 8643 Female Connector

The pic linked here shows a single Female SFF 8643 Connector. (Notice the pins are specifically oriented to the back of the interface pins, which is perfect to connect a cable to the opposite end, protruding back-facing towards a cable, that is, the cable coming from the SFF 8654-8i Male connector. This is different to the more common pin orientation of the Female 8643 ports found on some newer gen server motherboard host chipsets, and current-gen 12Gbps HBA and RAID cards, which are usually oriented facing downwards, so the port could be soldered to the board or PCB. If that made no sense to you whatsoever, just think: “He wanna make the cable do doggie-style.”)



Once again, I was relegated to waiting for their deliberation. I figured if this could be done, I would be set for now. There was a third 2-inch adapter-type cable I also wanted to bring up as an idea, but I didn’t want to overwhelm them with my evil genius.


After some time, I got back the news from my contact that the engineers would NOT attempt to engineer this baked-up bizarro-land mess of a cable I had thought up. That is, the SFF 8654-8i to 2 x FEMALE SFF 8643 ports. I was a little sad that they were somewhat hesitant to mad-scientist the thing, but I understood. They explained to me quite briefly that: “the new PCB board might not be capable of passing along the adequate higher frequencies needed to satisfy a proper signal.”


I thought about it, and I can see what they meant, although I think it was a load of crock they fed me so I wouldn’t go off the deep end, dreaming up fanciful cable madness to satiate my sick, twisted imagination. I will try to explain it technically to you. My explanation will be a little lengthy, but I will explain it here first, then simplify it a little bit more right after. If you take a moment to examine the tiny size of an SFF 8643 port, it might prove to be difficult to engineer a small enough board capable of providing proportional micro signals appropriate to maintaining the integrity of the overall signal from the female to the male end. Remembering that, the signal must then pass along to the SATA/SAS ends lying at the end of the chain. All this, as powered or commencing from the new SFF 8654-8i connector, of which additional advanced electrical testing would be required to know or ascertain what sort of micro-power/signaling requirements the new port itself would require upon having connected an additional line or extension. That is, if the signal would “lose” its integrity upon passing through the female end to the male end, using this new connector.


So, what does all this mean? Well, in simple terms of keeping it real, it means that it may simply cost too much money to engineer a small enough, but powerful enough board to properly transfer the correct wire signals to make it work. Think of it this way. If such a cable could be engineered, it would have to be genuinely certified (looking at you, SNIA Alliance, and TE) as not drawing additional power or creating inappropriate signals from the new connector. Of sorts, my engineer friends could do this as well to an extent, but If they went about it willy-nilly, it could result in me destroying my new SFF 8654-8i port on my card, rendering the card itself useless. Remember, I was asking them to create a single sample cable. Were they going to spend extra money and resources to ensure correct standard/protocol adherence? (Valid question, no?) Hence I thought this was wise, and for that aspect I commended them. At this point, unless they can “solve” the problem of ensuring the correct data integrity and within-limit power requirements, it may not be possible to do. I spoke to my contact again, and I told them that we could discuss the idea again later. My plan will be to change their mind by making known my willingness to throw a little money at it if need be. (I know, I know, so typical of a damn American) But that can come later. For now, I was just glad that they would attempt to engineer the SFF 8654-8i to 8 x SATA and SAS variant cables.


I must say, in true respect, props to the Chinese. Overall, there may be some IP issues they need to work out, (don’t look at me, lol) but they will also gladly attempt to (or at least try to conceive of a way to) engineer a never before seen cable, and test it out for you. I mean, that’s what it’s all about right? They are especially useful for DIY and tinkerers the world over. Think I’m lying? Go on, send a cute little email to the SNIA alliance, or even TE, asking them to create a “sample” prototype cable for you to test your shit. Go ahead, wait for their response, lol. Good luck.


Now before I go on, I find it important to state, that in no way, shape or form, am I claiming ownership of CURRENTLY held intellectual property by any organization or affiliation, in this case, the SATA or SNIA Alliance, TE connectivity, or any other damn super-mega wealthy company that can sue my poor ass. I certainly do NOT own the rights to the SATA port, and DEFINITELY not the new SFF connections. I am simply a lover of tech, and I just had an idea for a damn cable. Something that I have not found anywhere. At least, not yet. So if any super important company or organization is reading this, don’t go calling up the lawyers. No need for a C&D letter ok? I just had a useful idea. I then had some super-cool guys make a few cables, and I ain’t stealin’ nuthin. It is just an idea. I can’t patent an idea, and I can’t patent a cable made with parts I don’t own (or can I, lol). So don’t get your panties in a bunch. Matter of fact, I wouldn’t mind a job in your R&D or business department or something like that if you’re looking. Sometimes you big companies need a poor, common joe to help spring some ideas into the machine. Plus I got my advanced (Masters) degree. I can be of some value to ya. Especially on the business side. Think about that, ok? :.)


Finally, after what felt like quite a damn while, I was finally, finally holding in my hands, a NEVER before seen, (or at least, I hadn’t seen before) newly manufactured SFF Cable. Hot Damn!


Seriously, this was a big deal. For the whole world when you think about it really. Backwards compatibility people may now rejoice. We have been given the cable to save us from next-gen obscurity. You can now connect your SATA hard drives to any new next-gen SFF 8654 male connector port on any newer PCI-E 3.0/4.0 card built with this new connector. This may not seem important right now, but it will be, as the new cards go mainstream and more people obtain the newer cards for their systems.


Of note, in my searching for a similar pre-manufactured cable, I did find, to my delight, an SFF 8654-8i to 8x U.3 connector cable, directly manufactured by Broadcom themselves. It was a bit hard to find at first. I had to dig through a few Broadcom docs, and obtain a part number. Upon feeding it into google, there were a few suppliers claiming that they carried the cable, albeit, it would take 5-8 weeks or more for most of them to fulfill such an order. None really had a picture to show. I ended up finding a pic from another site. It was as close to an SFF 8654-8i to 8x anything cable I could find ANYWHERE. Check it out below:


SFF 8654-8i to 8x U.3 (NVME) connector cable. (Broadcom part no. 05-60006-00) Remember, those 8 connectors are NOT SFF 8482 (SAS Hard Drives) connectors. These are the new U.3 connectors. Connecting at PCI-E 4.0, with one 4.0 lane for each NVME card. This will allow a host card to connect to a backplane to connect to 8 x NVME drives, which at 4.0 speeds, is mostly sufficient for current-gen NVME /M.2 and upcoming M.3 drives.


So was I one of the first people to seriously think of this SATA cable and its forthcoming need? Is it not a simple matter to assume that this type of cable would be needed for the end-user? After all, we would need something like this to connect our current SAS/SATA drives to the new-gen cards. So was I/am I truly the first? If I am being humble and fair and honest, I would say, no I wasn’t. I am not silly enough to think that some brilliant, dutiful engineer working for the new SFF connection protocols did not foresee this type of cable or its need. I even have, in my searching, run across one other person who wanted to have a discussion about this self-same topic, (that is, of potential breakout cables) and I even replied to him on one of the tech forums, letting him know that he was the only other person I found beside myself, of even mentioning their possibility, and that I was working on something and would post a link once completed.


Perhaps, to my ignorance, there were probably already plans for the modification to be drawn up, or perhaps they were ALREADY drawn up, but not published or released to the public. I am also 100% sure that someone would have eventually come up with or realized the need for this type of cable, just as my friend on the tech forum I replied to. In addition to this, I’m pretty sure the new standard allows for this type of connection as well, but again, seemingly, no one had manufactured or produced the thing as yet. (Not even Broadcom themselves, which is slightly worrying for the future of the SATA interface, or maybe not?) Neither were there any current pinout/wiring diagrams I could find on the internet. If any were filed with the patent office prior to the date of me publishing this write-up on my blog, then I will even accept that as proof they were planning to make the thing.


Heheh, all I can say, at least for now, is that I was the first one to get this cable manufactured. Trust me, I checked around to make sure, like ALOT. I kept searching and re-searching over and over again. At one point, I was hoping I seriously would stumble across the cable, already manufactured, in hopes that it would render most of this whole damn writeup moot. Alas, I found no such cable!


Behold, the new, never-before-seen SFF 8654-8i to 8 x SATA cable: (You viewed it here first in the world, bois!)


The New SFF 8654-8i to 8 x SATA plugs. One plug. Eight Drives. Hot Damn. This is first in the World, son.
I need to patent this Frankenstein of a cable and make the big bucks. Catch me outside.




Here’s a close-up look of the new SFF 8654-8i to 8 x SATA cable. I had to tape it down to get the shot with the ends together. Count ’em bois. Eight (8) SATA ports from one SFF 8654-8i port. Nice.


Also, here is a pic of the brand new SFF 8654-8i to 8 x SFF 8482 (SAS Drive Cable). First in the world again, son.


Each port is an individual SAS (SFF 8482) port. SAS drive owners rejoice, bring your hands together and praise the great Gabiru. Say it with me: “Gabiru! Gabiru! Gabiru!”



And here is the brand-new cable drawing/mockup-diagram design for the new cable (SFF 8654-8i to 8 x SATA cable). Some identifying lines and names blanked out for obvious reasons, of course. I also won’t be showing the actual pinout and wiring layout designs, cuz, you know, people suck sometimes, and I don’t want any problems, son:

New Wiring Diagram for New SFF 8654-8i to SATA Cable

Cable Diagram for the New SFF 8654-8i to 8x SATA 7Pin Cable. My document is totally Copyrighted, Registered and Trademarked. (Yes, all at the same time, bitches.) For LEGAL PURPOSES, this document was drawn up by myself and is totally a fake. Don’t use this, ok? :.)


Now go on. Write my name in the annals of cable history dammit. I am the one true “Sho Haou” :.)


All joking aside though, it was nice to know I might be one of the few people, if not two or three people who might have come up with an idea (or realized the obvious need) for something useful. Maybe. We’ll see.


Now, after some time waiting, that is, for several long weeks, I was finally able to connect some drives. I also got the specific cable that I really needed, all made for me at a FRACTION of the cost that maniac was charging for on eBay, of which, in all fairness, I would not have been able to use it anyway. Upon connecting each port/plug, all my drives showed up right away. I was super happy! The new cable was working well.


Oh, by the way, I should mention that I’m primarily testing in Windows. There’s a reason for me doing this, and I will explain a bit later.


Now before I go on, I wanted to tell you that this new cable will only fit in “side leaning” or “full size” SFF 8654-8i female slots, or horizontal slots. I was able to successfully connect my drives and they all showed up great, but the Dell HBA345 card, in particular, sports a smaller “low profile SFF 8654-8i female slot if you can believe that. It is not a different slot, but it appears that the SFF 8654-8i female connector slot comes in two varieties: the full size SFF 8654-8i female connector, and the low profile SFF 8654-8i female connector, of which the Dell HBA345 was engineered with the low profile SFF 8654-8i slots.
(Just my damn luck)



I just wanted to let you know that as of right now, even the current new cables that I had manufactured will NOT fit the new Dell HBA345. They will work if you plug it into the slot, but when you plug in the cable, half of the connector is sitting outside the slot, as I used the full size SFF 8654-8i male connectors, not the low profile male connectors to make the new cables, so the black plastic molding is too big, effectively rendering the cable unsafe to use on the Dell HBA345 card for normal operation, as the cable is loose and wobbly in the slot. I could test the cables on the card in my testbench, but it is not suitable as yet for main production for the Dell card. Again, this is only for the Dell HBA345. I am working on getting some low profile SFF 8654-8i male connector cables specially made for these Dell cards that come with the low profile SFF 8654-8i female connectors. For now, the cables I currently have will fit any other full size SFF 8654-8i slot, like on the new cards that are coming out now. My new cable WILL fit perfectly on an LSI 9500-8i/16i and the LSI 9561-8i/16i and cards like the Lenovo 940-8i/16i/32i. (More on those newer cards later.)


In spite of the molding for the new cable not fitting flush into the Dell HBA345 card, the cable still worked excellently, and all my drives showed up good, and I ended my initial testing on the connections and the new connectors, satisfied that the new cable and ports I connected were working and functioning well with my new card. It is important to remember, I was only seeing typical 6Gbps speeds for my Hard drives and SSD’s. Even if you get this setup, that is what you will see, for obvious reasons. So don’t expect a super-speed increase on your regular hard drives, or for that matter, your SATA SSD’s. Same as for 6Gbps SAS drives, and even for actual 12Gbps SAS Hard drives. (If you have one, son) Again, Super Kudos to my contacts in China. Seriously. They, just like me, are some crazy sons of guns. I like that. The credit definitely goes to them for this new cable as well. I will be working with them to create a properly molded plastic housing for the connector, so it can fit Dell HBA345 cards, and any other card with a vertical or standing slot placement. At least for now, we have working cables.


Hence after all this, I say…All hail the new SFF 8654 connector!


All hail 8 x SAS/SATA drives connected to one beautiful freaking port. That’s so freaking awesome. Freak yeah!


After my tail-wagging excitement of playing with this new tech and cables bottomed out, I settled down to the thing we love to do. For you see now, yes, now, comes the FUN part.


It was time to tinker. well, rather, time to explore. Bwahahaa. (Hands rubbing together sinisterly…or is it sinisterily? Wait, yeah. No, I meant sinisteringly…you know what? I rubbed my hands together in a sinister way. Done. There you go.)


My Brief Testing and Exploration


I rebooted my testbench, and booted into Freedos, and since it’s an HBA345, I initially (and correctly) assumed it was without proprietary Dell RAID rom installed, so I chose to fire up my favorite flashing utility: sas3flsh. (sas3flash for EFI users).


I was surprised however, as the utility recognized no “Avago SAS adapters” in my system:


Huh? That’s strange. This is clearly an LSI card variant with new LSI chipset. So I then tried Megacli and the newer Megascu. I even tried Megaoem, but to no avail. They all claimed they saw nothing in my system, and that I should just stop and go somewhere. (Well, they didn’t say that, but yeah.)


Ok, alright, no problem. The bios and firmware must be some new Dell-ified/Broadcom hybrid nonsense, so I figured I would boot up the venerable Megarec. (That is, my aptly self-renamed: “Megarec3”)


So I call up Megarec3 to engage my new card, and surprisingly and to my honest shock, even Megarec didn’t recognize the card. What???


I was starting to get worried, because as you know, IF MEGAREC CAN’T SEE your card, you’re done. Go home. Bye-bye.


But alas, I know the card was working fine, as my previous drive and connection testing worked fine. I was able to see my drives, and transfer at nominal speeds. So I know the card works.


But alas again, neither sas3flsh nor Megarec3 itself could see or recognize it as an adapter in my system. So now I’m wondering if this card has any MegaRaid firmware installed in the recesses of its NVSRAM…


So back to google I go once more. After some reading and skimming through some Dell docs on the card, primarily the user guide, I found that it can be controlled by the HII utility. I don’t have experience with this utility, as I think it is built into your Dell system bios, and I didn’t feel like putting the card in my R430 to test it, cuz’ you know, I was lazy. So I searched for anything else. I then remembered that the perccli utility worked in Windows, and it is a Dell-ified version of the newer Storcli utility by LSI/Avago, so I thought to give it a try.


So I booted back into Windows, and called up perccli.


Success! It worked.


I got this output, upon typing: perccli /c0 show:


Output of HBA345 Adapter using perccli /c0 show (For those of us who care about these things, take a look at that new Device ID. Also look at those new Subvendor and Subdevice ID’s baby!)


From there, it was simple to update the card if I wanted to, with the command:


perccli /c0 download file=fullpathtoromfile


And it would have upgraded my card to the latest firmware. Of note, Dell also provides a Windows installer you can use to upgrade the card if you don’t wanna mess around with EFI flashing and PERC tools and stuff. Just download the latest firmware update for the card, and run it on your Windows installation. Either way works.


This is why I mentioned earlier that I could only test in Windows, as I ended up only being able to use perccli in Windows, because my current and older generation dos tools and utilities did not work to see the card, and perccli only runs in Windows or EFI, no DOS I believe. I could be wrong, but I wasn’t bothering to search out to see if some degenerate created another version of perccli for dos or some shit like that. As far as I believe, Storcli SHOULD also work to recognize the card, as perccli is based off of it, but I haven’t tested with that specific utility as yet, so I can’t say for certain.


Actually, now that I think about it, Storcli will DEFINITELY work. My thinking is that perhaps a newer version of megarec or sas3flsh (or perhaps a whole new utility, maybe called sas4flsh/sas4flash?) may be released soon to recognize these newer cards coming down the pipe as they become more mainstream.


(Update: Just realized it was sitting in the LSI firmware package readme. LSI only wants you to use Storcli to update their interim-gen cards for now, so there you go. No sas3flash/sas3flsh for you.)


So now that I could see the output of my card, and I could upgrade firmware and mess around with it if I pleased, I turned my attention to flashing or conversion to “IT” mode.


But wait, remember, the card is called an HBA345, thus following on the previous generations to it, including the current-gen, the straightforward “HBA” moniker is designated for cards that already come with IT mode firmware installed.


And of course, as mentioned earlier in my brief testing, that was the case. The Dell H345 is in fact, an HBA card, with IT firmware already installed, so it already allows pass-through for your drives. So no need to flash it. The card I think I actually want to tinker and play with, is in fact, not an HBA345, but a PERC RAID H345 or similar variant. I wanna see if Dell did the same thing with the H345 as it did with the H330. (It might be way too early for that, though.)


Hence, realizing this, my very basic testing with the new card is done. It is, of course, of the same OEM variance as a Dell HBA330, and prior gen cards, but with next-gen SFF 8654-8i connectors, and still offering 12Gbps speeds, but again, of a newer gen chipset, that is, the LSI/Avago SAS 3416 Tri-Mode chipset specifically. Interesting choice of Dell to include the new connector on an interim-gen card. Of note, Dell refers to this interim generation as “PERC 10” series of cards, with the last-gen, the HBA330, as the PERC series “9” family. As mentioned earlier, these new connectors (SFF 8654-8i, SFF 8654-4i) are solely primed for SAS 4.0 and PCI-E 4.0. Again, this card is for all intents and purposes, an interim card, NOT SAS 4.0, (SAS3.5) just like the LSI SAS 9400. If anything, it is a preview or as I see it, a prelude to what is coming next…


There’s more coming…(A whole lot more)


As I researched everything I could find on the new connectors and this new card, my google-ryu turned up something super interesting. This is what I was alluding to when I mentioned earlier in the write-up that there was something AFTER the HBA345. There is actually an even NEWER card that is coming. It is called an HBA355i. This card is the TRUE successor to the 3rd gen LSI/Avago 12Gbps SAS30XX/34XX chipsets, with full PCI-E 4.0 connectivity and 24Gbps speeds (for NVME drives). It will have an LSI/Avago 38XX series of chipsets built-in for 8 or 16 or more SAS ports.


These cards will be insane.


I managed to find a pic of this upcoming PCI-E 4.0 HBA355i card. It looks EXTREMELY similar to the HBA345: (Notice the same port orientation, but a more vivid greenish-yellowish color as opposed to the HBA345)


The Upcoming Dell HBA355i Adapter. It will also come with low profile SFF 8654-8i female connectors. In any case, full PCI-E 4.0 HBA bandwidth is on the way, son…


I am not 100% sure, but I think this new yellowish-green color will be present on some newer, next-gen cards as representative of PCI-E 4.0 HBA and RAID cards. Not in all cases of course. This is just a hunch and a prediction, which as with all predictions, could certainly be proven wrong. I kinda like the new color though.


Upcoming Cards


So finally, for comparison, the Dell HBA345 (the card I have in hand) is a 12Gbps SAS 3.5, PCI-E 3.1 adapter. So not really groundbreaking. I just lucked up and got a great deal on a new card.


Then there is the upcoming Dell HBA355i. It is a 12/24Gbps, SAS 4.0 Full PCI-E 4.0 HBA adapter.


There are others. Many others. Some monstrous cards are coming, and are even already available. Check out the monstrosity that is this Lenovo card:


The Lenovo 940-32i. This uses the standard “full height SFF 8654-8i male connectors. Eight drives PER connector. This card is a beast. What a freaking monstrosity.
That is a whopping 8 x SAS/SATA ports PER connector, for a total of 32 freaking ports!



Damn, now that’s what I’m talking about.


There is also the new LSI 9500 Series of cards, like the 9500-8i/16i, the LSI 9560-8i/16i (Lenovo 940-8i/16i) cards, which, along with the above Lenovo monstrosity, will use the newer LSI/Avago PCI-E 4.0 SAS 38XX/39XX chipsets.


I used the Lenovo cards as examples, as they were much easily found in clearer display and info than some of the newer cards from LSI or Dell direct. There’s other info out there if you look of course, but I just found it easier to use the Lenovo cards as examples.


Check them out for some extra reading here: Lenovo ThinkSystem RAID 940 Series Internal RAID Adapters Product Guide > Lenovo Press


Also, here is a doc with most of the newer LSI cards, along with a full lineup of their HBA and RAID cards that are now available or are upcoming: https://docs.broadcom.com/doc/AV00-0315EN


Pic of LSI/Avago/Broadcom 9560-8i below: (Notice, now that the new connector (SFF 8654-8i) is used, the card only needs 1 x connector to connect 8 drives. This will result in cleaner setups, and save so much money for a lot of people on cables and wiring, or at least, I hope!)


The LSI 9560-8i. (Lenovo 940-8i) One SFF 8654-8i Port. 8 drive connections or 2 x NVME cards, all in one port. These cards will use the LSI SAS 39XX ROC chipset. Beautiful.


Pic of LSI/Avago/Broadcom 9560-16i below: (Notice you only now need 2 x SFF 8654-8i ports to connect 16 drives!)


The LSI 9560-16i. (Lenovo 940-16i) Two SFF 8654-8i Ports. 16 drive SAS/SATA drive connections or 4 x NVME cards, all in two SFF 8654-8i ports. Simply Amazing.


Conclusion and True price of my card


So back to my card. Maybe some time during reading this crazy write-up, you might have wondered, “well how much of a deal did he score on his stupid card anyway?” Well, after a little more google-jutsu, I found out that the card I scored off of eBay is so new, that is, in terms of new value from Dell, it sells for over four hundred dollars. Score! I can tell you right now, I paid about eighty-something bucks for it with shipping. So about 1/5th of the current going price. (I’m keeping it for now though, no selly-selly.)


Now, just so you know, this new Dell card is not overtly new. That is, of new technology besides the new SFF connectors. This particular card I received is NOT a PCI-E 4.0 card. No. It is essentially an LSI/Avago/Broadcom 9400-16i, which also still connects at PCI-E 3.0 (or, rather PCI-E 3.1) and will only do 12Gbps bandwidth for those of you who would like to cross-reference a similar existing card. The only difference between the 2 cards being the older SFF 8643 connectors on the LSI card, and the newer SFF 8654-8i connectors on the Dell variant. The Dell card I received is actually scarcely seen as of now, as corporations/companies (or people) who are in possession of their new systems, like a PowerEdge C6525 Server for example, which the card can come as an option, depending on how you build your system, and it will be using the card, thus you won’t find it for sale commonly (second-hand) at most places for now, not even on eBay. (Again, that is, if you don’t luck up and run across a listing of a guy hawking a brand-new, next-gen card for a few pennies on the dollar.)


So whoever you are, my eBay hawking friend, I thank you. I thank you for not checking the card you were selling before hawking it on eBay to make a quick buck.


Thank you, my sweet, naïve friend. You have, in your own unknowing way, helped the tech community gain a little insight into these newer cards and next-gen connectors. I was personally very happy to do this write-up, as I myself was wondering about the next generation of HBA and RAID cards, with PCI-E 4.0 now pretty much gaining mainstream adoption. I know this write-up will be of help to someone in the future, as we start looking forward to these new, next-gen peripherals and connections.


A big shout-out once again to my Chinese friends, for making the previously non-existent SFF 8654-8i to 8 x SATA/SAS cables for me. It still baffles me that NO ONE realized a cable like this would be needed, save a few people here or there. I'm already researching whether I can seriously patent this thing (probably not, but maybe, lol), and I spoke to a lawyer friend a few days ago who will get back to me soon. So no stealing, bitches. It's my modification idea. Plus, this write-up serves as proof that it came from me first. :.)


I know not from where or how he got his hands on a newer card like this, as I truly believe that most companies who purchase this new OEM stuff usually keep it for a while before selling or getting rid of it, but I commend you. I really do. It is entirely possible that the seller owned the card and just needed a few extra bucks, and so sold it on eBay. (I just hope he didn't steal it or some shit and hawk it on eBay for a quick buck, which, quite frankly, is an entirely plausible assumption as well.)


So, in closing, I will say this: our HBA connectivity futures are looking bright. Very bright. I suppose we will also see next-gen variants from other OEM manufacturers start to pop up as time goes on, like Fujitsu, Quanta, Adaptec, etc.


Of note, I do realize that a lot of these cards and newer connectors were already mostly available starting roughly a year or so ago, or even a bit earlier. I'm not trying to say I discovered or found something that was previously never seen or anything like that. Not at all. People are ALREADY using some of these cards, if you can believe it, so please don't think that. All I really did was take the extra step to seek out the production of an obviously useful and needed cable. That's it.


So can I at least accept some credit for that, hmm? :.)


Once again, it is important to remember that as of the time of writing, this is all still VERY new tech, and 75-80% of people, if not more (even among the tech community), do not yet have it, need it, or have even gotten their hands on these newer cards. Some of the "interim" between-gen cards, as I like to call them (like the LSI 9400-16i), ARE quite available right now, and you can easily find them on eBay and elsewhere. But just, well…just don't look at their prices, unless you got's the dough.


Now, all that being said, all we really need now is for SSDs to catch up…that is, to get bigger and cheaper, you know, just like hard drives. Then we're in business. I've been wanting an all-flash server for years now. And I mean numbering in the tens of TBs (meaning 20s, 30s, 40s, etc.), if not close to a hundred TBs of flash. Not a few 1 or 2TB SSDs thrown together in a box to make 8-16TB at most, which, even if you DID do that, would still be horribly expensive.


As for us common folk, that is, most of us not being big corporations and companies who can afford super new stuff, let the race to the next generation begin, I guess? I suppose you can start looking at pricing and monitoring these new products. Hey, you never know what you may find or stumble across.


Lol, and here I was just getting used to the current 3rd Gen SAS stuff :.)
 

Sleyk

Your Friendly Knowledgable Helper and Techlover!
Mar 25, 2016
1,361
707
113
Stamford, CT
Haha! Looks like Supermicro just started making these! It's about time :.)

We need companies to start making these cables to drive the prices down on these newer cards. All in, this is really good.
 

jpmomo

Active Member
Aug 12, 2018
494
157
43
Thanks for the write-up, as you helped answer a question that I have been trying to find an answer to for a while! I have a Dell R7525 with their even newer PERC H755N (the only hardware-based NVMe RAID card from Dell!). The issue I have with this card is that it severely limits the bandwidth of my NVMe SSDs. They are NVMe Gen4 SSDs with over 7GB/s (56Gbps) throughput. The H755N only has one x8 Gen4 PCIe connection to the system, while each of the 8 drives supports an x4 Gen4 link width.

In order to fully utilize the performance of these drives, I would need 4 x8 cables. These cables are the SlimSAS (SFF-8654 8i) you referenced in your article. The connections on the motherboard are the LP (low profile) type, and the connections on the chassis backplane are the normal 8654 8i connectors. I was trying to figure out what the standard for the motherboard connectors was called, and you correctly pointed out that they are the SFF-8654 LP! The reason I was looking for these cables was to bypass the H755N altogether if I determined that it was in fact limiting the total performance of all 8 drives to x8 (Gen4) throughput. There are several of the 8654 LP connectors built into the motherboard that I can use to connect directly to the backplane (4 x 8654 8i that serve the 8 U.2 NVMe SSDs). This server can actually be configured/cabled to support 24 NVMe Gen4 drives at full x4 bandwidth.

Mine only has the 8 x 2.5" NVMe backplane, but it also came with the PERC H755N NVMe RAID card. The RAID card has one x8 cable with an 8654 8i LP connector on both ends that connects from the motherboard to a connector on the RAID card. Then there are two cables that connect from the RAID card to the backplane. Those are Y-cables with the 8654 LP connectors on the RAID card side and two normal (or "high") 8654 connectors on the backplane side. The specs for the RAID card mention that it supports a max link width of x2 for the 8 drives, but aren't clear about how many simultaneously. I am assuming there is some form of PCIe switch on the card and that ultimately, the single x8 cable connecting the card to the CPUs would limit it to 4 drives max at x2. I understand that the RAID card has other functionality, but the drives I have would be too limited if this is the case.
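To put rough numbers on that bottleneck (a sketch with idealized figures; protocol and RAID overhead ignored):

```python
# Sketch of the H755N situation described above: 8 x Gen4 x4 NVMe drives
# behind a single Gen4 x8 host link (idealized numbers, overhead ignored).
pcie4_lane_GBps = 16 * (128 / 130) / 8   # 16 GT/s, 128b/130b -> ~1.97 GB/s

host_link = pcie4_lane_GBps * 8          # the card's one x8 Gen4 link
per_drive = pcie4_lane_GBps * 4          # each drive is Gen4 x4 (~7 GB/s class)
all_drives = per_drive * 8

print(f"Host link (Gen4 x8):  ~{host_link:.1f} GB/s")
print(f"8 drives at Gen4 x4:  ~{all_drives:.1f} GB/s")
print(f"Drives the link can feed at full speed: {int(host_link // per_drive)}")
```

That's ~16 GB/s of host link against ~63 GB/s of drive-side potential, i.e. the single x8 cable can only feed about two of these drives at full tilt.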
 
  • Like
Reactions: Sleyk

Sleyk

Your Friendly Knowledgable Helper and Techlover!
Mar 25, 2016
1,361
707
113
Stamford, CT
Thanks for the write-up, as you helped answer a question that I have been trying to find an answer to for a while! I have a Dell R7525 with their even newer PERC H755N (the only hardware-based NVMe RAID card from Dell!). […]
Sure! Glad this helped! I remember looking at all this new stuff when I was doing my research on the new specifications and connectors, and I was worried that I wouldn't be able to find out what we might need for the future, so I kept digging and digging until I found all the connectors and each type of link width.

There is also much more I didn't include in the write-up, as there are 8654-16i links, as well as link widths up to 32i, if you can believe it! Lol!

All in, I hope this will serve as a good primer for these newer upcoming cards. I am still in the process of trying to get them to source me the "LP" version of the connector, as the company has the connector on lockdown for now, but I have a friend working on it for me ;)
 
  • Like
Reactions: 89giop

Dinglestains

New Member
Aug 11, 2021
2
0
1
I'm looking to move a bunch (14) of my hard drives from USB to internal. I would like to get this HBA card since my motherboard supports PCI Express 4.0. I know it's not really necessary now, but I hope this makes it more future-proof. I've looked everywhere and can't find anywhere to purchase the cable you had to build. Do you know if it's now available anywhere, or where I can get two of these cables? Great post, and thanks for putting all of this information together.
 

Sleyk

Your Friendly Knowledgable Helper and Techlover!
Mar 25, 2016
1,361
707
113
Stamford, CT
Thanks. I just found this after reading some replies to the original post. I missed the response when I read through this post yesterday. I'm in the U.S. (Georgia to be more specific). I wonder if anyone has tried this cable to verify it works correctly with the new card?
Yup, now they're available from Supermicro, which is good. That one is a right-angle connector though. You may need to check elsewhere for straight ones :.)
 

jmcguire525

New Member
Feb 16, 2021
6
1
3
SFF 8654-8i “Full Height” male connector to 2 x SFF 8643 Male Connectors. (Oh my, what a nice looking, useful cable!)
Sleyk, do you have any of these for sale or a link to purchase? So far this is all I have found... Link

Looks like what I need (using it on a ROMED6U-2L2T), but it is a bit expensive for a cable.
 

Sleyk

Your Friendly Knowledgable Helper and Techlover!
Mar 25, 2016
1,361
707
113
Stamford, CT
Sleyk, do you have any of these for sale or a link to purchase? So far this is all I have found... Link

Looks like what I need (using it on a ROMED6U-2L2T), but it is a bit expensive for a cable.
Hi there!

Not yet, but working on trying to get these sourced from my contacts :.)

It might be a few weeks yet with all the covid slowdowns, but PM me and I will keep your contact for when something comes through :.)
 

Scatioti

New Member
Jun 14, 2020
1
1
3
You mention that 24Gbps is "insane", but am I not understanding correctly or is that not even remotely fast enough for modern high-end enterprise NVMe SSDs? Keep in mind there is a huge distinction between Gbps and GB/s: Gigabits per second versus Gigabytes per second. To illustrate: 24 Gbps = 3 GB/s.

Take for example the Kioxia CM6-V. This NVMe drive does sequential reading at almost 7000 MB/s (which is 55Gbps!). Thus, even a double SAS4 cable (with bandwidth of 2x24=48Gbps) will be bottlenecking this single NVMe SSD, wouldn't it?

I don't understand why this isn't talked about more - or perhaps I am completely confused about Slimline SAS cables in general? 24Gbps is peanuts compared to what PCIe 4.0 NVMe SSDs can achieve in maximum speeds on a standard PCIe 4.0 x4 slot, so why aren't there better SAS connectivity options for servers for all-flash NVMe storage?
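A quick sanity check of those conversions (raw line rates only, encoding overhead ignored; a sketch, not a benchmark):

```python
# Gbps (gigabits/s) vs GB/s (gigabytes/s): divide by 8.
def gbps_to_GBps(gbps: float) -> float:
    return gbps / 8

print(gbps_to_GBps(24))      # one SAS4 lane: 3.0 GB/s
print(gbps_to_GBps(2 * 24))  # two SAS4 lanes: 6.0 GB/s
print(7.0 * 8)               # Kioxia CM6-V ~7 GB/s sequential -> 56 Gbps
```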
 
  • Like
Reactions: Sleyk

i386

Well-Known Member
Mar 18, 2016
3,389
1,137
113
33
Germany
I don't understand why this isn't talked about more - or perhaps I am completely confused about Slimline SAS cables in general? 24Gbps is peanuts compared to what PCIe 4.0 NVMe SSDs can achieve in maximum speeds on a standard PCIe 4.0 x4 slot, so why aren't there better SAS connectivity options for servers for all-flash NVMe storage?
It's all about the money:
NVMe + PCIe SSDs are expensive.
SAS SSDs/HDDs are cheap (compared to NVMe SSDs).

SAS has become a bottleneck as a protocol and hardware interface for flash, and the industry developed NVMe over PCIe as a "successor"*. It doesn't make sense to invest resources in a "legacy" protocol.
Yes, SAS4 at 24Gbit/s is slow compared to PCIe 4.0 and NVMe, but its development started around 2012 and it was finally ratified in 2017. Meanwhile, it was overtaken by NVMe/PCIe for SSDs :D

*Using quote marks because that stuff is still great for SMB (or a home lab).
 
  • Like
Reactions: Sleyk

ectoplasmosis

Active Member
Jul 28, 2021
119
51
28
You mention that 24Gbps is "insane", but am I not understanding correctly, or is that not even remotely fast enough for modern high-end enterprise NVMe SSDs? […]
Agreed, truly "insane" speeds are more like the ~12GB/s (note the capital B) single-drive sequential throughput that the imminent NVMe Gen5 drives are achieving... SAS4 is DOA, imo.
 
  • Like
Reactions: Sleyk

Stephan

Well-Known Member
Apr 21, 2017
559
356
63
Germany
Agreed about SAS4. SAS3 is nice though, because basically nobody has a suitable controller, so TB+ SAS3 SSDs from durable enterprise lines (HGST, etc.) are usually good bargains.
 

Sleyk

Your Friendly Knowledgable Helper and Techlover!
Mar 25, 2016
1,361
707
113
Stamford, CT
You mention that 24Gbps is "insane", but am I not understanding correctly, or is that not even remotely fast enough for modern high-end enterprise NVMe SSDs? […]
True, modern PCI-E 4.0 drives and incoming 5.0 drives will blow past a 24Gb/s connection, so it appears SAS 4.0 now has to play catch-up. I think the SAS protocol may not be far behind with a SAS 4.5 or 5.0 version, since there are already boards with PCI-E 5.0.

I also agree with @i386, in that the protocol was ratified late in the game. This is why I foresee a faster protocol incoming soon, perhaps within the next few years is my guess.
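For a quick side-by-side (raw per-lane line rates, not usable throughput; just a rough sketch):

```python
# Raw per-lane line rates: SAS generations vs PCI-E generations.
sas = {"SAS2": 6, "SAS3": 12, "SAS4": 24}                  # Gb/s per lane
pcie = {"PCI-E 3.0": 8, "PCI-E 4.0": 16, "PCI-E 5.0": 32}  # GT/s per lane (~Gb/s raw)

for gen, rate in sas.items():
    print(f"{gen}: {rate} Gb/s per lane")
for gen, rate in pcie.items():
    print(f"{gen}: x4 link ~{rate * 4} Gb/s raw (~{rate * 4 / 8:.0f} GB/s)")
```

Even a single x4 slot of PCI-E 4.0 (~64Gb/s raw) outruns a SAS4 lane, which is exactly the gap being discussed above.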
 
  • Like
Reactions: ectoplasmosis

jpmomo

Active Member
Aug 12, 2018
494
157
43
Not sure of the value prop for that tech when you can get >300Gbps with relatively cheap NVMe RAID.
 
  • Like
Reactions: Sleyk