R720 PCIe Limit?


superhappychris

New Member
Feb 19, 2022
Hi everybody, long time viewer, first time poster...

I'm running into a weird problem with the Ableconn PEXM2-130 Dual PCIe NVMe M.2 SSDs Carrier (ASMedia ASM2824 switch) I just bought: it only recognizes one of the NVMe drives unless I unplug the disk shelf attached to the LSI 9207-8e card I also have installed. If I attach the NVMe drives using single-drive PCIe cards in separate slots, both drives are recognized even with the disk shelf attached. Has anyone run into this before, or know how to go about troubleshooting it?

I noticed that when I plug in the Ableconn card the following error shows when booting: "Alert! System fatal error during previous boot PCI Express Error"

Info on the server in question: Dell R720 running Ubuntu Server 20.04.2, equipped with two E5-2630 v2 CPUs, 1100W power supplies, and the following items in the PCIe slots:
  • Slot 1: LSI 9207-8e (connected to a 15-drive EMC JBOD disk shelf)
  • Slot 2: Ableconn PEXM2-130 Dual PCIe NVMe M.2 SSDs Carrier (ASMedia ASM2824 switch)
  • Slot 4: HP H240 internal SAS3 card running in HBA mode (the 8-bay SFF backplane is connected to this instead of the onboard controller)
  • Slot 5: Generic M.2 NVMe to PCIe 3.0 x4 adapter
  • Slot 6: GTX 1070 GPU
I've tried swapping the expansion cards into different slots to no effect. Not sure how to proceed from here...
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Yes, it does sound like a weird problem--but those are the most fun.

A few clarifications, please:
If I attach the NVMe drives using single drive PCIe cards in separate slots then both drives are recognized even if the disk shelf is attached
Are you saying that you removed the Ableconn from Slot 2, moving one of its M.2s onto a 2nd M.2-PCIe adapter (ala the Slot 5 one) which went into Slot 2, and moving the other of its M.2s onto a 3rd M.2-PCIe which went into Slot 3?
I noticed that when I plug in the Ableconn card the following error shows when booting: "Alert! System fatal error during previous boot PCI Express Error"
Did this happen BOTH with and without the 9207 (in Slot 1)?

Also, is your boot device, and/or your root (/) device, connected via any of the PCIe slots?
 

superhappychris

New Member
Feb 19, 2022
Appreciate the reply! I definitely enjoy a good head-scratcher, but the extra long boot times after making PCIe card changes are a nightmare for my ADHD lol

If I attach the NVMe drives using single drive PCIe cards in separate slots then both drives are recognized even if the disk shelf is attached
Are you saying that you removed the Ableconn from Slot 2, moving one of its M.2s onto a 2nd M.2-PCIe adapter (ala the Slot 5 one) which went into Slot 2, and moving the other of its M.2s onto a 3rd M.2-PCIe which went into Slot 3?
That is correct. Wanted to see if having the same quantity of M.2s attached but via different cards caused the same problem.

I noticed that when I plug in the Ableconn card the following error shows when booting: "Alert! System fatal error during previous boot PCI Express Error"
Did this happen BOTH with and without the 9207 (in Slot 1)?
Yes, it does. That error only pops up when the Ableconn is attached, and it doesn't seem to make a difference what else is attached (although I definitely haven't tried all combinations), which makes me think it might just be a problem with the Ableconn card itself.

Also, is your boot device, and/or your root (/) device, connected via any of the PCIe slots?
No, my boot device is a SATA SSD connected to the SATA port on the motherboard (J_SATA_CD) that is normally taken up by the optical drive.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Before we put on our HazMat suits (or at least hip-waders & mud boots) and really dig into your problem, let me put forth a proposal that should transform the current migraine headache into a blissful psilocybin journey: return the Ableconn card (cf: "just bought") and purchase one of these [Link] ([Alt-Link]). (Same product, but [Link] is correctly described--[Alt-Link] not so; however, one of its reviews reports success (x2) on an R720xd. Also see [STH discussion] (& the following ~8 posts) for related details.)

Move your H240 to Slot 2; put the Ceacent in Slot 4. For just $10 more ($160 vs $150), you get x16-to-4xM.2, max 110mm, using the first-rate [PLX 8748] switch chip (vs x8-to-2xM.2, max 80mm, using the second-rate (and flaky) [ASM 2824]).

RSVP
 

superhappychris

New Member
Feb 19, 2022
Proposal accepted! I was thinking about going with one of those AliExpress cards initially but opted for the Ableconn mainly because I didn't want to wait for the shipping. Didn't realize the ASM 2824 was so flaky, or I wouldn't have bothered with it in the first place.

Also see [STH discussion] (& following ~8 posts) for related details
In that thread you show benchmark results from nvmt. I tried googling that program but came up empty. What is it?
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
because I didn't want to wait for the shipping
Over the past year, I've bought 3 of the ANU28PE16s; each has arrived [NH, USA] 12-17 calendar days after order, packed/boxed well.
you show benchmark results from nvmt. ... What is it?
personal homebrew hacks: nvmt is a (simple) shell script, invoking xft, one instance per drive, background/concurrently. [But not-for-distribution; I retired 25 yrs ago after ~15 years of developing/licensing/&supporting my own software products. No mas!]
For SAS/SATA testing,
for i in a b c ...; do hdparm -t [--direct] /dev/sd$i & done
works well for testing throughput capacity of storage subsystems; NVMe needed additional effort. I expect that fio could also be scripted for the task.
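Something along these lines (an untested sketch; the device names are placeholders, adjust to your own) should do the equivalent with fio:
Code:
# one sequential-read job per NVMe namespace, launched concurrently; --readonly for safety
for d in nvme0n1 nvme1n1; do
    fio --name=$d --filename=/dev/$d --readonly --rw=read --bs=1M \
        --ioengine=libaio --iodepth=16 --direct=1 --runtime=30 --time_based &
done
wait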
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Didn't realize the ASM 2824 was so flaky ...
[To be fair,] based on a perusal of reviews, etc. of the Ableconn card (& its Lycom twin, plus some QNAP cards), the chip does appear to "work" in "consumer-typical" settings. That linked bug report was in a not-simple (but not hairy) setting, but it does exemplify the 2nd-rateness. Switch chips from PLX (aka Avago) & PMC (aka Microchip) are designed for, and used extensively in, complex data-center and industrial systems. In a pie chart of PCIe switch market segments, I would guess that HBA/AIC would be invisible (or a footnote).

The ASM2824 issue in your system sounds roughly similar to what I came across in an Amazon review from someone with an HP DL380 (Gen unspecified) [4th review down @ [Link]]; that fiasco didn't even need any SSDs attached. Probably a Gen8 [E5-26xx v1/v2, like yours], since the review just below it is 100% happy, on an E5-26xx v3/v4 platform.

Before you remove your card, it could be interesting to gather some "crime scene evidence" for curiosity's sake. Two configs, one with the shelf attached and one without. For each config, following a boot-up, do:
Code:
lspci -vv > [No]Shelf.lspci
dd if=/dev/nvmeNn1 of=/dev/null bs=1M count=1k # N being the number for the/a visible 2824-attached SSD
dmesg > [No]Shelf.dmesg
Then zip 2824 *Shelf.????? and attach 2824.zip to a posting.
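If you want a quick first look yourself before posting, diffing the two captures is often the fastest way to spot what changes when the shelf is attached (assuming the files were named as above):
Code:
diff NoShelf.lspci Shelf.lspci | less
diff NoShelf.dmesg Shelf.dmesg | less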
 

superhappychris

New Member
Feb 19, 2022
personal homebrew hacks: nvmt is a (simple) shell script, invoking xft, one instance per drive, background/concurrently. [But not-for-distribution; I retired 25 yrs ago after ~15 years of developing/licensing/&supporting my own software products. No mas!]
For SAS/SATA testing,
for i in a b c ...; do hdparm -t [--direct] /dev/sd$i & done
works well for testing throughput capacity of storage subsystems; NVMe needed additional effort. I expect that fio could also be scripted for the task.
Appreciate the info! I'm a software developer and probably the biggest deterrent to releasing a product myself is the thought of having to deal with end user support lol

Then zip 2824 *Shelf.????? and attach 2824.zip to a posting.
Zip file attached for inspection. Unfortunately I don't have any experience troubleshooting hardware or PCIe issues like this, so I'm not really sure what to look for in these files. Definitely interested to see if they give you any clues as to what is going on.
 

Attachments

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
...
Zip file attached for inspection. Unfortunately I don't have any experience troubleshooting hardware or PCIe issues like this, so I'm not really sure what to look for in these files. Definitely interested to see if they give you any clues as to what is going on.
Thanks for the files. My own experience is so dated (was a hardcore Unix kernel hack in the mid-70s [& now I am in my mid-70s :)]) that it's little more than a [de/il]lusion. But, "better a has-been than a never-was", right?

Nothing jumped out at me from the kernel messages to explain the problem, but the lspci output definitely documents its existence. Below is the relevant summary:
Code:
44:00.0 PCI bridge: ASMedia Technology Inc. Device 2824 (rev 01) (prog-if 00 [Normal decode])
    Bus: primary=44, secondary=45, subordinate=49, sec-latency=0
        LnkCap:    Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 8GT/s (ok), Width x8 (ok)

45:00.0 PCI bridge: ASMedia Technology Inc. Device 2824 (rev 01) (prog-if 00 [Normal decode])
    Bus: primary=45, secondary=46, subordinate=46, sec-latency=0
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 2.5GT/s (downgraded), Width x4 (ok)

45:04.0 PCI bridge: ASMedia Technology Inc. Device 2824 (rev 01) (prog-if 00 [Normal decode])
    Bus: primary=45, secondary=47, subordinate=47, sec-latency=0
        LnkCap:    Port #4, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 2.5GT/s (downgraded), Width x1 (downgraded)

45:08.0 PCI bridge: ASMedia Technology Inc. Device 2824 (rev 01) (prog-if 00 [Normal decode])
    Bus: primary=45, secondary=48, subordinate=48, sec-latency=0
        LnkCap:    Port #8, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 2.5GT/s (downgraded), Width x4 (ok)

45:0c.0 PCI bridge: ASMedia Technology Inc. Device 2824 (rev 01) (prog-if 00 [Normal decode])
    Bus: primary=45, secondary=49, subordinate=49, sec-latency=0
        LnkCap:    Port #12, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 2.5GT/s (downgraded), Width x1 (downgraded)

46:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 (prog-if 02 [NVM Express])
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 2.5GT/s (downgraded), Width x4 (ok)

# ===== Below is absent with Shelf attached =====

48:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 (prog-if 02 [NVM Express])
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 2.5GT/s (downgraded), Width x4 (ok)
Pertinent are the 4 LnkSta: lines showing Speed (downgraded) but Width (ok). All 4 target ports (though only 2 are "implemented") failed to negotiate Gen3, or even Gen2, falling back to lowly Gen1. I expect that the dd commands reported ~800 MB/s (vs ~2500) ... [I did not anticipate this fallback; my reason for the dd's was to see if it would provoke any kernel messages (it didn't).]
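If you want to keep an eye on this yourself after any re-seat or slot change, filtering lspci by the ASMedia vendor ID (1b21) keeps the output to just these lines:
Code:
# compare negotiated link state (LnkSta) against capability (LnkCap) for each 2824 port
lspci -vv -d 1b21: | grep -E 'LnkCap:|LnkSta:'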

For comparison, below are the analogous lspci lines for my 8-port card (w/6 SSDs connected):
Code:
06:00.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=06, secondary=07, subordinate=0f, sec-latency=0
        LnkCap:    Port #0, Speed 8GT/s, Width x16, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 8GT/s (ok), Width x16 (ok)
07:08.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=07, secondary=08, subordinate=08, sec-latency=0
        LnkCap:    Port #8, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
07:09.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=07, secondary=09, subordinate=09, sec-latency=0
        LnkCap:    Port #9, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
07:0a.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=07, secondary=0a, subordinate=0a, sec-latency=0
        LnkCap:    Port #10, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
07:0b.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=07, secondary=0b, subordinate=0b, sec-latency=0
        LnkCap:    Port #11, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
07:10.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=07, secondary=0c, subordinate=0c, sec-latency=0
        LnkCap:    Port #16, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 2.5GT/s (downgraded), Width x0 (downgraded)
07:11.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=07, secondary=0d, subordinate=0d, sec-latency=0
        LnkCap:    Port #17, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 2.5GT/s (downgraded), Width x0 (downgraded)
07:12.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=07, secondary=0e, subordinate=0e, sec-latency=0
        LnkCap:    Port #18, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
07:13.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
    Bus: primary=07, secondary=0f, subordinate=0f, sec-latency=0
        LnkCap:    Port #19, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
08:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/SN750 / PC SN720 NVMe SSD (prog-if 02 [NVM Express])
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <8us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
09:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03) (prog-if 02 [NVM Express])
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <8us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
0a:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03) (prog-if 02 [NVM Express])
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <8us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
0b:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 (prog-if 02 [NVM Express])
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
0e:00.0 Non-Volatile memory controller: SK hynix Device 174a (prog-if 02 [NVM Express])
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
0f:00.0 Non-Volatile memory controller: SK hynix Device 174a (prog-if 02 [NVM Express])
        LnkCap:    Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkSta:    Speed 8GT/s (ok), Width x4 (ok)
 
  • Like
Reactions: What_Is_Computer

acquacow

Well-Known Member
Feb 15, 2017
Just to chime in on R720s and PCIe... We used to drop 16 PCIe devices off of the two x16 risers when I was at Fusion-io. We weren't using bifurcation, though; we had a PCIe switch on our cards to split up the bandwidth. The three half-width slots we filled with InfiniBand cards and never had any issues with anything.

We used the GPU power cable kits to power our hardware so that we didn't run the PCIe subsystem out of power and brown it out under peak flash-write workloads.

 
  • Like
Reactions: abq

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Just to chime in on R720s and PCI-e... We used to drop 16 pci-e devices off of the two x16 risers when I was at Fusion-io. We weren't using bifurcation though, we had a pci-e switch on our cards to split up the bandwidth. ...
I'm curious to know what card you were using 5+ years ago (custom/internal?). Which switch chip?
Thanks.
 

acquacow

Well-Known Member
Feb 15, 2017
I'm curious to know what card you were using 5+ years ago (custom/internal?). Which switch chip?
Thanks.
These are two Fusion-io ioDrive Octals.
Each contains 8 1.2TB pci-e memory modules, 20TB usable across the two of them.

We had to test them in every server they would fit in...
The R720 was our favorite 2U box, though there was an Asus 2U that was a lot cheaper and could hold 4 Octals at once for 40TB in 2U. ESC4000 G4 | ASUS Servers and Workstations

 
  • Like
Reactions: UhClem
Feb 21, 2022
I'm not an expert, but I do have some experience putting hardware that doesn't belong into an R720. Based on what has been mentioned above and on past experience, I would agree the card in question is the issue, since it isn't negotiating link speed properly. I had a similar issue with a GTX 770 after I accidentally knocked some caps off near the slot; it turned out they were necessary to negotiate link speed. The server wasn't pleased: it would randomly crash when booting up the VM that used that card. Ran the Dell diagnostics and it threw an error, but it's been a while now, so I forget what the error was.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
These are two Fusion-io ioDrive Octals.
Each contains 8 1.2TB pci-e memory modules, 20TB usable across the two of them. ...
Thanks. Didn't realize it was 10+ years back, and using PCIe Gen2 (PEX8680 chip?).
Looks like its development started at the tail end of the Gen2 era, with release at the beginning of Gen3?
Still, a very cutting-edge product.
 

acquacow

Well-Known Member
Feb 15, 2017
Thanks. Didn't realize it was 10+ years back, and using PCIe Gen2 (PEX8680 chip?).
Looks like its development started at the tail end of the Gen2 era, with release at the beginning of Gen3?
Still, a very cutting-edge product.
Yeah, those were all Gen2. That beast of a product went away when our last-gen cards came out, as they were PCIe 3.0 and could do 6.4TB in a single slot, allowing us 12TB in two slots vs the previous 10TB. The 6.4TB cards were a TON cheaper too. Those 10TB cards in the pics were $500k each. Our 1.2TB cards at the time were $25k... just so you can see how much the prices on fast flash came down =)
 

superhappychris

New Member
Feb 19, 2022
So after my initial AliExpress order was stopped at customs and returned to the supplier, my second order arrived, and I can confirm that the problem was with the Ableconn card, not the R720. I've only tried it with 2 NVMe drives so far. I'll reply to this post if I run into any problems once I get around to buying another 2 drives and adding them to the expansion card.

For reference, this is the card I ended up ordering: 136.0US $ |ANM24PE16 Ceacent M.2 Controller, Four Port ,support M.2 SSD rise, pcie 3.0 X16 with heatsink (not include SSD)|Add On Cards| - AliExpress

Appreciate all the help on this one @UhClem !!
 
  • Like
Reactions: abq

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
... I've only tried it with 2 NVMe drives so far. I'll reply to this post if I run into any problems once I get around to buying another 2 drives and adding them to the expansion card.
Always a good idea to test now (within the 30 days) ["An ounce of prevention is worth a pound of cure."]
You can test for full functionality:
1) run the following command:
Code:
lspci -vv -d10b5: | grep LnkSta:
That should produce:
Code:
                LnkSta: Speed 8GT/s (ok), Width x16 (ok)
                LnkSta: Speed 8GT/s (ok), Width x4 (ok)
                LnkSta: Speed 8GT/s (ok), Width x4 (ok)
                LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
                ...
                LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
(First line is the uplink/host.) The other lines (downlinks/targets) might be rearranged, but seeing two of "8GT ... x4" is the important part.
2) Then, move the two NVMe's into the other two M.2 slots, and run again.
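And if you want a throughput sanity check on top of the link-state check (sketch only; substitute whatever device names nvme list shows on your system), concurrent direct reads should each land near the ~2500 MB/s figure mentioned earlier, not the ~800 MB/s Gen1 fallback:
Code:
# both reads run concurrently; each should report roughly 2500 MB/s on a healthy Gen3 x4 link
for d in /dev/nvme0n1 /dev/nvme1n1; do
    dd if=$d of=/dev/null bs=1M count=4096 iflag=direct &
done
wait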

[Edit]
I'm curious to know how many of the "(downgraded)" lines you get (2..6?). Thanks
 

superhappychris

New Member
Feb 19, 2022
Always a good idea to test now (within the 30 days) ["An ounce of prevention is worth a pound of cure."]
Good call. Ran the command you suggested and everything looks good for all 4 slots!

I'm curious to know how many of the "(downgraded)" lines you get (2..6?). Thanks
This is the output I get:
Code:
LnkSta: Speed 8GT/s (ok), Width x16 (ok)
LnkSta: Speed 8GT/s (ok), Width x4 (ok)
LnkSta: Speed 8GT/s (ok), Width x4 (ok)
LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
 
  • Like
Reactions: abq and UhClem

Quiron

New Member
Jan 28, 2023
Hello everybody, I'm looking for info about the Ceacent ANM24PE16 card. I intend to use 4 SSDs (non-RAID) on an old X58 motherboard with PCIe 2.0 (a desktop motherboard, not a server). I'm not an expert, but I understand that speeds will be around 1500 MB/s on a single drive, which compared with SATA II is a nice improvement for the tasks I have in mind (mainly loading/streaming virtual instruments). I'd appreciate it if someone who has used it could share some thoughts on the speeds achieved, the thermal treatment suggested for 4 SSDs on this card, and whether you'd recommend it in general.
These are my specs, just in case:

Asus P6TD Deluxe (X58 chipset)
CPU: Xeon X5690 (max memory bandwidth 32 GB/s, max 36 PCIe lanes)
Radeon ATI HD4670 in the 1st x16 slot
48GB RAM
4 HDDs (SATA II)
2 SSDs (SATA II)
2nd and 3rd PCIe 2.0 x16 slots available

Thanks everybody!
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
about the Ceacent ANM24PE16 card. I intend to use 4 SSDs (non-RAID) on an old X58 motherboard with PCIe 2.0 ...
share some thoughts on the speeds achieved, the thermal treatment suggested for 4 SSDs on this card, and whether you'd recommend it ...
That card should work well. Speeds will exceed your expectations. Thermals will most likely depend on what M.2s you use (& how hard they're used). The fan and baffle will/should direct airflow over the switch chip & SSDs (and out the vented bracket).
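To expand on "exceed your expectations": the PLX switch negotiates each link independently, so even with a Gen2 x16 uplink to your X58 board, each M.2 downlink should still train at Gen3 x4. A single drive should run near its native speed; the drives only share the uplink's roughly 6-7 GB/s when several are hammered at once. Once the card is installed, a quick check (device names below are just examples):
Code:
# 10b5 is the PLX vendor ID; on a Gen2 board the uplink will read 5GT/s (downgraded) - that's expected
lspci -vv -d 10b5: | grep LnkSta:
# concurrent direct reads; check nvme list / lsblk for your actual device names first
for d in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
    dd if=$d of=/dev/null bs=1M count=2048 iflag=direct &
done
wait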

Please report back.
 
  • Like
Reactions: Quiron