Dell PERC H310


Giovanni

New Member
Feb 5, 2018
Hello all,

I'm new here; my name is Giovanni and I'm writing from Italy.

I'm building a new home server, and since I've developed an unconditional love for Linux over the last few years, the new build's OS will be Ubuntu or Debian (not decided yet).

I took a cheap motherboard I had on hand (AM1M-S2H) with an AMD CPU (Kabini 5350), more than enough for me, and added a Dell PERC H310 to it.

The complete specifications are:
- Motherboard Gigabyte AM1M-S2H
- CPU AMD AM1 Kabini 5350
- 4 GB RAM (for now; it will be expanded soon)
- Intel PRO/1000 NIC
- 64 GB SSD for the OS (Kingston something), connected to the onboard SATA0 port
- 2 x 3 TB WD Red HDDs, connected to PERC H310 port A

Right now I'm experimenting with Ubuntu Server 17.10 and Debian 9 Stretch.

I have some problems with the PERC H310 SAS card:

1. If I flash it with the original IR firmware (P20), the OS recognizes neither the card nor the disks attached to it; I cannot access the disks at all.
Do I have to install a driver for the OS?
This sounds strange to me...

2. If I flash it with the IT firmware, I can see the two disks from the OS, so the card is recognized and working; but, guess what, I cannot spin down the disks!

Since these two disks will be used very rarely (once every one or two days), I would like to keep them spun down most of the time and wake them up only when I need them.

I know there's a workaround for Windows environments (a driver trick), but I cannot find ANYTHING that works on Debian-based Linux.

Any suggestions for me?
Has anybody been able to spin down HDDs with this card?

If this is not possible, can you suggest another 8-port SAS card (with IT firmware, maybe) that supports spin-down commands in Linux?

Thank you in advance.

Regards,
Giovanni


PS I know this has been asked many times, and yes, I searched the forum, but I was unable to find a solution, so I have to ask again (maybe somebody has solved the issue in the meantime).
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Buongiorno, ciao da Londra! Debian user myself here, using a similar card (an IBM M1015 reflashed to LSI IT mode) built around the same chip as the H310 (both contain the ubiquitous SAS2008 silicon).

No special drivers should be needed - Linux will load the kernel module automatically when it detects the hardware. You can check whether it's resident using lsmod and looking for mpt2sas:
Code:
root@wug:~# lsmod|grep -i mpt
mpt2sas               151328  12
raid_class             12788  1 mpt2sas
scsi_transport_sas     33531  1 mpt2sas
scsi_mod              191405  7 sg,scsi_transport_sas,usb_storage,mpt2sas,libata,sd_mod,raid_class
dmesg or lshw will likely also show you some hardware info, depending on how your card reports itself to the OS.
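For example, a couple of quick checks (output will vary with your system):
Code:
# kernel messages from the SAS driver as it finds the card and discs
dmesg | grep -i mpt2sas
# one-line summary of the storage controllers the OS can see
lshw -short -class storage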

Now, IT vs. IR firmware - I suspect the reason the IR firmware doesn't work is that the discs haven't been set up to appear as a RAID array (IR = Integrated RAID), at least not one that the card understands. Unless you have specific requirements like VMware, you're better off sticking with IT mode, which just presents the discs straight to the OS. In my opinion this is the best way to do RAID under Linux with mdadm or ZFS.
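For instance, a minimal mdadm mirror over two discs like yours (just a sketch - /dev/sdb and /dev/sdc are assumed device names, check yours with lsblk first):
Code:
# create a RAID1 array from the two discs the HBA presents to the OS
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# then put a filesystem on the array as usual
mkfs.ext4 /dev/md0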

As far as spin-down goes... I've never tried it myself with these HBAs (in my experience, using spin-down results in higher failure rates of hard drives, so the money saved on power is massively outweighed by the cost of buying new discs). However, if you're adamant that this is what you want, I believe you can pass a SMART attribute to the discs in question to enable standby mode with a command like so:
Code:
smartctl --set=standby,now /dev/sdX
Bear in mind that if you're using RAID, then any time any disc access is needed - which can be surprisingly often, even when the server supposedly isn't in use - all of the discs in the array will need to spin up, and this takes a long time (resulting in freezes while waiting on IO).
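A few related commands that may help, assuming the HBA passes the ATA commands through to the discs (/dev/sdX is a placeholder; behaviour varies by drive):
Code:
# check the drive's power state without spinning it up
smartctl -n standby -i /dev/sdX
# the same check via hdparm
hdparm -C /dev/sdX
# ask the drive itself to spin down after ~20 min idle (240 x 5 s)
hdparm -S 240 /dev/sdX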
 

Mariuszpe

New Member
Aug 21, 2019
The AM1M-S2H motherboard has a PCI Express x16 slot that runs at x4 (PCIEX4). The PERC H310 is a PCIe x8 card.
I think the problem may be that the slot doesn't provide enough lanes.
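You can check what link width the card actually negotiated with lspci (the 07:00.0 address is just an example - find yours with plain lspci first):
Code:
# locate the HBA on the PCI bus
lspci | grep -i lsi
# compare supported (LnkCap) vs. negotiated (LnkSta) width
lspci -vv -s 07:00.0 | grep -E 'LnkCap|LnkSta'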
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
As far as spin-down goes... I've never tried it myself with these HBAs (in my experience, using spin-down results in higher failure rates of hard drives, so the money saved on power is massively outweighed by the cost of buying new discs)
What exactly are you attributing the higher failure rate of spun-down disks to? Frankly, you can argue it either way.
Leaving the drive spinning wears the bearing faster, but the bearing is built to last a really long time.
Stop/start cycles park the heads, but most drives are spec'd for ~300,000 load/unload cycles; over a five-year service life that's 300,000 ÷ (5 × 365) ≈ 165 cycles a day.

The AM1M-S2H motherboard has a PCI Express x16 slot that runs at x4 (PCIEX4). The PERC H310 is a PCIe x8 card.
I think the problem may be that the slot doesn't provide enough lanes.
Even running at x4, that's 2.0 GB/s of bandwidth (PCIe 2.0, ~500 MB/s per lane x 4); a regular HDD tops out around 200 MB/s sequential, so you should be able to run up to 8 of them fine.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
What exactly are you attributing the higher failure rate of spun-down disks to? Frankly, you can argue it either way.
I'm not attributing anything; it's just anecdote on my part. I've simply experienced fewer failures from drives that haven't been set to spin down (but since I haven't configured systems for spin-down in years, it hardly counts as up-to-date anecdata either).
 

Fritz

Well-Known Member
Apr 6, 2015
Drives last longer when they spin continuously. Start-stop and spin-up/spin-down cycles lead to premature failure.
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
Drives last longer when they spin continuously. Start-stop and spin-up/spin-down cycles lead to premature failure.
I’d love to read the source for this if you have one, along with the corresponding statistics.

(Not being sarcastic; I’m legitimately interested to know whether stopping is better or not for my use case. I have some drives that are accessed a couple of times a week, and others barely once a month.)
 

radiaani

New Member
Sep 24, 2019
The PERC H310 is a PCIe x8 card.
I think the problem may be that the slot doesn't provide enough lanes.
Sorry for hijacking an existing thread but I've got an issue with Dell PERC H310, too.

Does the H310 require an x8 PCIe slot with all eight lanes wired?
On an HP ML310e Gen8 V2, I've got two physically "eight-lane" slots, but only one is a true x8 PCIe slot; the other is electrically x1.

My H310 works nicely in the true x8 slot, but it is not recognized at all in the other. The green LED flashes, but apart from that the card might as well be absent: the computer boots normally (Debian Buster), yet there is no sign of the card anywhere (lspci, syslog, etc.). Is the slot defective, or does the card just not work at PCIe x1?

The card is an LSI SAS2008, and its specs claim that it "Supports x8, x4, x1 PCIe lanes". So in principle it should work, but does the Dell firmware support PCIe x1?
Would flashing a different firmware help? I am planning to use the card for a SAS LTO drive, and I have no plans to run RAID with it.

Code:
07:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2008 [Falcon] (rev 03)
    Subsystem: Dell MegaRAID SAS 2008 [Falcon] (PERC H310)
    Physical Slot: 3
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0, Cache Line Size: 64 bytes
    Interrupt: pin A routed to IRQ 17
    NUMA node: 0
    Region 0: I/O ports at 5000 [size=256]
    Region 1: Memory at fbff0000 (64-bit, non-prefetchable) [size=16K]
    Region 3: Memory at fbf80000 (64-bit, non-prefetchable) [size=256K]
    [virtual] Expansion ROM at fbf00000 [disabled] [size=128K]
    Capabilities: [50] Power Management version 3
        Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [68] Express (v2) Endpoint, MSI 00
        DevCap:    MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
        DevCtl:    Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported-
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
            MaxPayload 128 bytes, MaxReadReq 4096 bytes
        DevSta:    CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
        LnkCap:    Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns, L1 <1us
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
        LnkCtl:    ASPM Disabled; RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
        LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance De-emphasis: -6dB
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
             EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
    Capabilities: [d0] Vital Product Data
pcilib: sysfs_read_vpd: read failed: Input/output error
        Not readable
    Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
        Address: 0000000000000000  Data: 0000
    Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
        Vector table: BAR=1 offset=00002000
        PBA: BAR=1 offset=00003800
    Capabilities: [100 v1] Advanced Error Reporting
        UESta:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UEMsk:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
        UESvrt:    DLP- SDES+ TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        CESta:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
        CEMsk:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
        AERCap:    First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
    Capabilities: [138 v1] Power Budgeting <?>
    Kernel driver in use: megaraid_sas
    Kernel modules: megaraid_sas

Why bother with the x1 PCIe slot if the other one works? Well, next to the working x8 slot there is an x16 slot where I'd like to put a two-slot-wide graphics card. Nice design...

Any suggestions for an x1 PCIe SAS controller, if it's impossible to get this one working?

Is it possible to make this card work in an HP Microserver G7 (N54L)? I tested it, and it froze during boot.
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
Does the H310 require an x8 PCIe slot with all eight lanes wired?
No, I've run it in x4 slots; that limits the max throughput of the drives attached to the card, though. If it's not working, that's either a limitation of the slot or something else at play (unless the card's BIOS is borked, in which case you can just reflash it).
Would flashing a different firmware help? I am planning to use the card for a SAS LTO drive, and I have no plans to run RAID with it.
You could try flashing it to IT mode as an equivalent LSI 9211-8i if you don't need RAID.
This is the guide I used for mine: Flash Dell PERC H310 to LSI 9211-8i IT Mode Using Legacy (DOS) and UEFI Method (HBA Firmware + BIOS) | JC-LAN.org
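Roughly, the sequence in guides like that one looks like the sketch below - but follow the guide exactly, use the firmware images it links, and note the card's SAS address (on its sticker) before you start; a mistake here can brick the card:
Code:
REM from DOS, wipe the Dell SBR and the flash region (tools from the guide's bundle)
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
REM reboot, then flash the LSI 9211-8i IT firmware (and optionally the boot ROM)
sas2flsh -o -f 2118it.bin -b mptsas2.rom
REM restore the card's original SAS address (use the one from your sticker)
sas2flsh -o -sasadd 500605bxxxxxxxxx
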
Is it possible to make this card work in an HP Microserver G7 (N54L)? I tested it, and it froze during boot.
Have you done the pin taping yet? Failure to boot is a classic symptom of the pin issue:
Yannick's Tech Blog: Modding a Dell Perc 6 / Dell H310 / Dell H710 (other LSI 1078 or 9223-8i based) SAS Raidcontroller


Edit: I did find this on page 29 of the manual for the HPE ProLiant ML310e Generation 8 (Gen8) v2: "HPE Storage Controllers/SAS Controllers NOTE: Smart Array Storage controller can be added only in PCIe Slot 3, Slot 4 or both." So something proprietary is likely keeping third-party controllers out of the other slots, for HPE's own reasons.
HPE ProLiant ML310e Generation 8 (Gen8) v2
 