Using Dell H310 in IT mode


kevindd992002

Member
Oct 4, 2021
So I just upgraded my home NAS to use an ASRock H570M-ITX/AC board with a Dell H310 card flashed to IT mode. I have a total of six drives, four plugged into the HBA card and two connected directly to the motherboard's SATA ports. I have Debian 11 installed, and during boot I get these:

[screenshot of boot messages]

1. Is the "overriding NVDATA EEDPTagMode setting" message normal? When I Google it, almost every result includes that message. I'm not sure whether it's a warning or an error, but here's the output of `dmesg | grep mpt`:

[ 0.008079] Device empty
[ 1.126162] mpt3sas version 39.100.00.00 loaded
[ 1.126212] mpt3sas 0000:01:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 1.126352] mpt3sas 0000:01:00.0: enabling device (0000 -> 0002)
[ 1.126425] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (32516028 kB)
[ 1.171717] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[ 1.171723] mpt2sas_cm0: MSI-X vectors supported: 1
[ 1.171724] mpt2sas_cm0: 0 1 1
[ 1.171771] mpt2sas_cm0: High IOPs queues : disabled
[ 1.171772] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 136
[ 1.171772] mpt2sas_cm0: iomem(0x00000000a1140000), mapped(0x000000006db8560e), size(65536)
[ 1.171774] mpt2sas_cm0: ioport(0x0000000000003000), size(256)
[ 1.224978] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[ 1.251294] mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
[ 1.251338] mpt2sas_cm0: request pool(0x00000000ea8a314c) - dma(0xfff80000): depth(3492), frame_size(128), pool_size(436 kB)
[ 1.259272] mpt2sas_cm0: sense pool(0x000000003db124a2) - dma(0xff880000): depth(3367), element_size(96), pool_size (315 kB)
[ 1.259275] mpt2sas_cm0: sense pool(0x000000003db124a2)- dma(0xff880000): depth(3367),element_size(96), pool_size(0 kB)
[ 1.259311] mpt2sas_cm0: reply pool(0x0000000093b6693e) - dma(0xff800000): depth(3556), frame_size(128), pool_size(444 kB)
[ 1.259316] mpt2sas_cm0: config page(0x00000000dbb8dba5) - dma(0xff7fb000): size(512)
[ 1.259317] mpt2sas_cm0: Allocated physical memory: size(7579 kB)
[ 1.259317] mpt2sas_cm0: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
[ 1.259318] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[ 1.304832] mpt2sas_cm0: overriding NVDATA EEDPTagMode setting
[ 1.305311] mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x03), BiosVersion(00.00.00.00)
[ 1.305316] mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[ 1.306183] mpt2sas_cm0: sending port enable !!
[ 2.828384] mpt2sas_cm0: hba_port entry: 00000000011eb10a, port: 255 is added to hba_port list
[ 2.830555] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x5b8ca3a0efa70b00), phys(8)
[ 2.831256] mpt2sas_cm0: handle(0x9) sas_address(0x4433221104000000) port_type(0x1)
[ 3.333118] mpt2sas_cm0: handle(0xa) sas_address(0x4433221105000000) port_type(0x1)
[ 3.582604] mpt2sas_cm0: handle(0xb) sas_address(0x4433221106000000) port_type(0x1)
[ 3.830945] mpt2sas_cm0: handle(0xc) sas_address(0x4433221107000000) port_type(0x1)
[ 8.962273] mpt2sas_cm0: port enable: SUCCESS
2. I'm not sure why I'm getting the "asking for cache data failed" errors for sde and sdf. Those are the two drives connected directly to the motherboard, and they definitely have cache. sdf is a 6TB Red NAS drive and sde is an 8TB White NAS (shucked) drive.

Any ideas? Thanks.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
You can safely ignore the `EEDPTagMode` message. It's just informing you a workaround was activated for a hardware bug present in some firmware revisions of some controllers. The workaround enables End-to-End Data Protection (basically checksumming for the transport between host and media), either with or without additional Application and Reference Tags (Mode 1).
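If you're curious whether T10 protection information is even in play on your disks, sg3_utils can show it; this is just a sanity check, and /dev/sda below is only an example device name:

sg_readcap --long /dev/sda    # "prot_en=1" means protection information is enabled; most consumer SATA drives report 0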

As for cache on your SATA drives, try kernel 5.15.6 or later. In the meantime, I suspect your drive caches might not be getting utilized, which would impact performance.
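If you want to see what the kernel currently thinks of those caches in the meantime, something along these lines should work (sde is just one of your device names as an example):

hdparm -W /dev/sde               # reports "write-caching = 1 (on)" when the volatile write cache is enabled
sdparm --get=WCE /dev/sde        # the same Write Cache Enable bit via the SCSI/SAT Caching mode page
dmesg | grep -i 'write cache'    # the sd driver's cache probe results from boot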
 

kevindd992002

Member
Oct 4, 2021
Thanks for the info. I'm kind of hesitant to upgrade to a kernel version that's not yet in the bullseye-backports repo; I think 5.15.6 is still in Debian's unstable/experimental repos. I may have to wait a bit longer.
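When it does land in backports, I assume pulling it in will look roughly like the usual backports install (run as root; linux-image-amd64 is the standard metapackage name):

echo 'deb http://deb.debian.org/debian bullseye-backports main' > /etc/apt/sources.list.d/backports.list
apt update
apt install -t bullseye-backports linux-image-amd64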
 

kevindd992002

Member
Oct 4, 2021
I just upgraded to 5.16 and it looks like it fixed the problem. This is what I'm seeing in /var/log/syslog now:

Apr 1 02:36:31 epsilon kernel: [ 8.817174] sd 3:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 1 02:36:31 epsilon kernel: [ 8.817363] sd 4:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 1 02:36:31 epsilon kernel: [ 8.821135] sd 0:0:2:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA
Apr 1 02:36:31 epsilon kernel: [ 8.823790] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
Apr 1 02:36:31 epsilon kernel: [ 8.823912] sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
Apr 1 02:36:31 epsilon kernel: [ 8.823955] sd 0:0:3:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA

Why do sde and sdf not support DPO and FUA, though? Those are the two drives connected directly to the mobo SATA ports.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
There is a difference between onboard SATA ports and SATA tunnelled through SAS. Historically, FUA/DPO (forcing an individual write to bypass the volatile cache and hit the media) was implemented inconsistently in SATA drives (and more reliably in SCSI/SAS drives), so the Linux SATA driver disabled it. I don't know if it's been enabled more recently, or if there is a way to enable it. I don't think FUA is often used in practice anyway.
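If you want to confirm what the block layer ended up with on your box, recent kernels expose a per-device flag in sysfs; the paths below just assume your current device names:

cat /sys/block/sda/queue/fua    # 1 = the kernel will issue FUA writes to this disk (your HBA-attached drives)
cat /sys/block/sde/queue/fua    # likely 0 for the drives on the onboard SATA ports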
 

kevindd992002

Member
Oct 4, 2021
So in short, nothing to worry about? Or would it be better to just migrate the two SATA disks currently connected to the motherboard over to the Dell PERC card and be done with it? I'd just have to buy another forward breakout cable if I did that.