LSI 9361-8i RAID-10 configuration with CacheCade for XFS (weird strip size reported to OS)

Discussion in 'RAID Controllers and Host Bus Adapters' started by anomaly, Jul 7, 2018.

  1. anomaly

    anomaly Member

    Joined:
    Jan 8, 2018
    Messages:
    132
    Likes Received:
    14
    Hi!

    I've been benchmarking a RAID 10 setup using 4 × HGST Deskstar NAS 7.2k RPM hard drives (6 TB each), together with 2 × 12G SAS SSDs in RAID 1 serving as a 232 GB CacheCade volume.

    These are the bonnie++ CSV results, which can be fed into Google Charts for visualization:

    Code:
    1.97,1.97,raid10-cachecade-plain-xfs,1,1530444241,125G,,692,99,228278,17,64101,6,1592,97,515270,25,+++++,+++,16,,,,,31536,65,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,12280us,22991us,464ms,46953us,466ms,1233us,212us,90us,24765us,141us,14us,100us
    1.97,1.97,raid10-cachecade-plain-xfs,1,1530451081,125G,,686,99,203211,15,121429,12,1636,98,505617,26,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,13561us,20040us,607ms,22433us,599ms,709us,616us,96us,118us,130us,22us,1074us
    1.97,1.97,raid10-plain-xfs,1,1530412134,125G,,687,99,422396,34,213197,21,1573,95,429972,19,539.7,8,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,12301us,17752us,3307ms,56395us,739ms,85450us,321us,108us,1914us,460us,21us,105us
    1.97,1.97,raid10-plain-xfs,1,1530371478,125G,,693,99,425293,34,212840,20,1000,62,430286,18,413.0,31,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,12311us,16923us,5281ms,628ms,852ms,126ms,612us,100us,2076us,165us,21us,106us
    1.97,1.97,raid10-aes-256-lvm-xfs,1,1530478870,125G,,667,99,427428,36,211251,24,915,61,434150,22,549.7,8,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,12864us,175ms,2835ms,1128ms,766ms,110ms,239us,92us,125us,135us,10us,75us
    1.97,1.97,raid10-aes-256-lvm-xfs,1,1530492453,125G,,682,99,424837,36,211906,24,1180,78,434745,22,570.8,9,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,20168us,175ms,3696ms,505ms,789ms,92669us,515us,165us,123us,162us,14us,80us
    1.97,1.97,raid10-cachecade-aes-256-lvm-xfs,1,1530468991,125G,,636,99,182621,15,71440,8,1578,99,335155,17,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,31697,70,+++++,+++,+++++,+++,17437us,19762ms,585ms,5554us,82347us,7974us,138us,103us,42263us,591us,16us,109us
    1.97,1.97,raid10-cachecade-aes-256-lvm-xfs,1,1530470159,125G,,614,99,176952,14,124478,14,1511,97,340974,17,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,14125us,237ms,393ms,32691us,70111us,1312us,159us,97us,114us,158us,22us,96us
    
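    For reference, each CSV line above comes from a run along these lines (the mount point and output file are illustrative, not the exact paths from my notes):

    Code:
    # quiet mode prints one CSV line per run; -m sets the label that shows up in column 3
    # (/mnt/bench is an illustrative mount point; 125g matches the size column above)
    bonnie++ -d /mnt/bench -s 125g -m raid10-cachecade-plain-xfs -u root -q >> bonnie-results.csv
    # bon_csv2html bonnie-results.csv > bonnie-results.html   # optional chart-friendly output
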
    The different setups:
    1. RAID 10 with CacheCade enabled, plain XFS on top of GPT partitioned disk, no LVM
    2. RAID 10 without CacheCade (disabled), plain XFS on top of GPT partitioned disk, no LVM
    3. Same as above, CacheCade enabled, using LVM + XFS
    4. Same as above, CacheCade disabled, using LVM + XFS
    5. Same as above, CacheCade enabled, using LVM + AES-NI accelerated LUKS AES-256 encryption, XFS (rough build steps for setups 5/6 are sketched after this list)
    6. Same as above, CacheCade disabled, using LVM + AES-NI accelerated LUKS AES-256 encryption, XFS
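
    For setups 5 and 6, the stack is built roughly like this (device names, VG/LV names and the exact cryptsetup options are illustrative; the layering shown, LUKS under LVM, is just one way to assemble it):

    Code:
    # illustrative sketch of the LUKS + LVM + XFS stack used for setups 5/6
    # /dev/sda1 is the GPT partition on the RAID-10 virtual drive
    cryptsetup luksFormat -c aes-xts-plain64 -s 512 /dev/sda1     # AES-256 in XTS mode
    cryptsetup open /dev/sda1 crypt_raid10
    pvcreate /dev/mapper/crypt_raid10
    vgcreate vg_raid10 /dev/mapper/crypt_raid10
    lvcreate -l 100%FREE -n lv_bench vg_raid10
    mkfs.xfs -f -d su=256k,sw=2 /dev/vg_raid10/lv_bench           # same su/sw as the plain-XFS case
    mount /dev/vg_raid10/lv_bench /mnt/bench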

    Now, on to the problem: when I create the XFS filesystem with what should be the correct parameters, I get a warning that the specified stripe width does not match what the volume is reporting to the kernel/IO layer:

    Code:
    root@pve1:~# mkfs.xfs -f -d su=256k,sw=2 /dev/sda1
    mkfs.xfs: Specified data stripe width 1024 is not the same as the volume stripe width 512
    meta-data=/dev/sda1              isize=512    agcount=32, agsize=91561920 blks
             =                       sectsz=4096  attr=2, projid32bit=1
             =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
    data     =                       bsize=4096   blocks=2929981440, imaxpct=5
             =                       sunit=64     swidth=128 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal log           bsize=4096   blocks=521728, version=2
             =                       sectsz=4096  sunit=1 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    
    This is what the OS sees:

    Code:
    # blockdev --getalignoff --getss --getpbsz --getsz --getbsz --getsize /dev/sda
    0
    512
    4096
    23439867904
    4096
    23439867904
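
    As far as I understand, mkfs.xfs compares the specified geometry against the I/O topology the block layer exports (via libblkid), not against the blockdev fields above, so this is what I'd check next (standard sysfs queue attributes / lsblk columns):

    Code:
    # stripe geometry as exported by the block layer for the virtual drive
    cat /sys/block/sda/queue/minimum_io_size    # expected to correspond to the strip size
    cat /sys/block/sda/queue/optimal_io_size    # expected to correspond to the full stripe width
    # or, equivalently:
    lsblk -o NAME,MIN-IO,OPT-IO,PHY-SEC,LOG-SEC /dev/sda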
    
    The relevant storcli information:

    Code:
    # /opt/MegaRAID/storcli/storcli64 /c0/v0 show all
    CLI Version = 007.0606.0000.0000 Mar 20, 2018
    Operating system = Linux 4.15.17-3-pve
    Controller = 0
    Status = Success
    Description = None
    
    
    /c0/v0 :
    ======
    
    ---------------------------------------------------------------
    DG/VD TYPE   State Access Consist Cache Cac sCC      Size Name
    ---------------------------------------------------------------
    3/0   RAID10 Optl  RW     Yes     RWBD  RW  ON  10.915 TB VD_0
    ---------------------------------------------------------------
    
    Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
    Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
    Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
    AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
    Check Consistency
    
    
    PDs for VD 0 :
    ============
    
    --------------------------------------------------------------------------------
    EID:Slt DID State DG     Size Intf Med SED PI SeSz Model                Sp Type
    --------------------------------------------------------------------------------
    252:2    11 Onln   3 5.457 TB SATA HDD N   N  512B HGST HDN726060ALE614 U  -
    252:3     8 Onln   3 5.457 TB SATA HDD N   N  512B HGST HDN726060ALE614 U  -
    252:0     9 Onln   3 5.457 TB SATA HDD N   N  512B HGST HDN726060ALE614 U  -
    252:1    10 Onln   3 5.457 TB SATA HDD N   N  512B HGST HDN726060ALE614 U  -
    --------------------------------------------------------------------------------
    
    EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
    DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
    UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
    Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
    SeSz-Sector Size|Sp-Spun|U-Up|D-Down/PowerSave|T-Transition|F-Foreign
    UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
    CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded
    
    
    VD0 Properties :
    ==============
    Strip Size = 256 KB
    Number of Blocks = 23439867904
    VD has Emulated PD = Yes
    Span Depth = 2
    Number of Drives Per Span = 2
    Write Cache(initial setting) = WriteBack
    Disk Cache Policy = Disabled
    Encryption = None
    Data Protection = Disabled
    Active Operations = None
    Exposed to OS = Yes
    OS Drive Name = /dev/sda
    Creation Date = 24-06-2018
    Creation Time = 06:53:52 AM
    Emulation type = default
    Is LD Ready for OS Requests = Yes
    SCSI NAA Id = 600605b00d6c145022c1fc80a34243b9
    SCSI Unmap = No
    
    The strip size is 256 KB and, as far as I can tell, I am using the correct parameters for mkfs.xfs. Unless I am missing something, why am I getting that strange warning about a mismatch?
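
    If I do the unit conversion myself (the warning seems to be in 512-byte sectors, while the sunit/swidth summary is in 4 KiB blocks), the numbers come out as:

    Code:
    # su=256k, sw=2, expressed in the units mkfs.xfs prints
    echo $(( 256 * 1024 / 512 ))         # 512  -> one strip, in 512-byte sectors
    echo $(( 2 * 256 * 1024 / 512 ))     # 1024 -> the stripe width I asked for, in sectors
    echo $(( 2 * 256 * 1024 / 4096 ))    # 128  -> the same width in 4 KiB blocks (the swidth=128 above)

    So the width I specified is 1024 sectors (512 KB), while whatever the volume reports back as the "volume stripe width" is only 512 sectors (256 KB), i.e. a single strip.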

    This is bothering me because I don't want to run benchmarks that give skewed results (more skewed, anyway, than a benchmark already is; with CacheCade involved, the figures won't be entirely realistic to begin with).

    Hopefully someone with more experience using XFS on hardware RAID can chime in!
     
    #1
  2. anomaly

    anomaly Member

    Joined:
    Jan 8, 2018
    Messages:
    132
    Likes Received:
    14
    Anyone? :)
     
    #2