Thank you for the quick answer!
1). Check the SMART info of the HDDs/SSDs
All disks are healthy and quite cool (50°C when fully loaded).
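For what it's worth, here is roughly how I'd script that check across all drives (a minimal sketch assuming smartmontools' `smartctl` is installed; the device path and the sample attribute line are illustrative, not from my actual disks):

```python
import re
import subprocess

def disk_temperature(smartctl_output: str):
    """Extract the drive temperature from `smartctl -A` output.

    Looks for the Temperature_Celsius attribute (ID 194) and returns
    its raw value as an int, or None if the attribute is absent.
    """
    for line in smartctl_output.splitlines():
        if "Temperature_Celsius" in line:
            # The raw value follows the "-" in the WHEN_FAILED column.
            m = re.search(r"-\s+(\d+)", line)
            if m:
                return int(m.group(1))
    return None

def check_disk(dev: str):
    """Run smartctl against a device (requires smartmontools + root)."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    return disk_temperature(out)

# Illustrative attribute line in the format smartctl -A prints:
sample = ("194 Temperature_Celsius 0x0002   100   100   000"
          "    Old_age   Always       -       50")
print(disk_temperature(sample))  # 50
```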
2). Use other cables (another brand?)
I have already tested cables from 2 different brands :/
It also works fine when half of the disks are fully loaded, even in a layout like this:
cable1: loaded, loaded, loaded, loaded
cable2: loaded, loaded, loaded, loaded
cable3: not working,not working,not working,not working
cable4: not working,not working,not working,not working
also works fine when:
cable1: not working,not working,not working,not working
cable2: not working,not working,not working,not working
cable3: loaded, loaded, loaded, loaded
cable4: loaded, loaded, loaded, loaded
3). Try it with another expander (Intel, Adaptec, Astek, Areca), or try another HBA chipset (like Adaptec/Microsemi)
Hmmm, sounds like a good idea.
I just tested IBM and HP expanders, both at the same time, and got the same issues :/
LSI 9217-8i
*cable1: IBM expander
*cable A: disk1, disk2, disk3, disk4
*cable B: disk5, disk6, disk7, disk8
*cable2: HP expander
*cable A: disk9, disk10, disk11, disk12
*cable B: disk13, disk14, disk15, disk16
4). Make sure there is enough airflow for the SAS chipset. Maybe the passive cooling of the SAS chipset isn't enough, and then you get weird errors because of too high a chipset temperature (sometimes 85-90°C), on both the HBA and the expander
Yes, I have a big, slow 14cm fan on all the cards (the setup is currently on a table). You can touch them; I think they are not more than 45°C.
5). Maybe some problem with the PCIe slot of the mainboard? Or a weird and undocumented mainboard incompatibility with LSI HBAs.
https://forums.servethehome.com/index.php?threads/boot-issue-with-lsi-9201-16i.21255/
as a good example of point 5
Hmmm, I will read it.
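Slot trouble like in that thread often shows up as a degraded link, which is easy to check on Linux. A minimal sketch reading the negotiated vs. maximum link width from sysfs (the PCI address is a placeholder; find the HBA's real one with `lspci`):

```python
from pathlib import Path

def is_degraded(current_width: int, max_width: int) -> bool:
    """True when the slot trained at fewer lanes than the card supports."""
    return current_width < max_width

def link_status(pci_addr: str, sysfs: str = "/sys/bus/pci/devices"):
    """Read the negotiated and maximum PCIe link width from sysfs.

    pci_addr is something like '0000:03:00.0' (placeholder - look up
    the HBA's real address with `lspci`). Linux only.
    """
    dev = Path(sysfs) / pci_addr
    cur = int((dev / "current_link_width").read_text())
    mx = int((dev / "max_link_width").read_text())
    return cur, mx, is_degraded(cur, mx)

# Pure-logic example: an x8 HBA that only trained at x4
print(is_degraded(4, 8))  # True
```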
So far I have tested on 2 mainboards:
The first is a quite nice Chinese Huanan "X79" (C600 chipset), the orange one, but it supports only one CPU and only 64GB of DDR3. Tested with a Xeon E5-1650 v2 @ 4.1GHz.
The issue here is the lack of memory (it doesn't support 32GB modules, and has only 4 slots) and the PCIe lanes being 100% used:
16x for the Mellanox ConnectX-4 100Gb/s
16x for the ASUS quad-NVMe board carrying 3x NVMe (maybe the final design will have 4x NVMe, but the CPU is too slow for that to make sense)
and only 8x left for the HBA - that's why I chose the 9217-8i.
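That lane budget can be sanity-checked in a few lines (the 40-lane figure is what a single E5-1650 v2 provides; slot widths are as listed above):

```python
# PCIe 3.0 lanes provided by a single Ivy Bridge-EP CPU (E5-1650 v2)
CPU_LANES = 40

slots = {
    "Mellanox ConnectX-4 100Gb/s": 16,
    "ASUS quad-NVMe carrier":      16,
    "LSI 9217-8i HBA":              8,
}

used = sum(slots.values())
print(f"{used}/{CPU_LANES} lanes used, {CPU_LANES - used} spare")
```

Every lane the CPU has is consumed, so nothing wider than x8 is available for the HBA on that board.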
The second (probably the final choice) will be the mainboard from a Fujitsu Celsius R920, because the CPUs are too slow for ZFS (the Linux implementation is crappy :/) and the only way to speed it up is a lot of RAM:
2.8GB/s single-thread read/write (on an overclocked 1650 v2 and an overclocked Threadripper),
~6.4GB/s max on 6+ threads (while the 3x NVMe read 10.5GB/s raw, with no filesystem).
It's dual-socket, supports 32GB modules, and has 16 DIMM slots.
cpu0 pcie: x16 x16 x4
cpu1 pcie: x16 x16 x8
but I am not able to configure PCIe slot bifurcation in the BIOS, so each NVMe drive will consume a full slot.
Already tested with 1x E5-1650 v2 and 2x E5-2640 (waiting for an E5-2667 v2) - same issues :/
Maybe I can use 2 cards, but I worry about sending data between the CPUs. From what I read, the LSI 9206-16e uses 2 of the chips from the 9217, each connected via PCIe 3.0 x4 (the card itself is PCIe 3.0 x8) = a 7.2GB/s real speed limit = more than I need (243MB/s for each Toshiba HDD + a fast 128/256MB 550MB/s buffer on each HDD). But it's still better to somehow use the 9217 + expander - it's fast enough and I already have it.
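The bandwidth arithmetic for that card can be written out explicitly (the 985MB/s per-lane figure is the standard PCIe 3.0 rate before protocol overhead; the 7.2GB/s real-world limit and the 243MB/s per-disk rate are the numbers quoted above):

```python
PCIE3_PER_LANE = 985   # MB/s per PCIe 3.0 lane, theoretical
lanes = 8              # the card is PCIe 3.0 x8 (2 chips @ x4 each)
link_limit = 7200      # MB/s, quoted real-world limit of the card
hdd_rate = 243         # MB/s sustained, per Toshiba HDD
n_disks = 16

theoretical = PCIE3_PER_LANE * lanes   # raw link rate before overhead
aggregate = hdd_rate * n_disks         # what all spindles can push
print(f"disks: {aggregate} MB/s, card: ~{link_limit} MB/s real "
      f"({theoretical} MB/s theoretical)")
print("headroom left:", link_limit - aggregate, "MB/s")
```

So 16 spindles at full sustained rate still leave a few GB/s of headroom on either card.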
Thank you!