SAS2 expanders $60 (IBM, LSI chip, Intel alternative)


Davidtt485

New Member
Jan 6, 2019
3
0
1
Bought one of these units; it came with the 602a firmware and had issues detecting various SATA drives across the ports. I flashed it and now I have no issues - it detects all drives. I am even using one of the top ports for two Intel SSDs.

Setup is a 9207-4e4i connected internally to the first top port of the SAS expander, with drives connected to all the other ports, in a Norco 4220.
 

Davidtt485

New Member
Jan 6, 2019
3
0
1
I wouldn't say that at all; in fact I plan on buying another one. With a relatively easy firmware upgrade this performs excellently and can support twenty drives. Price vs. performance is awesome - it cost me like 21 bucks.
 

MichalPL

Active Member
Feb 10, 2019
189
26
28
Hi, I need your help - I am not 100% sure it's an expander issue, but it looks like something between the LSI card and the expander.
Short background: I am trying to build (I hope :) ) a fast NAS server. Final configuration: Ubuntu ZFS, 16x Toshiba N300 8TB, LSI 9217-8i + IBM expander, 3x NVMe 970 Evo for cache, Mellanox 100Gb/s, 2x 8-core Xeon E5 @ 4.0GHz, 512GB DDR3 @ 1.8GHz. In theory it should do ~3.9GB/s from HDD, around 6.4GB/s from cache (the CPU is too slow; I would need ~6GHz single-core performance for max speed), and 11GB/s from RAM.

Temporarily I have connected 16x old WD 0.5TB 7200rpm drives (~110MB/s) or 14x SSDs (~520MB/s) for testing, and the problem is here:
[ 1826.294940] print_req_error: I/O error, dev sdm, sector 63248
[ 1826.294968] sd 0:0:12:0: [sdm] tag#62 FAILED Result: hostbyte=DID_SOFT_ERROR driverbyte=DRIVER_OK
[ 1826.294971] sd 0:0:12:0: [sdm] tag#62 CDB: Write(10) 2a 00 00 01 00 10 00 01 00 00
[ 1826.294973] print_req_error: I/O error, dev sdm, sector 65552
[ 1826.294996] sd 0:0:12:0: [sdm] tag#5 FAILED Result: hostbyte=DID_SOFT_ERROR driverbyte=DRIVER_OK

Under very intense HDD I/O the errors arrive (with only 8 HDDs of the array everything is fine; with more, I get I/O errors).
The card is an LSI 9217-8i connected to an IBM 46M0997 expander (firmware 510A) via 2 cables (same issue with 1 cable, just slower).

There are no issues when I make 2 ZFS volumes (8x HDD in each) and use one volume at a time (using both at the same time at full speed gives the same issues).

I bought an LSI 9201-16e and everything works fine, but due to PCIe 2.0 x8 the real speed is 2.9GB/s (too slow :/);
with the LSI 9217-8i on 8x SSD the real speed is 4.4GB/s;
with the LSI 9217-8i + expander it is 3.85GB/s, but the I/O errors slow down ZFS or cause errors.

I have the same issue with an HP 487738-001 expander (I/O errors), and the same issue when connecting (any) expander to the LSI 9201-16e :/ though there everything works fine up to 15 HDDs; with 16 I get the same kind of I/O errors.

I am using the standard firmware on the LSI cards. Is IT mode faster, or will it fix the issues? (I see I have reached the max interface speeds on the LSI 9201-16e; IOPS not tested yet.) Does IT mode also support expanders?

I don't know what is wrong :/ The only solution I see at the moment is a 16-port 6Gbps card with 8x PCIe 3.0 lanes or 16x PCIe 2.0 lanes, meaning an LSI 9206-16e or LSI 9202-16e, but that means cables outside the PC case (RGB lol !!!). Thank you!
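
One quick way to see whether those DID_SOFT_ERRORs line up with link-level problems between the HBA, the expander and the drives is to read the SAS PHY counters that the Linux SAS transport class exposes in sysfs - a minimal sketch, assuming the mpt2sas/mpt3sas driver is loaded and these sysfs paths exist on your kernel:
Code:
# Print the negotiated link rate and the low-level error counters for every SAS PHY.
# (HBA phys and expander phys both show up under /sys/class/sas_phy on most kernels;
#  the exact layout is an assumption - adjust if your kernel differs.)
for phy in /sys/class/sas_phy/phy-*; do
    echo "== ${phy##*/} =="
    cat "$phy/negotiated_linkrate"
    grep . "$phy/invalid_dword_count" "$phy/running_disparity_error_count" \
           "$phy/loss_of_dword_sync_count" "$phy/phy_reset_problem_count"
done
Counters that keep climbing on one cable or one expander phy usually point at cabling/backplane rather than at the drives themselves.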
 

ari2asem

Active Member
Dec 26, 2018
745
128
43
The Netherlands, Groningen
MichalPL said: Hi, I need your help - I am not 100% sure it's an expander issue, but it looks like something between the LSI card and the expander. (...)
An I/O error can have more than one cause, or a combination of causes.

1) Check the SMART info of the HDDs/SSDs (a quick check loop is sketched at the end of this post).
2) Use other cables (another brand?).
3) Try another expander (Intel, Adaptec, Astek, Areca) or another HBA chipset (like Adaptec/Microsemi).
4) Make sure there is enough airflow for the SAS chipset. Passive cooling of the SAS chip is sometimes not enough, and then you get weird errors because of too-high temperatures (sometimes 85-90°C) of the SAS chipset (on both the HBA and the expander).
5) Maybe some problem with the PCIe slot of the mainboard, or a weird, undocumented mainboard incompatibility with LSI HBAs.

https://forums.servethehome.com/index.php?threads/boot-issue-with-lsi-9201-16i.21255/

as a good example of point 5
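
For point 1, a minimal loop (assuming smartctl can talk to the SATA drives through the HBA, which it normally can with LSI IT-mode cards) that pulls the health verdict plus the attributes that most often explain bus-level errors:
Code:
# Overall health plus reallocated/pending sectors and the CRC counter;
# a rising UDMA_CRC_Error_Count usually means cabling/backplane, not the disk itself.
# The /dev/sd[a-z] glob is an assumption - adjust to your device names.
for d in /dev/sd[a-z]; do
    echo "== $d =="
    smartctl -H -A "$d" | egrep -i 'overall-health|Reallocated_Sector|Current_Pending|UDMA_CRC_Error'
done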
 

MichalPL

Active Member
Feb 10, 2019
189
26
28
Thank you for the quick answer :)

1) Check the SMART info of the HDDs/SSDs
All disks are healthy and fairly cool (50°C when fully loaded).

2) Use other cables (another brand?)
I have 2 different brands :/ already tested.
It also works fine when half of the disks are fully loaded, even when it is like this:
cable1: loaded, loaded, loaded, loaded
cable2: loaded, loaded, loaded, loaded
cable3: idle, idle, idle, idle
cable4: idle, idle, idle, idle

and also works fine when:
cable1: idle, idle, idle, idle
cable2: idle, idle, idle, idle
cable3: loaded, loaded, loaded, loaded
cable4: loaded, loaded, loaded, loaded


3) Try another expander (Intel, Adaptec, Astek, Areca) or another HBA chipset (like Adaptec/Microsemi)
Hmm, sounds like a good idea.

I just tested the IBM and the HP, both at the same time, and got the same issues :/

LSI 9217-8i
*cable1: IBM expander
*cable A: disk1, disk2, disk3, disk4
*cable B: disk5, disk6, disk7, disk8
*cable2: HP expander
*cable A: disk9, disk10, disk11, disk12
*cable B: disk13, disk14, disk15, disk16

4) Make sure there is enough airflow for the SAS chipset; passive cooling is sometimes not enough, and then you get weird errors because of too-high temperatures (sometimes 85-90°C) on both the HBA and the expander
Yes, I have a big, slow 14cm fan over all the cards (the setup is currently on a table). You can touch them; I think they are not more than 45°C.

5) Maybe some problem with the PCIe slot of the mainboard, or a weird, undocumented mainboard incompatibility with LSI HBAs

https://forums.servethehome.com/index.php?threads/boot-issue-with-lsi-9201-16i.21255/

as a good example of point 5
Hmm, I will read it.

At the moment it has been tested on 2 mainboards:

One is a quite nice Chinese Huanan "X79" board (C600 chipset, the orange one), but it supports only one CPU and only 64GB of DDR3. Tested with a Xeon E5-1650 v2 @ 4.1GHz.

The issue here is the lack of memory (it doesn't support 32GB modules and has only 4 slots) and the PCIe lanes being 100% used:
16x for the Mellanox ConnectX-4 100Gb/s
16x for the ASUS 4x NVMe board holding 3x NVMe (maybe the final design will have 4x NVMe, but the CPU is too slow for that to make sense)
and only 8x left for the HBA - this is why I chose the 9217-8i.

The second (probably final) choice will be the mainboard from a Fujitsu Celsius R920, because the CPUs are too slow for ZFS (the Linux implementation is crappy :/) and the only way to speed it up is a lot of RAM.
2.8GB/s single-thread read/write (on an overclocked 1650 v2 and an overclocked Threadripper),
~6.4GB/s max on 6+ threads (while reading 10.5GB/s from 3x NVMe with no filesystem).

It is dual socket, supports 32GB modules, and has 16 DIMM slots.
cpu0 PCIe: x16 x16 x4
cpu1 PCIe: x16 x16 x8
But I am not able to select the PCIe slot configuration in the BIOS, so each NVMe will consume a full slot.

Already tested with 1x E5-1650 v2 and 2x E5-2640 (waiting for the E5-2667 v2 :) ) - same issues :/

Maybe I can use 2 cards, but I am worried about sending data between the CPUs. From what I read, the LSI 9206-16e uses two of the chips from the 9217, each connected with 4x PCIe 3.0 lanes (the card is PCIe 3.0 x8) = a ~7.2GB/s real speed limit (rough math sketched below) = more than I need (243MB/s for each Toshiba HDD plus the fast ~550MB/s 128/256MB drive buffer). But it would still be better to somehow use the 9217 + expander - it's fast enough and I already have it :)
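
Rough numbers behind that estimate (a back-of-the-envelope sketch; ~985MB/s usable per PCIe 3.0 lane after 128b/130b encoding is the assumption here):
Code:
# PCIe 3.0 x8 ceiling vs. what 16 spinning disks can actually deliver
echo "PCIe 3.0 x8  : ~$((985 * 8)) MB/s raw, roughly 7200 MB/s after protocol overhead"
echo "16x N300 HDD : ~$((243 * 16)) MB/s sequential"   # well under the x8 ceiling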

thank you
 

TrevInCarlton

New Member
Sep 19, 2018
17
4
3
Nottingham, UK
MichalPL said: Hi, I need your help - I am not 100% sure it's an expander issue, but it looks like something between the LSI card and the expander. (...)
You need to update the firmware on the IBM 46M0997 expander to 634a. I had weeks of problems until I did the firmware update; I am running FreeNAS without any problems now. It is not that easy, but it is well covered in this thread, particularly in all the input by "The Bloke". I could not have sorted out my problems without his input.
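
For anyone doing it from Linux, the flash itself usually comes down to sg3_utils against the expander's enclosure device - a minimal sketch, not The Bloke's exact procedure; the /dev/sg4 device name is an assumption, and dl-634a.rd2 is the image from the IBM firmware package mentioned later in the thread:
Code:
# Find the expander's enclosure device (type "enclosu"), e.g. /dev/sg4 - an assumption here.
lsscsi -g
sg_write_buffer --mode=dmc_offs_defer --bpw=4096 --in=dl-634a.rd2 /dev/sg4
sg_write_buffer --mode=activate_mc /dev/sg4
# power-cycle the expander afterwards so the new image is running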
 
Last edited:

RedX1

Active Member
Aug 11, 2017
134
147
43
Hi

A good video tutorial on how to update the IBM 46M0997 expander to the 634a firmware can be found here.

(embedded video)

It worked well for me; you might find it very useful too.
RedX1
 
  • Like
Reactions: epicurean

Davidtt485

New Member
Jan 6, 2019
3
0
1
The fact that you are having the same issues with two different expanders means it's either the HBA or the drives. Because this problem appears when you add drives, I vote for an HBA issue, possibly due to overheating; I have had good luck with reapplying thermal paste.
 

MichalPL

Active Member
Feb 10, 2019
189
26
28
Gentlemen, it's working! Updating to 634a fixed the problem! ... kind of.

... Meanwhile I received the 4.0GHz 8-core "old" CPU (Xeon E5-2667 v2) and tested performance, and with the 634a firmware it does not look good :/

LSI 9217 -> IBM 46M0997 -> 16x old WD 0.5TB 7200rpm HDDs is fully working now (the HP 487738-001 with firmware updated to 2.10 still has the same issues), but the performance is very poor :/ (also tested with 10x SSD).

With HDDs, instead of 1.9GB/s I now get about 1.4GB/s, and with SSDs instead of 3.8GB/s it's 2.5GB/s; the maximum speed per drive is 263MB/s instead of 520MB/s (they are still connected as SATA3 - 6Gbit/s). The expander was connected via 2 cables. The LSI card alone still does 4.4GB/s with 8x SSD and no expander.

Maybe... it will be OK for the Toshiba N300, where it would be 243MB/s anyway, but it still slows down the older ~115MB/s HDDs a lot = I need a faster solution.
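
That per-drive ceiling does look suspiciously like a 3Gb/s link, so one quick check is what speed each drive actually negotiated behind the expander (assuming smartctl can reach the SATA disks through the HBA, and the /dev/sd[a-z] glob matches your drives):
Code:
# A line like "SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)" would confirm a 3Gb/s link.
for d in /dev/sd[a-z]; do
    echo -n "$d: "; smartctl -i "$d" | grep 'SATA Version'
done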
 

MichalPL

Active Member
Feb 10, 2019
189
26
28
Just received the LSI 9206-16e, and... the results are not good :/ It's definitely not double the performance of the LSI 9217.
If you don't care much about maximum performance, a good expander (with the right firmware) + LSI 9217 is not a bad choice.

Performance on ZFS (with the old 16x 0.5TB WD HDDs and a Xeon E5-1650 v2 overclocked to 4.1GHz on all cores, DDR3 2133MHz):

LSI 9201 + LSI 9217 - 16xHDD (8x HDD each) - 1.9GB/s
LSI 9206 - 16x HDD - 1.6GB/s (900 MB/s on 8x HDD)
LSI 9217 + Exp. - 16x HDD - ~1.4GB/s
LSI 9201 - 16x HDD - ~1.4GB/s
LSI 9217 - 8x HDD - 901MB/s

Peak performance (reading) on SSD without filesystem:
LSI 9206 - 4.95GB/s
LSI 9217 - 4.4GB/s
LSI 9217 + Exp - 3.85GB/s (on firmware 510a)
LSI 9201 - 2.9GB/s
LSI 9217 + Exp - 2.5GB/s (on firmware 634a)
 
  • Like
Reactions: nikalai

DanielWood

Member
Sep 14, 2018
44
17
8
MichalPL said: Temporarily I have connected 16x old WD 0.5TB 7200rpm drives (~110MB/s) or 14x SSDs (~520MB/s) for testing, and the problem is here:
Code:
[ 1826.294940] print_req_error: I/O error, dev sdm, sector 63248
[ 1826.294968] sd 0:0:12:0: [sdm] tag#62 FAILED Result: hostbyte=DID_SOFT_ERROR driverbyte=DRIVER_OK
[ 1826.294971] sd 0:0:12:0: [sdm] tag#62 CDB: Write(10) 2a 00 00 01 00 10 00 01 00 00
[ 1826.294973] print_req_error: I/O error, dev sdm, sector 65552
[ 1826.294996] sd 0:0:12:0: [sdm] tag#5 FAILED Result: hostbyte=DID_SOFT_ERROR driverbyte=DRIVER_OK
I recently built an array using 48x WD Black 4TB SATA disks connected to 2x HP SAS expanders and was encountering a similar issue on an LSI 9211-8i (H200), a 9205-8i (HP H220), and on a Supermicro onboard Intel SCU. The problem would only manifest under heavy load (20 hours into a scrub): random drives would get numerous errors like the above and then start dropping from the array.

I tried numerous things, but what finally fixed my issue was disabling NCQ. I couldn't do it on the drives - disabling NCQ per-drive just wouldn't take. I suspect it's something to do with the HP SAS expander, as I could do it on the 2x hot spares connected to the onboard SATA or one of the SAS controllers.

In the end, since this isn't a super-high-performance filer, I just disabled NCQ system-wide.
My notes on disabling it system-wide on CentOS 7 (you can get more specific about which bus you kill NCQ on, but that wasn't a concern for me):
Code:
# Edit /etc/default/grub: add libata.force=noncq to GRUB_CMDLINE_LINUX
# Then regenerate the grub config (EFI or legacy BIOS path):
if [ -d /sys/firmware/efi ]; then grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
else grub2-mkconfig -o /boot/grub2/grub.cfg; fi
I have since been doing full scrubs every week for the past two months, and the same READ SENSE errors have not cropped up again. I suspect this has something to do with SATA tunneling over SAS and it locking up the bus while it waits for a write to be acknowledged. But I'm just postulating, as I don't understand the SATA-over-SAS mechanism well enough to do anything but guess.
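
For reference, the per-device form of the same change (the one that reportedly would not stick behind the HP expander) is just the queue-depth knob in sysfs; setting it to 1 effectively disables NCQ for that disk until the next reboot:
Code:
# Drop every sd* device to a queue depth of 1 (i.e. no NCQ); needs root.
# Applying it to all sd* devices at once is an assumption - narrow the glob if needed.
for q in /sys/block/sd*/device/queue_depth; do
    echo 1 > "$q"
done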
 

MichalPL

Active Member
Feb 10, 2019
189
26
28
On the HP expander card I was not able to get error-free reads. On the IBM expander, yes, with firmware 634a.
Edit: I mixed up the results for the IBM (46M0997) and HP (487738-001) expanders. The IBM never achieved 550MB/s, only 260MB/s - the IBM limits SATA to 3Gbps (it can do 6Gbps via the SAS protocol); the HP reached it, but with errors.

Here is how to check it quickly (in seconds) and also measure the read speed of the whole system.
"sda1", "sdb1", ... are just samples; put your own partitions here (1 per HDD). The test runs for 15 seconds, which is long enough.
Code:
for disk in sda1 sdb1 sdc1 sdd1 sde1 sdf1 sdg1 sdh1 ; do dd if=/dev/${disk} iflag=direct bs=100M of=/dev/zero  & done; sleep 15; killall -USR1 dd; sleep 1; killall dd
Then you can see the I/O errors, or just notice when one HDD is doing, for example, 76MB/s instead of 210MB/s.

What I suggest is to use 3x 16-port PCIe 3.0 controllers if you have enough PCIe lanes free; that would give you 10GB/s of HDD access ;) (there is no CPU on the market that can use that with ZFS ;) I think the max is ~7.3GB/s with a 4.5GHz Intel on DDR4), or with 3x IBM expanders maybe you will reach 4GB/s.
 
Last edited:

DanielWood

Member
Sep 14, 2018
44
17
8
MichalPL said: On the HP expander card I was not able to get error-free reads. On the IBM expander, yes, with firmware 634a. (...) Here is how to check it quickly and also measure the read speed of the whole system. (...)
I'll run that on Monday and we'll see if it throws any errors with and without NCQ. (I haven't yet tossed this into production, so causing the errors is a non-issue.)

I'll use a slight variation of your command to make my life a bit easier (the find already returns full /dev paths, so no /dev/ prefix is needed):
Code:
for disk in `find /dev/disk/by-vdev/*-part1`; do dd if=${disk} iflag=direct bs=100M of=/dev/zero  & done; sleep 15; killall -USR1 dd; sleep 1; killall dd
I'm not very concerned with increasing performance past what it is now (I need to run some SMB3 benches), as I am generally satisfied with it, even with a single link to each Expander. Stability is of more concern to me.
 

MichalPL

Active Member
Feb 10, 2019
189
26
28
So the IBM expanders should work well. With 1 SAS link connected they still have 5 SAS ports out = 20 SATA drives, so with your 2 cards and 2 of the 8-port expanders you can connect all 48 HDDs, at a speed of around 2GB/s per controller.

Yes, SMB3 benchmarks are tricky :/ I achieved a max of ~4100MB/s on a Mellanox ConnectX-3 40GbE and... 3900MB/s on a ConnectX-4 100GbE. What is even trickier, everything works better (faster) on the cheap Chinese motherboard with the E5-1650 v2 (six cores, overclocked to 4.1GHz) than on the dual E5-2667 (8 cores each, 4.0GHz), but I need the second one so I can have a lot of RAM for cache and deduplication.
 

DanielWood

Member
Sep 14, 2018
44
17
8
MichalPL said: Yes, SMB3 benchmarks are tricky :/ I achieved a max of ~4100MB/s on a Mellanox ConnectX-3 40GbE and... 3900MB/s on a ConnectX-4 100GbE. (...) I need the second one so I can have a lot of RAM for cache and deduplication.
I got hit by a nasty bug in my ultra-ghetto SAN test bed (Optiplex 7010, SolarFlare 10GbE NIC, i7-3770, 24GB DDR3).

I had dedupe + compression + 4K running on an ADATA SX8200 NVMe. It worked great - until I started migrating VMs to another datastore, at which point ZFS hung after the transfer was complete while it did a ton of writes. It completely hung all NFS traffic while in this state. Once it was done cleaning up the deleted files (I assume it was clearing the entries from the DDT), it returned to normal. But the write activity came in spikes (10 seconds on, 10 seconds idle), and there was still 12GB of RAM free (the DDT was 6GB).

My point is: since you are running dedupe, you might want to check for this scenario now rather than later.
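
A quick way to keep an eye on it (the pool name "tank" is just a placeholder here) is to watch how big the dedup table has grown and whether it still fits comfortably in RAM:
Code:
zpool status -D tank    # appends a DDT summary (entries, in-core and on-disk size) to the status output
zdb -DD tank            # more detailed DDT histogram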
 

MichalPL

Active Member
Feb 10, 2019
189
26
28
I am using 3x 970 Evo as the SSDs (faster than the ADATA - though I have the same ADATA at home and it works well), but the results over Ethernet are poor :/ BTW, the new Intel QLC SSDs (660p) are great! (I just bought one for my desktop at work; the 1TB does 1.9GB/s, much slower than the 970 Evo and the ADATA but 2.5x cheaper, so it can make a fast RAID 0 - not sure if it's good for cache (200TBW), but maybe good if you just replace many HDDs with many SSDs.)

Today I found 1h40min to run tests :), it still needs work :/ (X: is a network drive)

(screenshot: upload_2019-3-20_3-16-33.png)
In ATTO I also got 4.8GB/s write, still 2x too slow :/

Currently testing on a TR 1950X overclocked to 4.1GHz (almost 550W :/), an E5-1650 v2 overclocked to 4.1GHz, and an E5-2603 @ 1.8GHz.
2x E5-2667W is too complicated to test at this stage (until I get to ~10GB/s); dual CPU = more problems :/

I have no good idea how to speed it up :/ The only idea I have is to raise the voltage to 1.45V and run the 1650 v2 at 5.0GHz like an i9-9900K :/
and likewise set all the DDR3 ECC to 2133MHz - but it's a server! Not good to be changing voltages.
(screenshot: upload_2019-3-20_3-20-15.png)
 

istqian

New Member
Jun 9, 2016
14
14
3
Fukuoka, Japan
I would like to share a firmware update procedure under FreeBSD 12.0.
My HW is:
Supermicro X9DRL, Dell H200 (IT firmware P20), IBM SAS expander.
HDDs went missing at power-on; re-plugging the cable would fix it.

Since my OS is FreeBSD, I don't want to reinstall Windows/Linux, and I found that sg3_utils is available from ports.
So here we go:
PS: before running the commands below, pkg install sg3_utils first.
Code:
## download and unzip firmware

unzip ibm_fw_exp_6gb-sas-634a_linux_32-64.bin

## get the expander device name by:

camcontrol devlist

## update firmware by: (firmware name is dl-634a.rd2, device name ses0)

sg_write_buffer --mode=dmc_offs_defer --bpw=4096 --in=dl-634a.rd2 /dev/ses0
sg_write_buffer --mode=activate_mc /dev/ses0

## have a reboot.
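
## optional sanity check (assuming the same ses0 device): after the reboot,
## a standard INQUIRY should report the new product revision

sg_inq /dev/ses0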
 
  • Like
Reactions: BLinux

clubman

New Member
Sep 15, 2019
5
4
3
Found this thread when searching/googling for information about these SAS expanders (the IBM one).
I too have problems when connecting drives with the early firmware (510A), and am going to do a few tests before I look at going to the most current firmware.
My SAS expander is also being powered by a mining-type PCI Express slot unit, but I am going to make up a cable to go from a 12V DC power pack to Molex to power the card. I am also going to add a fan connector to the mining PCIe card (to cool the heatsink on the card).
Will put some pics up and detail my results once done.