NetApp Quad Port QSFP HBA X2065A-R6 111-00341 PCIe Card


MasterCATZ

New Member
Jun 8, 2011
NetApp Quad Port QSFP HBA X2065A-R6 111-00341 PCIe Card | eBay

Does anyone know about this?

Ideally, I am looking for 4 internal ports with 1-2 external ports,

for a 16-bay rack that houses the PC, plus an expander in a 24-bay rack shell.

I don't need RAID functions, just something for my Linux box running SnapRAID for my multimedia.

LSI SAS9201 quad-external cards seem to go for double that price, and double again for the internal version.

However, what's up with this card?
The Sixteen-port, Half-length Lsi Sas 9201-16E; 6GB/S Per Port. In The Box: Ls | eBay

It has the smallest heatsink of them all?


Or should I go for this one?
LSI SAS9201-16e 16 Port 6Gb/s SAS SATA PCI-E HBA Adapter JBOD RAID H3-25577-00A | eBay

One of my old M1015s died along with its PCIe slot, so I'm looking for options. I would rather buy new cables and run the external ports back inside to serve as internal ports than pay twice as much for the internal-port version of the card, or better yet just run a PCIe extender cable and mount the card somewhere else.
 

nthu9280

Well-Known Member
Feb 3, 2016
San Antonio, TX
I'm not familiar with the first one, but since the logo in the pic is PMCS (PMC-Sierra), I'm guessing it's some sort of Adaptec.

The 9201-16e should be fine for your use case. However, SFF-8088 to SFF-8087 cables are not cheap, to my knowledge. If you have two free PCIe slots, 2 x 8i HBAs could work out to about the same. 16i cards are expensive for some reason. Also, both of these are full-height cards.
I'm also not sure about the quality of PCIe extender cables for this purpose.

Sent from my Nexus 6 using Tapatalk
 

MasterCATZ

New Member
Jun 8, 2011
Thanks for the reply. I actually have 2x cables around here somewhere, but I can pick up others for $16.

No spare PCIe slots, which is why I need to get this done with only one card now. The only other option would be:


IBM 46M0907 SAS HBA PCI Express 2.0 x8 SAS Controller | eBay

But I am unsure whether those internal ports are plain SATA or mini-SAS ports that break out into 4x SATA, and whether all ports work at the same time or it's 3x internal / 1x external.
 

vanfawx

Active Member
Jan 4, 2015
Vancouver, Canada
I wouldn't recommend the NetApp controllers. They use QSFP connectors rather than SFF-8088 SAS connectors, so the cables will be more expensive and probably rarer.

That IBM SAS HBA's internal ports are SAS, but will work with standard SATA drives. So you get 4 external ports and 4 internal ports, 8 total.
 

MasterCATZ

New Member
Jun 8, 2011
And for the hell of me I cannot get the PM8003 working (or it's DOA).
Apparently, Linux works with the X2065A-R6 card as a PMC PM8001 SAS controller.
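For anyone else trying one of these, a quick way to confirm which kernel driver has claimed the card is roughly the following (the bus address is just a placeholder, substitute the one reported by the first command):

lspci -nn | grep -i -E 'sas|pmc'    # find the card and note its bus address
sudo lspci -k -s 03:00.0            # "Kernel driver in use:" should show pm80xx (pm8001 on older kernels)
lsmod | grep -E 'pm80xx|pm8001'     # confirm the module is actually loaded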
 

Stefan75

Member
Jan 22, 2018
Switzerland
Did you read this? JBOD SAN Driver Woes (NetApp DS4243 / x2065a-r6 ) • r/homelab
"I work for NetApp (commenting unofficially, etc). The X2065 is a custom card based on the PMC8001 SAS chipset - works in FreeBSD and Linux only, not Windows. You might need to pop the IOM-B out of the back of the shelf to get it to work on them too."

I was thinking of getting one of these controllers ($20), but the non-existent Windows drivers kill it for me.
I guess I'm going to get the QSFP to SFF-8088 adapter cables ($50) instead.
 

MasterCATZ

New Member
Jun 8, 2011
With the newer kernels (4.15+) I am having issues.
The first NetApp HBA SAS 4-port 3/6Gb QSFP PCIe 111-00341 B0 controller (PM8003) was actually DOA,
so I bought another dozen of them, which had rev5 firmware flashed onto them, and they have been reading pre-existing data / formatted disks just fine for the last 6 months.

I got down to 1TB of free space, so I went to put another shelf online and checked the disks for a week, then ran into formatting issues and drives just randomly dropping offline.

And I know the drives are 100% good; I just gave them a full badblocks scan, and 160 hours later they were all confirmed good.
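(For reference, a full write-mode badblocks pass is roughly the following; the device name is just an example, and the -w pass destroys all data on the target:)

sudo badblocks -wsv -b 4096 /dev/sdX    # destructive four-pattern write test, wipes the drive
sudo badblocks -sv -b 4096 /dev/sdX     # read-only pass instead, if the drive still holds data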

I am wondering if

popping the IOM-B out of the back

will help, but whenever I do this all the fans go to full speed anyway. Is there any way to stop that?
*edit* Never mind, I just realized that was a quote of theducks. I have their email address somewhere; I'll contact them again. NetApp only offers support if you buy directly from them.

For now, I cannot even get one shelf working by itself anymore to format disks. I have tried with just 1 cable and with all 4 cables plugged into it.


I can throw heaps of data onto the disks and read it back fine (~6Gb/s), but trying to format them in the NetApp DS4246 shelf is a completely different story:

aio@aio:~$ sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -v -L SRD1NA1B1 -m 1 /dev/sdav1
[sudo] password for aio:
mke2fs 1.44.1 (24-Mar-2018)
fs_types for mke2fs.conf resolution: 'ext4'
Filesystem label=SRD1NA1B1
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183148544 inodes, 732566016 blocks
7325660 blocks (1.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2881486848
22357 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Filesystem UUID: ff9587f0-9981-4e67-bef8-dda7b2825f27
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: 13392/22357
and it hangs, and/or causes the PC to freeze.


[ 1746.145191] pm80xx mpi_ssp_completion 2086:task 0x00000000c79a859a done with io_status 0x1 resp 0x0 stat 0x8d but aborted by upper layer!
[ 1746.145209] pm80xx pm8001_abort_task 1276:rc= -5
[ 1746.145214] sas: sas_scsi_find_task: task 0x00000000c79a859a is done
[ 1746.145215] sas: sas_eh_handle_sas_errors: task 0x00000000c79a859a is done
[ 1746.145219] sas: trying to find task 0x00000000d743de27
[ 1746.145220] sas: sas_scsi_find_task: aborting task 0x00000000d743de27
[ 1746.146535] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.146539] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.146728] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.146729] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.148049] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.148051] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.148276] pm80xx mpi_ssp_completion 1874:sas IO status 0x1
[ 1746.148277] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.148280] pm80xx mpi_ssp_completion 2086:task 0x00000000d743de27 done with io_status 0x1 resp 0x0 stat 0x8d but aborted by upper layer!
[ 1746.148295] pm80xx pm8001_abort_task 1276:rc= -5
[ 1746.148299] sas: sas_scsi_find_task: task 0x00000000d743de27 is done
[ 1746.148300] sas: sas_eh_handle_sas_errors: task 0x00000000d743de27 is done
[ 1746.148304] sas: trying to find task 0x00000000ba0c786d
[ 1746.148305] sas: sas_scsi_find_task: aborting task 0x00000000ba0c786d
[ 1746.149444] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.149446] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.149586] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.149588] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.150846] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.150849] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.150868] sas: done REVALIDATING DOMAIN on port 0, pid:9014, res 0x0
[ 1746.151010] pm80xx mpi_ssp_completion 1874:sas IO status 0x1
[ 1746.151012] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.151015] pm80xx mpi_ssp_completion 2086:task 0x00000000ba0c786d done with io_status 0x1 resp 0x0 stat 0x8d but aborted by upper layer!
[ 1746.151023] pm80xx pm8001_abort_task 1276:rc= -5
[ 1746.151024] sas: sas_scsi_find_task: task 0x00000000ba0c786d is done
[ 1746.151025] sas: sas_eh_handle_sas_errors: task 0x00000000ba0c786d is done
[ 1746.151028] sas: trying to find task 0x000000007313481d
[ 1746.151029] sas: sas_scsi_find_task: aborting task 0x000000007313481d
[ 1746.151159] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.151160] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.151280] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.151281] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.151409] pm80xx mpi_ssp_completion 1874:sas IO status 0x17
[ 1746.151410] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.151549] pm80xx mpi_ssp_completion 1874:sas IO status 0x1
[ 1746.151550] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1746.151552] pm80xx mpi_ssp_completion 2086:task 0x000000007313481d done with io_status 0x1 resp 0x0 stat 0x8d but aborted by upper layer!
[ 1746.151559] pm80xx pm8001_abort_task 1276:rc= -5
[ 1746.151561] sas: sas_scsi_find_task: task 0x000000007313481d is done
[ 1746.151562] sas: sas_eh_handle_sas_errors: task 0x000000007313481d is done
[ 1746.151652] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 97 tries: 1
[ 1746.152377] sas: done REVALIDATING DOMAIN on port 0, pid:10692, res 0x0
[ 1746.166677] scsi_io_completion: 43 callbacks suppressed
[ 1746.166685] sd 1:0:1:0: [sdav] tag#0 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.166690] sd 1:0:1:0: [sdav] tag#0 CDB: Write(16) 8a 00 00 00 00 00 73 00 09 00 00 00 04 00 00 00
[ 1746.166692] print_req_error: 43 callbacks suppressed
[ 1746.166694] print_req_error: I/O error, dev sdav, sector 1929382144
[ 1746.166701] buffer_io_error: 6774 callbacks suppressed
[ 1746.166703] Buffer I/O error on dev sdav1, logical block 241172512, lost async page write
[ 1746.166714] Buffer I/O error on dev sdav1, logical block 241172513, lost async page write
[ 1746.166718] Buffer I/O error on dev sdav1, logical block 241172514, lost async page write
[ 1746.166721] Buffer I/O error on dev sdav1, logical block 241172515, lost async page write
[ 1746.166724] Buffer I/O error on dev sdav1, logical block 241172516, lost async page write
[ 1746.166728] Buffer I/O error on dev sdav1, logical block 241172517, lost async page write
[ 1746.166731] Buffer I/O error on dev sdav1, logical block 241172518, lost async page write
[ 1746.166733] Buffer I/O error on dev sdav1, logical block 241172519, lost async page write
[ 1746.166736] Buffer I/O error on dev sdav1, logical block 241172520, lost async page write
[ 1746.166738] Buffer I/O error on dev sdav1, logical block 241172521, lost async page write
[ 1746.166860] sd 1:0:1:0: [sdav] tag#1 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.166862] sd 1:0:1:0: [sdav] tag#1 CDB: Write(16) 8a 00 00 00 00 00 72 c1 05 00 00 00 04 00 00 00
[ 1746.166863] print_req_error: I/O error, dev sdav, sector 1925252352
[ 1746.167545] sd 1:0:1:0: [sdav] tag#2 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.167547] sd 1:0:1:0: [sdav] tag#2 CDB: Write(16) 8a 00 00 00 00 00 72 c1 01 00 00 00 04 00 00 00
[ 1746.167548] print_req_error: I/O error, dev sdav, sector 1925251328
[ 1746.170597] sd 1:0:1:0: [sdav] tag#3 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.170601] sd 1:0:1:0: [sdav] tag#3 CDB: Write(16) 8a 00 00 00 00 00 72 c0 fd 00 00 00 04 00 00 00
[ 1746.170607] print_req_error: I/O error, dev sdav, sector 1925250304
[ 1746.177114] sd 1:0:1:0: [sdav] tag#4 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.177119] sd 1:0:1:0: [sdav] tag#4 CDB: Write(16) 8a 00 00 00 00 00 72 c0 f9 00 00 00 04 00 00 00
[ 1746.177121] print_req_error: I/O error, dev sdav, sector 1925249280
[ 1746.177276] sd 1:0:1:0: [sdav] tag#5 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.177278] sd 1:0:1:0: [sdav] tag#5 CDB: Write(16) 8a 00 00 00 00 00 72 c0 f5 00 00 00 04 00 00 00
[ 1746.177279] print_req_error: I/O error, dev sdav, sector 1925248256
[ 1746.177417] sd 1:0:1:0: [sdav] tag#6 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.177419] sd 1:0:1:0: [sdav] tag#6 CDB: Write(16) 8a 00 00 00 00 00 72 c0 f1 00 00 00 04 00 00 00
[ 1746.177420] print_req_error: I/O error, dev sdav, sector 1925247232
[ 1746.177565] sd 1:0:1:0: [sdav] tag#7 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.177567] sd 1:0:1:0: [sdav] tag#7 CDB: Write(16) 8a 00 00 00 00 00 72 c0 ed 00 00 00 04 00 00 00
[ 1746.177568] print_req_error: I/O error, dev sdav, sector 1925246208
[ 1746.177697] sd 1:0:1:0: [sdav] tag#8 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.177699] sd 1:0:1:0: [sdav] tag#8 CDB: Write(16) 8a 00 00 00 00 00 72 c0 e9 00 00 00 04 00 00 00
[ 1746.177700] print_req_error: I/O error, dev sdav, sector 1925245184
[ 1746.177837] sd 1:0:1:0: [sdav] tag#9 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1746.177839] sd 1:0:1:0: [sdav] tag#9 CDB: Write(16) 8a 00 00 00 00 00 72 c0 e5 00 00 00 04 00 00 00
[ 1746.177840] print_req_error: I/O error, dev sdav, sector 1925244160


[ 1778.699331] pm80xx mpi_ssp_completion 1874:sas IO status 0x1
[ 1778.699333] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1778.699336] pm80xx mpi_ssp_completion 2086:task 0x00000000302a78eb done with io_status 0x1 resp 0x0 stat 0x8d but aborted by upper layer!
[ 1778.699343] sas: sas_scsi_find_task: task 0x00000000302a78eb is done
[ 1778.699344] sas: sas_eh_handle_sas_errors: task 0x00000000302a78eb is done
[ 1778.699346] sas: trying to find task 0x000000001ff798d5
[ 1778.699347] sas: sas_scsi_find_task: aborting task 0x000000001ff798d5
[ 1778.699869] pm80xx mpi_ssp_completion 1874:sas IO status 0x1
[ 1778.699871] pm80xx mpi_ssp_completion 1883:SAS Address of IO Failure Drive:500605ba00b9e05d
[ 1778.699874] pm80xx mpi_ssp_completion 2086:task 0x000000001ff798d5 done with io_status 0x1 resp 0x0 stat 0x8d but
 

MasterCATZ

New Member
Jun 8, 2011
Looks like it is an ext4 bug.

If I write to an ext4 file system that is multipathed, it crashes the PC.

Writing to btrfs is fine, so I might need to change my file systems.
writing to btrfs is fine so I might need to change my file systems