Broadcom 9560-16i and ToughArmor MB699VP-B


Crusher

New Member
Feb 13, 2019
Hello all,

I am planning to go for a 9560-16i because my two Adaptec 8805s are not supported by ESXi 7 and will be replaced by my 9460-16i.
I never got my MB699VP-B working with the 9460-16i, but I want to give it a second try.

Does anybody have experience with the 9560-16i controller and the 4-bay NVMe Icy Dock ToughArmor MB699VP-B?
The controller manual lists two U.2 enabler cables:

Cables for RAID Controller Cards and HBAs

05-60002-00 "a x8 SFF-8654 to 2 x4 SFF-8643 connection. Use this cable for NVMe connections on SuperMicro Purley backplanes"

05-60003-00 "a x8 SFF-8654 to 2 x4 SFF-8643 connection"

Unfortunately I do not know the pinout of the MB699VP-B.

Has anybody tested it yet?

Thanks in advance and best regards

Michael
 

akeen

New Member
Mar 1, 2022
Hi
I have 2 x 9560-16i cards with cache adapters.
I installed them in 2 x AS-2124US-TNRP Supermicro servers.
I also have 14 x Kioxia KCD6XVUL12T8 12.8 TB U.3 drives and was planning to use them 7 + 7.
*** The Kioxia drives are not working with the card.
I also have 2 x Micron 9300 MAX MTFDHAL3T2TDR 3.2 TB, and those drives work.
I contacted Supermicro, Broadcom, Kioxia, and Acmemicro (where I bought the servers).
Now only Broadcom responds to my emails; everyone else just ignores me, and I feel Broadcom is getting annoyed with me as well.
Anyway, I spent over 80k for all the parts and it doesn't work, so you can feel my frustration.
The Kioxia drives are not detected at boot time, but if I unplug them and plug them back in they are detected and I can configure RAID. The problem is that once the system reboots, the drives are gone again.
I also bought the Broadcom 05-50054-00 P411W-32P PCIe 4.0 x16 NVMe switch adapter (Broadcom recommended it) to use my 7 drives, since the 9560-16i RAID card only supports 4 x NVMe U.2/U.3 drives.
And guess what, it didn't work either.
When I told Broadcom about it, they said the RAID card and the NVMe switch are very different products and may not support each other's backplanes. I was like, what?!
Now I am trying to find a way to utilize it, and I can tell you this: it's hard.
So my advice: be careful when using the 9560-16i with any type of drive.
The compatibility list on their site is very limited.

Good luck
A.K.
 

dbTH

Member
Apr 9, 2017
For the Supermicro BPN-NVME4-216N-S24 backplane on the AS-2124US-TNRP system, which supports PCIe 4.0 NVMe, it would be better to use a compatible Supermicro retimer AOC such as the AOC-SLG4-4E4T-P instead of the Broadcom 9560-16i RAID card or the P411W-32P NVMe switch adapter. That would reduce the chance of a PCIe 4.0 NVMe drive such as the Kioxia KCD6XVUL12T8 not working properly.
 

akeen

New Member
Mar 1, 2022
Hi
Thanks for the information. I already have those; they came with the A+ 2124US-TNRP. The problem is I can't build a hardware RAID with them; they're just host bus adapters. In fact it turned out the P411W-32P is also essentially a host bus adapter rather than a RAID expander. All of these cards detect the Kioxia drives; the only problem is that the RAID card does not detect the drives at boot time, and that is kind of a deal breaker. I need to install ESXi on the server and I am stuck with hardware RAID.
 

dbTH

Member
Apr 9, 2017
Did you change the Supermicro system board BIOS settings? A few things you can probably check and try:
1. Advanced --> PCIe/PCI/PnP Configuration --> CPU SlotX PCIE 4.0 X16 OPROM --> What is your current setting on the slots where the 9560-16i cards are installed? The default is Legacy. Does it make any difference if you change the setting to a non-default value?
2. NVMe Firmware Source --> What is your current setting?
3. On the storage backplane, check the jumper settings. There are different configurations with respect to CPUs and NVMe slots/ports.
4. Is the Supermicro AOC-SLG4-4E4T AOC that came with the system still connected to the backplane? Your storage uses a hybrid backplane, but you can't connect to the SAS and NVMe ports at the same time; one of them needs to be removed.
5. Did you check with Broadcom about a 9560-16i firmware bug, or try a firmware update? (A StorCLI check of controller and drive firmware is sketched below.)
6. How about a Kioxia U.3 drive firmware bug, or a drive firmware update?
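For items 5 and 6, one quick way to read the controller and drive firmware levels from the OS is StorCLI (a sketch only; the /c0 controller index and the storcli64 binary name are assumptions, adjust to your install):

storcli64 /c0 show all            # controller firmware package, BIOS and driver versions
storcli64 /c0/eall/sall show      # physical drives the controller currently sees, with model and state
storcli64 /c0/eall/sall show all  # per-drive details, including firmware revision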
 

akeen

New Member
Mar 1, 2022
1. The options are EFI, Legacy, and Disabled; I tried them all.
2. The options are Native Firmware and Vendor Defined; I tried both.
3. This backplane is designed for NVMe, and everything works with other cards like the SM AOCs or even the P411W, so the setting is correct.
4. No, I pulled all the SM cards and disconnected their cables.
5. Yes I did. They said it's the Kioxia drives' fault and that they tried to contact Kioxia but got no response. They asked me to collect diagnostics and so on, but nothing came of it.
6. I contacted Kioxia from their website; they didn't respond. There is no firmware upgrade for the drives I have.
7. All cards and the BIOS have the latest firmware.

But I figured out a way to use it:
I put the 2 x 9560-16i cards in the motherboard and can configure them after booting ESXi 6.7 from USB. The OS then sees 7 drives and I can make 2 RAID arrays, both RAID 5.
When I reboot, it loses the configuration, and I have to unplug and replug the drives and use StorCLI to import the foreign configuration (roughly the commands sketched below). It imports and starts the arrays without issue and no rebuild is necessary.
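For anyone following along, the foreign-configuration import from the ESXi shell looks roughly like this (a sketch, not a verified command list; the /c0 controller index and the storcli64 binary name are assumptions, and with two cards the same steps repeat for /c1):

storcli64 /c0/fall show     # list the foreign configurations the controller sees after the re-plug
storcli64 /c0/fall import   # import all foreign configurations (the previously created RAID 5 arrays)
storcli64 /c0/vall show     # confirm the virtual drives are back and Optimal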
 

dbTH

Member
Apr 9, 2017
Does it lose the RAID configuration on every reboot? If it does, I wouldn't leave it like that, but dig further to find the root cause. You may also check whether it is a compatibility issue between your Kioxia NVMe drives and ESXi 6.7. Does it work well if you boot another OS such as Linux?
Also, are you able to enter WebBIOS during boot to configure the RAID and MegaRAID BIOS settings?
 

akeen

New Member
Mar 1, 2022
The root cause is that the RAID card does not recognize the drives at boot time; when the RAID card does not recognize them, it's beyond OS compatibility. There is no place to install an OS.
Starting with the Broadcom 94xx cards there is no WebBIOS anymore, and no keyboard shortcut to enter it. It's EFI only, and you configure it in the system BIOS.
I tried both the Legacy and EFI BIOS settings with the same result. The only exception is that if I leave it on Legacy, the RAID card retains the settings that were made in the ESXi shell with StorCLI, but if I unplug the power from the server the drives are again not detected.
I think it's a bug in the Legacy setting of the BIOS PCIe options with two cards. The funny thing is, it's still better that way than EFI erasing the drive configuration.
 

dbTH

Member
Apr 9, 2017
Your setup with two Broadcom tri-mode RAID cards + an NVMe4 hybrid backplane + NVMe disks makes things quite interesting.
Did you have the opportunity to open a case with Supermicro to ask whether your configuration is supported? They could probably tell whether there's a BIOS bug or give some configuration tips. There were reported cases with Supermicro where two AOCs were installed (but not in the right PCIe slots) and were not detected properly (I don't remember the link now). Your system also seems to be using a WIO riser, and there are reported cases such as:
Though your system and CPUs are different from what is mentioned in the above link, an unsupported or inappropriate configuration, or a RAID card in the wrong PCIe slot, may also cause things to not work properly during boot or not be detected at the BIOS level.

Also, does it work if you use only a single Broadcom 9560-16i RAID card?
I guess it works (all NVMe disks detected, and you can see them even on the BMC console) if you use the multiple AOC-SLG4-4E4T retimer cards that came with the system? Sure, that will be without hardware RAID on the retimer AOCs.
 

akeen

New Member
Mar 1, 2022
Hi
To make things clearer: I got the servers with built-in SMC host adapters and everything works fine, even with the Kioxia drives.
But I need a RAID card because of an ESXi restriction, and the only RAID card that supports NVMe that I know of is the 9560.
I used a 9560-16i in an 1124US-TNRP server, which is the 1U model of the 2124US-TNRP, and it works fine with 2 x Micron 9300 MAX drives in RAID 1.
I have over a year of uptime with no issues on CentOS 7, and again I have two identical servers of that model.
So when I was ordering the 2124US-TNRP servers there was a drive option: either the PCIe 3.0 based Micron 9300 MAX or the PCIe 4.0 based Kioxia drives, which are faster with similar specs (next gen) and 1k cheaper. So, as any human being would, I chose the Kioxia.
Everything works fine with the 9560 RAID card and the Micron 9300 MAX drives, but with the Kioxia drives it doesn't.
I opened a case with Supermicro; as soon as they heard that I am using a third-party RAID card, they said "we don't support it and we don't care."
I opened a case with Broadcom and they kept blaming everything else: I need to use the right cable, the right backplane, the right BIOS, blah blah.
Because of the support requests and the people they deal with, the support teams at these companies expect the fault to be on the user rather than on themselves. It's hard to deal with them.
I sent a message to Kioxia from their website; they don't seem to have any ticketing system for support, and they didn't reply.
So right now the system is set up like this:
The server boots from a USB drive with ESXi 6.7 U3 on it. (The standard ESXi 6.7 U3 image does not have the latest lsi_mr3 driver, by the way; it needs to be added with PowerShell to build a customized ISO.) (ESXi 7 crashes during boot.)
After the OS loads, I manually unplug and replug the drives, then use StorCLI to import all foreign configurations (the RAID configuration I created before).
The arrays become Optimal right away and do not need any rebuild.
I backed up the USB flash drive as an image via dd from the ESXi console to a backup node, so if the USB fails I can write a new one and be good to go (a sketch of that is below).
Until Kioxia or Broadcom releases new firmware that fixes this, I have to work like that.
I won't put anything critical on it for some time, and even the stuff I do put there will have backups on other nodes.
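For reference, imaging the ESXi boot USB from the ESXi shell looks roughly like the following (a sketch; the mpx device name and the backup volume path are assumptions, so check the actual device name with the first command):

ls /vmfs/devices/disks/            # identify the boot USB, usually an mpx.vmhbaXX entry
dd if=/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 of=/vmfs/volumes/backup/esxi-boot-usb.img bs=1048576
# restore is the same dd in reverse onto a replacement USB stick from any Linux box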
 


dbTH

Member
Apr 9, 2017
Regarding the "1124US-TNRP server, which is the 1U model of the 2124US-TNRP": even though the system boards of both are the same, the storage backplanes are different:
1124US-TNRP --> NVMe 3.0 hybrid backplane
2124US-TNRP --> NVMe 4.0 hybrid backplane
So there may be a compatibility or firmware issue either on the Broadcom 9560 RAID card or on the Kioxia NVMe 4.0 SSDs.

I understand it's painful to use third-party devices on a vendor system they don't validate or support.

Since you are not putting critical data on the system, why bother with hardware RAID?
With the AOC-SLG4-4E4T AOC, you can install ESXi on a single NVMe drive with the VM OS disks sitting on un-RAIDed disks in a VMFS datastore. ESXi and the VM OS don't actually do a lot of I/O, so the chance of a disk wearing out or failing is very low. For critical data that needs some protection, you can do NVMe device pass-through to the VMs and then use OS/software RAID (a sketch is at the end of this post).
With software RAID on NVMe disks and the system you have, the I/O performance penalty is minimal. On the other hand, putting a RAID card in front of NVMe disks actually adds I/O latency compared to simply using the AOC-SLG4-4E4T retimer AOC. Do some I/O benchmarks on both setups and you will see the difference.
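To illustrate the pass-through + software RAID idea: inside a Linux VM that has the NVMe devices passed through, a software RAID 5 can be built with mdadm along these lines (a sketch; the device names and drive count are assumptions):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
cat /proc/mdstat                   # watch the initial sync
mkfs.xfs /dev/md0                  # then format and mount as usual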
 

akeen

New Member
Mar 1, 2022
I appreciate your ideas.
There is definitely an issue with either the RAID card or the disks, or maybe both; my luck is that it's just this one combination that fails.
I am not putting critical data on it, but it's still needed and I can recover it fast; it's somewhat redundant with my current setup, and the recovery process is almost instant.
I monitor the RAID for any failure, and I tested failing and recovering it a couple of times; it worked fine.
What you are suggesting would work, yes, but if it fails it would take a great deal of effort to recover, plus the data would be scattered across 7 disks.
With the current setup, if a drive fails it will not affect anything; I'll just replace the drive and wait for the RAID to rebuild.
I benchmarked the Kioxia vs the Micron drives on Windows and Linux, with the AOC-SLG4-4E4T AOC, the P411W, and the RAID card (a Linux fio sketch is at the end of this post).
There is not as significant a difference as you would expect from the specs of PCIe 3.0 vs PCIe 4.0.
The VMs I have are part of a cluster and I cannot use pass-through; I wouldn't be able to migrate them to other hosts.
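For anyone who wants to reproduce that kind of comparison on Linux, fio gives numbers comparable to the CrystalDiskMark runs posted further down in the thread (a sketch; the target device and runtimes are assumptions, and only read tests are shown so the device contents are left untouched):

fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M --iodepth=8 --ioengine=libaio --direct=1 --runtime=30 --time_based
fio --name=randread --filename=/dev/nvme0n1 --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based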
 

dbTH

Member
Apr 9, 2017
Do you have a limitation that is preventing you from doing device pass-through under ESXi? I guess many ESXi users are doing that. I have been doing both LSI HBA (for spindle disks) and NVMe disk pass-through to a VM under ESXi 6.7, running a NAS in that VM, for a long time, going back many years.
 

akeen

New Member
Mar 1, 2022
Most importantly, the deal breaker for me is that I want to be able to move a VM to another host (preferably live migration); if the disk is passed through, I won't be able to do that.
Plus each disk I have is 12 TB, and I don't need that much space for a VM; my VMs are mostly in the 500 GB to 2 TB range.
 

dbTH

Member
Apr 9, 2017
Also, did you ever think of setting up vSAN, if the licensing cost is not an issue? You would not need hardware RAID; you can do the "software RAID" and it is accessible by the VMs in the cluster.
 

akeen

New Member
Mar 1, 2022
Yes, that was my worst-case scenario, but I don't have a server that can host 14 NVMe disks; I would have had to buy another server just for that.
 

akeen

New Member
Mar 1, 2022
Or I could use one of the servers with all the drives in it, make it the vSAN storage, and use the other as the ESXi host. But then I would end up using only one ESXi host, and since I have 2 x AMD EPYC 7543 in each server, one set would be totally wasted; I paid 6k for each CPU. But yes, if I couldn't do what I have done, that's probably what I would do.
 

akeen

New Member
Mar 1, 2022
So my advice to anyone out there who wants to use the Broadcom 9560-16i with NVMe drives, here is a list of things you should know:
There is no NVMe expander card from Broadcom for the 9560-16i, so you can only use 4 NVMe drives max on each card.
Kioxia sucks; don't use it.
I have had success with the Micron 9300 MAX and I suggest sticking with that.
The speed of RAID 5 with 4 drives is insane.
Here is a comparison of a single drive vs RAID 5 with the 9560-16i:

AS-2124US-TNRP server with a Kioxia 12.8 TB NVMe PCIe 4.0 drive attached directly to the motherboard
Windows Server 2019
Kioxia KCD6XVUL12T8 12.8 TB U.3

------------------------------------------------------------------------------
CrystalDiskMark 8.0.4 x64 (C) 2007-2021 hiyohiyo
Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
SEQ 1MiB (Q= 8, T= 1): 4003.678 MB/s [ 3818.2 IOPS] < 2094.30 us>
SEQ 1MiB (Q= 1, T= 1): 3077.569 MB/s [ 2935.0 IOPS] < 340.48 us>
RND 4KiB (Q= 32, T= 1): 526.520 MB/s [ 128544.9 IOPS] < 240.93 us>
RND 4KiB (Q= 1, T= 1): 39.477 MB/s [ 9637.9 IOPS] < 103.59 us>

[Write]
SEQ 1MiB (Q= 8, T= 1): 4029.832 MB/s [ 3843.1 IOPS] < 2076.96 us>
SEQ 1MiB (Q= 1, T= 1): 4032.671 MB/s [ 3845.9 IOPS] < 259.69 us>
RND 4KiB (Q= 32, T= 1): 380.773 MB/s [ 92962.2 IOPS] < 333.15 us>
RND 4KiB (Q= 1, T= 1): 220.150 MB/s [ 53747.6 IOPS] < 18.46 us>

Profile: Default
Test: 1 GiB (x5) [C: 1% (61/11920GiB)]
Mode: [Admin]
Time: Measure 5 sec / Interval 5 sec
Date: 2022/01/27 18:58:17
OS: Windows Server 2019 [10.0 Build 17763] (x64)

*******************************************************************************************

AS-2124US-TNRP server with 4 x Kioxia 12.8 TB NVMe on 2 x 9560-16i
Windows Server 2022
RAID 5 with 4 x Kioxia KCD6XVUL12T8 12.8 TB U.3


------------------------------------------------------------------------------
CrystalDiskMark 8.0.2 x64 (C) 2007-2021 hiyohiyo
Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
SEQ 1MiB (Q= 8, T= 1): 14770.785 MB/s [ 14086.5 IOPS] < 567.58 us>
SEQ 1MiB (Q= 1, T= 1): 7353.094 MB/s [ 7012.5 IOPS] < 142.43 us>
RND 4KiB (Q= 32, T= 1): 490.657 MB/s [ 119789.3 IOPS] < 266.90 us>
RND 4KiB (Q= 1, T= 1): 113.208 MB/s [ 27638.7 IOPS] < 36.05 us>

[Write]
SEQ 1MiB (Q= 8, T= 1): 4292.017 MB/s [ 4093.2 IOPS] < 1950.82 us>
SEQ 1MiB (Q= 1, T= 1): 4353.206 MB/s [ 4151.5 IOPS] < 240.60 us>
RND 4KiB (Q= 32, T= 1): 448.459 MB/s [ 109487.1 IOPS] < 283.70 us>
RND 4KiB (Q= 1, T= 1): 114.401 MB/s [ 27929.9 IOPS] < 35.66 us>

Profile: Default
Test: 1 GiB (x5) [C: 1% (13/1023GiB)]
Mode: [Admin]
Time: Measure 5 sec / Interval 5 sec
Date: 2022/03/08 8:42:58
OS: Windows Server 2019 Server Standard (full installation) [10.0 Build 20348] (x64)
 

akeen

New Member
Mar 1, 2022
And here is the Micron RAID 1 on the 9560-16i:

AS-2124US-TNRP server with an LSI 9560-16i, Micron 3.2 TB NVMe U.2 disks in RAID 1

------------------------------------------------------------------------------
CrystalDiskMark 8.0.4 x64 (C) 2007-2021 hiyohiyo
Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
SEQ 1MiB (Q= 8, T= 1): 6686.923 MB/s [ 6377.1 IOPS] < 1253.88 us>
SEQ 1MiB (Q= 1, T= 1): 2467.476 MB/s [ 2353.2 IOPS] < 424.69 us>
RND 4KiB (Q= 32, T= 1): 884.844 MB/s [ 216026.4 IOPS] < 147.36 us>
RND 4KiB (Q= 1, T= 1): 33.028 MB/s [ 8063.5 IOPS] < 123.84 us>

[Write]
SEQ 1MiB (Q= 8, T= 1): 3119.347 MB/s [ 2974.8 IOPS] < 2680.71 us>
SEQ 1MiB (Q= 1, T= 1): 2634.722 MB/s [ 2512.7 IOPS] < 397.63 us>
RND 4KiB (Q= 32, T= 1): 551.928 MB/s [ 134748.0 IOPS] < 232.23 us>
RND 4KiB (Q= 1, T= 1): 143.435 MB/s [ 35018.3 IOPS] < 28.41 us>

Profile: Default
Test: 1 GiB (x5) [C: 2% (67/2980GiB)]
Mode: [Admin]
Time: Measure 5 sec / Interval 5 sec
Date: 2022/02/02 8:58:04
OS: Windows Server 2022 Server Standard (full installation) [10.0 Build 20348] (x64)