AVOID - 4U, 2x Node, 4x E5 V3/V4, 56x LFF SAS3 3.5" bay - $299 - CISCO UCS C3260


pcmantinker

Member
Apr 23, 2022
I'm going to need to RMA the server blades at the very least, if not the full chassis. No matter the combination of CPUs, blades, CPU sockets, and power supplies, I am unable to get the CPU voltage faults to clear. The seller requires that the items in question be returned before they will send out replacements. This is unfortunate, as it means I'll be out of production for a week or more. I'm hoping that returning just the server blades is enough. The full chassis would be much more expensive to ship back and could incur more damage in transit. On the bright side, maybe they'd send back a unit that is properly packed and doesn't get damaged during shipping. I will keep the thread updated on the status of the RMA process.

EDIT:
The seller agreed that I only need to ship back the server blades. I will be able to ship them out tomorrow, and hopefully I'll be back in production by the end of next week.
 
Last edited:

pcmantinker

Member
Apr 23, 2022
my final step would be to get internet working on this thing, does this really require the cisco cable? even then my router/switch don't even have a 10G port.. one RJ45 is management, the other RJ45 says console but no idea what it does.
Unfortunately, I don't think the server blades can be accessed from the 1Gb ports. Those are dedicated CIMC 1Gb links. You'll need official Cisco cables in order for link negotiation to work. In terms of switch compatibility, I use the Brocade ICX6610, which has 48x 1Gb, 8x 10Gb, and 4x 40Gb ports. You can usually get one for a good price on eBay. I am currently using the 40Gb->4x10Gb breakout cable from Cisco, which operates at the full 40Gb on the switch side. I followed fohdeesha's guide on flashing the Brocade and it works wonderfully. You'll need an RJ45 serial to USB cable.

This brings me to your question about the RJ45 console port. It's not Ethernet and will only work from a serial console emulator. You can use the same RJ45 to USB cable here if you want to interface with the unit's CLI. However, you'll likely have better luck with SSH and the CIMC UI.
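
If you do end up on the console port, any serial terminal will do; nothing Cisco-specific is needed on the software side. Below is a rough Python sketch using pyserial. The device path and baud rate are assumptions on my part, not something from this thread (check what your CIMC's console is actually set to; 9600 and 115200 are both common):

Code:
# Minimal sketch: talk to the RJ45 console port through a USB serial adapter.
# Assumptions: the adapter shows up as /dev/ttyUSB0 and the console runs at
# 115200 8N1 -- adjust both to match your CIMC configuration.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as console:
    console.write(b"\r\n")       # nudge the console so it prints a prompt
    banner = console.read(4096)  # read whatever comes back within the timeout
    print(banner.decode(errors="replace"))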
 

jtaj

Member
Jul 13, 2021
Unfortunately, I don't think the server blades can be accessed from the 1Gb ports. Those are dedicated CIMC 1Gb links. You'll need official Cisco cables in order for link negotiation to work. In terms of switch compatibility, I use the Brocade ICX6610, which has 48x 1Gb, 8x 10Gb, and 4x 40Gb ports. You can usually get one for a good price on eBay. I am currently using the 40Gb->4x10Gb breakout cable from Cisco, which operates at the full 40Gb on the switch side. I followed fohdeesha's guide on flashing the Brocade and it works wonderfully. You'll need an RJ45 serial to USB cable.

This brings me to your question about the RJ45 console port. It's not Ethernet and will only work from a serial console emulator. You can use the same RJ45 to USB cable here if you want to interface with the unit's CLI. However, you'll likely have better luck with SSH and the CIMC UI.
yea I am not too keen on using USB to RJ45 due to the USB 2.0 speed, and with more than a couple of drives it quickly becomes a bottleneck.

btw, on which page did you see the fan control in the CIMC? I still couldn't find it.
 

feffrey

New Member
Oct 3, 2014
@Slothstronaut ... my final step would be to get internet working on this thing, does this really require the cisco cable? even then my router/switch don't even have a 10G port.. one RJ45 is management, the other RJ45 says console but no idea what it does.

You could get a couple of "CVR-QSFP-SFP10G" adapters, which convert the QSFP port to an SFP+ port, and then use a GLC-T transceiver to get 1Gb copper.

Never used those myself, but it should work.
 
  • Like
Reactions: Samir

pcmantinker

Member
Apr 23, 2022
yea I am not too keen on using USB to RJ45 due to the USB 2.0 speed, and with more than a couple of drives it quickly becomes a bottleneck.

btw, on which page did you see the fan control in the CIMC? I still couldn't find it.
You wouldn't use the RJ45 to USB cable for your network connection. It is just for serial port communication. If you're able to get a network connection with the 40Gb port broken out down to 1Gb, then you can use SSH for CLI sessions.

Unfortunately, I didn't see any page for changing fan speed. Both of my server nodes were pulled tonight in preparation for mailing them back for replacements. I think the general rule of thumb is to use Cisco-compatible parts, or hope that the U.2 SSD firmware reports 0C to CIMC so that the fans don't spin up too high.
 
  • Like
Reactions: Samir

jtaj

Member
Jul 13, 2021
then you can use SSH for CLI sessions.
can't I simply just get a connection from a 40G to 1G cable?

Unfortunately, I didn't see any page for changing fan speed.
didn't you mention that you were able to change the fan settings, but it was overridden by the system fault?

I can't recall. Also, @oddball mentioned there's fan control, but I haven't found it anywhere in CIMC yet.
 
  • Like
Reactions: Samir

eduncan911

The New James Dean
Jul 27, 2015
eduncan911.com
Without a unit to play with, I'm going to have to abandon this project. It's an absolutely great platform to work out the kinks on and build supporting hardware for.

However, I just can't justify $1000 (chassis + CPU/mem) for another toy just to make a website, GitHub downloads, and 3D prints (though I will do what I committed to, which is the 3D-printed 3.5" and 2.5" bays).
 
Last edited:

oddball

Active Member
May 18, 2018
I don't know where this stuff is set up in CIMC. We use UCS Manager, and there is a fan policy that can be set in a service profile that is then applied to the server.

I believe all of the same options are available via CIMC, but I don't really know. Maybe it's just the M5s that have that option in standalone mode?
 
  • Like
Reactions: Samir and jtaj

wifiholic

Member
Mar 27, 2018
I don't know where this stuff is set up in CIMC. We use UCS Manager, and there is a fan policy that can be set in a service profile that is then applied to the server.
I can't say whether it's the same on this model, or on the M5 series in general, but on a C220 M4, the fan policy is configured here in CIMC:

[Screenshot: CIMC page showing the Configured Fan Policy setting]

On my C220 M4 that has a PCIe card that it doesn't like, it overrides the fan policy; in that scenario, it will look like this:
[Screenshot: CIMC page showing the fan policy overridden]
 
  • Like
Reactions: Samir

feffrey

New Member
Oct 3, 2014
I ended up getting one of these beasts :D

Updated the firmware, and I was able to map all the storage to one node. I'm pretty sure you would need a "UCS-S3260-DHBA" to enable pass-through of the drives to both nodes simultaneously; the RAID cards that come with it don't allow dual-node storage.

Got TrueNAS SCALE installed and working without issue, including the VICs. TrueNAS CORE works as well, but there are no VIC drivers, so no networking.

It's definitely my noisiest server; I was hoping it would be a bit quieter.

I can't say whether it's the same on this model, or on the M5 series in general, but on a C220 M4, the fan policy is configured here in CIMC:

View attachment 22885

On my C220 M4 that has a PCIe card that it doesn't like, it overrides the fan policy; in that scenario, it will look like this:
View attachment 22886
So that page for me only has the Power Restore Policy, with no Configured Fan Policy. Maybe the M5 blades for this chassis support turning down the fans? It is not as loud as when it first powers on, but I really wish I could make it quieter. I did drop in a couple of cheap Kingston SSDs for boot, and that didn't cause the fans to ramp up, which was nice.
 
  • Like
Reactions: Samir and jtaj

Slothstronaut

Member
Apr 27, 2022
I was just about to ask about TrueNAS CORE. I was hoping to run that for some testing, so that is a bummer about the missing VIC driver. It seems to be a recurring issue when I search for Cisco VIC FreeBSD. I might just run TrueNAS SCALE and not do any virtualization stuff; I only need storage.
 
  • Like
Reactions: Samir

jtaj

Member
Jul 13, 2021
I ended up getting one of these beasts :D

Updated the firmware, and I was able to map all the storage to one node. I'm pretty sure you would need a "UCS-S3260-DHBA" to enable pass-through of the drives to both nodes simultaneously; the RAID cards that come with it don't allow dual-node storage.

Got TrueNAS SCALE installed and working without issue, including the VICs. TrueNAS CORE works as well, but there are no VIC drivers, so no networking.

It's definitely my noisiest server; I was hoping it would be a bit quieter.



So that page for me only has the Power Restore Policy, with no Configured Fan Policy. Maybe the M5 blades for this chassis support turning down the fans? It is not as loud as when it first powers on, but I really wish I could make it quieter. I did drop in a couple of cheap Kingston SSDs for boot, and that didn't cause the fans to ramp up, which was nice.
Could you please share the firmware file? And do you flash the firmware per node, or just on the chassis itself?
 
Last edited:
  • Like
Reactions: Samir

Slothstronaut

Member
Apr 27, 2022
Followup to this project to give some closure because who likes an open-ended thread??

1. Both servers are now deployed and working well. Ended up going with TrueNAS SCALE (because there are no VIC drivers in FreeBSD). Dual SSDs in RAID1 for boot, and 56 drives passed through as JBOD directly to TrueNAS. The prod array is all 12TB HGST HC520 drives, DR is 10TB HGST He10 drives (all SAS). Running them both with the same layout: 4 vdevs of 14-drive RAIDZ2 (see the capacity sketch at the end of this post). That gives me about 500TB and 420TB usable. Read/write speeds are very good with no cache drives or anything special; I was easily getting 4GB/s between the arrays when I was doing the initial sync. One of the 10TB drives that came from the seller acted a little wonky with some write errors, but that was swapped out and it's good to go.

2. Second node of the chassis is currently sitting unused. I may throw some RAM in it and make it an ESXi box with a couple large SATA SSDs.

3. An idea I had was to use the second node as a bare-metal edge for NSX-T. However, I cannot for the life of me find how to enable IOMMU on these servers and it is required for NSX-T to utilize DPDK.

I am considering picking up another 1 or 2 of these, since the price is so low. I did notice that his price for a fully-loaded unit (about 11K) is actually WAAAAY more than if you just bought the parts from him separately. Not sure what's up with that. Even his 5-packs of drives end up being considerably more than buying individual drives... Weird.

My lessons learned from this project:
1. Cisco does NOT like 3rd-party anything: QSFP modules, RAM, NVMe, HDDs, etc. If you are going to rock Cisco UCS, you really need to stay in their ecosystem, unless you like banging your head against obscure errors only to find out that it "doesn't technically work in that configuration."
2. ZFS, even in its OOB configuration, blows the doors off HW RAID. I'm sure you all knew that, but as a guy that's gone back and forth between ZFS and HW RAID, it was an eye-opener how much ZFS on Linux has improved.
3. These are a fantastic deal if you don't mind doing a bit of research into their quirks. Would recommend if you want a shitload of drive bays in a super robust 4U chassis to run ZFS. Just ignore the RAID controller; it will only lead to pain.

So, does anyone have a clue about IOMMU? You know it's bad when a Google search has zero hits!
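
For anyone who wants to sanity-check the usable-capacity numbers in point 1, here's a rough back-of-the-envelope in Python. The layout (4 vdevs of 14-drive RAIDZ2, with 12TB and 10TB drives) is taken from the post above; the rest is just arithmetic and ignores ZFS metadata/slop overhead, so treat it as a ballpark only.

Code:
# Rough usable-capacity estimate for the two arrays described above.
# Each vdev is a 14-drive RAIDZ2, i.e. 2 parity drives and 12 data drives.
def usable_tib(drive_tb, drives_per_vdev=14, parity=2, vdevs=4):
    data_drives = drives_per_vdev - parity
    raw_tb = data_drives * drive_tb * vdevs      # decimal TB of data capacity
    return raw_tb * 1e12 / 2**40                 # convert decimal TB to TiB

print(f"12TB drives: ~{usable_tib(12):.0f} TiB")  # ~524 TiB before overhead (~500TB quoted)
print(f"10TB drives: ~{usable_tib(10):.0f} TiB")  # ~437 TiB before overhead (~420TB quoted)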
 

pcmantinker

Member
Apr 23, 2022
Followup to this project to give some closure because who likes an open-ended thread??

1. Both servers are now deployed and working well. Ended up going with TrueNAS SCALE (because there are no VIC drivers in FreeBSD). Dual SSDs in RAID1 for boot, and 56 drives passed through as JBOD directly to TrueNAS. The prod array is all 12TB HGST HC520 drives, DR is 10TB HGST He10 drives (all SAS). Running them both with the same layout: 4 vdevs of 14-drive RAIDZ2. That gives me about 500TB and 420TB usable. Read/write speeds are very good with no cache drives or anything special; I was easily getting 4GB/s between the arrays when I was doing the initial sync. One of the 10TB drives that came from the seller acted a little wonky with some write errors, but that was swapped out and it's good to go.

2. Second node of the chassis is currently sitting unused. I may throw some RAM in it and make it an ESXi box with a couple large SATA SSDs.

3. An idea I had was to use the second node as a bare-metal edge for NSX-T. However, I cannot for the life of me find how to enable IOMMU on these servers and it is required for NSX-T to utilize DPDK.

I am considering picking up another 1 or 2 of these, since the price is so low. I did notice that his price for a fully-loaded unit (about 11K) is actually WAAAAY more than if you just bought the parts from him separately. Not sure what's up with that. Even his 5-packs of drives end up being considerably more than buying individual drives... Weird.

My lessons learned from this project:
1. Cisco does NOT like 3rd-party anything: QSFP modules, RAM, NVMe, HDDs, etc. If you are going to rock Cisco UCS, you really need to stay in their ecosystem, unless you like banging your head against obscure errors only to find out that it "doesn't technically work in that configuration."
2. ZFS, even in its OOB configuration, blows the doors off HW RAID. I'm sure you all knew that, but as a guy that's gone back and forth between ZFS and HW RAID, it was an eye-opener how much ZFS on Linux has improved.
3. These are a fantastic deal if you don't mind doing a bit of research into their quirks. Would recommend if you want a shitload of drive bays in a super robust 4U chassis to run ZFS. Just ignore the RAID controller; it will only lead to pain.

So, does anyone have a clue about IOMMU? You know it's bad when a Google search has zero hits!
Glad you have your servers up and running! Mine is mostly up and running, but I can't quite figure out networking with unRAID and the VIC. None of my VMs or LXCs get IP addresses from DHCP. However, Docker does seem to be fine. I may end up going with Proxmox instead if it's not trivial to figure out.

On my end, I did finally complete my RMA with the seller, and they were kind enough to send me new server blades. After installing my E5-2620v3 CPUs in the new blades, they showed the same voltage faults as before. So, I ordered a set of E5-2650v4 CPUs to compare. Once they arrived, I installed them, and the voltage faults cleared immediately after the voltage sensor data was read. I don't know if this means I just had faulty E5-2620v3 CPUs or if the boards just didn't like them, but I'm happy to confirm that all is well there now.

In terms of IOMMU, don't you just need to enable VT-d in the BIOS? From unRAID, I can see these IOMMU groups with VT-d:
IOMMU group 0:[8086:6f80] ff:08.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 0 (rev 01)
[8086:6f32] ff:08.2 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 0 (rev 01)
IOMMU group 1:[8086:6f83] ff:08.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 0 (rev 01)
IOMMU group 2:[8086:6f90] ff:09.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 1 (rev 01)
[8086:6f33] ff:09.2 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 1 (rev 01)
IOMMU group 3:[8086:6f93] ff:09.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 1 (rev 01)
IOMMU group 4:[8086:6f81] ff:0b.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R3 QPI Link 0/1 (rev 01)
[8086:6f36] ff:0b.1 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R3 QPI Link 0/1 (rev 01)
[8086:6f37] ff:0b.2 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R3 QPI Link 0/1 (rev 01)
[8086:6f76] ff:0b.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R3 QPI Link Debug (rev 01)
IOMMU group 5:[8086:6fe0] ff:0c.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe1] ff:0c.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe2] ff:0c.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe3] ff:0c.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe4] ff:0c.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe5] ff:0c.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe6] ff:0c.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe7] ff:0c.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
IOMMU group 6:[8086:6fe8] ff:0d.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe9] ff:0d.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fea] ff:0d.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6feb] ff:0d.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
IOMMU group 7:[8086:6ff8] ff:0f.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ff9] ff:0f.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffa] ff:0f.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffb] ff:0f.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffc] ff:0f.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffd] ff:0f.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffe] ff:0f.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
IOMMU group 8:[8086:6f1d] ff:10.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R2PCIe Agent (rev 01)
[8086:6f34] ff:10.1 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R2PCIe Agent (rev 01)
[8086:6f1e] ff:10.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Ubox (rev 01)
[8086:6f7d] ff:10.6 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Ubox (rev 01)
[8086:6f1f] ff:10.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Ubox (rev 01)
IOMMU group 9:[8086:6fa0] ff:12.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Home Agent 0 (rev 01)
[8086:6f30] ff:12.1 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Home Agent 0 (rev 01)
[8086:6f60] ff:12.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Home Agent 1 (rev 01)
[8086:6f38] ff:12.5 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Home Agent 1 (rev 01)
IOMMU group 10:[8086:6fa8] ff:13.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Target Address/Thermal/RAS (rev 01)
IOMMU group 11:[8086:6f71] ff:13.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Target Address/Thermal/RAS (rev 01)
IOMMU group 12:[8086:6faa] ff:13.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel Target Address Decoder (rev 01)
IOMMU group 13:[8086:6fab] ff:13.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel Target Address Decoder (rev 01)
IOMMU group 14:[8086:6fae] ff:13.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Broadcast (rev 01)
[8086:6faf] ff:13.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Global Broadcast (rev 01)
IOMMU group 15:[8086:6fb0] ff:14.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel 0 Thermal Control (rev 01)
IOMMU group 16:[8086:6fb1] ff:14.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel 1 Thermal Control (rev 01)
IOMMU group 17:[8086:6fb2] ff:14.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel 0 Error (rev 01)
IOMMU group 18:[8086:6fb3] ff:14.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel 1 Error (rev 01)
IOMMU group 19:[8086:6fbc] ff:14.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Interface (rev 01)
[8086:6fbd] ff:14.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Interface (rev 01)
[8086:6fbe] ff:14.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Interface (rev 01)
[8086:6fbf] ff:14.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Interface (rev 01)
IOMMU group 20:[8086:6f68] ff:16.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Target Address/Thermal/RAS (rev 01)
IOMMU group 21:[8086:6f79] ff:16.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Target Address/Thermal/RAS (rev 01)
IOMMU group 22:[8086:6f6a] ff:16.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Channel Target Address Decoder (rev 01)
IOMMU group 23:[8086:6f6b] ff:16.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Channel Target Address Decoder (rev 01)
IOMMU group 24:[8086:6f6e] ff:16.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Broadcast (rev 01)
[8086:6f6f] ff:16.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Global Broadcast (rev 01)
IOMMU group 25:[8086:6fd0] ff:17.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 1 - Channel 0 Thermal Control (rev 01)
IOMMU group 26:[8086:6fd1] ff:17.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 1 - Channel 1 Thermal Control (rev 01)
IOMMU group 27:[8086:6fd2] ff:17.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 1 - Channel 0 Error (rev 01)
IOMMU group 28:[8086:6fd3] ff:17.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 1 - Channel 1 Error (rev 01)
IOMMU group 29:[8086:6fb8] ff:17.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Interface (rev 01)
[8086:6fb9] ff:17.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Interface (rev 01)
[8086:6fba] ff:17.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Interface (rev 01)
[8086:6fbb] ff:17.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Interface (rev 01)
IOMMU group 30:[8086:6f98] ff:1e.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6f99] ff:1e.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6f9a] ff:1e.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6fc0] ff:1e.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6f9c] ff:1e.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
IOMMU group 31:[8086:6f88] ff:1f.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6f8a] ff:1f.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
IOMMU group 32:[8086:6f80] 7f:08.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 0 (rev 01)
[8086:6f32] 7f:08.2 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 0 (rev 01)
IOMMU group 33:[8086:6f83] 7f:08.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 0 (rev 01)
IOMMU group 34:[8086:6f90] 7f:09.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 1 (rev 01)
[8086:6f33] 7f:09.2 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 1 (rev 01)
IOMMU group 35:[8086:6f93] 7f:09.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D QPI Link 1 (rev 01)
IOMMU group 36:[8086:6f81] 7f:0b.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R3 QPI Link 0/1 (rev 01)
[8086:6f36] 7f:0b.1 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R3 QPI Link 0/1 (rev 01)
[8086:6f37] 7f:0b.2 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R3 QPI Link 0/1 (rev 01)
[8086:6f76] 7f:0b.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R3 QPI Link Debug (rev 01)
IOMMU group 37:[8086:6fe0] 7f:0c.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe1] 7f:0c.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe2] 7f:0c.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe3] 7f:0c.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe4] 7f:0c.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe5] 7f:0c.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe6] 7f:0c.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe7] 7f:0c.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
IOMMU group 38:[8086:6fe8] 7f:0d.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fe9] 7f:0d.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6fea] 7f:0d.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6feb] 7f:0d.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
IOMMU group 39:[8086:6ff8] 7f:0f.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ff9] 7f:0f.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffa] 7f:0f.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffb] 7f:0f.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffc] 7f:0f.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffd] 7f:0f.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
[8086:6ffe] 7f:0f.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Caching Agent (rev 01)
IOMMU group 40:[8086:6f1d] 7f:10.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R2PCIe Agent (rev 01)
[8086:6f34] 7f:10.1 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D R2PCIe Agent (rev 01)
[8086:6f1e] 7f:10.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Ubox (rev 01)
[8086:6f7d] 7f:10.6 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Ubox (rev 01)
[8086:6f1f] 7f:10.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Ubox (rev 01)
IOMMU group 41:[8086:6fa0] 7f:12.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Home Agent 0 (rev 01)
[8086:6f30] 7f:12.1 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Home Agent 0 (rev 01)
[8086:6f60] 7f:12.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Home Agent 1 (rev 01)
[8086:6f38] 7f:12.5 Performance counters: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Home Agent 1 (rev 01)
IOMMU group 42:[8086:6fa8] 7f:13.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Target Address/Thermal/RAS (rev 01)
IOMMU group 43:[8086:6f71] 7f:13.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Target Address/Thermal/RAS (rev 01)
IOMMU group 44:[8086:6faa] 7f:13.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel Target Address Decoder (rev 01)
IOMMU group 45:[8086:6fab] 7f:13.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel Target Address Decoder (rev 01)
IOMMU group 46:[8086:6fae] 7f:13.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Broadcast (rev 01)
[8086:6faf] 7f:13.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Global Broadcast (rev 01)
IOMMU group 47:[8086:6fb0] 7f:14.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel 0 Thermal Control (rev 01)
IOMMU group 48:[8086:6fb1] 7f:14.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel 1 Thermal Control (rev 01)
IOMMU group 49:[8086:6fb2] 7f:14.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel 0 Error (rev 01)
IOMMU group 50:[8086:6fb3] 7f:14.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 0 - Channel 1 Error (rev 01)
IOMMU group 51:[8086:6fbc] 7f:14.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Interface (rev 01)
[8086:6fbd] 7f:14.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Interface (rev 01)
[8086:6fbe] 7f:14.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Interface (rev 01)
[8086:6fbf] 7f:14.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 0/1 Interface (rev 01)
IOMMU group 52:[8086:6f68] 7f:16.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Target Address/Thermal/RAS (rev 01)
IOMMU group 53:[8086:6f79] 7f:16.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Target Address/Thermal/RAS (rev 01)
IOMMU group 54:[8086:6f6a] 7f:16.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Channel Target Address Decoder (rev 01)
IOMMU group 55:[8086:6f6b] 7f:16.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Channel Target Address Decoder (rev 01)
IOMMU group 56:[8086:6f6e] 7f:16.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Broadcast (rev 01)
[8086:6f6f] 7f:16.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Global Broadcast (rev 01)
IOMMU group 57:[8086:6fd0] 7f:17.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 1 - Channel 0 Thermal Control (rev 01)
IOMMU group 58:[8086:6fd1] 7f:17.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 1 - Channel 1 Thermal Control (rev 01)
IOMMU group 59:[8086:6fd2] 7f:17.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 1 - Channel 0 Error (rev 01)
IOMMU group 60:[8086:6fd3] 7f:17.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Memory Controller 1 - Channel 1 Error (rev 01)
IOMMU group 61:[8086:6fb8] 7f:17.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Interface (rev 01)
[8086:6fb9] 7f:17.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Interface (rev 01)
[8086:6fba] 7f:17.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Interface (rev 01)
[8086:6fbb] 7f:17.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DDRIO Channel 2/3 Interface (rev 01)
IOMMU group 62:[8086:6f98] 7f:1e.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6f99] 7f:1e.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6f9a] 7f:1e.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6fc0] 7f:1e.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6f9c] 7f:1e.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
IOMMU group 63:[8086:6f88] 7f:1f.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
[8086:6f8a] 7f:1f.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Power Control Unit (rev 01)
IOMMU group 64:[8086:6f00] 00:00.0 Host bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DMI2 (rev 01)
IOMMU group 65:[8086:6f02] 00:01.0 PCI bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 1 (rev 01)
IOMMU group 66:[8086:6f03] 00:01.1 PCI bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 1 (rev 01)
IOMMU group 67:[8086:6f04] 00:02.0 PCI bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 2 (rev 01)
IOMMU group 68:[8086:6f08] 00:03.0 PCI bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 3 (rev 01)
IOMMU group 69:[8086:6f0a] 00:03.2 PCI bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 3 (rev 01)
IOMMU group 70:[8086:6f28] 00:05.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Map/VTd_Misc/System Management (rev 01)
IOMMU group 71:[8086:6f29] 00:05.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D IIO Hot Plug (rev 01)
IOMMU group 72:[8086:6f2a] 00:05.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D IIO RAS/Control Status/Global Errors (rev 01)
IOMMU group 73:[8086:6f2c] 00:05.4 PIC: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D I/O APIC (rev 01)
IOMMU group 74:[8086:8d7c] 00:11.0 Unassigned class [ff00]: Intel Corporation C610/X99 series chipset SPSR (rev 05)
IOMMU group 75:[8086:8d3a] 00:16.0 Communication controller: Intel Corporation C610/X99 series chipset MEI Controller #1 (rev 05)
[8086:8d3b] 00:16.1 Communication controller: Intel Corporation C610/X99 series chipset MEI Controller #2 (rev 05)
IOMMU group 76:[8086:8d2d] 00:1a.0 USB controller: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #2 (rev 05)
Bus 001 Device 001 Port 1-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002 Port 1-1 ID 8087:800a Intel Corp. Hub
IOMMU group 77:[8086:8d10] 00:1c.0 PCI bridge: Intel Corporation C610/X99 series chipset PCI Express Root Port #1 (rev d5)
IOMMU group 78:[8086:8d1e] 00:1c.7 PCI bridge: Intel Corporation C610/X99 series chipset PCI Express Root Port #8 (rev d5)
IOMMU group 79:[8086:8d26] 00:1d.0 USB controller: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #1 (rev 05)
Bus 002 Device 001 Port 2-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 002 Port 2-1 ID 8087:8002 Intel Corp. 8 channel internal hub
Bus 002 Device 003 Port 2-1.1 ID 0930:6545 Toshiba Corp. Kingston DataTraveler 102/2.0 / HEMA Flash Drive 2 GB / PNY Attache 4GB Stick
Bus 002 Device 004 Port 2-1.2 ID 214b:7250 Huasheng Electronics USB2.0 HUB
Bus 002 Device 005 Port 2-1.4 ID 0624:0402 Avocent Corp. Cisco Virtual Keyboard and Mouse
Bus 002 Device 006 Port 2-1.2.1 ID 046d:c52b Logitech, Inc. Unifying Receiver
IOMMU group 80:[8086:8d44] 00:1f.0 ISA bridge: Intel Corporation C610/X99 series chipset LPC Controller (rev 05)
[8086:8d02] 00:1f.2 SATA controller: Intel Corporation C610/X99 series chipset 6-Port SATA Controller [AHCI mode] (rev 05)
IOMMU group 81:[8086:0953] 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)
[N:0:0:1] disk INTEL SSDPE2MX400G4__1 /dev/nvme0n1 400GB
IOMMU group 82:[1137:007a] 03:00.0 PCI bridge: Cisco Systems Inc VIC 1300 PCIe Upstream Port (rev 01)
IOMMU group 83:[1137:0041] 04:00.0 PCI bridge: Cisco Systems Inc VIC PCIe Downstream Port (rev a2)
IOMMU group 84:[1137:0041] 04:01.0 PCI bridge: Cisco Systems Inc VIC PCIe Downstream Port (rev a2)
IOMMU group 85:[1137:0042] 05:00.0 Unclassified device [00ff]: Cisco Systems Inc VIC Management Controller (rev a2)
IOMMU group 86:[1137:0040] 06:00.0 PCI bridge: Cisco Systems Inc VIC PCIe Upstream Port (rev a2)
IOMMU group 87:[1137:0041] 07:00.0 PCI bridge: Cisco Systems Inc VIC PCIe Downstream Port (rev a2)
IOMMU group 88:[1137:0041] 07:01.0 PCI bridge: Cisco Systems Inc VIC PCIe Downstream Port (rev a2)
IOMMU group 89:[1137:0041] 07:02.0 PCI bridge: Cisco Systems Inc VIC PCIe Downstream Port (rev a2)
IOMMU group 90:[1137:0041] 07:03.0 PCI bridge: Cisco Systems Inc VIC PCIe Downstream Port (rev a2)
IOMMU group 91:[1137:0043] 08:00.0 Ethernet controller: Cisco Systems Inc VIC Ethernet NIC (rev a2)
IOMMU group 92:[1137:0043] 09:00.0 Ethernet controller: Cisco Systems Inc VIC Ethernet NIC (rev a2)
IOMMU group 93:[1137:0045] 0a:00.0 Fibre Channel: Cisco Systems Inc VIC FCoE HBA (rev a2)
IOMMU group 94:[1137:0045] 0b:00.0 Fibre Channel: Cisco Systems Inc VIC FCoE HBA (rev a2)
IOMMU group 95:[1000:00ce] 0d:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS-3 3316 [Intruder] (rev 01)
[1:0:9:0] disk ATA Micron_P400e-MTF 0152 /dev/sdb 100GB
[1:0:10:0] disk ATA Micron_P400e-MTF 0152 /dev/sdc 100GB
[1:0:79:0] disk HGST HUH721010AL42C0 A3Z4 /dev/sdd 10.0TB
[1:0:80:0] disk HGST HUH721010AL42C0 A3Z4 /dev/sde 10.0TB
[1:0:82:0] disk HGST HUH721010AL42C0 A3Z4 /dev/sdf 10.0TB
[1:0:83:0] disk HGST HUH721010AL42C0 A3Z4 /dev/sdg 10.0TB
IOMMU group 96:[102b:0522] 0f:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
IOMMU group 97:[8086:6f04] 80:02.0 PCI bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 2 (rev 01)
IOMMU group 98:[8086:6f06] 80:02.2 PCI bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 2 (rev 01)
IOMMU group 99:[8086:6f28] 80:05.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Map/VTd_Misc/System Management (rev 01)
IOMMU group 100:[8086:6f29] 80:05.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D IIO Hot Plug (rev 01)
IOMMU group 101:[8086:6f2a] 80:05.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D IIO RAS/Control Status/Global Errors (rev 01)
IOMMU group 102:[8086:6f2c] 80:05.4 PIC: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D I/O APIC (rev 01)
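
(In case it's useful to anyone: a listing like the one above is essentially what you get by walking /sys/kernel/iommu_groups on any Linux host with the IOMMU enabled. A minimal sketch follows; it assumes a Linux system with VT-d/IOMMU active and the pciutils lspci tool installed.)

Code:
# Minimal sketch: enumerate IOMMU groups the same way the listing above does.
# Assumes Linux with the IOMMU enabled and 'lspci' (pciutils) installed.
import subprocess
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in sorted((group / "devices").iterdir()):
        # dev.name is the PCI address, e.g. "0000:08:00.0"
        desc = subprocess.run(["lspci", "-nns", dev.name],
                              capture_output=True, text=True).stdout.strip()
        print(f"IOMMU group {group.name}: {desc}")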

There is something to be said for official Cisco components. I have a couple of Intel 3500 series NVMe SSDs installed, and for the most part the fans run at a reasonable RPM. However, if I install my Intel 4510 series NVMe SSDs instead, the fans run at 13k RPM most of the time. My suspicion for the fans ramping up on the 4510s is that they report 80C to CIMC, whereas the 3500 series seems to report 0C. I'm not sure if there's a way to modify the firmware to change the "fake" temperature value reported to CIMC, but it's an idea if someone knows how to do so. My only complaint with the 3500 series 400GB SSD is that it has such a small endurance rating of just 219TBW. This pales in comparison to my 4510 1TB SSDs, which are rated for 1.92PBW.
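
If anyone wants to double-check what a given NVMe drive is actually reporting before pointing the finger at CIMC, smartctl can dump the drive's composite temperature from the host side (which may or may not match what the drive tells CIMC out-of-band). A quick sketch; it assumes smartmontools is installed, and /dev/nvme0 is just a placeholder device path:

Code:
# Quick check of the temperature an NVMe drive reports via SMART.
# Assumes smartmontools is installed; /dev/nvme0 is a placeholder path.
import json
import subprocess

out = subprocess.run(["smartctl", "-a", "-j", "/dev/nvme0"],
                     capture_output=True, text=True).stdout
data = json.loads(out)
temp = data.get("temperature", {}).get("current")
print(f"Drive-reported composite temperature: {temp} C")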
 
  • Like
Reactions: Samir

oddball

Active Member
May 18, 2018
There is a list of Cisco FRUs. I have no idea what FRU stands for; factory recognized unit, maybe? It's a list of hardware that Cisco will recognize, and it doesn't matter if it's a Cisco SKU or not. If it's on that list, it's "official".

Outside of that, these things will run with anything. It seems the 3260s are different from the M5 servers, in that on the M5s you can override the fans even with non-Cisco hardware.
 
  • Like
Reactions: Samir

oddball

Active Member
May 18, 2018
I should have mentioned that you can download the FRU list from Cisco's support website. You just need an account, nothing else. It's called the Cisco Capabilities Catalog.
 
  • Like
Reactions: Samir

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
There is a list of Cisco FRUs. I have no idea what FRU stands for; factory recognized unit, maybe? It's a list of hardware that Cisco will recognize, and it doesn't matter if it's a Cisco SKU or not. If it's on that list, it's "official".

Outside of that, these things will run with anything. It seems the 3260s are different from the M5 servers, in that on the M5s you can override the fans even with non-Cisco hardware.
FRU for me has always meant Field Replaceable Unit. Pretty much what it sounds like: parts that can be swapped (relatively) easily without returning the device to the factory.

Edit: also parts that can be added or upgraded once the unit has been deployed.
 

feffrey

New Member
Oct 3, 2014
Not sure if I'm just lucky or these drives are compatible, but besides the 10TB Cisco SAS drives, I have 2x Crucial MX500, 2x Kingston SA400S37, and an Intel SA400S37, and the fans aren't going nuts; all of them are sitting around ~7800 RPM.
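
If you'd rather watch the fan RPMs from a script instead of the CIMC UI, polling the sensors over IPMI is one option. A rough sketch; it assumes IPMI over LAN is enabled in CIMC and ipmitool is installed, and the IP address and credentials below are placeholders only:

Code:
# Rough sketch: poll fan sensor readings from the CIMC over IPMI.
# Assumes "IPMI over LAN" is enabled in CIMC and ipmitool is installed locally.
# The host, user, and password are placeholders -- substitute your own.
import subprocess

CIMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.1.50", "-U", "admin", "-P", "password"]

out = subprocess.run(CIMC + ["sdr", "type", "Fan"],
                     capture_output=True, text=True).stdout
print(out)  # one line per fan sensor, reading shown in RPM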
 
  • Like
Reactions: Samir