I assume 'build-essential' is installed. If yes, what do you get with a checkout of v5.6.0?

root@debian:/home/temp/Git/iomemory-vsl# make dkms
make: *** No rule to make target 'dkms'. Stop.
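For reference, trying that suggestion would look something like this (assuming the repo was cloned to the path shown above and that the v5.6.0 tag actually carries the dkms target):

cd /home/temp/Git/iomemory-vsl
git fetch --tags
git checkout v5.6.0
make dkms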
mkfs.xfs -m crc=0 -f /dev/fioa1

Maybe it's just my setup, but these do not perform the same with ext4. Also, I try to overprovision them, even with these high 11 PBW's ... just my subjective opinion. Then go with @Marsh's steps to partition them:

fio-detach /dev/fct0
fio-format /dev/fct0
fio-attach /dev/fct0

and do a mount:

mount /dev/fioa1 /mnt/io2
chown gb00s:gb00s /mnt/io2

Of course, you can mount it in /etc/fstab as well:

/dev/fioa1 /mnt/io2 xfs defaults,noatime 0 0

I know the noatime option may not be necessary with today's xfs, but .... 100 users, 99 opinions ...
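A quick way to confirm the mount and its options took effect (same mount point as above):

findmnt /mnt/io2
xfs_info /mnt/io2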
BTW, @acquacow, maybe you know how to restore a gen3 ioDrive with lost lebmap information? Is it possible without internal tools?
Well, I know it's 2021 now, but anyway. If you can't get sure-erase to complete (it wipes the leb-map), there might be something else wrong.
There might be some more hidden flags for sure-erase that might get it to complete. It doesn't sound like anyone used --purge, which would also wipe the FPGA and brick the card...
In the past, I have had to load the drivers into minimal mode and alternate sure-erase and format until one eventually completed. I probably went back and forth ~50 times before I had success.
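As a rough sketch of that alternating approach (assuming the VSL utilities are installed, the card shows up as fct0, and the driver has already been brought up in minimal mode per the VSL docs; confirmation prompts/flags are omitted):

for i in $(seq 1 50); do
    echo "pass $i: fio-sure-erase"
    fio-sure-erase /dev/fct0 && break
    echo "pass $i: fio-format"
    fio-format /dev/fct0 && break
done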
I have a card with the same problem. Added an "iodrive_load_eb_map=0" param to the driver and rebooted the system.

How do you add the "iodrive_load_eb_map=0" param to the driver? Could you show an example?
Thanks.

I'm using Linux, so I added the following line to /etc/modprobe.d/iomemory-vsl4.conf:

options iomemory-vsl4 iodrive_load_eb_map=0
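To spell that out (a sketch; it assumes the VSL4 driver, i.e. the module name iomemory-vsl4, and root privileges; a reboot works just as well as reloading the module):

echo "options iomemory-vsl4 iodrive_load_eb_map=0" > /etc/modprobe.d/iomemory-vsl4.conf
modprobe -r iomemory-vsl4    # will fail if the device is attached or in use; detach first or reboot
modprobe iomemory-vsl4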
I've had a pair each of the Cisco-branded 1TB ioMemory PX600s in two separate Cisco C240 M3 servers. We have had them in the servers doing burn-in for some time now, but I just ran some basic benchmarks and discovered that in each server, one card's read rate is half that of the other.
The write rates are pretty similar across all 4 drives @ 1.6GB/s; only the read rates seem to be impacted.
Card A : 1.2GB/s
Card B : 2.8GB/s
I've tried a fio-format in 512B vs 4096B - 512B gives me a 20% bump on the slower card to ~1.5GB/s, but still not at the ~2.8GB/s that the other cards can get.
The only thing I can see that's different is that one card is in an x8 slot and the other is in an x16 slot. (TBH, I don't know which is in which.)
However, fio-status reports both are linked at x8 (the cards are x8 after all). Cisco officially supports up to 3 ioMemory cards per C240 M3, so it shouldn't be an issue.
I've attached the brief output from fio-status; if there is any other information from the verbose status output that I could provide to help troubleshoot (remotely), that would be great. The servers have been languishing in a DC for a while now, and due to various lockdown efforts, remote is the first approach I need to take.
Any ideas on how to get both drives up to par?
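The numbers above are from basic benchmarks; a large-block read test along these lines would be one way to reproduce the comparison (a sketch only: "fio" here is the flexible I/O tester, not the Fusion-io utilities, and the job parameters and \\.\PhysicalDriveN paths are assumptions based on the status output below):

fio --name=seqread --filename=\\.\PhysicalDrive4 --ioengine=windowsaio --direct=1 --rw=read --bs=1M --iodepth=32 --runtime=60 --time_based
fio --name=seqread --filename=\\.\PhysicalDrive3 --ioengine=windowsaio --direct=1 --rw=read --bs=1M --iodepth=32 --runtime=60 --time_based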
Adapter: ioMono (driver 4.3.7)
Cisco UCS 1000GB MLC Fusion ioMemory PX, Product Number:PFIO1000MP, SN:aaabbbccc
PCIe Power limit threshold: 24.75W
Connected ioMemory modules:
fct0: 05:00.0, Product Number:PFIO1000MP, SN:aaabbbccc
fct0 Attached
ioMemory Adapter Controller, Product Number:PFIO1000MP, SN:aaabbbccc
PCI:05:00.0
Firmware v8.9.9, rev 20200113 Public
1000.00 GBytes device size
Internal temperature: 43.31 degC, max 52.66 degC
Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
Contained Virtual Partitions:
fct0: ID:0, UUID:93d62e59-fc51-4c5e-acc0-a219d8391a02
fct0 State: Online, Type: block device, Device: \\?\PhysicalDrive4
ID:0, UUID:93d62e59-fc51-4c5e-acc0-a219d8391a02
1000.00 GBytes device size
Adapter: ioMono (driver 4.3.7)
Cisco UCS 1000GB MLC Fusion ioMemory PX, Product Number:PFIO1000MP, SN:dddeeefff
PCIe Power limit threshold: 24.75W
Connected ioMemory modules:
fct1: 04:00.0, Product Number:PFIO1000MP, SN:dddeeefff
fct1 Attached
ioMemory Adapter Controller, Product Number:PFIO1000MP, SN:dddeeefff
PCI:04:00.0
Firmware v8.9.9, rev 20200113 Public
1000.00 GBytes device size
Internal temperature: 40.36 degC, max 46.26 degC
Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
Contained Virtual Partitions:
fct1: ID:0, UUID:cd139c56-129d-4aea-9107-4efd6408f2b5
fct1 State: Online, Type: block device, Device: \\?\PhysicalDrive3
ID:0, UUID:cd139c56-129d-4aea-9107-4efd6408f2b5
1000.00 GBytes device size

Is that the full status? Dump "fio-status -a" as well as "fio-pci-check".
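That is, something along these lines (redirecting so the output can be shared; the verbose status should also show the negotiated PCIe link width for each card):

fio-status -a > fio-status-a.txt
fio-pci-check > fio-pci-check.txt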