> Will test it this weekend and report on the results.

You have to flash the firmware with special flags to enable UEFI support; I forget what they are right now. And no, it won't work with a normal BIOS, because it requires a driver to make the card show up as a block device. That's the magic of UEFI: you can load a low-level driver into the firmware before the OS boots.
If it works with UEFI shouldn't it work with bios too?
> You can't boot from the iodrive2.

Thanks, I think I may end up getting the 900p for my ESXi.
Performance-wise, the Optane 900p outperforms the ioDrive in all benchmarks, synthetic and "real life" alike.
The 800k IOPS figure was for the ioDrive2 Duo (two normal ioDrive2s on one PCB).
> You have to flash the firmware with special flags to enable UEFI support. I forget what they are right now. And no, it won't work with a normal BIOS because it requires a driver to make it look like a block device. That's the magic of UEFI, you can load a low-level driver into the firmware before the OS boots.

Is that special firmware publicly available, or is it just something that was internally developed and never made public?
I think you'll need an HP/Dell/Supermicro motherboard to get it to work.
> Is that special firmware publicly available, or is it just something that was internally developed and never made public?

Yeah, it's in the firmware file. When you flash it, there's a flag like --enable-uefi or something, but it's not documented. You've got to run strings on it and see what UEFI flags exist.
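As a sketch of that strings trick (the file name below is a made-up stand-in; in practice you'd point this at the actual flash utility binary shipped with the Fusion-io driver package):

```python
# Toy demo: pull "--flag"-shaped printable strings out of a binary and
# keep the UEFI-related ones. "demo-fw-tool" is a fabricated stand-in
# for the real flash utility, which may be named differently.
import re

with open("demo-fw-tool", "wb") as f:  # fake binary with embedded flags
    f.write(b"\x7fELF...\x00--enable-uefi\x00--force\x00junk")

data = open("demo-fw-tool", "rb").read()
uefi_flags = [m.decode() for m in re.findall(rb"--[a-z][a-z-]*", data)
              if b"uefi" in m]
print(uefi_flags)  # ['--enable-uefi']
```

The same idea works from a shell with `strings <binary> | grep -i uefi`.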
> I purchased a Fusion-IO ioScale2 1.65TB from eBay a few weeks ago. The fan on the board is quite loud and whiny. Has anyone replaced the fan on one of these boards or found a way to adjust its use when not required?

Rip the shroud off and stick a larger passive heatsink on, but make sure it has airflow.
> As far as I am aware vSphere's VMFS is still 512b, so that might create high RAM demands, but based on a post from acquacow it would depend on the guest OS doing the writing, correct? So the RAM usage mentioned in the docs is worst case, based on heavy 512b IO?

That would be correct. The DRAM requirements are based on the worst case.
> Would formatting the drive as 4k and configuring the drives with 512e in vSphere 6.5 be possible and provide any benefits?

This would at worst put you at the max memory utilization for 4k workloads, but it could create additional write amplification etc., which also really isn't an issue for home use. If you are running MS SQL, Oracle, and some other databases, they won't install correctly without 512b low-level formatting.
> I am not sure at what level the VSL memory mapping is happening, so maybe when the OS talks 512b but the drive internally handles everything as 4k, it would require less memory while sacrificing some speed.

This is a long answer, but the short version is that the 4k mapping happens at both the drive and the driver level. 512b writes would be slightly slower, but 4k and above would run at normal speeds.
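For a rough feel of why sector size drives the DRAM requirement, here's a back-of-the-envelope sketch assuming the VSL keeps one translation entry per mapped logical sector. The per-entry byte count is an invented placeholder, not a figure from the VSL docs; the point is only the 8x ratio between 512b and 4k formatting:

```python
# Hedged back-of-the-envelope: host DRAM use scales with the number of
# mapped sectors, i.e. capacity / sector_size. BYTES_PER_ENTRY is a
# made-up placeholder; check the VSL user guide for the real overhead.
CAPACITY_BYTES = 1650 * 10**9   # ioScale2 1.65TB
BYTES_PER_ENTRY = 32            # assumption, illustration only

def worst_case_dram_gib(sector_size: int) -> float:
    entries = CAPACITY_BYTES // sector_size
    return entries * BYTES_PER_ENTRY / 2**30

print(f"512B sectors: {worst_case_dram_gib(512):.1f} GiB worst case")
print(f"4KiB sectors: {worst_case_dram_gib(4096):.1f} GiB worst case")
```

Whatever the real per-entry cost is, formatting at 4k cuts the sector count, and so the worst-case RAM demand, by a factor of eight versus 512b.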
Sorry, what is ZoL? I haven't really tried the cards with Proxmox or recent Linux kernels, so sorry, can't help you there.
I did upgrade one of my whitebox machines with an FX-4100 CPU running ESXi 6.5 to 6.7 with the Fusion-IO card in it, and so far the drivers seem to be working fine, so we can hopefully run these cards in our ESXi labs for a few more years.
As soon as my schedule allows for some tinkering time, I will take a look at running a Fusion-IO card with Proxmox or Debian. I'd really like to test it with ZFS, but FreeBSD is not really supported atm, so ZoL (ZFS on Linux) might be a good alternative.