@iguy
Thanks again for your helpful and detailed response. I have now worked through your suggestions – unfortunately, it did not work.
But let me explain and clarify further what I have done and also answer the questions you posed.
======
First, as you suggested, I needed to verify that I did not have defective hardware. This was an excellent suggestion that I had somewhat ignored because doing so involved considerable work. In short, here is what I did:
I booted up a newer MSI consumer motherboard with an Intel i5-7600K CPU. This motherboard has a built-in M.2 NVMe slot, so I know that it supports NVMe drives. I then installed the NVMe-PCIe adapter card with the Intel 660p drive in an expansion slot and booted into Windows 10 Pro. I did not touch the BIOS – I just booted straight into W10. Short story: W10 immediately recognized the Intel 660p drive, as verified in both Device Manager (the drive shows up under Disk drives and the NVMe controller under Storage controllers) and in Disk Management (for good measure I initialized the disk there with a GPT partition table).
I did not even have to install the Intel NVMe drivers as you suggested in your last post. The drive was immediately recognized by W10 using the built-in Microsoft NVMe driver. For good measure, I also installed the Intel driver which just replaced the Microsoft storage controller with the Intel one. In both cases, the drive worked fine.
Now I knew the hardware worked.
=====
Back to the Quanta Winterfell node.
Next, I wanted to make sure I did not have a software/OS issue. That is why, as I explained in a prior post here, I tried several different operating systems on the Quanta node.
To recap and further clarify:
With your modded B10 BIOS flashed, I have successfully booted and run the following operating systems:
1) Windows Server 2019 (shares its code base with recent versions of Windows 10)
This OS runs great on the Quanta node and installs w/o any issues. However, it does not recognize the NVMe drive OR controller in Device Manager or Disk Management.
2) VMware vSphere ESXi 6.7 U1 – build 11675023 (latest version)
ESXi also runs great on the Quanta node and fully supports NVMe. This is my daily driver and I have run this OS for some time without any problems. However, ESXi too fails to recognize the NVMe drive, even though the other controllers (SATA, SAS) are fully recognized.
3) Intel Clear Linux
This is Intel’s version of Linux which is optimized to run on Intel hardware. As expected, this OS also runs great on the Quanta node since the motherboard is based on the Intel C602 chipset.
I have also been running this OS for a while with Linux kernels 4.19.x, 4.20.x and the latest available 5.0.x kernel. As a side note, this is my recommended Linux distro for the Quanta Winterfell nodes. Everything works great and it is very speedy.
As you might have guessed, this OS also fails to recognize the NVMe drive. Among other things, no NVMe devices show up in /dev, and there is nothing about nvme in the output of lspci, lsmod or lsblk (see the commands below).
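For completeness, these are the checks I ran on Clear Linux (the grep pattern and device names are just examples of what a detected drive would normally look like):
  lspci -nn | grep -i "non-volatile"   # a detected 660p would show up as a "Non-Volatile memory controller"
  lsmod | grep nvme                    # lists the nvme driver, if it is built as a module rather than into the kernel
  lsblk                                # a detected drive would appear as something like nvme0n1
  ls /dev/nvme*                        # device nodes such as /dev/nvme0 and /dev/nvme0n1
On my node, all of these come back empty as far as NVMe is concerned.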
I believe that running three operating systems as different as the above proves that there is no software issue preventing the NVMe drive from being recognized in the Quanta node. Even if one of these OSes failed to recognize the NVMe drive for some reason, it is unlikely that all three would.
=====
I have now ruled out anything wrong with the NVMe hardware or the software configuration.
As I see it, the only thing left is the combination of this particular Intel 660p NVMe drive running in the Quanta node. My node is the same hardware version as yours – it has the A07 sticker next to the Ethernet port.
I have also ruled out a problem with the PCIe riser card, as I have been using this riser for a while with an AMD HD 3450 graphics card. I have also tried booting the Quanta both with and without the graphics card, just to see if it would make a difference.
I have verified the BIOS boot settings as you suggested. I disabled all the boot devices and enabled only UEFI boot. This works as expected: I can press F11 during POST and select the boot device. To boot Windows, I select the Windows Boot Manager. To boot Intel Clear Linux, I select the internal (UEFI) SATA hard drive directly (Clear Linux and Windows Server are installed on two different primary partitions on the SATA drive). To boot VMware ESXi, I select the USB port that has a USB stick attached where ESXi is installed. Everything works great that way and lets me boot three different operating systems on the same node.
I have also looked at all the settings in the BIOS. I do not claim to understand all of them, but I have looked numerous times (more than I care to admit) for anything that would indicate that the NVMe drive is recognized in the PCIe slot. In fact, I do not see a difference in any of the BIOS settings whether the NVMe card/drive is mounted in the PCIe slot or not.
Specifically, I do not see, anywhere in the BIOS, the PATA3 boot device option you mentioned in your original response. I have the exact same boot device options whether the NVMe card/drive is mounted in the PCIe slot or not. If seeing PATA3 is supposed to be the “proof” that the NVMe drive is recognized by the BIOS, I can say that it isn’t recognized here.
Nothing in my BIOS settings indicates that the NVMe drive is recognized. I am not sure whether any settings in the BIOS need to be changed for the BIOS itself to recognize the drive. In my experience with other motherboards, the BIOS has always recognized attached hardware directly, but sometimes you need to change BIOS settings for the OS to recognize the hardware.
I should also mention again that I do not even care whether I can boot from the NVMe drive or not. I just want the Quanta to recognize it as any other storage device, as I plan to keep booting from either the SATA drive or the USB stick as I have been doing so far. I have noticed many people having problems booting from NVMe drives on older hardware, but that is not what I am trying to do.
Any suggestions on how to proceed? I believe the problem has now been narrowed down to the BIOS itself not recognizing the Intel 660p drive.
Thanks for all your help!
@iguy
See attached BIOS dump from my Quanta node flashed with your modded B10 BIOS file.
I am also very curious to see where your settings differ from mine. Would you mind comparing my file with a file from one of your B10 BIOS nodes?
Hi,
has anyone been able to use IPMI to remote control the server? I tried ipmitool, but without success.
Maybe I don't know the proper username/password. I can see that the server BMC gets an IP from DHCP,
but I was not able to establish communication via IPMI to that IP address.
tnx
Jan
The BIOS and drivers (Windows and Linux) can be downloaded for the Quanta system here. Please note this is not the exact system, but it is a compatible motherboard (download the BIOS and read the release notes to verify if you want). I wonder if it would work on a Wiwynn? The mobo BIOS chip is supposed to be replaceable... might be worth trying for someone who needs Windows.
Download Center
The system is considered an F03B motherboard.
A better manual with bios settings and descriptions can be found here...
QUANTA RACKGO X SERIES F03A TECHNICAL MANUAL Pdf Download.
This manual is for a different model, so although the BIOS looks the same, the rest of the manual is only partially usable; i.e., the info about IPMI and remote KVM does not apply to the Windmill boards we have. Other info may or may not apply directly, but it is very similar.
Hope it helps.
nmap says only the IPMI port (623/udp) is available, and I have not found any answer for it.
I don't know if that's new or not, but IPMI for the server is working from Supermicro's IPMIView.
Just scan the network for IPMI 2.0 and it will auto-discover the IPMI of your nodes.
Log in with default admin:admin credentials.
Power Up/Power Down/Reset worked for me. Other functions did not work, though they may need additional testing.
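If you want to give ipmitool another try, the equivalent power commands over the LAN interface would look roughly like this – the IP address below is just a placeholder for your node's BMC address, admin:admin are the defaults mentioned above, and I have only verified these functions through IPMIView, so treat the ipmitool versions as untested:
  ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power status   # query current power state
  ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power on       # power up
  ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power off      # power down
  ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power reset    # hard reset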
Sorry about the delay. Busy with work and some home chores. Finally, the snow has melted over here. Anyways, I've looked at your BIOS settings and would change these lines:
Lines:
526 = 1 - Show hidden settings **
4752 = [02] Auto - PCIe lane port config ***
4779 = [02] Auto - PCIe lane port config ***
4889 = [00] Disabled - Disable 4G decoding (might be interfering with the PCIe storage drive) *
*Do you need 4G decoding in order to use your graphics card? Do you use multiple GPUs? Coin mining? (There is a quick check below these notes.)
I have 2 GTX 1060s (same node) without this setting turned on and they are working great.
If you would like to read about it check this out:
What is "Above 4G decoding"?
**Will enable you to view all adjustable settings.
***You might be assigning more PCIe lanes than needed. Leaving it at "Auto" might fix that.
Note: the setting above is for a port that is not physically implemented on this node; while the BIOS/CPU/south bridge can assign lanes to the non-existent port, they would be wasted nonetheless.
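If you want to check whether anything on your node actually needs above-4G decoding, one quick way from Linux is to look at the memory regions assigned to the card – the PCI address below is just an example, use whatever lspci reports for your GPU or NVMe adapter:
  lspci                                          # find the bus address of the card, e.g. 01:00.0
  sudo lspci -vv -s 01:00.0 | grep "Memory at"   # regions mapped above 0xffffffff mean above-4G decoding is actually being used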
Please let me know how it goes.
Thanks
@iguy
Success!
In the end, it was very simple. All I had to do was manually enable PCIe bifurcation in the BIOS. Both you and I had this setting on Auto, so I did not focus on it at first. However, force-enabling bifurcation solved all my NVMe problems. I can now access my Intel 660p in Windows.
Hopefully this will help others with similar issues.
As expected, a lot of the things I tried along the way turned out not to matter. For example, installing the Intel NVMe driver was not necessary. In fact, I am pretty sure that if I had just enabled bifurcation right away, without any other changes, it would have worked immediately.
Awesome!!! Bifurcation!? WT*! Lol! I'm glad it works! Quick question.
What exactly works?
Can you see the PATA3 drive in the BIOS?
Can you see the EFI boot loader label in the BIOS? (It shows "Windows Boot Manager" for Windows.)
Can you boot from it?
Thanks
So far, I have tested Windows Server 2019 and VMware ESXi 6.7 and everything works as expected. The NVMe drive shows up in Disk Management in Windows and can be manipulated just like any other storage device. Similarly for ESXi (see the commands below).
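For anyone who wants to double-check on the ESXi side, the controller and the drive can be listed from the ESXi shell – these are standard esxcli commands, and the adapter/device names will of course differ from system to system:
  esxcli storage core adapter list   # the NVMe controller should appear as an additional vmhba
  esxcli storage core device list    # the Intel 660p should be listed among the storage devices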
As I mentioned, it was never my intention to boot from the NVMe drive. For that, I use a regular SATA hard drive for Windows and a USB drive for ESXi. I just need a fast storage device for a sizeable database (1.5 TB+) that I am working with.
In the BIOS, under BBS Priorities (Boot order), the NVMe drive shows up as "PATA :SS".
The EFI boot loaders (including "Windows Boot Manager") were there all along in the BIOS – that did not change when I enabled bifurcation. The only thing that changed was the appearance of "PATA :SS".
Question for you: is it worth updating from the B10 to the B11 BIOS? When I tried to update directly from my original B08 to your B11 BIOS, I received an error message, but maybe I can update to B11 from B10?
Since you asked, I tested and was able to boot directly from the NVMe drive with Windows Server 2019.
Cool, thanks for letting me know. Yeah, the NVMe module definitely loaded properly. The "Windows Boot Manager" option is showing up from the SSD bootloader. Anyways, I'm glad it works.
I strongly recommend installing the NVMe driver from Intel if you are running Windows as the host OS. The driver might have more features dealing with cache/firmware on the drive (e.g. the Samsung 960 Pro – the cache is disabled if you are not using the Samsung driver; pain, I know..). Try running a benchmark with and without the Intel driver.
On the BIOS update.. honest answer, no. The change is minimal, if any. I noted some minor module upgrades and a couple of power/logging variables changed. Are you interested in performance? Computing performance? Power saving / idle power saving? Disk I/O speed?
I also have 3 of the quanta boards with all of the MiniSAS connectors populated. Two are for SATA and one for SAS (if I remember correctly).
Nice, I didn't know any were manufactured with the connectors populated! I soldered the MiniSAS header for 4 additional SATA ports.