Potential Deal: 2 x Dual 2011 nodes @$199, Quanta Openrack


kfriis

Member
Apr 8, 2015
@iguy

Thanks again for your helpful and detailed response. I have now worked through your suggestions – unfortunately, they did not work.
Let me explain and clarify further what I have done, and also answer the questions you posed.

======

First, as you suggested, I needed to verify that I did not have defective hardware. This was an excellent suggestion that I had somewhat ignored because doing so involved considerable work. In short, here is what I did:

I booted up a newer MSI consumer motherboard with an Intel i5-7600K CPU. This motherboard has a built-in M.2 NVMe slot, so I know it supports NVMe drives. I then installed the NVMe-PCIe adapter card with the Intel 660p drive in an expansion slot and booted into Windows 10 Pro. I did not touch the BIOS – I just booted straight into W10. Short story: W10 immediately recognized the Intel 660p drive, as verified in both Device Manager (the drive shows up under Disk drives and the NVMe controller under Storage controllers) and in Disk Management (for good measure, I initialized the disk there with a GPT partition table).

I did not even have to install the Intel NVMe drivers as you suggested in your last post. The drive was immediately recognized by W10 using the built-in Microsoft NVMe driver. For good measure, I also installed the Intel driver which just replaced the Microsoft storage controller with the Intel one. In both cases, the drive worked fine.

Now I knew the hardware worked.

=====

Back to the Quanta Winterfell node.

Next, I wanted to make sure I did not have a software/OS issue. That is why, as I explained in a prior post here, I tried several different operating systems on the Quanta node.

To recap and further clarify:

With your modded B10 BIOS flashed, I have successfully booted and run the following operating systems:

1) Windows Server 2019 (shares its code base with recent versions of Windows 10)

This OS runs great on the Quanta node and installs w/o any issues. However, it does not recognize the NVMe drive OR controller in Device Manager or Disk Management.


2) VMware vSphere ESXi 6.7 U1 – build 11675023 (the latest version)

ESXi also runs great on the Quanta node, and the hypervisor itself fully supports NVMe. This is my daily driver and I have run this OS for some time without any problems. Yet ESXi also fails to recognize the NVMe drive, even though the other controllers, like SATA and SAS, are fully recognized.


3) Intel Clear Linux

This is Intel’s version of Linux which is optimized to run on Intel hardware. As expected, this OS also runs great on the Quanta node since the motherboard is based on the Intel C602 chipset.

I have also been running this OS for a while with Linux kernels 4.19.x, 4.20.x, and the latest available 5.0.x kernel. As a side note, this is my recommended Linux distro for the Quanta Winterfell nodes. Everything works great and it is a very speedy Linux distro.

As you might have guessed, this OS also fails to recognize the NVMe drive. Among other things, no NVMe devices show up in /dev, and nothing NVMe-related appears in the output of lspci, lsmod, or lsblk.
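For reference, these are the kinds of checks described above, runnable on any Linux system. On a node where the drive is detected, each command should show an nvme entry (device names are whatever your system assigns):

```shell
# List PCI devices and filter for NVMe controllers; on this node the
# grep matches nothing, meaning the BIOS never enumerated the drive.
lspci -nn | grep -i nvme

# Check whether the nvme kernel module is loaded at all.
lsmod | grep -i nvme

# Look for the block devices the kernel creates for NVMe namespaces
# (e.g. /dev/nvme0n1 on a system where the drive is visible).
ls /dev/nvme* 2>/dev/null
lsblk
```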

I believe that running three operating systems as different as these proves there is no software issue preventing the NVMe drive from being recognized in the Quanta node. Even if one of them failed to recognize the NVMe drive for some reason, it is unlikely that all three would.

=====

I have now ruled out anything wrong with the NVMe hardware or the software configuration.

As I see it, the only thing left is the combination of this particular Intel 660p NVMe drive running in the Quanta node. My node is the same hardware version as yours – it has the A07 sticker next to the Ethernet port.

I have also ruled out a problem with the PCIe riser card as I have been using this riser for a while with an AMD HD 3450 graphics card. I have also tried booting the Quanta with or without the graphics card just to see if it would make a difference.

I have verified the BIOS boot settings as you suggested. I disabled all the boot devices and just enabled UEFI boot. This works as expected: I can press F11 during POST and select the boot device. To boot Windows, I select the Windows Boot Manager. To boot Intel Clear Linux, I select the internal (UEFI) SATA hard drive directly (Clear Linux and Windows Server are installed on two different primary partitions on the SATA drive). To boot VMware ESXi, I select the USB port with the attached USB stick where ESXi is installed. Everything works great that way and lets me boot three different operating systems on the same node.

I have also looked at all the settings in the BIOS. I do not claim to understand all of them, but I have looked numerous times (more than I care to admit) for anything that would indicate the NVMe drive being recognized in the PCIe slot. In fact, I do not see a difference in any of the BIOS settings whether the NVMe card/drive is mounted in the PCIe slot or not.

Specifically, nowhere in the BIOS do I see the PATA3 boot device option you mentioned in your original response. The boot device options are exactly the same whether or not the NVMe card/drive is mounted in the PCIe slot. If seeing PATA3 is supposed to be the "proof" that the BIOS recognizes the NVMe drive, then it does not.

Nothing in my BIOS settings indicates that the NVMe drive is recognized. I am not sure whether any BIOS settings need to be changed for the BIOS itself to recognize the drive. In my experience with other motherboards, the BIOS has always recognized attached hardware directly, though sometimes you need to change BIOS settings for the OS to recognize the hardware.

I should also mention again that I do not even care whether I can boot from the NVMe drive. I just want the Quanta to recognize it as any other storage device, as I plan to keep booting from either the SATA drive or the USB stick, as I have been doing so far. I have noticed many people having problems booting from NVMe drives on older hardware, but that is not what I am trying to do.

Any suggestions on how to proceed now? I believe the problem has been narrowed down to the BIOS itself not recognizing the Intel 660p drive.

Thanks for all your help!
 

iguy

New Member
Feb 23, 2017

Hi there, great steps you took troubleshooting this. Thanks for being clear with the write up as well.

OK, now I see where you stand. I think some setting in your BIOS might differ from mine, though I can't think of one that would cause this. Oddly enough, not even the OS can see the drive.

It might not matter much, but what pinout configuration do you have on your riser?
x8 and x8? That would be advisable.

Please download the file I shared a few posts back. In it you will find a folder named SCEWIN (the AMISCE tool), an application that dumps the BIOS settings to a plain-text file. Check the "Command to run.txt" file.
There is an "nvram.txt" file in the folder; before dumping your configuration, rename this file so you can compare the settings later.

If you are running a 64-bit system (Windows Server 2019 certainly is):
Open a command prompt window (cmd) as administrator and navigate to the folder.
The command is: SCEWIN_64 /o /s nvram.txt /h Hii.db /v /q
This dumps your current settings to a text file (nvram.txt).
Compare this file with the one you previously renamed.
I use WinMerge to compare the files.
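Putting the steps above together, a sketch of the dump-and-compare workflow (file names as described above; `fc` is the built-in Windows comparison command, in case WinMerge is not at hand):

```shell
:: Run from an elevated command prompt inside the SCEWIN folder.
:: Keep the settings file that shipped with the tool for comparison.
ren nvram.txt nvram-reference.txt

:: Dump this node's current BIOS settings to a fresh nvram.txt.
SCEWIN_64 /o /s nvram.txt /h Hii.db /v /q

:: Quick textual diff; WinMerge gives a nicer side-by-side view.
fc nvram-reference.txt nvram.txt
```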

Please let me know what you find.
Share your config file. I'm curious to see what might be causing this.

Thanks
 

kfriis

Member
Apr 8, 2015
@iguy

See attached BIOS dump from my Quanta node flashed with your modded B10 BIOS file.

I am also very curious to see where your settings differ from mine. Would you mind comparing my file with a file from one of your B10 BIOS nodes?
 

Attachments

iguy

New Member
Feb 23, 2017
Sorry about the delay; I've been busy with work and some home chores. Finally, the snow melted over here. Anyway, I've looked at your BIOS settings and would change these lines:

Lines:
526 = 1 - Show hidden settings**
4752 = [02]Auto - PCIe lane port config***
4779 = [02]Auto - PCIe lane port config***
4889 = [00]Disabled - Disable Above 4G Decoding (might be interfering with the PCIe storage drive)*

*Do you need Above 4G Decoding in order to use your graphics card? Do you use multiple GPUs? Coin mining?
I have two GTX 1060s (same node) without this setting turned on and they work great.
If you would like to read about it check this out:
What is "Above 4G decoding"?

**Will enable you to view all adjustable settings.

***You might be assigning more PCIe lanes than needed; leaving it at "Auto" might fix that.
Note: this setting is for a port that is not physically implemented on this node. While the BIOS/CPU/southbridge can assign lanes to the non-existent port, it is waste nonetheless.

Please let me know how it goes.

Thanks
 

liqserv

New Member
Dec 30, 2015
Hi,

has anyone been able to use IPMI to remotely control the server? I tried ipmitool, but without success.
Maybe I don't know the proper username/password. I can see that the server's BMC gets an IP from DHCP,
but I was not able to establish IPMI communication with that IP address.

Thanks,
Jan
I don't know if that's new or not, but IPMI for the server is working from Supermicro's IPMIView

Just scan the network for IPMI 2.0, it will auto find IPMI of your nodes.
Log in with default admin:admin credentials.
It worked for Power Up / Power Down / Reset for me. Other functions did not work, though they may need additional testing.
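For anyone preferring the command line over IPMIView, the same power controls can be reached with ipmitool. This is a sketch, assuming the default admin:admin credentials reported above and a hypothetical BMC address of 192.168.1.50 (substitute the DHCP address your node's BMC received):

```shell
# Query chassis power state over IPMI 2.0 (lanplus interface, UDP 623).
ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power status

# Power up / power down / reset, matching the IPMIView functions above.
ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power on
ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power off
ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power reset
```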
 

Attachments

Dmytro Burianov

New Member
Mar 30, 2019
The BIOS and drivers (Windows and Linux) for the Quanta system can be downloaded here. Please note this is not the exact system, but a compatible motherboard (download the BIOS and read the release notes to verify, if you want). I wonder if it would work on a Wiwynn? The motherboard's BIOS chip is supposed to be replaceable; it might be worth trying for someone who needs Windows.

Download Center

The system is considered an F03B motherboard.

A better manual with bios settings and descriptions can be found here...

QUANTA RACKGO X SERIES F03A TECHNICAL MANUAL Pdf Download.

This manual is for a different model, so although the BIOS looks the same, the rest of the manual is only partially applicable; i.e., the info about IPMI and remote KVM does not apply to the Windmill boards we have. Other info may or may not apply directly, but is very similar.

Hope it helps.
Regarding KVM on this (F03A) motherboard: IPMI via ipmitool works, but the GUI does not.
 

Dmytro Burianov

New Member
Mar 30, 2019
nmap says only the IPMI port (623/udp) is available, and I got no answer from it.
 

kfriis

Member
Apr 8, 2015
@iguy

Thank you for all your help - it is very much appreciated.

I am sorry to say that none of the suggestions worked.

I changed the BIOS settings as you suggested, and even some others, to make my settings as close as possible to yours, including turning off Above 4G Decoding. No matter what, the BIOS is clearly not recognizing the NVMe drive.

And, as before, neither Windows, Linux, nor VMware ESXi recognizes the drive either. There are no PATA or similar drives in the boot devices area of the BIOS. Nothing anywhere indicates anything related to NVMe or an extra storage device.

I know the hardware is working as I have tested the drive/PCIe adapter in a consumer motherboard - as previously mentioned.

More and more, I am thinking that the particular BIOS mod you provided is not working in my case - for some reason.

I am reaching a point of desperation where I am considering trying another BIOS mod, even though it doesn't make much sense, since you have yours working on the exact same board as mine.

Do you have the UNmodified B10 and B11 files? Or are they already included in the ZIP file you posted above?

I am considering following this procedure - what do you think about it? They claim it should work in ANY Intel board:
[Guide] How to get full NVMe support for all Systems with an AMI UEFI BIOS

Thanks.
 

kfriis

Member
Apr 8, 2015
@iguy

Success!

In the end, it was very simple: all I had to do was manually enable PCIe bifurcation in the BIOS. Both of us had the setting on Auto, so I did not focus on it at first. However, force-enabling bifurcation solved all my NVMe problems. I can now access my Intel 660p in Windows.

Hopefully this will help others with similar issues.
 

iguy

New Member
Feb 23, 2017

Awesome!!! Bifurcation!? WT*! Lol! I'm glad it works! Quick questions:

What exactly works?
Can you see the PATA3 drive in the BIOS?
Can you see the EFI boot loader label in the BIOS (it shows "Windows Boot Manager" for Windows)?
Can you boot from it?

Thanks
 

kfriis

Member
Apr 8, 2015
As expected, a lot of the things I tried along the way turned out not to matter. For example, installing the Intel NVMe driver was not necessary. In fact, I believe that if I had just enabled bifurcation right away, without any other changes, it would have worked immediately.

So far, I have tested Windows Server 2019 and VMWare ESXi v6.7 and everything works as expected. The NVMe drive shows up in Disk Management in Windows and can be manipulated just as any other storage device. Similar for ESXi.

As I mentioned, it was never my intention to boot from the NVMe drive; for that, I use a regular SATA hard drive for Windows and a USB drive for ESXi. I just need a fast storage device for a sizeable database (1.5 TB+) that I am working with.

In the BIOS, under BBS Priorities (Boot order), the NVMe drive shows up as "PATA :SS".

The EFI boot loaders (including "Windows Boot Manager"), were there all along in the BIOS - that did not change when I enabled bifurcation. The only thing that changed was the appearance of "PATA :SS".

Question for you: is it worth updating from the B10 to the B11 BIOS? When I tried to update directly from my original B08 to your B11 BIOS, I received an error message, but maybe I can update to B11 from B10?
 

iguy

New Member
Feb 23, 2017
Cool, thanks for letting me know. Yeah, the NVMe module definitely loaded properly; the "Windows Boot Manager" option shows up from the SSD's bootloader. Anyway, I'm glad it works.

I strongly recommend installing the NVMe driver from Intel if you are running Windows as the host OS. The driver might have more features for dealing with the drive's cache/firmware (e.g., on the Samsung 960 Pro, the cache is disabled unless you use the Samsung driver; a pain, I know). Try running benchmarks with and without the Intel driver.

On the BIOS update: honest answer, no. The change is minimal, if any. I noted some minor module upgrades and a couple of power/logging variables changed. What are you interested in: computing performance, power saving / idle power saving, or disk I/O speed?
 

kfriis

Member
Apr 8, 2015
Since you asked, I tested and was able to boot directly from the NVMe drive with Windows Server 2019.

I actually installed Windows as a VM/guest running in an ESXi host first (with the NVMe drive as a "passthrough" PCIe device). After verifying functionality in Windows as a guest, I then booted directly to Windows on the NVMe. That way, I can either boot ESXi and run a Windows guest with exclusive access to the NVMe OR I can boot directly to Windows without changing anything other than the boot device.

The NVMe drive now shows up as an EFI boot device ("Intel xxx xxx") in the BIOS, so I can just select it after pressing F11 during POST.

I also ran some tests with and without the Intel NVMe driver installed. The results are not conclusive: in most cases the drive is a little faster with the Intel driver (as opposed to the built-in Microsoft driver in Windows), but not in all. So depending on your workload, using the Intel driver is not a slam dunk.
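For what it's worth, such a driver comparison can be made repeatable with fio under Linux. This is only a sketch with illustrative parameters; a 4K random-read pattern is one reasonable stand-in for database-style access, and the target device path is an assumption for your system:

```shell
# Hypothetical fio run: 4 KiB random reads for 60 s against the NVMe
# namespace. Run the same job before and after switching drivers and
# compare the reported IOPS and latency. The --readonly flag prevents
# fio from writing to the target; adjust the device path as needed.
fio --name=randread --filename=/dev/nvme0n1 --rw=randread \
    --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --readonly
```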

My main goal is read performance on a database that is approx. 1.5TB. With 40 CPU cores, I will have sufficient CPU power, so I need to optimize I/O. My budget right now does not allow for more NVMe drives in order to run some kind of RAID, so I am trying to get as much read performance as possible for the money (some writes will also be done to temp tables etc. but not nearly as much as reads). For less than $200, I thought this NVMe drive would give me the most bang for the buck for my needs. Due to the size of the database, I don't think investing the same amount in RAM would be as efficient.
 

Dmytro Burianov

New Member
Mar 30, 2019
Hi all,
I saw a document with error codes from IPMI SOL, but I cannot find it now.
I bought 4 nodes but started only 2, and the first works fine. I moved the CPUs (2x 2609) and a third of the RAM to the second node, and that node does not start; I get no video on the monitor via an external PCIe card. On IPMI SOL I see the following codes in a loop: [00][B6][30][15]
Can you help me with these codes?
 

wladeeck

New Member
Apr 16, 2019
How do I configure RAID via SATA? I am trying to connect 2 SSDs; I select RAID in the BIOS, but the Ctrl+I prompt to enter the RAID array setup utility does not appear. I have tried resetting the BIOS, different BIOS parameters, different SSD and HDD drives, and different cables.
 

semidetached

Active Member
Sep 18, 2018
If anyone is interested in dual-port 10GbE mezzanine cards for these, I have 3. They are interesting little cards (see ON 10GbE CX3 | OCP Network Mezzanine | QCT.io) with on-board VGA and basic KVM-over-IP.

I also have 3 of the quanta boards with all of the MiniSAS connectors populated. Two are for SATA and one for SAS (if I remember correctly).

(Sorry if this is the wrong place to offer. I followed this thread when setting up my units and I ended up going a different way with all my LGA2011 equipment)
 

hmartin

Active Member
Sep 20, 2017
Nice, I didn't know any were manufactured with the connectors populated! I soldered the MiniSAS header myself for 4 additional SATA ports.

The other two MiniSAS connectors are really SAS (see page 6), but apparently an "Upgrade ROM" is required to use SAS ports 4-7.