X10-SL7-f: Any real world advantage to LSI 2308 vs Intel SATA ports?


Vocalpoint

New Member
Mar 25, 2016
27
1
3
Calgary, AB
All

Been rocking a pair of Supermicro X10SL7-F boards for several years now. Reliable as the day they were purchased. One of my boards is used for our file server, and the other for a deployment server/Hyper-V test environment where I work on Windows deployment (MDT) and test Windows 10/Windows Server VMs. This box also hosts ManageEngine Desktop Central for Windows patching, and I am planning a virtual AD install on Win Server Core.

This month I am rebuilding the deployment box with Windows Server 2019 and decided I wanted to try the LSI 2308 controller on this board. When I first put these boards into service, I simply used the Intel SATA ports and left it at that. This time I wanted to build the machine up around the LSI controller, and the first thing I did was flash it to IT mode so I could present the controller as JBOD to the Win Server install.
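For anyone attempting the same IT-mode crossflash, it generally follows the standard sas2flash sequence from an EFI or DOS shell. This is a hedged sketch, not a verified procedure for this exact board: the firmware and boot ROM filenames are placeholders for the board-specific images Supermicro ships, and the SAS address shown is an example.

```shell
# Typical sas2flash IT-mode flash sequence (EFI/DOS shell).
# Filenames and the SAS address below are placeholders -- substitute
# the images and address for your specific board.
sas2flash -listall                      # record the controller's SAS address first
sas2flash -o -e 6                       # erase the existing flash region
sas2flash -f 2308IT.bin -b mptsas2.rom  # write IT firmware plus the boot ROM
sas2flash -o -sasadd 500304800000000    # restore the SAS address recorded above
```

Skipping the `-sasadd` step is a common mistake; the erase wipes the factory SAS address along with the old firmware.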

This weekend I built out the box. There were a number of annoying UEFI hoops I had to go through to get this thing to boot off a Samsung SSD, but it's finally working: I have the Samsung on SAS0 and two Seagate NAS drives on SAS1 and SAS2.

Given the hassle factor and obscure data I had to scrape to get this thing working - I got to wondering if there is any actual advantage to using the LSI for the long term vs simply sticking with the Intel SATA ports for the drive connections and simplifying the overall storage scene.

Given my intended use of this thing, is it worth it to deal with the LSI controller? Is there a marked increase in performance from using the SAS ports vs. just plugging into the normal Intel SATA ports? I am not using anything "enterprise" or fancy by any means: a standard Samsung EVO 860 for the boot drive, a pair of Seagate IronWolf 2TB drives for local MDT/imaging storage, and a pair of 500GB Samsung EVO 860s for the Hyper-V environment. The IronWolfs and the pair of EVOs are being handled by DrivePool to present each pair as D: and V:.

Appreciate any comments or suggestions on how to ensure I am getting the best from these boards and drives under Win Server 2019.

Cheers

VP
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
For your use case there isn't a particular benefit IMO, as long as you have enough SATA ports.
Pros:
- Can use SAS drives
- Can add expanders and attach 50+ drives (HDD), or around 16 SSDs
- Reduced cable clutter: two cables to eight drives vs. eight separate cables
- Marginally less load on the board, since traffic goes over PCIe rather than the chipset SATA lanes

Cons:
- Extra point of failure
- Card uses power and generates heat (likely needs active cooling)
- Additional cost for the card/cables
- Takes up a PCIe slot
 

Vocalpoint

New Member
Mar 25, 2016
27
1
3
Calgary, AB
Spartacus

Thanks for the update. I should clarify that the LSI 2308 is onboard on this board, so there are no real slot issues, etc.

But dang, you hit the nail on the head for heat: is it ever hot! I was poking around in the case yesterday (with the sides off), and the heatsink for the LSI was on fire with just three drives plugged in and not much going on. I can't imagine what the temps would be in there with a full 8 drives connected and blaring away.

Will most likely move away from this scenario and go back to the mobo SATA ports for this use case.

VP
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Ah, I didn't look at the board model closely. Yeah, not much advantage then; that heatsink will likely still run fairly hot even with little to no load.

I needed all the slots on the board, the onboard SATA, and a PCIe card for my setup, so I just got a cross-sectional PCIe mount and attached a 140mm fan that blows down on the heatsinks.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
There is a drastic difference in queue depth between onboard SATA and LSI SAS controllers, as well as an overall performance boost from changing nothing but the controller.

I forget who/where did the test/review, but someone compared:
- Onboard SATA
- LSI 2008
- LSI 3008
- HP vs LSI controller firmware

Sorry I don't have the link/information, but for SSD performance in a virtualized environment, a SAS card yields better performance with the same drives; there's no debating that.

Hopefully someone here has the link to the performance test with the numbers.
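The queue-depth effect is easy to check on your own hardware with a quick fio run against the same drive at iodepth 1 vs. 32. This is a hedged sketch rather than anyone's published benchmark: fio with the libaio engine is assumed to be available, and `testfile` is a placeholder scratch file, not a live data disk.

```shell
# Compare 4K random-read performance at queue depth 1 vs 32 with fio.
# 'testfile' is a scratch file -- do not aim this at a disk holding data.
fio --name=qd1 --filename=testfile --size=256M --rw=randread \
    --bs=4k --ioengine=libaio --direct=1 --iodepth=1 --runtime=30 --time_based
fio --name=qd32 --filename=testfile --size=256M --rw=randread \
    --bs=4k --ioengine=libaio --direct=1 --iodepth=32 --runtime=30 --time_based
```

On a controller (or drive) that handles deep queues well, the iodepth=32 job should post markedly higher IOPS; if the two runs look the same, the bottleneck is somewhere else.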
 

RTM

Well-Known Member
Jan 26, 2014
956
359
63
Other advantages of the LSI controller:
  • You can pass it through to a VM separately from the Intel ports, which you can then use for OS boot drives or other stuff.
    • An example of this is the bastardized setups where you have a FreeNAS VM that runs ZFS on the disks behind the LSI controller and makes them accessible to the host hypervisor (could be ESXi) via iSCSI or NFS.
  • If you put it in IR mode, you can run a RAID setup on an OS that doesn't support software RAID (like ESXi).
 

XeonLab

Member
Aug 14, 2016
40
13
8
T_Minus said:
> There is a drastic difference in queue depth between onboard SATA and LSI SAS controllers, as well as an overall performance boost from changing nothing but the controller.
>
> I forget who/where did the test/review, but someone compared:
> - Onboard SATA
> - LSI 2008
> - LSI 3008
> - HP vs LSI controller firmware
>
> Sorry I don't have the link/information, but for SSD performance in a virtualized environment, a SAS card yields better performance with the same drives; there's no debating that.
>
> Hopefully someone here has the link to the performance test with the numbers.
This probably isn't the test you meant, but interesting results nonetheless. Comparing onboard controllers vs. external cards would actually make a very nice article for STH...

https://calomel.org/zfs_raid_speed_capacity.html
(Scroll down for the SATA controller comparison)

Also somewhat related:
Why Queue Depth matters!

I vaguely remember that some HBAs allow you to tune queue depth and hence gain some performance.
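On that tuning point: on Linux, at least, the per-device queue depth the HBA driver negotiated is exposed in sysfs, so you can inspect it (and on many drivers, lower it) without vendor tools. A small sketch, assuming whatever sd* disks the box actually has; it prints nothing if there are none:

```shell
# Print the effective queue depth for each SCSI/SATA disk (Linux sysfs).
# On many drivers the file is also writable: echo a smaller value
# into it as root to cap the depth for that device.
for d in /sys/block/sd*/device/queue_depth; do
    [ -e "$d" ] || continue          # skip when no sd* devices are present
    printf '%s: %s\n' "$d" "$(cat "$d")"
done
```

Onboard AHCI ports typically report 31/32 here, while SAS HBAs often report much deeper per-device queues.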
 

nthu9280

Well-Known Member
Feb 3, 2016
1,628
498
83
San Antonio, TX
While these are valid points to consider, I think the OP's use case may not see a notable performance difference between the onboard SAS controller and the mobo SATA ports.
Having said that, I'm not sure the X10SL7 board has an option to disable the SAS controller to save power and cooling. So it may be better to keep using it since the OP has already flashed it to IT mode; just make sure there's sufficient airflow if it's in a desktop-type chassis.
 