9364-8i Storage Spaces Direct Compatibility


HomeLabber

Member
Dec 13, 2015
So, I think I'm SOL on this, but I just went to deploy a Storage Spaces Direct cluster and discovered that it will not accept disks attached to a RAID controller, even when the controller is set to JBOD mode.
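For reference, this is roughly how it shows up on each node in PowerShell (just a sketch; the comments are my read on it):

# S2D will only claim disks whose BusType is SATA, SAS, or NVMe.
# Behind the 720ix in "JBOD" mode, the BusType column still reports RAID.
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, CanPool, CannotPoolReason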

The 3 nodes I am clustering all have Lenovo 720ix RAID controllers (4 once I migrate from the current in-service node), which are LSI 9361-8i equivalents with a cache (they are technically 9364s, due to the larger cache sizes vs. the 9361). I would cross-flash these, but my understanding is that this isn't possible with these variants. If anyone has any input on this, it would be much appreciated, as I am hesitant to flash these and risk bricking them or losing the pre-installed feature keys.

Thanks!
 

mattr

Member
Aug 1, 2013
Yeah, this is mentioned in the requirements for Storage Spaces: Storage Spaces Overview
You need an HBA running in IT mode.
In order to cross-flash to IT mode, a compatible IT-mode firmware must already exist, which is not the case for LSI/Avago RoC-based cards like the SAS3108.
You'd want a SAS3008- or SAS2008-based card.

Edit: Storage Spaces is super finicky about what it will detect as compatible. If you're clustering with shared storage, you need a compatible JBOD enclosure. Here is the certified list: Windows Server Catalog

If you're just doing this at home, there's no harm in trying to get whatever you have working. If it's for work, I'd definitely purchase from that list.
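If you want to sanity-check a set of nodes before buying anything, cluster validation will flag the bus-type problem too; something like this (node names are placeholders) runs the S2D-specific tests:

# Validate the candidate nodes, including the Storage Spaces Direct test category
Test-Cluster -Node node1, node2, node3 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"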
 

Dev_Mgr

Active Member
Sep 20, 2014
Texas
I set up a 4-node Storage Spaces Direct cluster in my VMware environment. I had to use the web client, as you have to use the virtual SATA controller option (not available in the vSphere client). I also found that the 4 servers I created each had to use different SATA IDs for their 'shared' disks:

Server 1:
SATA 0:0 - boot
SATA 0:1 - cluster disk 1
SATA 0:2 - cluster disk 2
SATA 0:3 - cluster disk 3
SATA 0:4 - cluster disk 4

Server 2:
SATA 0:0 - boot
SATA 0:5 - cluster disk 1
SATA 0:6 - cluster disk 2
SATA 0:7 - cluster disk 3
SATA 0:8 - cluster disk 4

Server 3:
SATA 0:0 - boot
SATA 0:9 - cluster disk 1
SATA 0:10 - cluster disk 2
SATA 0:11 - cluster disk 3
SATA 0:12 - cluster disk 4

Server 4:
SATA 0:0 - boot
SATA 0:13 - cluster disk 1
SATA 0:14 - cluster disk 2
SATA 0:15 - cluster disk 3
SATA 0:16 - cluster disk 4

Even though the disk signatures and volume IDs should be different, 2016 TP5 didn't like it until I changed the SATA IDs for the cluster disks so that they didn't match between the servers (the boot disk didn't matter for cluster storage).
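If anyone wants to double-check what is actually being compared, a quick run from a management box lists the per-disk identifiers on each server (the node names here are just examples):

Invoke-Command -ComputerName s2d-vm1, s2d-vm2, s2d-vm3, s2d-vm4 {
    # Disk signature / unique ID / serial per disk; compare these across the nodes
    Get-Disk | Select-Object Number, Signature, UniqueId, SerialNumber
} | Format-Table PSComputerName, Number, Signature, UniqueId, SerialNumber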
 

HomeLabber

Member
Dec 13, 2015
Chris asked: "@HomeLabber, are you planning on a single server or Storage Spaces Direct with multiple servers?"
I have a 3-node cluster I am working on deploying initially, and then I'll likely move my 4th node (currently serving all of the VMs) into the cluster once everything is migrated.

I couldn't see any easy/cost-efficient way to implement Storage Spaces using all of the 2.5" drives I deployed in these nodes (with their trays). I was hoping I could run SAS from a PCIe card to the backplane, but that doesn't appear to be possible with these machines.

I installed a vCenter trial with vSAN but wasn't impressed enough to justify the licensing costs. I just installed XenServer 7, so I'll be playing with that tonight to see how it does as an alternative.

For anyone reading this thread from Google: I have Lenovo RD550 nodes with 720ix cards (with backplanes and 1 GB cache, but no battery backup). These do support "JBOD" mode since I have the cache, but it is not a straight pass-through, and Windows still sees the drives as sitting behind the RAID subsystem. For the VMware install, the Lenovo System x image seemed to work fine (it loads all of the needed drivers not included in the base ISO). The current Lenovo VMware partner pack refused to deploy on 6.0 U2, which admittedly was not listed as supported in the documentation.

If someone has any cost-efficient ideas for implementing Storage Spaces Direct with the 720ix/RD550, I'd be willing to try something creative. With that said, I have ruled out buying 2.5" to 3.5" trays for all of my disks and putting them in an external chassis (connected to external SAS pass-through cards).
 

cesmith9999

Well-Known Member
Mar 26, 2013
S2D is very strict about the bus type that it supports.

SATA/SAS/NVMe are the 3 bus types that it supports.

A few months ago, when I set up my 6-node S2D cluster, I ran into the same issue and had to buy new HBA cards for my servers.

A RAID card that you put into JBOD mode, or any disks that you put into JBOD mode, still shows up with a bus type of RAID.
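A quick way to see this across all of the nodes at once (node names here are just examples):

Invoke-Command -ComputerName node1, node2, node3 {
    # Anything that groups under RAID here will not be claimed when S2D is enabled
    Get-PhysicalDisk | Group-Object BusType | Select-Object Name, Count
} | Format-Table PSComputerName, Name, Count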

Chris
 

Toddh

Member
Jan 30, 2013
Had the same problem here. We purchased RAID cards because we thought we were going to use a different SAN solution.

Moving to Storage Spaces Direct, we had to remove the RAID cards and install LSI 9207-8i cards.
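For anyone following along: once the disks enumerate as plain SAS behind the IT-mode 9207-8i, the enable step itself is short. Roughly (the volume name and size are just examples):

# Enable S2D on the existing cluster, then carve a CSV volume out of the pool it creates
Enable-ClusterStorageSpacesDirect
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" -FileSystem CSVFS_ReFS -Size 1TB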
