Another Noob Question - ESXi / OmniOS / Napp-it


Kosti

New Member
Jan 7, 2016
Hey gea, thanks for popping in to explain. As for the Marvell, I've been reading that the 88SE9123 PCIe SATA 6.0 Gb/s controller is hit and miss :(

Code:
~ # lspci -v | grep "Class 0106" -B 1
0000:01:00.0 SATA controller Mass storage controller: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller
         Class 0106: 1b4b:9123
~ #
It looks like VMware has dropped support for the Marvell 88SE9128 in ESXi 5.5, but apparently it worked in 5.1. So what if I install 5.1 and confirm the Marvell is working - could I then upgrade to 5.5 or 6 and still have it working, or will the upgrade remove or overwrite such support and drivers?

According to "From 32 to 2 ports: Ideal SATA/SAS Controllers for ZFS & Linux MD RAID" (Zorinaq), the Marvell 88SE9128 should be supported under Illumos: "3815 AHCI: Support for Marvell 88SE9128 Reviewed by: Johann 'Myrkrave…" · joyent/illumos-joyent@257c04e · GitHub

There is also this site, "How to make your unsupported SATA AHCI Controller work with ESXi 5.5 and 6.0", which indicates that it can work by adding in the VIB file or via the Offline Bundle format. But this is where my limited knowledge fails me; it is mentioned that my controller was added to the list in the version below:

Code:
Version History
Device Vendor                   Device Name                                 PCI ID      added in
Marvell Technology Group Ltd.   88SE9123 PCIe SATA 6.0 Gb/s controller 2)   1b4b:9123   1.1
Sadly I cannot comment on that blog since comments have been closed, and the current version is 1.33 or something like that.

In the same blog there is mention of using the following to add in the support - but I am not sure if this is valid for my controller:
Code:
# allow installation of community-supported VIBs
esxcli software acceptance set --level=CommunitySupported
# open the outbound http firewall rule so the online depot can be reached
esxcli network firewall ruleset set -e true -r httpClient
# install the sata-xahci package from the V-Front online depot
esxcli software vib install -d http://vibsdepot.v-front.de -n sata-xahci
Can I just add this via a command shell, and do I need to do anything afterwards other than a reboot?
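From what I've gathered these are just run in the ESXi shell (SSH or console). I figure I can verify they took with something like the below before rebooting - assuming the package keeps the sata-xahci name:

Code:
# confirm the acceptance level change stuck
esxcli software acceptance get
# confirm the new VIB is installed (package name assumed)
esxcli software vib list | grep sata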

I even went to the extreme of hacking my BIOS to load in the latest firmware for the controllers, and while in the BIOS I updated a few other things. I added in the following updates - the Intel controller has a later version available, however the version I added was deemed to have the best throughput and TRIM support:

Code:
Intel ICH10R SATA RAID Controller: v11.2.0.1527 (TRIM mod)
Marvell 91xx SATA 6G Controller - Bios: v1.0.0.1038
Marvell 91xx SATA 6G Controller - Firmware: v2.2.0.1125b
Marvell 91xx SATA 6G Controller - Bootloader: v1.0.1.0002b
JMicron JMB36X Controller: v1.08.01
Intel 82567V-2 Gigabit Network: v1.5.43
SLIC 2.1: ASUS (SSV3/MMtool method)
For now I am going to grab 5.1 from the NET and see if this yields support for the controller, as I need to have this working.

If anyone has any further suggestions or ideas on how to get this controller working, please help me

Thanking you all in advance
PEACE
Kosti
This Marvell 88SE9123 on ESXi 5.5 is doing my bloody head in - why is this so hard? Even if I pass it through as a PCI device to napp-it, it doesn't load up, or the VM fails to start.

I've read a ton of info and heaps of people have this issue. YES, I know it's old tech, but one particular thread suggests a fix; it goes above my head though, and I do not know what it means, as it must be for the Linux kernel, not for ESXi:

HERE

I also found another similar issue via a page here on these forums that pointed to a Japanese site with a fix for ESXi via another VIB:

Code:
In this example, it is assumed the VIB has been put in /tmp.

# esxcli software acceptance set --level=CommunitySupported
# esxcli software vib install -v /tmp/sata-mv-0.1.x86_64.vib
I've downloaded the VIB but have not yet tried it, since from memory it was for ESXi 4.0. I did attempt the sata-xahci one from the V-Front depot mentioned above, however that made stuff-all difference and I still could not see the controller in ESXi 5.5u2 or in any VMs.
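For what it's worth, this is roughly how I've been checking after each attempt whether a driver actually claimed the controller - these are standard ESXi 5.x shell commands, nothing specific to these VIBs:

Code:
# list installed third-party packages
esxcli software vib list | grep -i sata
# list the storage adapters ESXi has claimed (the Marvell should appear as a vmhba)
esxcfg-scsidevs -a
# check whether an ahci/sata kernel module is loaded
esxcli system module list | grep -i ahci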

YES, I'm in over my head with this stuff, but with a little guidance, or someone with a Linux background to explain it all, just maybe I can get it to work.

More findings, like the two below, seem to have been tracking the bug:

Bug 42679 – DMA Read on Marvell 88SE9128 fails when Intel's IOMMU is on
&
Attachment #124001 for bug #42679

If I add some of these VIBs I am finding, what damage can be done, and can I remove them afterwards?
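From what I've read, a VIB can be backed out with the matching remove command, so I assume something like this would undo an install (using the sata-xahci name from earlier as the example):

Code:
# remove a previously installed VIB by name, then reboot
esxcli software vib remove -n sata-xahci
reboot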

Kosti
Funny, there appears to be some support for Linux, FreeNAS and, from what I've read, Illumos and even unRAID!!! Just nothing for ESXi or VMware - YES, VMware hates Marvell, LOL, they just don't get along. Sadly, maybe I should have just gone with a different approach, but I am committed to napp-it; I want to use it and learn on it, but I can't, as I am being forced towards a more commercial approach like Windows!

I'm hoping someone may know if it's possible to port the Linux driver into ESXi / VMware?? Or someone who can help me diagnose the VIB errors I get when I add the drivers into ESXi - I'm close, but I need help.

Buying off-the-shelf supported hardware is good if you are rich; this is not a commercial project, just an attempt to build a storage system from stuff I have, as cheaply as possible. As I mentioned in my OP, this is hardware I have lying around unused and I want to get some use out of it since it is doing nothing... Hell, I even got my broken LSI M1015 working, and it is still NOT seen by napp-it in passthrough at all, even when I try to initialise the attached HDDs, which are new, unformatted 4TB drives.

Windows sees all the drives, and even the Marvell 9123 works fine out of the box and sees all my drives, so there is driver support out there. My question is now around hardware support for it in this setup, and I'm seeking assistance. Is there someone who has some experience in this area of porting, or who can help with ESXi and the Marvell controller that is onboard my motherboard?

I would have thought someone else has had this issue; maybe someone out there has some experience or a workaround I could try, since this is obviously older hardware.

Thanks
Kosti
Clearly it's ESXi! I downgraded to 5.1 and WOW, look, it's supported and detected. Now to figure out how to get it working under 5.5 & 6.


Code:
~ # lspci -v | grep "Class 0106" -B 1
00:00:1f.2 SATA controller Mass storage controller: Intel Corporation ICH10 6 port SATA AHCI Controller [vmhba0]
         Class 0106: 8086:3a22
--
00:01:00.0 SATA controller Mass storage controller: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller [vmhba2]
         Class 0106: 1b4b:9123
~ #
I also added in a further support driver; using the VIB list command you can see it here:

Code:
esxcli software vib list
sata-mv   0.1   daoyama   CommunitySupported   2016-01-18

The only issue so far is that it takes about 5-8 minutes to boot ESXi when loading the AHCI module, and I am yet to see the attached drives.
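In case it helps anyone spot what's missing, these are the standard storage commands I plan to run from the shell to look for the disks:

Code:
# list claimed adapters and the driver bound to each
esxcli storage core adapter list
# list every disk device ESXi can see
esxcli storage core device list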

Some positive progress, but I need some help to work out how to get it to see the attached devices.
I would like to get this working, and I'd appreciate it if those who want to and can possibly assist would please chime in. Perhaps critiquing my setup, or what I should use and how, is best left for when it's resolved.

Thank you
 

Kosti

New Member
Jan 7, 2016
Is napp-it supposed to see the LSI card if it's in passthrough? I do not see the attached 4TB drives. Do I need to configure anything else in the VM profile?

In napp-it I've gone to the disks menu and tried to initialise, but nothing is found. I'm sure it's something I've missed or done, but if I take it out of passthrough I can see it in napp-it, along with the drives on the LSI card.

Since the Marvell kills OpenSolaris/napp-it and hangs, I cannot use this controller. So how should I use the 2 x 120GB SSD drives to add VMs - do I put them on the LSI card or on the onboard Intel controller? If on the Intel, how can I pass it through, as it holds the 80GB All-in-One and is used as the DS for ESXi?
 

gea

Well-Known Member
Dec 31, 2010
A typical setup.
- Use a small SSD 30GB+ on Sata where you boot ESXi and use the rest as a local datastore
- Put the OmniOS storage VM onto this local datastore

- use an LSI HBA controller and pass-through to OmniOS
all connected disks must be seen in the LSI firmware at bootup and within OmniOS.
You can then create a pool from these disks and share it via NFS for ESXi or SMB for general use.
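For reference, the napp-it menus do the same as these CLI commands; pool, filesystem and disk names here are only examples:

Code:
# create a mirrored pool from two disks visible in OmniOS (example device ids)
zpool create tank mirror c2t0d0 c2t1d0
# create a filesystem and share it via NFS and SMB
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
zfs set sharesmb=on tank/vmstore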

OmniOS supports LSI HBAs out of the box.
Problems may arise if you use a genuine IBM M1015, as there is RAID-5 firmware installed by default.
You must reflash it with the HBA firmware from an LSI 9211; see IBM ServeRAID M1015 Part 4: Cross flashing to a LSI9211-8i in IT or IR mode.
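The flash sequence in that guide is roughly the following, run from a DOS boot stick; the file names come from the LSI 9211-8i firmware download, so check the guide for the exact versions:

Code:
megarec -writesbr 0 sbrempty.bin     (clear the IBM SBR)
megarec -cleanflash 0                (erase the RAID firmware, then reboot)
sas2flsh -o -f 2118it.bin            (flash the 9211-8i IT-mode firmware)
sas2flsh -o -sasadd 500605bxxxxxxxx  (re-enter the SAS address from the card sticker)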
There are also reports that the LSI controllers have compatibility problems with some (non-server) mainboards.


btw
I have updated my HowTo
http://www.napp-it.org/doc/downloads/napp-in-one.pdf
 

Kosti

New Member
Jan 7, 2016
- Use a small SSD 30GB+ on Sata where you boot ESXi and use the rest as a local datastore
- Put the OmniOS storage VM onto this local datastore
This is already done and previously mentioned - it's sitting on the Intel onboard controller via the 80GB Intel X25 SSD drive.
use an LSI HBA controller and pass-through to OmniOS
all connected disks must be seen in the LSI firmware at boootup and within OmniOS
You can then create a pool from this disks and share via NFS for ESXi or SMB for general use.
Followed the guide, and the LSI is already in passthrough - not seen by OmniOS or napp-it.
The IBM M1015 is in IT mode with firmware P19, as I flashed it after I repaired the card, although I have not loaded the mptsas2.rom file onto the card, so it boots faster. The LSI is not seen in OmniOS when in passthrough, only when passthrough is removed.
 

gea

Well-Known Member
Dec 31, 2010
This indicates a compatibility problem around mainboard and ESXi passthrough.
The only option here may be trying a newer BIOS and/or another ESXi version.
 

Kosti

New Member
Jan 7, 2016
C'mon gea, you need to stop using compatibility as the explanation for every hurdle I come across. From my previous comments you will see that I am using a new BIOS and have tried other things, and yes, I accept that Solaris doesn't support Marvell, but not every hurdle is HCL related - maybe it's just a configuration error on my part.

In Nappit do you need to edit the VM LSI configuration at all?

Just as a test I loaded up FreeNAS on a USB and everything is seen!
 

Kosti

New Member
Jan 7, 2016
I just tried XenServer and can see the LSI and attached drives without any config - it just worked out of the box. The Marvell was not seen :(
What I will try next is to download the OmniOS image and install it without using the OVF template.
 

gea

Well-Known Member
Dec 31, 2010
Napp-it is based on a vanilla setup of Solaris or OI/OmniOS. It does not modify any OS behaviours. But what I have learned over many years is that every OS or environment has its own requirements and problems. Some configs are known to be trouble free, others not - especially with ESXi, but you will find similar with any other OS.
 

Kosti

New Member
Jan 7, 2016
Thanks gea, I know what you are saying. However, I wonder how difficult it would be to fix the actual issue I am having, as it appears (and as I previously noted) that by turning off VT-d the controller is fully functional and ESXi can detect the drives attached to the Marvell controller...!






So with virtualisation (VT-d) off in the motherboard BIOS it works completely fine in 5.1, but with Intel VT-d enabled in the BIOS the controller is seen but not the attached drives in ESXi, so something breaks..

I am not sure what is happening with Intel's IOMMU when VT-d is on in the BIOS, but it breaks the Marvell controller. It must be an addressing issue, and somehow it could be fixed if one could code??

So how hard would it be to fix? Could a fix possibly be coded? Where should I seek such support?

Obviously I cannot leave VT-d off, as then I cannot pass through anything :( fark!!!
 

Kosti

New Member
Jan 7, 2016
@gea
So here is a fresh install of 5.5u2 - nothing connected to the Marvell, and only using the onboard Intel SATA. This is where the datastore is and where the CDROM is connected, as well as the 2 other 120GB SSD drives.

What am I doing wrong? I cannot see anything - no drives, and nothing under disks. I should see the LSI at least, as well as the Intel controller.











 

Kosti

New Member
Jan 7, 2016
Well, color me RED! FFS - I didn't realise that I needed to add the LSI to the VM as a PCI device!! Since I saw an LSI already listed in the VM, I figured it was installed???? So I added a new PCI device, picked the LSI, and now I see my two drives - phew!!







Bloody hell!!!
OK please tell me this is now correct?
 

gea

Well-Known Member
Dec 31, 2010
Yes, correct.

Now go to Pool > create and create a pool, then
Filesystem > create and create a filesystem, and share it via NFS and SMB.
Use the NFS share for ESXi (put other VMs there) and the SMB share to clone/move/backup the VMs.
 

Kosti

New Member
Jan 7, 2016
Yep, done that last night!



Now I'm just trying to go through the basic steps to set up mail, jobs etc. Some are a little confusing, as I am not sure where the NTP time setting is, or why it doesn't grab the time from the ESXi settings. Also, is there a way to make the IP static?

Currently my ISP has port 25 blocked, and I am not sure if I want to use push, so getting email notifications may not work for a while.

Now, I wasn't going to put the other VMs on these 4TB drives - I am going to put my other VMs on the 2 spare 120GB SSD drives, which are on the same Intel controller as the ESXi DS and the napp-it install on the 80GB drive. I also used the remaining 40GB of the 80GB drive as a cache for ESXi.

If I do this, will these other VMs be able to see the 4TB drives?
 

gea

Well-Known Member
Dec 31, 2010
-email
switch to tls encrypted mail on port 587 in menu jobs > tls email

-static ip
use menu system > network eth

-your ssd
I would use them on the IBM as a fast ZFS pool (mirror), not as a local VMFS datastore

-l2arc
you can create a virtual disk on the local datastore and use it for L2ARC

you can use your ZFS filesystem via NFS, SMB and iSCSI
- in ESXi: add storage > network file system (NFS) > ip/pool/filesystem,
and store VMs there
- from a guest/client PC via NFS, SMB or iSCSI
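If the menus are hard to find, the CLI equivalents look like this. Static IP on OmniOS via ipadm (interface name and address are examples; the dhcp address-object name may differ on your install):

Code:
# replace the DHCP address on e1000g0 with a static one (example values)
ipadm delete-addr e1000g0/dhcp
ipadm create-addr -T static -a 192.168.1.10/24 e1000g0/v4

And the NFS datastore can also be added from the ESXi shell (IP/pool/filesystem names are examples):

Code:
# mount the OmniOS NFS share as an ESXi datastore
esxcli storage nfs add --host=192.168.1.10 --share=/tank/vmstore --volume-name=nfs-vmstore
esxcli storage nfs list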
 

Kosti

New Member
Jan 7, 2016
-email
switch to tls encrypted mail on port 587 in menu jobs > tls email

-static ip
use menu system > network eth

-your ssd
I would use them on the IBM as a fast ZFS pool (mirror), not as a local VMFS datastore
Is this decision only for performance, or so I can use them for ZIL and L2ARC? I'm not sure how to do this, as I only have 12GB of RAM in my system.


-l2arc
you can create a virtual disk on the local datastore and use it for L2ARC
Which one, the ESXi "local DS"?

you can use your ZFS filesystem via NFS, SMB and iSCSI
- in ESXi: add storage > network file system (NFS) > ip/pool/filesystem,
and store VMs there
- from a guest/client PC via NFS, SMB or iSCSI
I don't want to use my 4TB drives to store VMs - the 4TB drives are for storage / backup, and when I get funds I will increase this capacity to as much as I can, when I can.

So I have my local VMFS DS on the 80GB SSD, and the storage cache on the space remaining after the napp-it install, about 43GB. Is this not a good idea?

If I move the 2 x 120GB SSDs onto the IBM controller to make the ZIL, then it will force me to add the VMs onto the 4TB drives, which I want to avoid.

Thanks
Kosti
 

gea

Well-Known Member
Dec 31, 2010
I suggested using the two SSDs not as a ZIL but as a high-performance 120GB pool (RAID-1 mirror) for VMs.