Another Noob Question - ESXi / OmniOS / napp-it


Kosti

New Member
Jan 7, 2016
Happy New Year All!

So during the holiday period I put together an X58 platform from bits I had lying around in the man cave, and since my last attempt at a storage system failed, I thought I'd give this one a go as an all-in-one with napp-it.

For my own sanity, can you please confirm the following? I have rebuilt a new machine and will use napp-it.

Currently this is how I built this box. Apologies in advance, as this is a desktop build, so nothing too fancy, just spare parts I had and wanted to put to use.

Motherboard - Asus Rampage III Extreme ROG
RAM - 6GB (3x2GB) Corsair triple-channel set (yes, I need more). I had some ECC lying around, a set of three 4GB sticks, but these do not POST in this board; it seems they are tuned for HP? Labelled as KTH-PL313SK3/12G
CPU - Intel Xeon X5670
GPU - Radeon 7970
SSD - Intel X25 80GB
SSD - 2 x Sandisk 120GB (recently purchased for this project)
USB2 - Kingston 16GB
DVDRW - Sata
USB3 - Kingston 32GB (recently purchased for this project)
USB3 Sandisk 32GB (recently purchased for this project)

The system is watercooled, again only because I had all the waterblocks, res and pump sitting around and thought I'd better put them to good use :p - of course overkill, LOL



I will be using the onboard Intel ICH10 6-port SATA AHCI controller for hot swaps, since I have a few drives of various sizes with various files/data to back up and store: 2 x 4TB Seagate drives, 1 x 4TB WD Book drive, a 3TB WD Green drive, and a few 1TB drives, all full of photos, music and other data I want to keep safe.

None of these drives are connected yet; only the DVDRW and one 80GB SSD are connected to this controller.

Q.1 Can I pass through all of the ports to a VM even if the DVDROM and 80GB SSD are connected? It appears I can do individual ports, right?

Next is the onboard Marvell Technology Group Ltd 88SE9123 PCIe SATA 6.0 Gb/s controller, which has the 2 x 120GB SSDs connected. This is where I want to put my datastore on one of them and then mirror it.

Q.2 Is this the correct method? Is this done at the OmniOS level, as a ZFS mirror?

I cannot find my M1015 controller so it's not yet installed. I will add it when I find it, and it will hold all of my ZFS pools across however many drives I can afford to add, e.g. 8 x 5TB.

OK, so that is a quick summary of the HW. Now on to the questions, which I know have been asked a few times, and I just seem to complicate things, but I want to make sure for my own sanity that this is how I should set up the all-in-one napp-it.

This was taken from the instructions gea added a while ago, from HERE

basic steps:
- you need two Sata disks for ESXi (optionally an additional USB stick for ESXi)

Is this only for failure purposes? I have a 16GB USB stick whose only purpose is to boot into ESXi (using 5.5u2). Do I now need to add, say, the 80GB SATA SSD to hold another ESXi install as backup? Could I use a different USB port for the backup, since I have both USB 2 and USB 3 support? Alternatively, I wanted to use the 80GB SSD connected to the Intel controller?

- install ESXi on first disk or on a USB stick as usual

Already done on the 16GB USB stick. Are there any benefits to going to v6? Will it speed things up if this is on a USB 3 port using one of the USB 3 sticks mentioned above? I know ESXi 5.5u2 has USB 3 support, but how much difference would ESXi 6 make instead? Boot-up speed doesn't bother me for this project.

- install vsphere on a PC
done

- use vsphere to create a datastore on both disks

OK, which disk in particular? I don't want to use the USB stick for a datastore and would like to use the 2 x 120GB SSDs that are connected to the onboard Marvell controller.

- use vsphere to create a virtual disk (20 GB+) on both datastores
- Install Omni/OI on the disk on the first datastore (which is first in boot-order)

straight forward

- Install napp-it, connect via browser and http://ip:81
- Goto menu Disk - mirror bootdisk and mirror rpool to your second disk

straight forward

optionally
- Edit ESXi VM-settings to modify Bios of this VM: setup boot order to boot from both disks, with second first if ESXi is on first disk

If you need to use a second disk with a different ESXi install, it does not affect Omni/OI besides an optional remirror.
If you use a USB stick for ESXi, you have a fully independent mirrored Storage VM.
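From what I can gather, the "mirror rpool" step boils down to something like this on the OmniOS console (the disk names here are just examples - check yours with format first, and the napp-it menu does all this for you):

```shell
# list the disks OmniOS can see (format exits when fed no input)
format < /dev/null

# attach a second disk to the existing rpool device, turning it into a mirror
# c2t0d0s0 = current boot disk, c2t1d0s0 = new disk -- example names only
zpool attach -f rpool c2t0d0s0 c2t1d0s0

# watch the resilver finish
zpool status rpool
```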

will give it a go, but I first need to sort out the above questions

So that's it: my limited knowledge and how I wanted to get this box running.

Hope you can assist and guide me in setting this up in a way I can manage with my limited skills, as I want something set-and-forget. Thank you again!

PS - I PM'd gea on another forum, so sorry mate, it was a double PM. Hope you can chime in.

edit - corrected the RAM part number I was trying to use and added some pictures

PEACE
Kosti
 

Deci

Active Member
Feb 15, 2015
ESXi 5.5 vswitch network setup - All-in-one

That is a good guide for an all-in-one setup.

You really want more RAM than 6GB if you want to run virtual machines and ZFS for the storage.

1x USB stick for ESXi. ESXi itself does not support any kind of USB drive mirroring, hence his post mentions SATA drives.

You cannot pass individual ports through via VT-d; it's the entire controller or none of it. You can do Raw Device Mappings (RDM) for hard disks to map individual ones to the VM, though this is not ideal.
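as a rough example, an RDM is created from the ESXi shell like this (the device and datastore names below are placeholders, not yours):

```shell
# find the device identifier of the physical disk
ls /vmfs/devices/disks/

# create a physical-mode RDM pointer file on an existing datastore
# (t10.ATA_EXAMPLE_DISK and datastore1 are placeholders)
vmkfstools -z /vmfs/devices/disks/t10.ATA_EXAMPLE_DISK /vmfs/volumes/datastore1/omnios/disk1-rdm.vmdk
```

the resulting .vmdk then gets added to the VM as an existing disk.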

Ignore the "create a datastore on both disks" part; this is irrelevant if you are installing ESXi on a USB drive. You do still need at least one disk connected to ESXi to hold the vmdk/vmx files for the OmniOS/napp-it virtual machine, so installing ESXi to your 80GB SSD instead makes more sense in an AIO build.
 

Kosti

Hello Deci

Thanks for the reply

Indeed, the RAM is to be upgraded to as much as possible, but for now this is all I had available, so once I get more I will add it in! I have some ECC lying around but it seems it will not POST in this board. This would have given me 12GB, but sadly I'm stuck at 6GB for now.

I will read the guide you posted now so thanks for the link!!

OK, so Omni/napp-it can live on the 80GB SSD, which is fine. Does that mean this becomes a "datastore" drive? I wanted to make one 120GB SSD a datastore to place all of the VMs and vmx files on, and mirror it to the other 120GB SSD.

Now, on passthrough: I only pass through, say, the HBA controller so that the VMs can use it too, right? Everything else remains untouched?

Off to read the link

PEACE
Kosti
 

Deci

Active Member
You use the 80GB purely as a datastore for the ESXi install & OmniOS/napp-it.

You pass the SSD drives to the OmniOS VM and share them back out of that VM, via a virtual switch, to VMware as NFS storage, so that ESXi can then use them to store virtual machine data.
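in omnios terms that is roughly the following (pool and device names are examples only - napp-it's web menus do the same job):

```shell
# build a mirrored pool from the two passed-through SSDs (example device names)
zpool create ssdpool mirror c3t0d0 c3t1d0

# create a filesystem for VM storage and share it over NFS
zfs create ssdpool/nfs
zfs set sharenfs=on ssdpool/nfs
```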
 

Kosti

OK, I will give it a shot. Forgive me, but for clarity:

I will have more than one datastore.

Therefore, one DS will be the 80GB SSD for Omni/napp-it, which I will let use the full 80GB, and the 2nd DS will be for the remaining VMs like WIN/Linux/PFS etc.

Cool. I just need to get my head around the virtual switch and jumbo frames, which I am reading about now.

EDIT - This motherboard only has one NIC, so I hope this is still usable, or will I need to install another NIC card?

PEACE
Kosti
 

Deci

Yes,

1st datastore is the 80GB SSD for the ESXi boot and the OmniOS/napp-it VM.
2nd datastore is served via the virtual switch from the OmniOS/napp-it VM.

This does not require a 2nd NIC port; that virtual switch is purely software.
 

gea

Well-Known Member
Dec 31, 2010
About your boot mirror considerations:

When you build a ZFS All-In-One, you need three software layers
- 1.) a virtualization layer
- 2.) a storage layer where you can put your VMs on
- 3.) a service layer where services are either on VMs or included in 1.) or 2.)

You have now three options:
A.
Separate the three layers as much as possible. Use a minimalistic, just-enough OS with services for 1.) and 2.)
Everything that requires configuration or backup is realized with VMs on ZFS storage, for easy backup/clone or versioning with snaps.

This is the basic idea of napp-in-one, as it uses ESXi - not a full-size OS, more like a firmware. You do not need to back up ESXi, as you can prepare a boot clone stick or reinstall within minutes without a complex setup. You then add a storage layer for iSCSI, NFS and SMB - not only a storage layer but a specialized and full-featured enterprise NAS/SAN solution based on Solaris/OmniOS. With the ova template it is as minimalistic as possible for a storage solution; you can use it without any special configuration. You can install and set up within 5 minutes if you do not add any other services to this VM.

Every other service that you need is realized with VMs on ZFS -
based on BSD, OSX, Linux, Solaris or Windows.

B.
Same as above, but you add services to the storage VM, like a webserver, a mediaserver, databases etc.

C.
All-in-one solutions that combine 1.), 2.) and 3.), like Hyper-V on Windows or Proxmox on Linux.


The effect on backup/restore:
All three solutions A-C share the problem that the bootdisk is on "unsecure" storage, either VMFS, NTFS or ext4, that you cannot trust the same way you trust ZFS. As solutions B. and C. add complexity to the base OS/layer, you need to care about backup/restore with a suggested boot mirror.

If you use A., you do not need to care about a boot mirror or backup/restore.
On problems, simply reinstall or use a boot clone disk. Even a complete reinstall is done in 30 minutes in a straightforward way, including re-adding the VMs, as this is no more than a right-mouse click on the .vmx file in the VM folder.


So do an as-simple-as-possible setup:
- use a SATA SSD (30 GB minimum), install ESXi onto it and use the rest as a local datastore where you put OmniOS/napp-it - nothing else.

- add an HBA in pass-through mode (or use physical RDM) for ZFS storage.
Share the storage via NFS (or iSCSI, but that is mainly if you need it, e.g. for Hyper-V).

- add VMs on ZFS
You can also use the SMB storage for general filer use.
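To mount the NFS share back into ESXi as a datastore, use the vsphere client, or the ESXi shell, for example like this (IP, path and datastore name are examples):

```shell
# add the OmniOS NFS export as an ESXi datastore
esxcli storage nfs add -H 192.168.1.10 -s /ssdpool/nfs -v nfs-vmstore

# verify the datastore is mounted
esxcli storage nfs list
```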
 

Kosti

Hey Gea,

Thanks for chiming in. I am currently going through the motions and will start with option A.

I was only thinking of backing up the DS to the other 120GB SSD, so I guess I need to just dive in and try the method that works best.

Once I finish I will advise how I went.

Thanks for putting together the OVA and the ongoing support buddy!!

BTW I PM'd u on OCAU so feel free to ignore it now :p
PEACE
Kosti
 

Kosti

Hey All

So I finally got some more RAM and added it to my watercooled system, getting me to a total of 12GB. I know this is not a huge amount, but for now this is it. I also found in my junk bin a faulty, non-working IBM M1015, so I was bored today and tried to fix it. It turns out the fuse was blown; it is located near the PCI connector and it's blue :p

The picture below is not mine; I grabbed it from another thread here to show the fuse. I hope this is OK with the original poster - if not, please let me know and I will remove it :D



I fixed it by just using a piece of hookup wire, since I do not have a replacement fuse; I will order a 2A one when I get a chance. While testing the card I also flashed it, without the boot ROM, to the plain IT-mode HBA firmware, P19. I installed it into my white ESXi box, enabled pass-through on it, and rebooted. I have added 2 x 4TB HDDs to the M1015.

Following on from the install, I created the first DS and used the entire 80GB SSD. Since it already had a Windows partition on it, trying to create it as VMFS5 failed, so I created it as VMFS3, deleted it, then created it as VMFS5 instead of upgrading :p

Next I imported the napp-it .ova template in ESXi via the vSphere menu. This went smoothly, so I just have a question about a comment in the manual:
During setup, you only need to assign your virtual nic. Network is configured for DHCP.
After that, you can start the napp-it storage server and manage it via web-browser.
I am not sure at this point, so I did nothing and moved along; I was able to connect via web browser to http://serverip:81 fine, and napp-it is displayed.

The other point where I don't know what to do is this comment:
!! problem currently napp-it uses mini-httpd, one of the smallest webservers.
Sometimes, mini-http crashes under load. You have to restart it then as root:
/etc/init.d/napp-it start
Also, do I need to install the VM tools or is this already done?

Now I want to make a few VMs, using one of the 120GB SSD drives and mirroring it to the other. How do I see these SSDs? ESXi didn't see them when I was trying to make a DS, but the BIOS can see them; they are connected to the onboard Marvell 88SE9123 PCIe SATA 6.0 Gb/s controller.

Thanks
PEACE
Kosti
 

gea

When you import the napp-it template, you can assign the two preconfigured vnics to an ESXi vswitch, and optionally to a VLAN if you use tagged VLANs. A DHCP server is needed to get an IP, but this is common and provided, for example, by your Internet router. VMtools are preinstalled.

About mini_httpd
The current version of mini_httpd is very stable. If you need to disable or restart napp-it, use this command.

About the Marvell
Maybe it is not supported by ESXi. You may check for drivers, use SATA-AHCI, or prefer ZFS over NFS with the LSI HBA on a separate high-performance SSD pool, which is what I would do.
 

Kosti

Hey gea, thanks for popping in to explain. As for the Marvell, I've been reading that the 88SE9123 PCIe SATA 6.0 Gb/s controller is hit and miss :(

Code:
~ # lspci -v | grep "Class 0106" -B 1
0000:01:00.0 SATA controller Mass storage controller: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller
         Class 0106: 1b4b:9123
~ #
It looks like VMware has dropped support for the Marvell 88SE9128 in ESXi 5.5, but apparently it worked in 5.1. So what if I install 5.1 and confirm the Marvell is working - could I then upgrade to 5.5 or 6 and still have it working, or will the upgrade remove or overwrite such support and drivers?

According to From 32 to 2 ports: Ideal SATA/SAS Controllers for ZFS & Linux MD RAID - Zorinaq - on GitHub, Marvell 88SE9128 should be supported under Illumos: 3815 AHCI: Support for Marvell 88SE9128 Reviewed by: Johann 'Myrkrave… · joyent/illumos-joyent@257c04e · GitHub

There is also the site "How to make your unsupported SATA AHCI Controller work with ESXi 5.5 and 6.0", which indicates it can work by adding the VIB file or the Offline Bundle, but this is where my limited knowledge fails me. It is mentioned that my controller was added to the list in this version:

Code:
Version History
Device Vendor                   Device Name                               PCI ID      Added in
Marvell Technology Group Ltd.   88SE9123 PCIe SATA 6.0 Gb/s controller    1b4b:9123   1.1
Sadly I cannot comment on this blog since comments have been closed, and the current version is 1.33 or something like that.

In the same blog there is a mention of using the following to add in the support, but I am not sure if this is valid for my controller:
Code:
esxcli software acceptance set --level=CommunitySupported
esxcli network firewall ruleset set -e true -r httpClient
esxcli software vib install -d http://vibsdepot.v-front.de -n sata-xahci
Can I just run these via a command shell? Do I need to do anything after that other than a reboot?
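From what I understand of the blog, after running those three commands and rebooting, the check would be something like this (just my reading of it, happy to be corrected):

```shell
# confirm the package actually installed
esxcli software vib list | grep sata-xahci

# reboot so the new PCI ID mappings get loaded
reboot
```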

I even went to the extreme of hacking my BIOS to load the latest firmware for the controllers, and while in the BIOS I updated a few other things. I added the following updates - the Intel controller has a later version available, however the version I added was deemed to have the best throughput and TRIM support:

Code:
Intel ICH10R SATA RAID Controller: v11.2.0.1527 (TRIM mod)
Marvell 91xx SATA 6G Controller - Bios: v1.0.0.1038
Marvell 91xx SATA 6G Controller - Firmware: v2.2.0.1125b
Marvell 91xx SATA 6G Controller - Bootloader: v1.0.1.0002b
JMicron JMB36X Controller: v1.08.01
Intel 82567V-2 Gigabit Network: v1.5.43
SLIC 2.1: ASUS (SSV3/MMtool method)
For now I am going to grab 5.1 from the net and see if it yields support for the controller, as I need to have this working.

If anyone has any further suggestions or ideas on how to get this controller working, please help me.

Thanking you all in advance
PEACE
Kosti
 

Kosti

This Marvell 88SE9123 on ESXi 5.5 is doing my bloody head in. Why is this so hard? Even if I pass it through as a PCI device to napp-it, it doesn't load up, or the VM fails to start.

I've read a ton of info and heaps of people have this issue. Yes, I know it's old tech, but one particular thread suggests a fix. It goes over my head though; I do not know what it means, as it must be for a Linux kernel, not for ESXi.

HERE

I also found another similar issue via a page here on this forums that pointed to a Japanese site that made a fix for ESXI via another VIB

Code:
# In this example it is assumed that the VIB is in /tmp

esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /tmp/sata-mv-0.1.x86_64.vib
I've downloaded the VIB but have not yet tried it, since from memory it was for ESXi 4.0. I did attempt the sata-xahci one, however it made stuff-all difference and I still could not see the controller in ESXi 5.5u2 or any VMs.

Yes, I'm in over my head with this stuff, but with a little guidance, or someone with a Linux background to explain it all, maybe I can get it to work.

More findings - the two below seem to have been tracking the bug:

Bug 42679 – DMA Read on Marvell 88SE9128 fails when Intel's IOMMU is on
&
Attachment #124001 for bug #42679

If I add some of these VIBs I am finding, what damage can be done, and can I remove them afterwards?
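From what I've read, a VIB installed this way can normally be removed again by name, followed by a reboot, something like the below (using the package name shown by vib list) - can someone confirm?

```shell
# remove a previously installed VIB by name, then reboot
esxcli software vib remove -n sata-xahci
reboot
```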

Kosti
 

gea

You will never be happy on ESXi with unsupported or badly supported hardware.
Use Intel server boards, Intel NICs and LSI HBAs only, and you are done.
 

Kosti

You will never be happy on ESXi with unsupported or badly supported hardware.
Use Intel server boards, Intel NICs and LSI HBAs only, and you are done.
Fair enough. I was hoping for some assistance to figure out the Marvell issue, so I guess I'll scrap this ESXi/napp-it ZFS build and just use MS Server and be done with it.
I should just piss this all off to eBay, buy a shitty QNAP NAS, and not worry about asking for any more help.
 

gea

If you want to use just any hardware, you must use Windows, as there is always driver support from the hardware manufacturer. But even there you cannot expect that it will work on every Windows version without trouble, or at all.

With systems like ESXi, OSX or Solaris you mainly have driver support only from the OS vendor, like VMware, Apple or Oracle/Illumos. For ESXi you can find special editions from Dell, HP or Lenovo for particular servers. Sometimes you can use them on similar hardware, but this is not what I would call a trouble-free solution.

You must simply select an OS or a system platform and then buy the hardware according to that. This does not mean that it must be expensive; LSI OEM HBAs like a Dell H200 or IBM M1015 are cheap enough.
 

Kosti

Funny, there appears to be some support for Linux, FreeNAS, and from what I've read Illumos and even unRAID - just nothing for ESXi or VMware. Yes, VMware hates Marvell, LOL, they just don't get along. Sadly I maybe should have taken a different approach, but I am committed to napp-it; I want to use it and learn on it, but I can't, as I am being forced toward a more commercial approach like Windows!

I'm hoping someone may know if it's possible to port the Linux driver into ESXi/VMware, or someone who can help me diagnose the VIB errors I get when I add the drivers into ESXi. I'm close, but I need help.

Buying off-the-shelf supported hardware is good if you are rich; this is not a commercial project, just an attempt to build a storage system from stuff I have, as cheaply as possible. As I mentioned in my OP, this is my stuff that is not being used, and I want to get use out of it since it is doing nothing... Hell, I even got my broken LSI M1015 working, and it is still not seen by napp-it in passthrough at all, even when I try to initialize the attached HDDs, since they are new, unformatted 4TB drives.

Windows sees all drives, and even the Marvell 9123 works fine out of the box and sees all my drives, so there is driver support out there. My question is now around support for this hardware in this setup, and I am seeking assistance. Is there someone who has experience in this area of porting, or who can help with ESXi and the Marvell controller that is onboard my motherboard?

I would have thought someone else has had this issue, and maybe someone out there has some experience or a workaround I could try, since this is obviously older hardware.

Thanks
Kosti
 

Kosti

Clearly it's ESXi! I downgraded to 5.1 and WOW, look, it's supported and detected. Now to figure out how to get it working under 5.5 & 6.


Code:
~ # lspci -v | grep "Class 0106" -B 1
00:00:1f.2 SATA controller Mass storage controller: Intel Corporation ICH10 6 port SATA AHCI Controller [vmhba0]
         Class 0106: 8086:3a22
--
00:01:00.0 SATA controller Mass storage controller: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller [vmhba2]
         Class 0106: 1b4b:9123
~ #
I also installed a further support driver; using the VIB list command you can see it here:

Code:
~ # esxcli software vib list
sata-mv    0.1    daoyama    CommunitySupported    2016-01-18

The only issue so far is that it takes about 5-8 minutes to boot ESXi when loading the AHCI module, and I am yet to see the attached drives.

Some positive progress, but I need some help working out how to get it to see the attached devices.
 

Deci

You have 3 drives: 1x SSD for the ESXi and OmniOS/napp-it AIO, 2x to be passed to the AIO VM.

The Intel controller will boot ESXi fine; you attach the 2x SSDs for the AIO to the LSI card and pass that card to the AIO VM.

Why are you bothering to muck around with the Marvell controller at all?
 

Kosti

napp-it will not load when I add the PCI Marvell controller in passthrough mode. napp-it hangs and CPU utilisation maxes out.

Is it possible to add support for it by porting the Linux fixes into OmniOS?

@Deci
The LSI is used for the ZFS storage and will carry the 4TB drives in the pool.

The Marvell is 6Gb where the Intel is only 3Gb, so it's too damn slow.