
An Oracle ZFS Storage All in One Appliance for your Home Lab

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by zhu, Nov 28, 2015.

  1. zhu

    zhu New Member

    A few days ago, I saw a thread in this forum about a DIY Oracle ZFS Storage Appliance. I realized that my own experience with this could be interesting for others, as not much information is available on the net.

    The beginning of the story is the Oracle ZFS Storage simulator for VirtualBox that you can download from Oracle. I gave it a try and really liked the efficient UI and, most importantly, the analytics section, with so many options to see what is happening on your storage. After digging a bit, I realized that the simulator is the full software of the Oracle ZFS Storage Appliance. I then started to investigate whether it would be possible to run it under VMware ESXi.

    The following steps are needed.

    First, download the Oracle/Sun storage simulator: Oracle ZFS Storage Simulator Downloads

    Create a new virtual machine in ESXi. You will need to select at least virtual hardware version 10 (ESXi 5.5 and above), as we will need a SATA controller for the VM. Select Solaris 11 as the guest OS. Give the VM at least 2560 MB of memory. Remove the default hard drive and the "LSI Logic Parallel" controller (the ZFS-SA won't like it), and add a SATA controller to the VM.

    Unzip the downloaded storage simulator archive. Upload "Oracle_ZFS_Storage-disk1.vmdk" to your ESXi host, into the directory of the VM you created above.

    You will need to convert the vmdk file. If you attach it directly to the VM, it will appear to work, but you will run into trouble whenever you want to change the VM. Log in to your ESXi host using ssh, go to the directory of the VM, and use vmkfstools to make the conversion:

    vmkfstools -i Oracle_ZFS_Storage-disk1.vmdk Oracle_ZFS_Storage-boot.vmdk -d thin -a sata

    Add a new hard drive to the VM, select "use an existing disk", select the "Oracle_ZFS_Storage-boot.vmdk" file, and attach it to the SATA controller (if an LSI controller is created, remove it).

    At that point the system won't boot because the Solaris boot disk is not configured properly. To fix that, I followed the instructions from RIAAN'S SYSADMIN BLOG.

    Boot from a Solaris image. In ESXi, remember that pressing ESC during boot brings up the boot menu, where you can select to boot from the CD-ROM. Select keyboard and language, then select shell. The purpose is to set the right path for the boot disk:

    root@solaris:~# format
    Searching for disks...done


    AVAILABLE DISK SELECTIONS:
    0. c2t0d0 <ATA-VMware Virtual S-0001 cyl 6524 alt 2 hd 255 sec 63>
    /pci@0,0/pci15ad,790@11/pci15ad,7e0@2/disk@0,0
    Specify disk (enter its number): ^C
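    The device path printed under the disk entry is needed again in later steps. If you capture the `format` listing to a file, grep can pull the path out (a sketch; format.out is an assumed capture file name, not something the appliance creates):

```shell
# Extract the /pci... device path from a saved `format` disk listing.
# The path is the line printed under each disk entry.
grep -o '/pci[^ ]*' format.out | head -1
```

    On the listing above, this prints /pci@0,0/pci15ad,790@11/pci15ad,7e0@2/disk@0,0.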

    "format" will give you the correct path, you then need to update ZFS-SA boot configuration

    root@solaris:~# zpool import -f system
    root@solaris:~# zfs list | grep root
    system/ak-SUNW_ankimo-2013.06.05.4.0_1-1.7/root 1,64G 36,8G 1,63G legacy
    root@solaris:~# mkdir /a
    root@solaris:~# mount -F zfs system/ak-SUNW_ankimo-2013.06.05.4.0_1-1.7/root /a
    root@solaris:~# zfs set readonly=off system/ak-SUNW_ankimo-2013.06.05.4.0_1-1.7/root

    root@solaris:~# cp /etc/path_to_inst /a/etc
    root@solaris:~# echo "setprop boot /devices/pci@0,0/pci15ad,790@11/pci15ad,7e0@2/disk@0,0:a" >>/a/boot/solaris/bootenv.rc

    root@solaris:~# bootadm update-archive -R /a
    updating /a/platform/i86pc/boot_archive
    updating /a/platform/i86pc/amd64/boot_archive

    root@solaris:~# umount /a
    root@solaris:~# zpool export system
    root@solaris:~# init 0


    At that point, the ZFS-SA should boot correctly under ESXi.

    Proceed with the basic configuration of the appliance. Connect to the web interface to perform the full setup.

    Once the setup is finished, log in to the appliance (you can use ssh for that). You will land in the appliance shell (named aksh); type "shell" to get a real bash shell.

    First we need to make the root fs writable:

    mount -o rw,remount /

    Likely you will want to install the vmxnet3s driver. You can do it the following way. Initiate VMware Tools installation in ESXi. The CD-ROM won't mount automatically, so the following steps are needed:
    mkdir /tmp/cdrom
    mount -F hsfs /devices/pci\@0\,0/pci-ide\@7\,1/ide\@1/sd\@0\,0:a /tmp/cdrom/
    cd /tmp/
    tar xzf cdrom/vmware-solaris-tools.tar.gz
    cd vmware-tools-distrib
    install -f /kernel/drv/amd64/ -u root -g sys -m 0755 ./lib/modules/binary/11_64/vmxnet3s
    install -f /kernel/drv/ -u root -g sys -m 0644 ./lib/modules/binary/11/vmxnet3s.conf
    add_drv -i '"pci15ad,7b0"' vmxnet3s


    The MTU for vmxnet3s on Solaris cannot be set in the conventional way, and the appliance will not like that, so you need to edit "/usr/lib/ak/svc/method/akdatalink" and comment out the following three lines:
    # if [[ $? -ne 0 ]]; then
    # dl_cfgfail "$1: could not set MTU $mtu"
    # fi


    VMware is not VirtualBox, so for the web UI to look good, some more editing is needed: the platform hc topology and the platform XML description.

    cd /usr/platform/i86pc/lib/fm/topo/maps/
    cp VirtualBox-hc-topology.xml VMware-Virtual-Platform-hc-topology.xml

    vi VMware-Virtual-Platform-hc-topology.xml


    Replace 'VirtualBox' with 'VMware-Virtual-Platform'.

    Correct the PCI path for the SATA disks to point to the correct location (as found when you made the vmdk file bootable):
    :%s/pci8086,2829@d/pci15ad,790@11\/pci15ad,7e0@2/g
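    If you prefer to script the edit rather than use vi, both substitutions can be applied in one sed pass (a sketch; the replacement PCI path must be whatever `format` reported on your system):

```shell
# Build the VMware topology file from the VirtualBox one, renaming the
# platform and fixing the SATA controller PCI path in one pass.
cd /usr/platform/i86pc/lib/fm/topo/maps/
sed -e 's/VirtualBox/VMware-Virtual-Platform/g' \
    -e 's|pci8086,2829@d|pci15ad,790@11/pci15ad,7e0@2|g' \
    VirtualBox-hc-topology.xml > VMware-Virtual-Platform-hc-topology.xml
```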

    cd /usr/lib/ak/metadata/appliance/SUNW,ankimo/chassis/
    cp VirtualBox.xml VMware-Virtual-Platform.xml
    vi VMware-Virtual-Platform.xml


    Replace 'innotek GmbH' with 'VMware' and 'VirtualBox' with 'VMware-Virtual-Platform'.
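    This edit too can be scripted with sed instead of vi (a sketch):

```shell
# Derive the VMware chassis description from the VirtualBox one.
cd /usr/lib/ak/metadata/appliance/SUNW,ankimo/chassis/
sed -e 's/innotek GmbH/VMware/g' \
    -e 's/VirtualBox/VMware-Virtual-Platform/g' \
    VirtualBox.xml > VMware-Virtual-Platform.xml
```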

    Reboot the appliance in reconfiguration mode:
    reboot -- -r

    If it hangs, just reset it.

    This is what you get

     
    #1
    Last edited: Nov 28, 2015
  2. zhu

    zhu New Member

    But you can go a bit further to get something that looks more like my home lab server:



    If you have an LSI SAS2008 HBA, for instance, you can use passthrough to give it to the ZFS-SA VM. Different chassis are available in the appliance; you will find them under /usr/lib/ak/metadata/appliance/. They all have an .xml description. You can replace the content of VMware-Virtual-Platform.xml with the content of one of these files to get a different look. The one above is the SUN-FIRE-X4270-M2 SERVER.

    You will also need to edit the hc topology file (/usr/platform/i86pc/lib/fm/topo/maps/VMware-Virtual-Platform-hc-topology.xml).

    The hc-topology file for the above configuration is the following:

    Code:
    <?xml version="1.0"?>
    <!DOCTYPE topology SYSTEM "/usr/share/lib/xml/dtd/topology.dtd.1">
    <!--
    Copyright (c) 2008, 2012, Oracle and/or its affiliates. All rights reserved.
    -->
    
    <topology name='VMware-Virtual-Platform' scheme='hc'>
    
      <range name='chassis' min='0' max='0'>
        <enum-method name='x86pi' version='99'/>
      </range>
    
      <range name='motherboard' min='0' max='0'>
        <node instance='0'>
          <dependents grouping='children'>
            <range name='chip' min='0' max='100'>
              <enum-method name='chip' version='1' />
              <propmap name='chip' />
            </range>
            <range name='hostbridge' min='0' max='254'>
              <enum-method name='hostbridge' version='1' />
            </range>
          </dependents>
        </node>
      </range>
    
      <range name='chassis' min='0' max='0'>
        <node instance='0'>
          <dependents grouping='children'>
            <range name='bay' min='0' max='15'>
    
             <!-- This part is for the disks that are on LSI SAS2008 HBA controller and are discovered through bay.so plugin -->
             <enum-method name='bay' version='1' />
    
             <!-- The 4 last discs are on the virtual SATA controller -->
             <node instance='12'>
                <propgroup name='protocol' version='1' name-stability='Private' data-stability='Private'>
                 <propval name='label' type='string' value='HDD 0' />
                </propgroup>
                <propgroup name='binding' version='1' name-stability='Private' data-stability='Private'>
                 <propval name='occupant-path' type='string' value='/pci@0,0/pci15ad,790@11/pci15ad,7e0@2/disk@0,0' />
                </propgroup>
             </node>
             <node instance='13'>
                <propgroup name='protocol' version='1' name-stability='Private' data-stability='Private'>
                 <propval name='label' type='string' value='HDD 1' />
                </propgroup>
                <propgroup name='binding' version='1' name-stability='Private' data-stability='Private'>
                 <propval name='occupant-path' type='string' value='/pci@0,0/pci15ad,790@11/pci15ad,7e0@2/disk@1,0' />
                </propgroup>
             </node>
    
             <node instance='14'>
                <propgroup name='protocol' version='1' name-stability='Private' data-stability='Private'>
                 <propval name='label' type='string' value='CACHE 0' />
                </propgroup>
                <propgroup name='binding' version='1' name-stability='Private' data-stability='Private'>
                 <propval name='occupant-path' type='string' value='/pci@0,0/pci15ad,790@11/pci15ad,7e0@2/disk@2,0' />
                </propgroup>
             </node>
    
             <node instance='15'>
                <propgroup name='protocol' version='1' name-stability='Private' data-stability='Private'>
                 <propval name='label' type='string' value='CACHE 1' />
                </propgroup>
                <propgroup name='binding' version='1' name-stability='Private' data-stability='Private'>
                 <propval name='occupant-path' type='string' value='/pci@0,0/pci15ad,790@11/pci15ad,7e0@2/disk@3,0' />
                </propgroup>
             </node>
    
    
             <!-- Catch the disks that may have been missed above -->
              <dependents grouping='children'>
                <range name='disk' min='0' max='0'>
                  <enum-method name='disk' version='1' />
                </range>
              </dependents>
    
            </range>
          </dependents>
        </node>
      </range>
    </topology>
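    Before rebooting, a quick sanity check is to list the occupant-path values the topology declares and compare them with what `format` reports; sed can extract them (a sketch):

```shell
# Print every occupant-path value declared in the topology file.
sed -n "s/.*occupant-path.*value='\([^']*\)'.*/\1/p" \
    /usr/platform/i86pc/lib/fm/topo/maps/VMware-Virtual-Platform-hc-topology.xml
```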
    If you have a SAS HBA managed by the appliance, you will use the "bay" enumerator to list the attached drives. For the bay enumerator to work, you need a "VMware-Virtual-Platform,bay_labels" file in the same directory as the topology file. Here is the file associated with the above config:

    Code:
    # Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
    #
    # Product Name: SUN-FIRE-X4170-M2-SERVER / Modded for VirtualBox
    # Server Name: -
    #
    # Product : HBA : HBA Instance : Chassis Name : Chassis S/N : PHY : Label
    # -----------------------------------------------------------------------
    VMware-Virtual-Platform:mpt_sas:0:SYS:-:7:SASHDD0
    VMware-Virtual-Platform:mpt_sas:0:SYS:-:6:SASHDD1
    VMware-Virtual-Platform:mpt_sas:0:SYS:-:5:SASHDD2
    VMware-Virtual-Platform:mpt_sas:0:SYS:-:4:SASHDD3
    VMware-Virtual-Platform:mpt_sas:0:SYS:-:3:SASHDD4
    VMware-Virtual-Platform:mpt_sas:0:SYS:-:2:SASHDD5
    VMware-Virtual-Platform:mpt_sas:0:SYS:-:1:SASHDD6
    VMware-Virtual-Platform:mpt_sas:0:SYS:-:0:SASHDD7
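    Each non-comment line of the bay_labels file is seven colon-separated fields (product, HBA driver, HBA instance, chassis name, chassis serial, PHY, label). The PHY-to-label mapping can be checked with awk (a sketch):

```shell
# Print the PHY number and slot label for each bay, skipping comment lines.
awk -F: '!/^#/ && NF == 7 { print $6, $7 }' \
    '/usr/platform/i86pc/lib/fm/topo/maps/VMware-Virtual-Platform,bay_labels'
```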
    Once you manage to understand it, this is really a great system.

    One interesting point to mention in the context of homelab use is that the ZFS-SA already has the persistent L2ARC cache found in Solaris 11.3 (but not the lz4 compression). So if your server is not always on, using a "large" SSD for your L2ARC is not a waste of resources.

    Hope the above tips will help others. Don't hesitate to point out any errors or possible optimisations.
     
    #2
  3. PigLover

    PigLover Moderator

    Brilliant post. Thank you.
     
    #3
  4. whitey

    whitey Moderator

    Nice, pre-canned VMware ESXi appliance/ovf export please? :-D hahah

    It has always bugged me why a GOOD open source ZFS web GUI was not available... not saying your reverse-engineered 're-opening of a re-closed source code stack' hacks will be condoned by Oracle HAHA

    Sure we have napp-it, freenas, rockstor coming along nicely, but why this brilliant piece of code from Sun engineers was never released is the EXACT reason why I despise Oracle... among a litany of other reasons.
     
    #4
    Last edited: Nov 28, 2015
  5. whitey

    whitey Moderator

    Any updates/progress potentially on a pre-canned stg appliance :-D

    A man can dream right?
     
    #5
  6. K D

    K D Active Member

    I came across this old thread while looking for ZFS options, as I am having networking issues with my FreeNAS box. Oracle seems to have added a VMware template download in addition to the VirtualBox one. All the instructions provided with it refer to VMware Workstation, but I don't see why it can't work with ESXi.
     
    #6
  7. dragonme

    dragonme Member

    yeah would love me some .ova file!!! rather than create an account and modify all this...

    also.. I am assuming a bunch of those steps can now be eliminated for an esxi host if you download the vmware file vs the virtualbox version?

    Thanks
     
    #7
  8. dragonme

    dragonme Member

    would I also be correct in assuming that this will not import current openzfs / napp-it pools since it's solaris zfs?
     
    #8
  9. K D

    K D Active Member

    #9
  10. acmcool

    acmcool Active Member

    One question...would we receive updates for this?
     
    #10
  11. gea

    gea Well-Known Member

    Correct, you cannot move Open-ZFS pools (v5000) from/to Oracle Solaris ZFS (v37).
     
    #11
  12. dragonme

    dragonme Member

    anyone have this fired up in an esxi 6 environment that would like to export an .ova for the cheering crowd???

    anyone.. bueler.. bueler...
     
    #12
  13. K D

    K D Active Member

    Oracle already provides an OVA file now that can be imported into esxi.
     
    #13
  14. Dawg10

    Dawg10 Member

    Not in this decade's budget...

    The recommended minimum disk storage configuration for VMware vSphere 6.x includes a mirrored disk pool with the following configuration:

    For models using high-performance (HP) disks:
    - at least twenty 600 GB 10,000 RPM HP 2.5-inch SAS3 hard disk drives (HDDs), or at least twenty 1.2 TB 10,000 RPM HP 2.5-inch SAS3 HDDs
    - at least two 200 GB SSD devices for LogZilla, with a striped log profile

    For models using high-capacity (HC) disks:
    - at least forty-four 8 TB 7,200 RPM HC 3.5-inch SAS3 HDDs
    - at least two 200 GB SSD devices for LogZilla, with a striped log profile

    For both the HC and HP models: at least two 1.6 TB SSDs for Level 2 Adaptive Replacement Cache (L2ARC), as a striped cache.
     
    #14
  15. dragonme

    dragonme Member

    @K D

    so this ova requires none of the mods or command line stuff in @zhu's original post?
     
    #15
  16. K D

    K D Active Member

    Nope. Standard OVA file.
     
    #16
  17. SlickNetAaron

    SlickNetAaron New Member

    You are able to pass an HBA to the SAN and run raw disks on it?! That's incredible if it works long term!
     
    #17
  18. K D

    K D Active Member

    Right now I don’t have a host with hba/disks available to pass through.
     
    #18