Custom storage plugins for Proxmox


_alex

Active Member
Jan 28, 2016
Bavaria / Germany
Hi,
in another thread, @nephri and I discussed using ZFS over iSCSI with FreeNAS:

Proxmox VE 4.3, set an LACP bond

Sometime this summer, some (undocumented) changes went into Proxmox that allow custom storage plugins which don't break with the next update. The discussion on the pve-devel list can be found here:

[pve-devel] [PATCH] Add support for custom storage plugins

A sample implementation for MPIO with Netapp is here:
GitHub - mityarzn/pve-storage-custom-mpnetapp

In this thread I will give a short introduction on how to use this, along with an example of ZFS on an SCST iSCSI target, in the hope it is useful for someone.

Perl is not my 'native language', but I'm getting more and more used to it by reading the bits of code I need to understand to get things set up and/or make minor changes, mainly in the Proxmox codebase.
So, if there is code, use it with caution and at your own risk!
Also, if someone knows a better/cleaner way to do these things in Perl, I'm happy to learn ;)

I was looking for a way to get ZFS over iSCSI with an SCST target earlier and had done some work on it, but it would have required small changes in Proxmox itself to survive updates. As I had some trouble with the code (migrating storage away from such a volume always caused errors) and no urgent need for it, I put this on hold.

However, after @nephri decided he would put some work into a plugin for FreeNAS, I gave it another try, did some refactoring to use the 'custom plugin' mechanism, and the silly I/O errors that drove me nuts were gone as well.

Here is a short introduction to how the custom storage plugin mechanism in Proxmox works:

- the plugins reside in /usr/share/perl5/PVE/Storage/Custom/
So create this directory if it is not present
- each plugin goes into a file #name#.pm within this directory, e.g. scstZFSPlugin.pm or FreeNASZFSPlugin.pm
- this is basically a Perl module that holds the implementation of the storage plugin.
The 'interface' is defined in PVE::Storage::Plugin, therefore it's a good idea to inherit from this (or a subclass):

Code:
use base qw(PVE::Storage::Plugin);
Besides the implementation itself, some bits of configuration have to be provided by the module, too.

This is basically an API version, the plugin's name (to be used in storage.cfg), what type of content the storage will be capable of, its properties and its options:


Code:
sub api {
    return 1;
}

sub type {
    return 'scstzfs';
}

sub plugindata {
    return {
        content => [ { images => 1 }, {images => 1} ]
    };
}

# @todo why is it needed ?
sub properties {
    return {
        nowritecache_scst => {
            description => "disable write cache on the target",
            type => 'boolean'
        }
    }
}

sub options {
    return {
        nodes => { optional => 1 },
        disable => { optional => 1 },
        portal => { fixed => 1 },
        target => { fixed => 1 },
        iscsiprovider => { fixed => 1 },
        pool => { fixed => 1 },
        blocksize => { fixed => 1 },
        nowritecache => { optional => 1 },
        sparse => { optional => 1 },
        comstar_hg => { optional => 1 },
        comstar_tg => { optional => 1 },
        content => { optional => 1 },
        shared => { fixed => 1 },
    };
}
- a good way to actually test an implementation is pvesm status; this loads your plugin and throws errors if there are any

- to activate it / see it in the GUI, a restart of pvedaemon is required: /etc/init.d/pvedaemon restart (a short recap of all steps follows below)
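
Putting it together, a minimal sketch of the whole setup, assuming the plugin file is called scstZFSPlugin.pm as in the example above:

Code:
mkdir -p /usr/share/perl5/PVE/Storage/Custom
cp scstZFSPlugin.pm /usr/share/perl5/PVE/Storage/Custom/
# add a matching entry to /etc/pve/storage.cfg by hand, then:
pvesm status
/etc/init.d/pvedaemon restart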
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
Here is the current state of the scstZFS Plugin.
In my case SCST runs within a Proxmox node; if someone is interested in how to get it up and running, I have a small script that does everything.

This is just one part of the puzzle for what I want in the end and am currently working on in a lab environment.
The ultimate goal is HA ZFS across two chassis, where each chassis holds half of the striped vdevs, exported as vdisk_blockio via SRP or iSER to the other. The zvols should then be imported/assembled on one node (the storage master), which exports them through SCST on a service IP, from where the storage can be used via this plugin.

For the module itself I decided to inherit from PVE::Storage::ZFSPlugin, as this already does all the management of ZFS via SSH. The only pitfall is that it doesn't know anything about the SCST target.

For the actual changes that were necessary for SCST, I adopted the concept of lun_cmds that Proxmox uses for the other iSCSI targets. So this part resides in a separate file, within a folder 'LunCmd'.

/usr/share/perl5/PVE/Storage/Custom/scstZFSPlugin.pm

Code:
package PVE::Storage::Custom::scstZFSPlugin;

use PVE::Storage::Custom::LunCmd::SCST;

use PVE::Tools qw(run_command);

# inherit on the ZFSPlugin
#use base qw(PVE::Storage::ZFSPlugin);
@ISA = qw(PVE::Storage::ZFSPlugin);

my @ssh_opts = ('-o', 'BatchMode=yes');
my @ssh_cmd = ('/usr/bin/ssh', @ssh_opts);
my $id_rsa_path = '/etc/pve/priv/zfs';

# plugin configuration
sub api {
    return 1;
}

sub type {
    return 'scstzfs';
}

sub plugindata {
    return {
        content => [ { images => 1 }, {images => 1} ]
    };
}

sub properties {
    return {
        nowritecache_scst => {
            description => "disable write cache on the target",
            type => 'boolean'
        }
    }
}

sub options {
    return {
        nodes => { optional => 1 },
        disable => { optional => 1 },
        portal => { fixed => 1 },
        target => { fixed => 1 },
        iscsiprovider => { fixed => 1 },
        pool => { fixed => 1 },
        blocksize => { fixed => 1 },
        nowritecache => { optional => 1 },
        sparse => { optional => 1 },
        comstar_hg => { optional => 1 },
        comstar_tg => { optional => 1 },
        content => { optional => 1 },
        shared => { fixed => 1 },
    };
}

my $lun_cmds = {
    create_lu   => 1,
    delete_lu   => 1,
    import_lu   => 1,
    modify_lu   => 1,
    add_view    => 1,
    list_view   => 1,
    list_lu     => 1,
};

# SCST-specifics


my $zfs_get_base = sub {
    return PVE::Storage::Custom::LunCmd::SCST::get_base;
};

sub zfs_request {
    my ($class, $scfg, $timeout, $method, @params) = @_;

    $timeout = PVE::RPCEnvironment::is_worker() ? 60*60 : 10
        if !$timeout;

    my $msg = '';

    if ($lun_cmds->{$method}) {
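        # lun management commands (see $lun_cmds above) go to the SCST LunCmd module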
        $msg = PVE::Storage::Custom::LunCmd::SCST::run_lun_command($scfg, $timeout, $method, @params);
    } else {
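        # everything else is executed as a zfs/zpool command on the target host via ssh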

        my $target = 'root@' . $scfg->{portal};

        my $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target];

        if ($method eq 'zpool_list') {
            push @$cmd, 'zpool', 'list';
        } else {
            push @$cmd, 'zfs', $method;
        }

        push @$cmd, @params;

        my $output = sub {
            my $line = shift;
            $msg .= "$line\n";
        };

        run_command($cmd, outfunc => $output, timeout => $timeout);
    }

    return $msg;
}

# Storage implementation is identical to ZFSPlugin and therefore not changed
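
The SCST LunCmd module itself is attached in the next post. As a rough orientation only, such a LunCmd module has to provide at least get_base and run_lun_command; a minimal sketch (hypothetical, the attached SCST.pm is the authoritative version and will differ):

Code:
package PVE::Storage::Custom::LunCmd::SCST;

use strict;
use warnings;

# base path the ZFSPlugin prepends to zvol names;
# '/dev/zvol' is an assumption here for zvol-backed vdisk_blockio
sub get_base {
    return '/dev/zvol';
}

# called by zfs_request() above for every method listed in $lun_cmds
sub run_lun_command {
    my ($scfg, $timeout, $method, @params) = @_;

    my $handlers = {
        create_lu => sub { die "create_lu not implemented yet\n" },
        delete_lu => sub { die "delete_lu not implemented yet\n" },
        import_lu => sub { die "import_lu not implemented yet\n" },
        modify_lu => sub { die "modify_lu not implemented yet\n" },
        add_view  => sub { return 1 },    # nothing to do for this target
        list_view => sub { die "list_view not implemented yet\n" },
        list_lu   => sub { die "list_lu not implemented yet\n" },
    };

    my $handler = $handlers->{$method}
        or die "unknown lun command '$method'\n";

    return $handler->($scfg, $timeout, @params);
}

1;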
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
/usr/share/perl5/PVE/Storage/Custom/LunCmd/SCST.pm

is in the attached file.

Example config in /etc/pve/storage.cfg:

Code:
scstzfs: zfsonscst
        target iqn.2016.12.27.pve.local
        pool testpool
        portal 10.0.0.1
        blocksize 4k
        shared 1
        iscsiprovider iet
        content images
Currently, iscsiprovider iet is still in there, as I haven't yet managed to override $zfs_get_base of the ZFSPlugin the right way, and it complains about an unknown iscsiprovider if it is not present :(

@nephri:

I guess you could just rename the files, change the package names / obvious values like the plugin type, and then implement the following in the LunCmd file:


Code:
my $lun_cmds = {
    create_lu   => 1,
    delete_lu   => 1,
    import_lu   => 1,
    modify_lu   => 1,
    add_view    => 1,
    list_view   => 1,
    list_lu     => 1,
};

I had a quick look at the FreeNAS API; it should be quite possible to do everything via the REST API rather than pushing commands via SSH as is done in my sample (a rough sketch of such a call is below).
So a lot of code can be deleted from this file for sure.
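
As a rough sketch of what such a REST call could look like from Perl (using only modules that ship with Perl); the endpoint path, the ?format=json parameter and the freenas_password property are assumptions that need to be checked against the FreeNAS API documentation:

Code:
use strict;
use warnings;
use HTTP::Tiny;
use JSON::PP qw(decode_json);
use MIME::Base64 qw(encode_base64);
use Data::Dumper;

# hypothetical example values; in a real plugin these would come from $scfg
my $scfg = { portal => '10.0.0.1', freenas_password => 'secret' };

sub freenas_api_get {
    my ($scfg, $path) = @_;
    my $url  = "http://$scfg->{portal}/api/v1.0$path";
    my $auth = 'Basic ' . encode_base64("root:$scfg->{freenas_password}", '');
    my $res  = HTTP::Tiny->new->get($url, {
        headers => { Authorization => $auth, Accept => 'application/json' },
    });
    die "FreeNAS API call $path failed: $res->{status} $res->{reason}\n"
        if !$res->{success};
    return decode_json($res->{content});
}

# e.g. list the configured iSCSI extents
my $extents = freenas_api_get($scfg, '/services/iscsi/extent/?format=json');
print Dumper($extents);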

To get an overview of what is called when and how, /usr/share/perl5/PVE/Storage/ZFSPlugin.pm is a good place to look.
 

Attachments


nephri

Active Member
Sep 23, 2015
Paris, France
I added
Code:
 /usr/share/perl5/PVE/Storage/Custom/scstZFSPlugin.pm
 /usr/share/perl5/PVE/Storage/Custom/LunCmd/SCST.pm
I did a chmod 755 on both files.

I did a
Code:
systemctl restart pvedaemon
But I don't see any new storage type in the GUI.

I tried
Code:
pvesm status
But it lists my currently configured storages without any complaint.

PS: I checked that my PVE host has the patch for the Storage/Custom mechanism and it seems to be OK.
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
Did you add the storage to your storage.cfg?

You should then see something like this if it's fine (or at least a lot of errors):

Code:
root@pve1:~# pvesm status
local        dir 1        98952796         6713492        87189756 7.65%
local-lvm  lvmthin 1       362430464         2392041       360038422 1.16%
zfsonscst  scstzfs 1       942669824        75635202       867034621 8.52%
 

nephri

Active Member
Sep 23, 2015
Paris, France
I think that /etc/pve/storage.cfg sets the storages actually available on the host.
But I don't have any effective storage configured using SCST yet.

I expected to see more options in the "Add Storage" menu in the GUI.
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
No, this is still not implemented in the GUI, at least not under 'add storage' :(

I had a look at how the GUI is built, as I do a lot of ExtJS programming at work (more than Perl ...), and it would be darn easy to adapt this, but there is currently no way to inject some ExtJS code that would handle it.

So, the storage needs to be added to storage.cfg by hand, and it should then appear in a quick check with pvesm status, or at least some errors related to your implementation should show up.

If the storage works, it appears in the GUI when adding disks to a VM, under the nodes, and in the list shown under Datacenter -> Storage.
 

nephri

Active Member
Sep 23, 2015
Paris, France
OK,
under these conditions, I'm starting to adapt your script to ctld.

My first question (there will be a lot ^^):
- the read_config in SCST.pm lists all existing LUNs on the iSCSI portal. Can you give an example of the result you get with your command? That way I can see whether my result syntax needs to be normalized or not.

For example, I can do a "ctladm lunlist", but it gives something like this:
Code:
(7:1:0/0): <FreeNAS iSCSI Disk 0123> Fixed Direct Access SPC-4 SCSI device
but ctladm devlist gives me:
Code:
LUN Backend       Size (Blocks)   BS Serial Number    Device ID
  0 block                  2048  512 00074304a2f000   iSCSI Disk      00074304a2f000
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
Yes, this lists all LUNs configured in SCST via sysfs ...
I really have no clue how ctld / the target implementation on FreeNAS works, but I guess somewhere at the beginning there is a LUN number (7:1:0/0) that could be parsed with a regexp ;)

But anyway, why don't you figure out what's possible with the REST API instead of dealing with the CLI and parsing its output?
This should be way easier, as it's only HTTP calls and the responses are JSON ...

Welcome to FreeNAS API’s documentation! — FreeNAS API 1.0 documentation

iSCSI Resources — FreeNAS API 1.0 documentation
 

nephri

Active Member
Sep 23, 2015
Paris, France
The only advantage of dealing directly with ctladm is that it would be usable with, for example, a pure FreeBSD box and not depend on the FreeNAS platform.

ctladm devlist -x
gives an XML result like this:

Code:
<ctllunlist>
<lun id="0">
        <backend_type>block</backend_type>
        <lun_type>0</lun_type>
        <size>2048</size>
        <blocksize>512</blocksize>
        <serial_number>00074304a2f000</serial_number>
        <device_id>iSCSI Disk      00074304a2f000                 </device_id>
        <num_threads>14</num_threads>
        <vendor>FreeNAS</vendor>
        <product>iSCSI Disk</product>
        <revision>0123</revision>
        <naa>0x6589cfc0000001f60077ecfadf87a6a8</naa>
        <insecure_tpc>on</insecure_tpc>
        <rpm>1</rpm>
        <file>/dev/zvol/volFAST/pveas_root</file>
        <ctld_name>pveas_root</ctld_name>
</lun>
</ctllunlist>
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
The only advantage of dealing directly with ctladm is that it would be usable with, for example, a pure FreeBSD box and not depend on the FreeNAS platform.
Yes, this is a point for sure ...
On the other hand, using the API might hide/encapsulate ctladm and its specifics, so if FreeNAS decides to use another target, it 'should' still work ...
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
That looks fine with the -x flag, nice XML :)

So you should be able to work with this using

use XML::Simple;

and get the lun nodes in ctllunlist (and their id attribute) from that to build a list of the LUNs currently configured; the path is then in file ... (see the sketch below)

The Netapp plugin I linked also works with XML; you might find some useful snippets in it, especially line 35ff, where it processes an XML response and accesses values in it:

pve-storage-custom-mpnetapp/MPNetappPlugin.pm at master · mityarzn/pve-storage-custom-mpnetapp · GitHub
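
A minimal sketch of that idea, parsing the devlist output posted above with XML::Simple (the XML is inlined here just for illustration; in a LunCmd module it would come from running ctladm over SSH):

Code:
use strict;
use warnings;
use XML::Simple;

# shortened sample output of `ctladm devlist -x`
my $xml = <<'EOF';
<ctllunlist>
<lun id="0">
        <backend_type>block</backend_type>
        <blocksize>512</blocksize>
        <file>/dev/zvol/volFAST/pveas_root</file>
</lun>
</ctllunlist>
EOF

# ForceArray keeps a single <lun> as a one-element list,
# KeyAttr => [] leaves 'id' as a plain attribute
my $list = XMLin($xml, ForceArray => ['lun'], KeyAttr => []);

foreach my $lun (@{ $list->{lun} }) {
    print "LUN $lun->{id}: $lun->{file}\n";
}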
 

nephri

Active Member
Sep 23, 2015
Paris, France
I gave it a try this way:
- modifying /usr/share/pve-manager/ext6/pvemanagerlib.js by adding a "ctld" option to the iSCSI provider
- adding CTLD.pm in /usr/share/perl5/PVE/Storage/LunCmd
- modifying /usr/share/perl5/PVE/Storage/ZFSPlugin to support the CTLD LUN commands

So, after trying to add a storage during VM creation, I get this error:
- Error: create failed: No configuration found. Install ctld on at ....../LunCmd/CTLD.pm line 99

It's because $scfg->{portal} in my read_config returns "" instead of the host of my iSCSI portal...

How can I debug the output of the Perl script while using the PVE GUI?
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
I gave it a try this way:
- modifying /usr/share/pve-manager/ext6/pvemanagerlib.js by adding a "ctld" option to the iSCSI provider
- adding CTLD.pm in /usr/share/perl5/PVE/Storage/LunCmd
- modifying /usr/share/perl5/PVE/Storage/ZFSPlugin to support the CTLD LUN commands
I wouldn't go this way, as this will break with every update of PVE.
pvemanagerlib.js is completely static, which is the main problem with extending the GUI.
Also, ZFSPlugin.pm is likely to be changed / overwritten by updates.

So, going this way works, but it will need fixes whenever updates come, and on rolling pool upgrades these would then need to be done on every host ... not really a sustainable way.

This is why the 'custom plugins' mechanism caught my attention when I read about it on the pve-devel list ;)

So, after trying to add a storage, I get this error:
- Error: create failed: No configuration found. Install ctld on at ....../LunCmd/CTLD.pm line 99

It's because $scfg->{portal} in my read_config returns "" instead of the host of my iSCSI portal...
Hm, this should usually not be a problem ...

How can I debug the output of the Perl script while using the PVE GUI?
You can use the Dumper:

use Data::Dumper;
---
Dumper($var);

and/or my $debugmsg sub, which also writes to syslog (see SCST.pm); a rough sketch of such a helper is below.
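
For reference, such a helper could look roughly like this; a hypothetical sketch using Sys::Syslog, the $debugmsg in the attached SCST.pm may differ:

Code:
use strict;
use warnings;
use Sys::Syslog qw(openlog syslog closelog);
use Data::Dumper;

# hypothetical sketch of a syslog-based debug helper
my $debugmsg = sub {
    my ($msg) = @_;
    $msg = Dumper($msg) if ref $msg;    # dump references, pass plain strings through
    openlog('scstZFSPlugin', 'pid', 'daemon');
    syslog('info', '%s', $msg);
    closelog();
};

# usage inside the plugin / LunCmd code, e.g.:
# $debugmsg->("read_config: portal is '$scfg->{portal}'");
# $debugmsg->($scfg);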
 

nephri

Active Member
Sep 23, 2015
Paris, France
Yes, I know it's not the right way.
I just want to test it and see how it creates the storage in /etc/pve/storage.cfg and so on.

I will try with the Dumper.

I don't really understand some commands:
- add_view (seems to reload or kill the iSCSI daemon on the target; I'm not sure if I need it)
- import_lu (is it a create with a specific LUN id?)
- list_view (get the LUN id of a specific path?)
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
OK, I'd just take the short path and add it manually to storage.cfg, as this only needs to be done once for the whole cluster.

I can have a look at these commands tomorrow; some are used by other storage backends, e.g. LVM, to activate/deactivate volumes and are not used at all by the ZFSPlugin / always return 1.

I also found that, after fixing some mad bugs, snapshots and rollback (to the most recent snapshot) now work, but it's not possible to create a clone from an earlier snapshot. This is a pretty annoying limitation in the ZFSPlugin (for all iSCSI providers): copying from snapshots is simply not implemented.

But it should be possible with zfs send/receive as far as I understand; a rough sketch is below.
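
For illustration, such a clone would boil down to something like this on the storage host (sketch only, with made-up dataset names; not something the current ZFSPlugin does):

Code:
# clone vm-100-disk-1 from an older snapshot into a new, independent zvol
zfs send testpool/vm-100-disk-1@snap1 | zfs receive testpool/vm-100-disk-2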
 

nephri

Active Member
Sep 23, 2015
Paris, France
Ok,

After some work and a few attempts, I'm able to:
- Add the storage
- Create a VM with this storage.

In FreeNAS:

The zvol seems to be created correctly:
Code:
zfs list
volFAST/vm-103-disk-1                                     34.0G  1.38T    64K  -
The LUN seems to be configured (maybe it's not sufficient):
Code:
ctladm devlist -x
<ctllunlist>
<lun id="0">
        <backend_type>block</backend_type>
        <lun_type>0</lun_type>
        <size>8388608</size>
        <blocksize>4096</blocksize>
        <serial_number>MYSERIAL   0</serial_number>
        <device_id>MYDEVID   0</device_id>
        <num_threads>14</num_threads>
        <file>/dev/zvol/volFAST/vm-103-disk-1</file>
</lun>
</ctllunlist>
But when I start the VM, it fails.
The command kvm -id 103 fails. I don't have much information about that, but it's surely related to an iSCSI issue.

The drive part of the kvm command is:
Code:
-drive 'file=iscsi://xxx.xx.x.xxx/iqn.2016-12.fr.test.iscsi:pveas/0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on'
Any ideas?