Custom storage plugins for Proxmox


nephri

Active Member
Paris, France
Code:
 iscsiadm -m discovery -t st -p xxx.xx.x.xxx
 xxx.xx.x.xxx:3260,-1 iqn.2016-12.fr.test.iscsi:pveas:pve
but

Code:
iscsiadm -m node --targetname "iqn.2016-12.fr.test.iscsi:pveas:pve" --portal "xxx.xx.x.xxx:3260" --login

Logging in to [iface: default, target: iqn.2016-12.fr.test.iscsi:pveas:pve, portal: xxx.xx.x.xxx,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2016-12.fr.test.iscsi:pveas:pve, portal: xxx.xx.x.xxx,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals
It's effectively an authentication failure.
 

_alex

Active Member
Bavaria / Germany
Guess you have an auth-group or something else that requires CHAP on your target and aren't passing the credentials.

You should check if/how authentication is required in /etc/ctl.conf, or maybe in the FreeNAS GUI ...

ctl.conf(5)
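
For reference, a CHAP-protected target in ctl.conf looks roughly like this (auth-group name, user and secret are placeholder values):

Code:
auth-group ag0 {
        chap "pveinitiator" "secret123456789"
}

target iqn.2016-12.fr.test.iscsi:pveas:pve {
        auth-group ag0
        portal-group pg1
        # lun definitions as before
}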


Unfortunately, I haven't found anything in the ZFSPlugin about how authentication is passed - maybe it's not implemented :(
 

nephri

Active Member
Paris, France
I successfully connected with iscsiadm (by manually modifying ctl.conf), but I still fail to start the KVM.

While looking into this, I discovered that ctladm can create LUNs, but they are volatile: they don't survive a reboot...

For manual testing I set my ctl.conf like this:

Code:
portal-group default {
}

portal-group pg1 {
        discovery-filter portal-name
        discovery-auth-group no-authentication
        listen 0.0.0.0:3260
        option ha_shared on
}

target iqn.2016-12.fr.nephri.iscsi:pveas:pve {
        alias "pve"
        auth-group no-authentication
        portal-group pg1

        lun 0 {
          backend block
          blocksize 4096
          path /dev/zvol/volFAST/vm-103-disk-1
          device-id iqn.2016-12.fr.nephri.iscsi:pveas/0
          device-type 0
          option scsiname iqn.2016-12.fr.nephri.iscsi:pveas/0
        }
}
i added the "auth-group no-authentication"
and added manually the lun 0 to the target.
I did a
Code:
 service ctld restart
 ctladm devlist 
 LUN Backend       Size (Blocks)   BS Serial Number    Device ID
  0 block               8388608 4096 MYSERIAL   0     iqn.2016-12.fr.nephri.iscsi:pveas/0
But I'm still not able to start the KVM.

I thought it might be an identifier issue; on the KVM launch, I have:
Code:
-drive 'file=iscsi://172.17.1.204/iqn.2016-12.fr.nephri.iscsi:pveas/0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on'
That's why I tried to set the scsiname and device-id accordingly, but it didn't help.

I will have to rewrite my LunCmd to update ctl.conf instead of using ctladm... it's annoying.
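
For reference, creating such a volatile LUN by hand looks roughly like this (a sketch, ctladm options from memory, with the same zvol path as in my ctl.conf above):

Code:
# create a block-backed LUN directly in CTL (lost again at the next reboot)
ctladm create -b block -o file=/dev/zvol/volFAST/vm-103-disk-1 -B 4096
# and remove it again
ctladm remove -b block -l 0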
 

_alex

Active Member
Bavaria / Germany
Shouldn't it be like this?

iscsi://172.17.1.204/iqn.2016-12.fr.nephri.iscsi:pveas:pve/0

Oh, it's not good that ctladm can't persist changes to the config ...
scstadm --write_config is really nice, as it avoids the need to parse and modify the config file.
 

_alex

Active Member
Bavaria / Germany
Hi, just a thought:

You could work around this by making sure the LUNs are present when they are needed, setting them up via ctladm if they are still missing.

The place to do this would be:

sub path { ... }

in your implementation of PVE::Storage::Plugin.

(where it definitely helps if you don't use the ZFSPlugin directly but a subclass ...)

This is called to assemble the iSCSI connection string passed to KVM.
(I just extended this to support CHAP + iSER/RDMA for SCST ...)

Also, activate_storage / deactivate_storage and/or activate_volume / deactivate_volume would be candidates for making sure your LUN is configured.

The downside would be that you might only see active LUNs / volumes in the GUI for this storage.
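
A minimal sketch of what that could look like in a subclass (the package name and the ensure_lun_exists() helper are made up, not part of PVE):

Code:
package PVE::Storage::Custom::FreeNasZFSPlugin;

use strict;
use warnings;

use base qw(PVE::Storage::ZFSPlugin);

# Sketch: make sure the (volatile) LUN exists right before KVM asks for the path.
sub path {
    my ($class, $scfg, $volname, $storeid, $snapname) = @_;

    # hypothetical helper that re-creates the LUN on the target if it is missing
    $class->ensure_lun_exists($scfg, $volname);

    # let the stock ZFSPlugin assemble the iscsi:// connection string
    return $class->SUPER::path($scfg, $volname, $storeid, $snapname);
}

1;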
 

nephri

Active Member
Paris, France
You were right, I missed the :pve at the end of the target.
I was able to start the VM; I'm currently installing CentOS 7 to see how it works...

In any case, I still have a large amount of work to complete the implementation...

EDIT: I installed CentOS without errors, but when I tried to boot the VM from the iSCSI disk, it didn't find it...

EDIT2: with a UEFI BIOS I was able to start the OS, but I had to create a UEFI disk (I did that on the NFS volume).
 

_alex

Active Member
Bavaria / Germany
...id=drive-scsi0... - in the KVM-Device ...

Why didn't you use virtio? Much better performance, and I guess this is also somehow related to your boot problems ...
 

nephri

Active Member
Paris, France
I'm rewriting my LunCmd to use the FreeNAS REST API.
I've done all the "GET" parts: "list_lu", "list_view".

The advantage is that everything done by PVE will be visible in the FreeNAS GUI.
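
For example, listing the configured extents via the FreeNAS 9.x REST API looks roughly like this (host and credentials are placeholders):

Code:
curl -s -u root:password \
     -H "Accept: application/json" \
     "http://freenas.example.local/api/v1.0/services/iscsi/extent/?format=json"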
 

_alex

Active Member
Bavaria / Germany
I'm rewriting my LunCmd to use the FreeNAS REST API.
I've done all the "GET" parts: "list_lu", "list_view".

The advantage is that everything done by PVE will be visible in the FreeNAS GUI.
This is maybe why I had the strong feeling that it makes absolute sense to use an API when there is one ;)

I'll be skiing for a few days, but I could set up FreeNAS on one node next week and help with testing/debugging.
 

_alex

Active Member
Bavaria / Germany
Skiing / snowboarding cancelled because everybody is ill :(
Anyway, there isn't really good snow in the Alps, so at least it's good for my PVE hacking :D

Got it cleaned up now, and also added the ability to create VM clones from older snapshots.
This 'materializes' the snapshot via zfs send/receive and then exports it as a LUN to be used for the VM clone.
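
Roughly, the materialize step is just this (dataset and snapshot names made up):

Code:
# copy the old snapshot into a fresh dataset that can then be exported as a LUN
zfs send volFAST/vm-103-disk-1@snap1 | zfs receive volFAST/vm-105-disk-1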
Sadly, while doing this I found that Proxmox itself should be changed to handle things better.

Currently it insists on doing a qemu image convert even if the source and target storage are both on ZFS, where it would be sufficient to just use / rename the sent volume. A bit of double work: first the zfs send, then a conversion of the image, to get exactly the same result :(

Also, it doesn't notify the storage plugin in any way that the cloning is finished, which would allow the plugin to clean up by removing the ZFS volume created from the older snap and the iSCSI LUN that was set up for it.

Guess I'll implement this in QemuServer.pm and submit it as a patch so that it hopefully gets accepted.
I'm also thinking about lifting my changes/additions that allow cloning from older ZFS snapshots up into their ZFSPlugin.pm, to add these capabilities for the existing iSCSI targets.

The last thing on the list is getting RDMA/iSER + CHAP working for ZFS over iSCSI, which currently isn't the case either.
Read-only LUNs might come in addition, as it would be good to have them read-only for a temporary ZFS volume of an old snap.

Will put it on GitHub when it's finished, better tested, cleaned of debug stuff, and with some comments.

Alex

@nephri: if you're interested in adopting some parts of this, e.g. the clones from older snaps, just drop me a note.
 

nephri

Active Member
Paris, France
Hi,

I'm sorry for your snowboarding cancellation.
Are you French? If so, we can maybe talk in French :p

So, I completed my implementation using the REST API.
I'm able to create, start, and destroy VMs. It handles LUNs on FreeNAS correctly.

After that, I did a code cleanup: I removed all my code from PVE::Storage and moved my implementation into PVE::Storage::Custom.
But after that I have an issue:
- the zfs_get_base defined in ZFSPlugin is still used, and my version in FreeNasZFSPlugin.pm doesn't override it!! But I need the base to be "/dev/zvol" instead of "/dev".

I'm not fluent enough in Perl to find a simple way to resolve it.


Even if I bypass this issue, I have a few others:
- I have to give a name to the "lun" before associating it with a target in FreeNAS. At the moment I name it after the volume, e.g. "vm-103-disk-1".
But if FreeNAS has multiple targets, or if it's used by multiple PVE instances, that name isn't unique enough.
For now it's not really an issue, but I would like to handle it.

- When I clone an iSCSI disk onto an NFS disk, the process fails: the qemu-img convert step fails because the destination (raw) image doesn't exist.
 

nephri

Active Member
Paris, France
I bypassed my /dev/zvol issue by changing /etc/pve/storage.cfg:

I set the iscsiprovider to istgt instead of iet.
istgt shares the same get_base, so even though the method isn't overridden, it works for now.
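
For reference, the zfs-over-iscsi entry in /etc/pve/storage.cfg then looks roughly like this (storage id, addresses, pool and target are placeholders):

Code:
zfs: freenas
        portal 172.17.1.204
        target iqn.2016-12.fr.nephri.iscsi:pveas:pve
        pool volFAST
        iscsiprovider istgt
        blocksize 4k
        content images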


I have to configure the credentials used to call the FreeNAS REST API.
At the moment they are hardcoded in the script.

But I would like to put them in /etc/pve/storage.cfg, like this:
freenas_user xxxx
freenas_password yyyyy

or
options freenas_user=xxxx
options freenas_password=yyyyy

But when I tried to add "freenas_user" to the sub options of FreeNasZFSPlugin.pm, I got a lot of errors afterwards...

How do you store/retrieve plugin-specific configuration?
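
The pattern I'm aiming for is roughly this (a sketch, untested; the freenas_user / freenas_password keys are my own proposal):

Code:
# In FreeNasZFSPlugin.pm - sketch only, following the usual PVE::Storage plugin API

sub properties {
    return {
        freenas_user => {
            description => "FreeNAS API user.",
            type => 'string',
        },
        freenas_password => {
            description => "FreeNAS API password.",
            type => 'string',
        },
    };
}

sub options {
    return {
        portal => { fixed => 1 },
        target => { fixed => 1 },
        pool => { fixed => 1 },
        iscsiprovider => { fixed => 1 },
        blocksize => { fixed => 1 },
        freenas_user => { optional => 1 },
        freenas_password => { optional => 1 },
        content => { optional => 1 },
        nodes => { optional => 1 },
        disable => { optional => 1 },
    };
}

# later the values should be readable from the storage config hash, e.g. $scfg->{freenas_user}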

About your question about features: I think it would be great if in the end we could have just one plugin for yours and mine, dispatching correctly to the LunCmd corresponding to a property like "iscsiprovider".
 

_alex

Active Member
Bavaria / Germany
hi,
I live in Germany, so no French, sorry.
But obviously no powder in France either?

I'll try to answer some of your questions, at least those regarding the auth config, tomorrow.

Currently I need VPN to access my PVE dev boxes from my home office. This is also why I haven't touched iSER/RDMA yet - the IB switch there is powered off :(

As for the base issue, this sounds weird. To my understanding, resolution to the right implementation of the sub/method should be handled by OOP/inheritance in Perl. But again, I'm quite a beginner with Perl :(
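
E.g. (made-up package names) a method call is resolved through @ISA, a plain function call is not - so if ZFSPlugin happens to call zfs_get_base() as a plain function instead of as a method, your override in the subclass can never win:

Code:
package Parent;
sub get_base { return '/dev' }

package Child;
our @ISA = ('Parent');
sub get_base { return '/dev/zvol' }

package main;
print Parent::get_base(), "\n";   # function-style call: always '/dev', no override possible
print Child->get_base(), "\n";    # method-style call: resolved via @ISA, prints '/dev/zvol'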

Do you have some code / snippets to reproduce this?
 

_alex

Active Member
Bavaria / Germany
the deep fresh white stuff in the mountains ;)

For your naming:
why not one target per cluster? There the names should be unique.

And/or somehow prepend the name of the PVE cluster, taken from the configs.
 

_alex

Active Member
Bavaria / Germany
For the feature merging: yes, we could share an advanced implementation of the storage plugin class that handles this. Better would be to get some of these features upstream into PVE, preferably into ZFSPlugin.pm, together with the supporting changes in QemuServer.pm that would be necessary to do things the right and most efficient way.

I don't like patchwork that breaks with updates, especially not when it affects such a vital part as storage ...

My final goal for SCST is to also support MPIO, and not only iSCSI but also SRP, backed by 'managed ZFS' on the target - and then make the target two nodes in HA ;)