CentOS 7: Mounting external iSCSI drive with LVM partitions

nephri:
Hi,

I'm testing a rescue plan (in my home lab), for personal use only.

Context:
- I have a Proxmox cluster
- one VM uses an iSCSI disk backed by a ZFS zvol (on FreeNAS)

What I did:
- shut down the VM
- took a snapshot of the zvol, called "ORIGIN" here
- exported the snapshot from "ORIGIN" to another zvol, called "RESCUE" here

I have another Linux server (also running CentOS 7).
On it, I manually attached the iSCSI target that exposes the zvol "RESCUE" (meaning this server doesn't boot from this iSCSI disk).
I used "iscsiadm" to achieve that.

I can see that all seems OK:
in /dev/disk/by-path, I have

Code:
lrwxrwxrwx 1 root root   9 Jan  1 17:03 virtio-pci-0000:00:0a.0 -> ../../vda
lrwxrwxrwx 1 root root  10 Jan  1 17:03 virtio-pci-0000:00:0a.0-part2 -> ../../vda2
lrwxrwxrwx 1 root root  10 Jan  1 17:03 virtio-pci-0000:00:0a.0-part1 -> ../../vda1
lrwxrwxrwx 1 root root  10 Jan  1 17:03 virtio-pci-0000:00:0a.0-part3 -> ../../vda3
lrwxrwxrwx 1 root root   9 Jan  1 17:03 pci-0000:00:01.1-ata-2.0 -> ../../sr0
drwxr-xr-x 6 root root 120 Jan  1 17:03 ..
lrwxrwxrwx 1 root root   9 Jan  1 17:09 ip-xxx.xx.x.xxx:3260-iscsi-iqn.2016-12.fr.nephri.iscsi:rescue-lun-0 -> ../../sda
lrwxrwxrwx 1 root root  10 Jan  1 17:09 ip-xxx.xx.x.xxx:3260-iscsi-iqn.2016-12.fr.nephri.iscsi:rescue-lun-0-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Jan  1 17:09 ip-xxx.xx.x.xxx:3260-iscsi-iqn.2016-12.fr.nephri.iscsi:rescue-lun-0-part1 -> ../../sda1
drwxr-xr-x 2 root root 200 Jan  1 17:09 .
If I run fdisk -l, I get:

Code:
Disk /dev/vda: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000cd5f9

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     2099199     1048576   83  Linux
/dev/vda2         2099200     3778559      839680   82  Linux swap / Solaris
/dev/vda3         3778560    16777215     6499328   83  Linux

Disk /dev/sda: 34.4 GB, 34359738368 bytes, 8388608 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disk label type: dos
Disk identifier: 0x000a69be

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     8388608   83  Linux
/dev/sda2         2099200    67108863   260038656   8e  Linux LVM
So we can see that /dev/sda2 is a partition coming from my iSCSI target, and it appears to be LVM storage.

But now I'm trying to mount the underlying LVM logical volumes in order to access the file systems.
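For reference, this is the sequence I would expect to work, as a sketch (the VG name "cl" matches the reference post further down; the mount point is made up):

Code:
# re-scan for LVM physical volumes and activate the volume group
pvscan
vgscan
vgchange -ay cl

# then mount the root LV read-only
mkdir -p /rescue
mount -o ro /dev/cl/root /rescue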

I tried different ways to achieve that, but without success, things like:
Code:
 #pvscan
 No matching physical volumes found
 #vgscan
   Reading volume groups from cache.
 #vgchange -aay
 #ssm list
---------------------------------
Device         Total  Mount point
---------------------------------
/dev/sda    32.00 GB  PARTITIONED
/dev/sda1    8.00 GB
/dev/sda2   23.99 GB
/dev/vda     8.00 GB  PARTITIONED
/dev/vda1    1.00 GB  /boot
/dev/vda2  820.00 MB  SWAP
/dev/vda3    6.20 GB  /
---------------------------------
---------------------------------------------------------------------
Volume     Volume size  FS      FS size       Free  Type  Mount point
---------------------------------------------------------------------
/dev/vda1      1.00 GB  xfs  1014.00 MB  874.03 MB  part  /boot
/dev/vda3      6.20 GB  xfs     6.19 GB    5.01 GB  part  /
---------------------------------------------------------------------
or even
Code:
  #vgimportclone --basevgname /dev/cl /dev/sda2
    Failed to find physical volume "/dev/sda2".
    Failed to find all devices.
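One detail in the fdisk output above looks suspicious to me: /dev/sda is presented with 4096-byte logical sectors while /dev/vda uses 512-byte ones, and the sda2 numbers don't add up (its end sector, 67108863, is beyond the disk's 8388608 sectors). A quick way to compare the block sizes, as a sketch:

Code:
# logical and physical block size of the iSCSI disk
blockdev --getss --getpbsz /dev/sda
lsblk -o NAME,SIZE,LOG-SEC,PHY-SEC /dev/sda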
To check that my copy of the zvol is good, I connected a VM to it through iSCSI inside Proxmox and booted from it.
It worked, so my zvol "RESCUE" seems to be OK.

I still haven't found the right way to mount this disk's file systems (read-only would be fine for this rescue plan).

If anyone has advice, it's welcome :p

Séb.
 

nephri:
To have a reference point: on the original VM that uses the "ORIGIN" zvol, ssm gives me this result:

Code:
#ssm list
---------------------------------------------------------
Device        Free      Used     Total  Pool  Mount point
---------------------------------------------------------
/dev/vda                      32.00 GB        PARTITIONED
/dev/vda1                      1.00 GB        /boot
/dev/vda2  4.00 MB  30.99 GB  31.00 GB  cl
---------------------------------------------------------
------------------------------------------------
Pool  Type  Devices     Free      Used     Total
------------------------------------------------
cl    lvm   1        4.00 MB  30.99 GB  31.00 GB
------------------------------------------------
--------------------------------------------------------------------------------
Volume        Pool  Volume size  FS      FS size       Free  Type    Mount point
--------------------------------------------------------------------------------
/dev/cl/root  cl       27.79 GB  xfs    27.78 GB   25.81 GB  linear  /
/dev/cl/swap  cl        3.20 GB                              linear
/dev/vda1               1.00 GB  xfs  1014.00 MB  862.50 MB  part    /boot
--------------------------------------------------------------------------------
and fdisk -l gives:

Code:
#fdisk -l

Disk /dev/vda: 34.4 GB, 34359738368 bytes, 67108864 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a69be

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     2099199     1048576   83  Linux
/dev/vda2         2099200    67108863    32504832   8e  Linux LVM

Disk /dev/mapper/cl-root: 29.8 GB, 29842472960 bytes, 58286080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/cl-swap: 3435 MB, 3435134976 bytes, 6709248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

We can see that fdisk shows a /dev/mapper device for each logical volume, and ssm list shows them as well.
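Another quick way to see the LV-to-device mapping, as a sketch:

Code:
# tree view of partitions and device-mapper LVs
lsblk /dev/vda

# list LVs with their /dev paths
lvs -o vg_name,lv_name,lv_size,lv_path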
 

nephri:
Instead of using iscsiadm manually to set up the iSCSI rescue drive, I attached it from Proxmox as a second drive (but the VM doesn't boot from this one).

Right away, ssm list showed me the disk, VG and LVs from the rescue disk.

I was able to do:
Code:
  mkdir /rescue
  mount /dev/cl/boot /rescue
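A side note: the rescue disk is a clone of an XFS system, so if the machine's own root had the same XFS UUID, the mount would fail with a duplicate-UUID error; the -o nouuid option works around that (sketch):

Code:
mount -o ro,nouuid /dev/cl/root /rescue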
So there may be two possible causes:
1) when the drive comes from Proxmox, it is present at startup, and the LVM layer initializes everything correctly during boot
2) iscsiadm doesn't set up the drive the same way Proxmox does, and they are incompatible at some point.

I think 2) is the real issue.
What makes me suspect this is that ssm list didn't report the right size for /dev/sda2 when I set up the iSCSI drive with iscsiadm. Maybe it's an issue with how I use iscsiadm? That's also possible...
I also tried attaching the drive from Proxmox while the VM was online (after boot), and ssm list works fine there too. So it's definitely not cause 1).

What I did to set up the iSCSI drive with iscsiadm:

Code:
 #iscsiadm -m discovery -t sendtargets  -p xxx.xx.x.xxx:3260
  xxx.xx.x.xxxx:3260,257 iqn.2016-12.fr.nephri.iscsi:pve
  xxx.xx.x.xxx:3260,257 iqn.2016-12.fr.nephri.iscsi:rescue

 #  iscsiadm -m node -l -T iqn.2016-12.fr.nephri.iscsi:rescue -p xxx.xx.x.xxx:3260
     Logging in to [iface: default, target: iqn.2016-12.fr.nephri.iscsi:rescue, portal: xxx.xx.x.xxx,3260] (multiple)
     Login to [iface: default, target: iqn.2016-12.fr.nephri.iscsi:rescue, portal: xxx.xx.x.xxx,3260] successful.
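For completeness, tearing the session down again looks like this (sketch, same masked portal and target):

Code:
# log out of the target, then remove the node record
iscsiadm -m node -u -T iqn.2016-12.fr.nephri.iscsi:rescue -p xxx.xx.x.xxx:3260
iscsiadm -m node -o delete -T iqn.2016-12.fr.nephri.iscsi:rescue -p xxx.xx.x.xxx:3260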
Remark: in CentOS 7, the volume group created by the OS installer is named "cl" by default.

I didn't find a way to change this at install time.
So to avoid a name clash (the rescue disk's VG is also named "cl"), I changed the VG name of the Linux VM before adding the iSCSI drive inside Proxmox.
I found very good documentation here: HowTo Change The LVM Volume Group Name That Includes The Root Partition - NST Wiki, and it worked like a charm. I renamed "cl" to "vgmasterboot" on my Linux system; see the sketch below.
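For reference, the core of that procedure on CentOS 7 boils down to something like this sketch (adjust names to your setup; a mistake here can leave the system unbootable):

Code:
# rename the volume group
vgrename cl vgmasterboot

# update references to the old VG name
sed -i 's|/dev/cl/|/dev/vgmasterboot/|g; s|/dev/mapper/cl-|/dev/mapper/vgmasterboot-|g' /etc/fstab
sed -i 's|rd.lvm.lv=cl/|rd.lvm.lv=vgmasterboot/|g; s|/dev/mapper/cl-|/dev/mapper/vgmasterboot-|g' /etc/default/grub

# regenerate the GRUB config and the initramfs, then reboot
grub2-mkconfig -o /boot/grub2/grub.cfg
dracut -f
reboot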
 