ZFS native on Ubuntu 16.04 LTS


Patrick

Administrator
Staff member
Dec 21, 2010
ZFS on Ubuntu was in tech preview mode for Ubuntu 15.10. In 16.04 LTS it is set to be installed by default.

For those that have not seen it yet, here is a screenshot from the latest 16.04 LTS build I had on hand:
[Screenshot: ZFS packages in the 16.04 LTS build]

With ZFS on Linux becoming extremely easy to get running (i.e., it is there by default), I wanted to see if anyone has plans with Ubuntu and ZoL. Will folks on older releases upgrade?
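For anyone who wants to kick the tires, here is a minimal sketch of getting a pool going on 16.04. The pool name `tank` and the disk paths are placeholders, not anything from this thread, and `zpool create` destroys the contents of the disks you hand it:

```shell
# Sketch, assuming Ubuntu 16.04; pool name and device paths are placeholders.
# The ZFS userland tools ship in the zfsutils-linux package:
sudo apt install zfsutils-linux

# Create a simple mirrored pool from two whole disks (DESTROYS their contents):
sudo zpool create tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Verify:
zpool status tank
```

Using /dev/disk/by-id paths rather than /dev/sdX keeps the pool stable if drives get renumbered across reboots.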
 

whitey

Moderator
Jun 30, 2014
Afternoon Patrick. I just tried the 16.04 Xenial server release and still cannot get VT-d passed-through LSI HBAs to play nice; I get the output below. I don't know if these are broken in Linux now or what is going on, and I haven't yet had a chance to vet this up the chain. The 4.4 kernel still doesn't play nice with LSI 2008-based HBAs.

root@xenial:~# dmesg | grep mpt
[ 0.000000] Device empty
[ 1.203755] mpt3sas version 12.100.00.00 loaded
[ 1.211406] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (8175500 kB)
[ 1.216691] mptbase: ioc0: Initiating bringup
[ 1.269302] mpt2sas_cm0: MSI-X vectors supported: 1, no of cores: 2, max_msix_vectors: -1
[ 1.270123] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 59
[ 1.270224] mpt2sas_cm0: iomem(0x00000000fd4fc000), mapped(0xffffc90000e48000), size(16384)
[ 1.270387] mpt2sas_cm0: ioport(0x0000000000005000), size(256)
[ 1.364379] mpt2sas_cm0: Allocated physical memory: size(7579 kB)
[ 1.364507] mpt2sas_cm0: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
[ 1.364666] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[ 31.409756] mpt2sas_cm0: _base_event_notification: timeout
[ 31.414586] mpt2sas_cm0: sending message unit reset !!
[ 31.416566] mpt2sas_cm0: message unit reset: SUCCESS
[ 31.633708] mpt2sas_cm0: failure at /build/linux-lAMkDx/linux-4.4.0/drivers/scsi/mpt3sas/mpt3sas_scsih.c:8800/_scsih_probe()!
root@xenial:~#

Anyone else willing to give this a try to prove/disprove my sanity w/ similar setup?
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Patrick said: "With ZFS on Linux heading to being extremely easy to get installed... Will folks on older releases upgrade?"
I will eventually upgrade from 14.04 to 16.04 and use ZFS as my root filesystem, but I will likely do a clean install, since things like the switch from Upstart to systemd as the default will likely leave behind cruft I would rather avoid. Also, Docker makes it super easy for me to get going from a clean install anyway.
 

Alfa147x

Active Member
Feb 7, 2014
I wonder if we'll start seeing Debian-based FreeNAS competition. Not that I don't like FreeNAS, but I do like competition!
 

Patrick

Administrator
Staff member
Dec 21, 2010
whitey said: "...still cannot get vt-D LSI HBA's to play nice... Anyone else willing to give this a try to prove/disprove my sanity w/ similar setup?"
Unfortunately, the 4 blades I was testing in do not have LSI controllers!
 

cperalt1

Active Member
Feb 23, 2015
Currently running ZoL on Ubuntu 14.04 LTS, but not as a root pool; that is the only drive on my server not running ZFS. On the other hand, my Linux install dates back to 2007 and has been through in-place upgrades ever since, so it looks like it is time for a clean install to avoid any other issues (systemd vs. Upstart). The only consideration I have at this point is the dicey upgrade of MythTV, since I record via FireWire from a Comcast STB.
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
cperalt1 said: "Currently running ZOL on Ubuntu 14.04 LTS but not as a root pool... The only consideration I have at this point is the dicey upgrade of mythtv."
You "should" be fine once the official release-upgrade package is available. Just make an image of your OS drive before the upgrade, just in case. You will obviously need to reinstall if you want your root filesystem to be ZFS-based (I'm going to do a mirror of 240GB Intel 730 SSDs).

Also, have you thought about virtualizing your MythTV install, or putting it in a container, so it's more easily backed up, upgraded, and portable?
 

cperalt1

Active Member
Feb 23, 2015
I have thought about virtualizing, but the problem is that the PCI FireWire cards I have don't do well with VT-d passthrough. It would be easier if I got an HDHomeRun, but that is a future upgrade.
 

unwind-protect

Active Member
Mar 7, 2016
Boston
My backup box running ZFS dual-boots FreeBSD and Debian with ZoL. No significant difference in performance or otherwise; it is, after all, the same code.

The only problem is that ZoL sometimes leaves marks on the disks that can make FreeBSD choke, e.g. device paths for pool components. I haven't seen it happen the other way around.
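When stale ZoL labels confuse the other OS, one possible cleanup is sketched below. Note that `zpool labelclear` is destructive and `/dev/sdX` is a placeholder, not a device from this thread:

```shell
# CAUTION: wipes the ZFS label from the device. Only run this against a disk
# that is no longer part of any pool. /dev/sdX is a placeholder path.
sudo zpool labelclear -f /dev/sdX
```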
 

canta

Well-Known Member
Nov 26, 2014
whitey said: "...4.4 kernel still doesnt play nice w/ LSI 2008 based HBA's. [dmesg output above] Anyone else willing to give this a try to prove/disprove my sanity w/ similar setup?"
Can you add pci=realloc=off to the kernel command line?

Let's see if it still fails or not.

I assume you already know how to edit the kernel command line; if not, you can search the Ubuntu wiki or Google.
 

whitey

Moderator
Jun 30, 2014
canta said: "Can you add pci=realloc=off on the kernel command line? ...let see if still failing or not."
Rebuilding and attempting now. Can anyone remind me of the trick to inject this in GRUB2? I seem to recall having to edit a file and then run some grub command again to rebuild the initrds and such.
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
whitey said: "...can anyone remind me the trick to inject this in grub2?"
Should be as easy as holding the Shift key down at boot to make temporary changes at the GRUB menu, or, to make it permanent... This also covers it.

Code:
sudo -i
nano /etc/default/grub
Search for the line with this:
Code:
GRUB_CMDLINE_LINUX_DEFAULT=""
and add the string:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="pci=realloc=off"
Then update GRUB and reboot:
Code:
update-grub
reboot
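The same edit can also be scripted. Below is a sketch that runs sed against a scratch copy; on a real system you would point it at /etc/default/grub and then run update-grub. The /tmp path is just for the demo:

```shell
# Demo on a scratch file; on a real box, target /etc/default/grub instead.
printf 'GRUB_CMDLINE_LINUX_DEFAULT=""\n' > /tmp/grub.demo

# Append pci=realloc=off inside the existing quotes (a leading space inside
# the quotes is harmless to the kernel):
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 pci=realloc=off"/' /tmp/grub.demo

cat /tmp/grub.demo   # GRUB_CMDLINE_LINUX_DEFAULT=" pci=realloc=off"
```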
 

whitey

Moderator
Jun 30, 2014
Thanks, that dusts off the rust from a month or so ago; too much going on. Rebooting now, but I don't expect it to work :-(
 

whitey

Moderator
Jun 30, 2014
No luck; same issues/errors in dmesg (it looks like 'pci=realloc=off' stuck via the /etc/default/grub edit, update-grub, and reboot sequence).

[Screenshots: /etc/default/grub and /boot/grub/grub.cfg showing pci=realloc=off]
 

canta

Well-Known Member
Nov 26, 2014
whitey said: "No luv, same issue/errors in dmesg..."
SSH to your server and do cat /proc/cmdline.

Make sure pci=realloc=off is in the cmdline output.

As I remember, to change the kernel cmdline on the fly:
* When booting, press Tab repeatedly; the GRUB selection will show up.
* Press e to edit the kernel command line, and add pci=realloc=off at the end. Remember the space at the beginning.
* Press Ctrl+x (or F10) to boot with the modified kernel command line.

Got the link -> GRUB2 Edit Mode
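Once rebooted, the check described above can be wrapped in a couple of lines of shell. A sketch, where the sample string stands in for a real /proc/cmdline:

```shell
# On a real system: cmdline=$(cat /proc/cmdline)
cmdline='BOOT_IMAGE=/vmlinuz-4.4.0-15-generic root=/dev/mapper/xenial--vg-root ro pci=realloc=off'

# grep -w matches the whole token, so a partial flag would not count:
if echo "$cmdline" | grep -qw 'pci=realloc=off'; then
    echo "flag present"
else
    echo "flag missing"
fi
```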
 

whitey

Moderator
Jun 30, 2014
root@xenial:~# cat /proc/cmdline | grep pci
BOOT_IMAGE=/vmlinuz-4.4.0-15-generic root=/dev/mapper/xenial--vg-root ro pci=realloc=off
 

canta

Well-Known Member
Nov 26, 2014
whitey said: (/proc/cmdline output showing pci=realloc=off)
The only thing left to try is flashing newer firmware :D

The interesting part is that a 2008 with MegaRAID firmware does not have the issue, ha!

I looked at the 4.4.6 kernel source code before I got off work today. Only two issues can cause that:
1) The manufacturer ID got messed up, because kernels 3.4 and up do PCI realloc automatically, while 3.1 does not; or
2) (assuming the manufacturer ID is not messed up) mpt3sas (merged with mpt2sas) failed to read the attached devices, i.e. a timeout.
 

whitey

Moderator
Jun 30, 2014
canta said: "the only way to try is flashing newer firmware... only two issues can cause that..."
I am open and willing to try all suggestions, but I believe I have tried the newest/latest v20 FW for the card, even swapped back to v19 with no difference, then flashed it back to v20, which dates from at least 12/15. If there's a newer one I can try it, but I sure don't have these issues on OmniOS, Solaris 11.3, SmartOS, FreeNAS, or ZoL on Ubuntu 14.04.3 LTS.

I'm trying the .vmx hack suggested over in this thread:

LSI9211-8i on Ubuntu 15.10 timeouts

EDIT: no fix with that .vmx fixup either, with or without 'pci=realloc=off' in the kernel boot line.
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
So you are still only seeing this on passed-through PCIe devices in VMware with newer kernels, correct? I just did a fresh bare-metal install of 16.04 server with two cards installed in the system. No errors in dmesg, and both HBAs are visible, as are the attached disks. This has also been persistent over numerous reboots, all with the default kernel options (no pci=realloc=off).

If so, have you tried any other hypervisors, just to see whether others are affected (i.e. Proxmox)?

Code:
root@xenial:~# dmesg | grep mpt2sas
[    4.244667] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (16325032 kB)
[    4.345596] mpt2sas_cm0: MSI-X vectors supported: 1, no of cores: 12, max_msix_vectors: -1
[    4.345653] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 46
[    4.345654] mpt2sas_cm0: iomem(0x00000000504c0000), mapped(0xffffc90001d40000), size(16384)
[    4.345654] mpt2sas_cm0: ioport(0x000000000000e000), size(256)
[    4.434164] mpt2sas_cm0: Allocated physical memory: size(7579 kB)
[    4.434165] mpt2sas_cm0: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
[    4.434165] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[    4.478516] mpt2sas_cm0: LSISAS2008: FWVersion(19.00.00.00), ChipRevision(0x03), BiosVersion(00.00.00.00)
[    4.478517] mpt2sas_cm0: Protocol=(
[    4.478724] mpt2sas_cm0: sending port enable !!
[    4.478835] mpt2sas_cm1: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (16325032 kB)
[    4.533183] mpt2sas_cm1: MSI-X vectors supported: 1, no of cores: 12, max_msix_vectors: 8
[    4.533230] mpt2sas1-msix0: PCI-MSI-X enabled: IRQ 50
[    4.533231] mpt2sas_cm1: iomem(0x00000000fb240000), mapped(0xffffc90001d80000), size(65536)
[    4.533232] mpt2sas_cm1: ioport(0x000000000000d000), size(256)
[    4.622767] mpt2sas_cm1: Allocated physical memory: size(7579 kB)
[    4.622767] mpt2sas_cm1: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
[    4.622767] mpt2sas_cm1: Scatter Gather Elements per IO(128)
[    4.667483] mpt2sas_cm1: LSISAS2008: FWVersion(19.00.00.00), ChipRevision(0x03), BiosVersion(00.00.00.00)
[    4.667484] mpt2sas_cm1: Protocol=(
[    4.667686] mpt2sas_cm1: sending port enable !!
[    5.986956] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b002b6fd60), phys(8)
[    7.294286] mpt2sas_cm1: host_add: handle(0x0001), sas_addr(0x5b8ca3a0f49cf000), phys(8)
[   12.116054] mpt2sas_cm0: port enable: SUCCESS
[   12.116466] mpt2sas_cm0:     sas_address(0x4433221107000000), phy(7)
[   12.116495] mpt2sas_cm0:     enclosure_logical_id(0x500605b002b6fd60),slot(4)
[   12.116528] mpt2sas_cm0:     handle(0x0009), ioc_status(success)(0x0000), smid(1)
[   12.116563] mpt2sas_cm0:     request_len(0), underflow(0), resid(0)
[   12.116591] mpt2sas_cm0:     tag(65535), transfer_count(0), sc->result(0x00000000)
[   12.116626] mpt2sas_cm0:     scsi_status(check condition)(0x02), scsi_state(autosense valid )(0x01)
[   12.116667] mpt2sas_cm0:     [sense_key,asc,ascq]: [0x06,0x29,0x00], count(18)
[   12.428052] mpt2sas_cm1: port enable: SUCCESS
Also, the mpt2sas driver still looks old. I just grabbed and installed the latest mpt2sas-20.00.00.00-1_Ubuntu14.04.amd64.deb driver from Avago's website, but the version number still says 12.100.00.00, so maybe the module version string just doesn't match the installed package's version number?

Code:
root@xenial:~# modinfo mpt2sas | grep version
version:        12.100.00.00
srcversion:     3946C07EF122A6D7F0CF884
vermagic:       4.4.0-15-generic SMP mod_unload modversions

root@xenial:~# modinfo mpt3sas | grep version
version:        12.100.00.00
srcversion:     3946C07EF122A6D7F0CF884
vermagic:       4.4.0-15-generic SMP mod_unload modversions