ZFS issue on FreeNAS


msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
So long story short, I'm an idiot... let's just get that out of the way before we start.

I have a FreeNAS server running on an AIO. I had 2 x Samsung 843T 480GB SSDs running as a mirrored ZIL, and I needed them for another project. I had 2 x 200GB S3700s lying around, so I wanted to swap them in.

I went through some forum posts and internet searches and was able to remove the 480s: I shut down, pulled the 480s, and inserted the S3700s. I then went into the FreeNAS GUI and tried to add them as a ZIL, but for some reason I couldn't add them as a mirror... it wouldn't give me the option. I went back to the command line and, I'm not sure how, but I added one of the S3700s to my RAIDZ2 pool that is made up of 6 x 4TB drives. Now I can't remove it.

Everything I've read tells me I'm screwed and have to rebuild and restore from backup, which I can do, but I'm trying to avoid it.
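For what it's worth, I suspect the difference came down to something like this (pool and device names are placeholders, not what I actually typed):

```shell
# What I suspect happened: this adds the SSD as a new top-level stripe
# vdev alongside the RAIDZ2, and a top-level data vdev can't be removed
zpool add -f tank da6

# What I meant to do: add both S3700s together as a mirrored log device
zpool add tank log mirror da6 da7
```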

Also, I wanted to replace the AIO, going from a Haswell E3 to a more robust socket 2011 and 2670 combo so I can add more RAM and additional PCIe.

My gut tells me to check all backups, rebuild, and restore. The good news is that I only have two VMs right now: the FreeNAS VM and a utility server running Blue Iris. The OS drives are on a PX300d all-flash array (4 x 480GB Samsung 843Ts in RAID5) presented to the AIO as an NFS share... so at least the utility server can be recovered very quickly.

I'm trying to do this the quickest way possible so if anyone has any advice, I would appreciate it.

Thanks!
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
If you have added the single SSD as a new single-device vdev to the pool, you will need to destroy the pool and rebuild.

Migrating a ZFS pool to new hardware is super easy, though. You just export the pool on the old host and import it on the new one.
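Roughly like this (assuming the pool is named `tank`):

```shell
# On the old host: cleanly detach the pool from the system
zpool export tank

# On the new host: scan the attached disks and import the pool
zpool import tank

# If the pool was never exported (e.g. the old host died), the import
# may need -f to take ownership:
# zpool import -f tank
```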
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
BTW, there is no need to mirror SLOG devices. That might be why the FreeNAS GUI doesn't make it easy. It can be done at the command line, but you would have to export and import the pool again for the UI to see the changes.

If you are using the command line, make sure to use the switch that just shows you what it would do. I think it's "-n". That helps prevent things like accidentally adding a single-device vdev when working outside the UI.
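A sketch of that dry-run workflow (pool and device names are placeholders):

```shell
# -n prints the resulting pool layout without changing anything
zpool add -n tank log mirror da6 da7

# Review the printed configuration: the new devices should appear under
# "logs" as a mirror, NOT as a new top-level data vdev. Only then re-run
# the same command without -n to actually apply it:
zpool add tank log mirror da6 da7
```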
 

phooka

New Member
May 9, 2016
4
0
1
A mirrored SLOG is required if you care about your data, and you must care if you choose to take the performance hit of ZFS over other options.

Again, I repeat: a mirrored SLOG is required to maintain data redundancy from the time the I/O is acked until the transaction group is written out to the pool data drives.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
A mirrored SLOG is required if you care about your data, and you must care if you choose to take the performance hit of ZFS over other options.

Again, I repeat: a mirrored SLOG is required to maintain data redundancy from the time the I/O is acked until the transaction group is written out to the pool data drives.
The only time you risk losing data from a SLOG device failure is if the SLOG device fails AND you have a power outage before the data can be written to your pool. While possible, the odds are very slim.

You no longer lose your entire pool if your SLOG dies... you can even add and remove SLOG devices on 'live' pools for testing :)

I'm not pushing for or against mirroring the SLOG, just adding a bit more info :)
 

phooka

New Member
May 9, 2016
4
0
1
Or the ARC and the NFS/network-stack memory pool tunings collide and you kernel panic. Or you have an uncorrectable ECC error, etc.

And if you leave the ZIL on the pool data disks, it is redundant.

Just more things to think about. Enough thread hijacking.
 

unwind-protect

Active Member
Mar 7, 2016
414
156
43
Boston
That mistake is the one that also made me cringe about ZFS early on. The command-line syntax of the ZFS tools is very "Sun-ish", which means out of step with what "normal" Unix tools do, and boof, I had made a stripe instead of adding a mirror. It was some time ago, but I don't believe I ever found a solution for it.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
That mistake is the one that also made me cringe about ZFS early on. The command-line syntax of the ZFS tools is very "Sun-ish", which means out of step with what "normal" Unix tools do, and boof, I had made a stripe instead of adding a mirror. It was some time ago, but I don't believe I ever found a solution for it.
What "mistake"? A perfect storm of numerous failures within seconds of each other, or the fact that you typed the wrong command? The solution to typing the wrong command is learning the correct one.
 

mjt5282

New Member
Jul 18, 2015
11
2
3
56
The BSD command line is for experts. Double-check important commands before pressing the most damaging key: the return key!
I have been using FreeNAS for over a year now and I like it, but the GUI can't do everything. I have added and deleted SLOGs via the command line, and even used gpart to correctly partition a raw disk for FreeNAS. Hmm, perhaps a sticky and a separate posting?
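For example, partitioning a raw disk before handing it to ZFS might look something like this (disk name, size, and label are just an illustration):

```shell
# Wipe any existing partition table and create a fresh GPT scheme
gpart destroy -F da6
gpart create -s gpt da6

# Add a 1 MiB-aligned freebsd-zfs partition (here sized for a SLOG)
# with a GPT label so it survives device renumbering
gpart add -t freebsd-zfs -a 1m -s 16g -l slog0 da6

# The partition can then be referenced via its label, e.g.
# /dev/gpt/slog0, when adding it as a log device
```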
 

dswartz

Active Member
Jul 14, 2011
610
79
28
What "mistake"? A perfect storm of numerous failures within seconds of each other, or the fact that you typed the wrong command? The solution to typing the wrong command is learning the correct one.
In his defense, some of the zfs commands are like a chainsaw with no safety bar. I don't think it should let you add a non-redundant vdev to a pool with one or more redundant vdevs (I could swear I've used a distro, I don't remember which, that actually warned you about that and required '-f'). Everyone fat-fingers occasionally, and getting screwed good and hard for it is not good...
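For what it's worth, some ZFS platforms do warn here; the exchange looks roughly like this (output paraphrased from memory, names hypothetical):

```shell
$ zpool add tank da6
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
```

So the guard rail exists, but a single '-f' sails right past it.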
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
In his defense, some of the zfs commands are like a chainsaw with no safety bar. I don't think it should let you add a non-redundant vdev to a pool with one or more redundant vdevs (I could swear I've used a distro, I don't remember which, that actually warned you about that and required '-f'). Everyone fat-fingers occasionally, and getting screwed good and hard for it is not good...
There are a lot of commands in various systems that can screw you up via the command line... all I'm saying is that's not a downfall of ZFS or BSD. That's like saying we should ONLY use a GUI, or only use commands with an "are you sure?" built in... I'm not trying to come across as some UNIX elitist, as I'm far, far from that, but I think we all know the risks of command-line work that affects our data and systems in general ;)

If you're worried and want to be extra safe, then do what I and others I know do: keep notes of your commands in a separate file before issuing them, so you can read over them thoroughly and understand them. I often end up annotating these files and saving them for the future, since I don't do this for a 'job' and forget the stuff I'm not doing often :)
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Also, this can be made much safer by using the -n option with the zpool add command.

Code:
-n       Displays the configuration that would be used without actually
         adding the vdevs. The actual pool creation can still fail due to
         insufficient privileges or device sharing.
 

unwind-protect

Active Member
Mar 7, 2016
414
156
43
Boston
The BSD command line is for experts. Double-check important commands before pressing the most damaging key: the return key!
What I am saying is that the ZFS command line does not behave like most Unix command-line utilities, be it BSD tools, GNU fileutils, or ffmpeg for that matter.

We can backtrace this, but I think from the documentation available and the commands issued, it is too easy to mistake these two cases. Say you start from a 2-disk mirror of 1 TB:
  • Add a third disk to the mirror (a three-way mirror, double redundancy).
  • Add the third disk so that you now have a 2 TB filesystem, where the first part is backed by 2 disks and the second part by only 1 disk.

I believe that is what happened to the OP, and it is what happened to me when I started out with ZFS.
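The two cases above can be sketched as follows (pool and disk names hypothetical):

```shell
# Case 1: attach a third disk to the existing mirror vdev.
# Result: a three-way mirror, still 1 TB usable.
zpool attach tank da0 da2

# Case 2: add the disk as a new top-level vdev.
# Result: 2 TB usable, but the new region has no redundancy,
# and the extra vdev cannot then be removed from the pool.
zpool add tank da2
```

The trap is that "attach" and "add" sound interchangeable but do completely different things.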

If you want, we can track step by step how to do this in Linux md and in ZFS. The ZFS command line is unusual; the mdadm command line follows normal Unix conventions more closely. The documentation released by Sun is also second-class IMHO, but it is getting better as FreeBSD and ZFS on Linux add their own guides.

ETA: of course, one has to acknowledge that ZFS is more complicated as far as "raw device" handling is concerned, and in any case filesystem-layer management leaks into zpool management, so they have a harder documentation job than md RAID.

ETA2: I'm almost fully on ZFS now. Just not blind to its nitpicks.