truenas core filesystem question


jcizzo

Member
Jan 31, 2023
37
5
8
Hi all, looking for some very simple guidance.

I'm an absolute novice with TrueNAS (Core and Scale; I've never installed Scale).

I'm building my first NAS based on the latest stable release of Core. For simplicity's sake, let's assume the drive layout will be the following:

2x SSDs for the base OS (TrueNAS Core 13)
5x HDDs for the storage pool (ZFS, of course).

Here is my question:

- When a jail is set up and plugins are installed to that jail, where in the directory structure is it all located? Are the jails installed on the spinners or on the OS drives?

Thanks for your patience
 

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
When a jail is set up and plugins are installed to that jail, where in the directory structure is it all located? Are the jails installed on the spinners or on the OS drives?
That's up to you (see Docs Hub | Setting Up Jail Storage); you can store it wherever you want.
I would recommend putting TrueNAS on a USB stick and using the SSDs for ARC and ZIL instead. TrueNAS itself doesn't really benefit from SSDs once it has booted.
 

jcizzo

Member
Jan 31, 2023
37
5
8
Regarding USB installs, I know it's possible; it's just not what I want to do at the moment. I'm trying to learn this system from the very basics before I go altering things.
I know I can install whatever I want anywhere I want, but what are the defaults when it's "outta the box"?
Again: I ran the installer, installed TrueNAS Core to the 2 SSDs, set the IP address, logged in to the web UI, created a storage pool across the 5 spinners, and now I want to create a jail and install plugins. Where are the jails and plugins stored?
 

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
I know I can install whatever I want anywhere I want, but what are the defaults when it's "outta the box"? Where are the jails and plugins stored?
You can't create jails until you set up a jail storage location, so the jails will be wherever you have configured them to be; until then, they don't exist. If you just follow the prompts and you only have one pool, you can only select that pool, and that will be the location of the jail storage.

As per the documentation:

  • It should have at least 10 GiB of free space (recommended).
  • It cannot be located on a share.
  • The iocage dataset automatically uses the first pool that is not a root pool for the TrueNAS system.
  • A defaults.json file contains default settings used when a new jail is created. The file is created automatically when not already present. When the file is present but corrupted, iocage shows a warning and uses default settings from memory.
  • Each new jail installs into a new child dataset of iocage/. For example, with the iocage/jails dataset in pool1, a new jail called jail1 installs into a new dataset named pool1/iocage/jails/jail1.
  • FreeBSD releases are fetched as a child dataset into the /iocage/download dataset. This dataset is then extracted into the /iocage/releases dataset to use in jail creation. The dataset in /iocage/download can then be removed without affecting the availability of fetched releases or an existing jail.
  • The iocage/ datasets on activated pools are independent of each other and do not share any data.
iocage jail configs are stored in /mnt/poolname/iocage/jails/jailname. When iocage is updated, the config.json configuration file is backed up as /mnt/poolname/iocage/jails/jailname/config_backup.json. You can rename the backup file to config.json to restore the previous jail settings.
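That restore step can be sketched like this. This is a minimal sketch with placeholder names (pool1, jail1) and a temporary directory standing in for /mnt; on a live system the real path would be /mnt/<pool>/iocage/jails/<jail>/.

```shell
# Placeholder layout mimicking /mnt/pool1/iocage/jails/jail1 in a temp dir.
jail_dir="$(mktemp -d)/pool1/iocage/jails/jail1"
mkdir -p "$jail_dir"

# Pretend an iocage update left a backup of the jail config behind.
echo '{"host_hostname": "jail1"}' > "$jail_dir/config_backup.json"

# Restore the previous settings by renaming the backup back to config.json.
mv "$jail_dir/config_backup.json" "$jail_dir/config.json"

cat "$jail_dir/config.json"
```

On a real system you would run the `mv` against the actual pool path instead of a temp directory.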
 

ericloewe

Active Member
Apr 24, 2017
293
128
43
30
I would recommend putting TrueNAS on a USB stick
Please don't; the experience quickly gets miserable. Sure, you could buy a really good USB flash drive, but that's more expensive than a decent SATA SSD, so it's probably a third option after a decent SATA SSD and a reputable NVMe SSD.
 

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
Please don't; the experience quickly gets miserable.
For a single 5 drive pool?

If you were building a bigger configuration and running a bunch of other stuff, sure, get some SATA SSDs to speed up software loading. But all your persistence is going on the spinning rust anyway, so all your usage performance will depend on that.
 

jcizzo

Member
Jan 31, 2023
37
5
8
Well, as I asked in the initial post, I laid out my system. I didn't ask about installing on USB drives or anything weird; I simply asked about the directory setup. Why can't we just stick with my original question so that others with the same questions can get the correct answer?

This post had nothing to do with USB drives, and no, I won't go down that path. One can get two 240 to 250 GB SSDs from Amazon for around $20 nowadays. Going the USB route would cost more and be less reliable.
 

ericloewe

Active Member
Apr 24, 2017
293
128
43
30
For a single 5 drive pool?
It has literally zero to do with anything else in the system (well, I guess it's marginally worse if you have a more complex configuration).

USB flash drives die a lot. Worse, they tend to get really slow as you use them - especially as ZFS uses them - presumably due to poor garbage collection. After a year and a half of updates back in the 9.3 days, I could look forward to 40+ minutes to do a simple upgrade, because the boot pool was so miserably slow. That's when I gave up on USB flash drives as anything other than a last resort.

where are the jails and plugins stored?
Where you set it up to be, as helpfully referenced by @oneplane in post #4. Since the GUI doesn't allow you to choose the boot pool, you don't have much of a choice.
 

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
It has literally zero to do with anything else in the system (well, I guess it's marginally worse if you have a more complex configuration).

USB flash drives die a lot. Worse, they tend to get really slow as you use them.
I'm not entirely sure why you would run the system pool on the USB drive, you only need it to boot. Everything else (R/W) would be on the data pool. While I haven't needed to do this for years, if someone suggests a spinning rust pool with no SSDs, but then wants to use SSDs for boot+system pools, that raises all sorts of questions. Just give the UEFI something to boot (which is pretty much just read once), and leave everything else on the accelerated pool.

Perhaps the following makes more sense:

say your resources are 5xHDD and 2xSSD, would it be wise to create a ZFS pool that doesn't utilise the SSDs when it comes to pool performance?

Ideally you'd have 4 SSDs, but that wasn't the case as far as the post reads. As far as I know, asking someone to dig up a couple of read/boot USB drives is much more likely to succeed than asking someone to buy more SSDs. Especially when they haven't shown they know their way around the documentation.
 

unwind-protect

Active Member
Mar 7, 2016
415
156
43
Boston
I would partition the SSDs, using one part for the OS, and the other part for ZFS cache.

As mentioned, the OS installation is barely touched during operations.
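On FreeBSD, a partition-plus-cache layout like that could look roughly like the following. These are illustrative commands only: the device name (ada0), partition size, labels, and pool name (tank) are placeholders, so don't run them verbatim on a live system.

```shell
# Partition one SSD: a small slice for the OS, the remainder for cache.
gpart create -s gpt ada0
gpart add -t freebsd-zfs -s 30G -l os0 ada0      # OS / boot slice
gpart add -t freebsd-zfs -l cache0 ada0          # remainder for L2ARC

# Attach the cache partition to the data pool as an L2ARC device.
zpool add tank cache gpt/cache0
```

The same `gpart add` pattern would carve out a ZIL (SLOG) slice instead, attached with `zpool add tank log`.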
 

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
I would partition the SSDs, using one part for the OS, and the other part for ZFS cache.
Oh yeah, that would totally work; they are also big enough to overprovision 200% if needed. And considering the L2ARC is supposed to be read-many and the ZIL is not that heavy for so few drives, it wouldn't even wear the SSDs that much.

But somehow I get the feeling that the author here wants to continue on the X/Y problem path instead of going over the actual things that matter. I hope he figures out the link to the docs, that's all you really need to get started.
 

ericloewe

Active Member
Apr 24, 2017
293
128
43
30
I'm not entirely sure why you would run the system pool on the USB drive
You misunderstand - the miserable performance was just from the OS and doing updates over less than two years. There was nothing else on the boot pool, no system dataset in particular.

While I haven't needed to do this for years, if someone suggests a spinning rust pool with no SSDs, but then wants to use SSDs for boot+system pools, that raises all sorts of questions.
I strongly disagree; it's a perfectly standard setup in a more serious scenario. With two redundant SSDs for the boot pool, that leaves a lot of empty space that's just begging to receive the system dataset. And no serious scenario is going to use less than a pair of SSDs.

say your resources are 5xHDD and 2xSSD, would it be wise to create a ZFS pool that doesn't utilise the SSDs when it comes to pool performance?
Yes. You can't magically add SSDs without a thought in a chase for "more performance"; you define a performance target and design accordingly. Dumping metadata onto a special vdev is possible, but the caveats are significant: the special vdev needs to be just as reliable as the rest of the pool, for starters. SLOG is only relevant for very specific workloads and requirements. L2ARC is often not that much of a gain, since it consumes space in ARC.

I would partition the SSDs, using one part for the OS, and the other part for ZFS cache.
Besides being a pain to manage, blindly adding L2ARC is not a good idea - 25 MB of ARC are used to support 1 GB of L2ARC. Even 100 GB of L2ARC means 2.5 GB of ARC used up and unless you're really benefiting from the L2ARC (large, consistent working set), you'll probably just lose performance. Or if you have abundant RAM for ARC, L2ARC is just additional cognitive workload for no benefit.
That's not to say it's useless, simply that it's not a magic "go-faster" thing.
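Using the rough 25 MB-of-ARC-per-GB-of-L2ARC ratio above (the actual overhead varies with L2ARC record sizes), the bookkeeping cost is easy to estimate:

```shell
# Rough ARC bookkeeping cost for a given L2ARC size, using the
# 25 MB per GB ratio quoted above (actual overhead varies with record size).
l2arc_gb=100
arc_overhead_mb=$((l2arc_gb * 25))
echo "${l2arc_gb} GB of L2ARC consumes about ${arc_overhead_mb} MB of ARC"
# prints: 100 GB of L2ARC consumes about 2500 MB of ARC
```

So unless the working set actually spills out of RAM and gets re-read often, that ARC headroom is usually better spent on ARC itself.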