ZFS Pool Planning


Nabisco_DING
New Member · Apr 5, 2013
Hello!
I have been reading about ZFS in and out for about a year, and I am getting ready to set up my first test storage server!
I will eventually be setting up a storage server for my small business that will serve data to a hypervisor (I haven't really decided yet, but I am leaning toward Proxmox rather than ESXi 5.1).

I was reading this document and I am now a little confused. I know that this document is aimed at IT sysadmins in an enterprise environment, but I have some questions...

I was originally thinking of creating a root pool for the OS and a "tank" pool served to the hypervisor for storing the virtual machines; however, the Oracle document suggests creating separate pools for Windows and Linux virtual machines.

Why is that?
Why not create a single pool for both Windows and Linux VMs?

My current guess is that they did this because having 24 servers' worth of read/write requests going through a single pool would degrade performance for all of the servers making those requests. If that is the case, why not separate them further into even more pools?

Ideally, wouldn't it be best to segregate servers based on how many read/write requests each server makes?

I feel like I am overthinking this hahaha.
 

gea
Well-Known Member · Dec 31, 2010 · DE
There are a lot of tuning options if you need to guarantee a certain I/O level for a single application. But in general I would create two pools: one highspeed pool for all VMs (ideally built from SSDs) with a fill level kept below 60%, and a second slower but large storage/backup pool. Share the highspeed pool as an NFS datastore (and via SMB for easy backup/move/clone).
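
A minimal sketch of that two-pool layout, assuming placeholder device names (ssd0..ssd3, hdd0..hdd5); the exact vdev layout is just one possibility:

    # highspeed VM pool: two mirrored SSD pairs
    zpool create vmpool mirror ssd0 ssd1 mirror ssd2 ssd3

    # large but slower storage/backup pool: six disks with double parity
    zpool create tank raidz2 hdd0 hdd1 hdd2 hdd3 hdd4 hdd5

    # share a VM filesystem as an NFS datastore, plus SMB for backup/move/clone
    zfs create vmpool/vms
    zfs set sharenfs=on vmpool/vms
    zfs set sharesmb=on vmpool/vms

    # watch the fill level; performance drops well before the pool is full
    zpool list -o name,size,alloc,cap vmpool

Note that sharenfs/sharesmb work out of the box on illumos-based systems; on Linux they additionally depend on the NFS server and Samba being installed and configured.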

For small to medium setups, you might also consider a virtualized storage server (All-In-One) to get highspeed connectivity between storage and VMs without the cost of a highspeed network (10G or FCoE).
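
To illustrate the All-In-One idea: the storage VM exports the pool over an internal virtual switch and the hypervisor mounts it locally, so the traffic never touches physical network hardware. The address 192.168.0.10 and the storage name vmstore below are placeholders:

    # on the hypervisor, mount the storage VM's NFS export over the internal bridge
    mount -t nfs 192.168.0.10:/vmpool/vms /mnt/vmstore

    # on a Proxmox host, the same export can be registered as a VM datastore
    pvesm add nfs vmstore --server 192.168.0.10 --export /vmpool/vms --content images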
 

Nabisco_DING
New Member · Apr 5, 2013
Regarding SSDs for pools, I am worried about SATA vs SAS. I have been reading various opinions on the matter online, but nothing conclusive: some people say SATA vs SAS only matters for spinning platters, but then you get dire warnings like the ones from this person.

Again, this person is describing a storage appliance design for an enterprise; however, even though I am only a small business owner, I still care about my data.

On another note, some people report problems when mixing SATA and SAS drives, and this author went so far as to recommend an all-SAS line-up for storage pools (though he still uses SATA SSDs for his L2ARC and ZIL devices). In the comments section, the author and another commenter both say they have had issues mixing SATA and SAS drives.

Reading things like this, I came to the conclusion that since I am not a certified storage engineer, I should probably stick to the tried and true so that I can minimize the risk of data loss/corruption/etc.

With that said, I am not against using SSDs for my storage pool, but I need to understand the risks involved, since losing my data would basically bankrupt me.
 

gea
Well-Known Member · Dec 31, 2010 · DE
SAS vs SATA
SAS is mainly used in enterprise storage. It has advantages regarding multipathing, expanders, and cable length.
But there are plenty of enterprise SATA disks as well, and enterprises do use SATA because of lower cost and higher available capacity.

SSD vs conventional disks
In both areas you can find problematic models and reliable ones.
In the SSD (and affordable MLC) area, expect failure rates similar to conventional disks, at least in the first five years or so.

Knowing that SSD I/O can be 100x better, who would hesitate?
If the data is really important, allow for at least a two-disk failure (3-way mirror, RAID-Z2, or RAID-Z3).
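
For reference, pool layouts that survive at least two simultaneous disk failures look like this (device names are placeholders again):

    # 3-way mirror: any two of the three disks can fail
    zpool create vmpool mirror ssd0 ssd1 ssd2

    # RAID-Z2: double parity across six disks, any two can fail
    zpool create tank raidz2 hdd0 hdd1 hdd2 hdd3 hdd4 hdd5

    # RAID-Z3: triple parity across eight disks, any three can fail
    zpool create tank raidz3 hdd0 hdd1 hdd2 hdd3 hdd4 hdd5 hdd6 hdd7

A 3-way mirror gives the best IOPS for a VM pool; RAID-Z2/Z3 trade IOPS for better capacity efficiency, which suits the bulk storage/backup pool.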