ZFS & CPU Usage


T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I'm planning on the following pools in ZFS to start (a rough sketch of the zpool layout follows the list):

- 6x 5TB WD Red in a RAID-Z2 pool (General Storage & Media Storage/Streaming)
- 2x 2TB WD RE mirrored (Security Cameras)
- 2x 5TB Toshiba mirror (the RAID-Z2 pool copies 'important' data here)
- 4x 200GB Intel S3700 in RAID-10 or 0 (VM OS drives, backed up to the 6x 5TB nightly or more often depending on the VM)
- 2x 160GB Intel S3500 mirror or stripe, not sure yet (fast VM storage, backed up to the 6x 5TB)
- 2x 480GB Intel 730 stripe or mirror, not sure (faster VM storage, i.e. MySQL, Redis, etc.)
- Eventually the file system will be scheduled to migrate around 100-200GB once a day to the 'tape' server for archive rotation.
(I plan to test SLOGs on various pools, with SSDs as well as a RAID card's 1GB cache as SLOG -- shouldn't affect things too much, IIRC.)
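For reference, a rough sketch of what creating those pools might look like on ZFS on Linux -- the pool names and device names below are just placeholders, not real /dev/disk/by-id paths:
Code:
# 6x 5TB WD Red as RAID-Z2 (general storage & media)
zpool create tank raidz2 sdb sdc sdd sde sdf sdg

# 2x 2TB WD RE mirror (security cameras)
zpool create cams mirror sdh sdi

# 2x 5TB Toshiba mirror (copies of 'important' data from the RAID-Z2 pool)
zpool create copies mirror sdj sdk

# 4x 200GB S3700 as striped mirrors ("RAID-10") for VM OS drives
zpool create vmos mirror sdl sdm mirror sdn sdo

# SLOG testing: add a dedicated log device to a pool later
zpool add tank log sdp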


I've read that some ZFS operations utilize a good % of CPU. Assuming I run the above, and stream to at most 2 TVs, and have at minimum 6 Cameras feeding the security system, what % of CPU usage should I expect from ZFS?

I was planning to give the VM 4 physical 2.0-2.5GHz Sandy Bridge cores as well as 64GB of RAM, and if I can specify a max ARC size in ZFS I would set it to 58-60GB. (2x E5-2620, 12 physical cores total.)
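If this ends up on ZFS on Linux, the ARC cap is the zfs_arc_max module parameter (illumos/FreeNAS use a different tunable); a 58GB cap would look roughly like this, where the value is just 58GiB in bytes:
Code:
# /etc/modprobe.d/zfs.conf -- cap ARC at 58 GiB (58 * 1024^3 bytes)
options zfs zfs_arc_max=62277025792

# or change it on a running system without rebooting:
echo 62277025792 > /sys/module/zfs/parameters/zfs_arc_max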

Most likely 2-3 CentOS VMs using the SSD storage pools for reading/writing and analyzing data. Ideally this will run 24/7, so planned usage is "in use" all the time, especially with the security stuff running too.

How would that fare? What about something with a higher clock, has anyone done comparisons?
 

TeeJayHoward

Active Member
Feb 12, 2013
I'm pegging my 24-drive array right now, restoring the entire thing from a snapshot I took yesterday. Here's my CPU usage with 2 vCPUs of an E3-1230 v1 @ 3.2GHz:

Code:
top - 12:49:19 up 1 day, 53 min,  3 users,  load average: 3.12, 3.37, 3.48
Tasks: 325 total,  19 running, 306 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.2 us, 31.6 sy,  0.0 ni, 27.2 id, 38.3 wa,  0.0 hi,  1.7 si,  0.0 st
KiB Mem:   8012548 total,  7537820 used,   474728 free,       72 buffers
KiB Swap:  1679356 total,      332 used,  1679024 free.    92448 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1083 root      38  18       0      0      0 R  17.3  0.0 195:54.78 z_wr_iss/0
 1082 root      39  19       0      0      0 R  16.6  0.0 178:40.28 z_rd_int/0
 2920 root      20   0  161848  39564    956 D  12.6  0.5 136:46.26 cp
 2758 root      20   0  200056  12748   4052 S   3.0  0.2  40:21.12 iotop
 1175 root       0 -20       0      0      0 D   2.0  0.0  26:23.96 txg_sync
 1096 root      39  19       0      0      0 R   1.3  0.0  11:06.29 z_wr_int/7
 1067 root      39  19       0      0      0 S   1.0  0.0   9:42.05 z_null_iss/0
 1089 root      39  19       0      0      0 R   1.0  0.0  11:14.53 z_wr_int/0
 1090 root      39  19       0      0      0 S   1.0  0.0  11:18.34 z_wr_int/1
 1092 root      39  19       0      0      0 R   1.0  0.0  11:14.57 z_wr_int/3
 1093 root      39  19       0      0      0 R   1.0  0.0  11:17.41 z_wr_int/4
 1094 root      39  19       0      0      0 R   1.0  0.0  11:14.40 z_wr_int/5
 1095 root      39  19       0      0      0 R   1.0  0.0  11:11.35 z_wr_int/6
 1097 root      39  19       0      0      0 R   1.0  0.0  11:16.96 z_wr_int/8
 1098 root      39  19       0      0      0 R   1.0  0.0  11:00.60 z_wr_int/9
 1099 root      39  19       0      0      0 R   1.0  0.0  11:06.89 z_wr_int/10
 1100 root      39  19       0      0      0 R   1.0  0.0  11:10.54 z_wr_int/11
 1101 root      39  19       0      0      0 R   1.0  0.0  11:09.34 z_wr_int/12
 1103 root      39  19       0      0      0 R   1.0  0.0  11:10.89 z_wr_int/14
 1091 root      39  19       0      0      0 R   0.7  0.0  11:14.46 z_wr_int/2
 1102 root      39  19       0      0      0 R   0.7  0.0  11:00.74 z_wr_int/13
 1104 root      39  19       0      0      0 R   0.7  0.0  11:06.75 z_wr_int/15
Code:
Total DISK READ :     305.05 M/s | Total DISK WRITE :     125.05 M/s
Actual DISK READ:     180.02 M/s | Actual DISK WRITE:     273.17 M/s
  TID  PRIO  USER     DISK READ DISK WRITE>  SWAPIN      IO    COMMAND
2920 be/4 root      137.12 M/s  125.05 M/s  0.00 % 82.54 % cp -i -Rv ./~Video /pool/
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % systemd --sw~serialize 23
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
I doubt you'll need 4 vCPUs.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Looks like you need 4...

That would put you right under 100% usage, correct?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Care to share what you did to come up with that?

The way I read it, 3 CPUs' worth are at 100% and the 4th would be ~80% available.
Even if 1 CPU were 100% available, using 2 out of 3 CPUs is 67% usage.

Since he was seeing 40-80% free on the last core, that pushes usage up to 70%+, which IMHO is close to 100% with 4 cores... which is a lot of CPU.
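Rough numbers, assuming the load average maps more or less one-to-one onto busy cores:
Code:
load average ~3.4 on the 2 vCPUs above  -> 3.4 / 2 = 1.7, i.e. ~70% more work queued than cores
the same load spread over 4 vCPUs       -> 3.4 / 4 = 0.85, i.e. ~85% of the cores committed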


Thanks for sharing.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
The CPU is still utilized while waiting... correct? As in, it can't do other tasks while waiting, so the disks are holding up the CPU, but the CPU is still "needed" to do more. Right?
 

Deci

Active Member
Feb 15, 2015
The CPU is held up by the disks, so having more CPU wouldn't help; you would just have the CPU doing more waiting.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
The CPU is held up by the disks, so having more CPU wouldn't help; you would just have the CPU doing more waiting.
If he had more CPU, you're telling me ZFS would hold it up too, and it wouldn't be available for other OS duties?

At what point does ZFS stop needing CPU, even if it's just waiting for the disks?
 

cn9amt100up

Member
Feb 11, 2015
The CPU is still utilized while waiting... correct? As in, it can't do other tasks while waiting, so the disks are holding up the CPU, but the CPU is still "needed" to do more. Right?
1.2 = user processes
31.6 = system processes
27.2 = idle
38.3 = I/O wait
1.7 = software interrupts

According to the load average, yes, it is overloaded.
However, according to the figures above, nearly 40% of the time is spent waiting for the disks, so either there isn't enough RAM or disk performance is too low.
It's also worth looking at why the software interrupts are happening.

Hence, if that 40% can be removed, the load average will drop back to somewhere around 2 - 2.2.
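One way to check whether the disks (rather than RAM) are the limit would be something like iostat from the sysstat package while the copy is running; if %util sits near 100 and await keeps climbing, the array itself is the bottleneck:
Code:
# per-device utilization and latency, refreshed every 5 seconds
iostat -x 5
# columns of interest: r/s, w/s, await, %util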
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Either way, that's all great... but the point I'm trying to get 100% clear on is how ZFS handles CPU when waiting for disks. Does it eat up all the CPU while waiting, or does it only eat X%?

If ZFS had MORE CPU, would it go to I/O wait or would ZFS not eat it up? If ZFS wouldn't eat up more, then adding a 3rd and 4th CPU should provide more CPU resources for other apps & OS duties.... CORRECT?
 

cn9amt100up

Member
Feb 11, 2015
If he had more CPU, you're telling me ZFS would hold it up too, and it wouldn't be available for other OS duties?

At what point does ZFS stop needing CPU, even if it's just waiting for the disks?
For this situation, there is a higher chance that other OS duties will need to queue as well.
 

cn9amt100up

Member
Feb 11, 2015
Either way, that's all great... but the point I'm trying to get 100% clear on is how ZFS handles CPU when waiting for disks. Does it eat up all the CPU while waiting, or does it only eat X%?

If ZFS had MORE CPU, would it go to I/O wait or would ZFS not eat it up? If ZFS wouldn't eat up more, then adding a 3rd and 4th CPU should provide more CPU resources for other apps & OS duties.... CORRECT?
The CPU is for calculation; before the CPU can do those calculations, something has to be fed to it.
Logically, data from the disk array is passed to RAM and then pushed to the CPU for calculation.
If more jobs need to be done by the CPU but no data is being fed to it, then most of the CPU % will still be stuck waiting for I/O.

If ZFS had MORE CPU, would it go to I/O wait or would ZFS not eat it up?
The CPU percentages would be lower, for example wa -> 19.2, id -> ~54.

If ZFS wouldn't eat up more, then adding a 3rd and 4th CPU should provide more CPU resources for other apps & OS duties.... CORRECT?
No, new tasks will be pending, since the CPU is waiting for disk I/O. The figures would then be:
User processes -> % up
System processes -> % no change
Idle -> % down
I/O wait -> % up
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I/O wait would only increase if the other tasks required data managed by ZFS, correct? I.e. a RAID-0/mirror passed through to the same VM wouldn't wait on the CPU, as it's not managed by ZFS.

ZFS will use as much CPU as it can until it's waiting for disks, so if we have various pools used by various VMs then the ZFS CPU allocation should be sized accordingly and not generalized -- is that an accurate statement?
 

cn9amt100up

Member
Feb 11, 2015
I/O wait would only increase if the other tasks required data managed by ZFS, correct?
I/O wait will increase if other tasks require data from the same disk array.

ZFS will use as much CPU as it can until it's waiting for disks, so if we have various pools used by various VMs then the ZFS CPU allocation should be sized accordingly and not generalized -- is that an accurate statement?
In theory, yes.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Can you move a ZFS filesystem from system to system as long as you use the same controller, or can the controller even be swapped out when a failure occurs?

I want to make sure I'm prepared :)
 

gea

Well-Known Member
Dec 31, 2010
I'm not sure I understand your question correctly.
A ZFS filesystem is not tied to a disk, vdev or controller.
Data in a pool is striped over all vdevs/disks for performance reasons.

What you can do:
- use zfs send to copy a filesystem to another location (on the same or a different host)
- move disks around between controllers, add disks on any controller, or move all disks to another host with any controller, where you can import the pool; no prior actions are needed to prepare the move (rough examples below).

No problem, as ZFS RAID is software RAID and all the needed info is on the pool.
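A minimal sketch of both options, with placeholder pool/filesystem/host names:
Code:
# option 1: copy a filesystem to another host with zfs send/receive
zfs snapshot tank/data@move
zfs send tank/data@move | ssh otherhost zfs recv backup/data

# option 2: physically move the disks -- cleanly export here, import there
zpool export tank
# ...recable the disks on the new host; any controller/port order is fine...
zpool import tank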
 
  • Like
Reactions: CreoleLakerFan

TeeJayHoward

Active Member
Feb 12, 2013
Can you move a ZFS filesystem from system to system as long as you use the same controller, or can the controller even be swapped out when a failure occurs?
I've moved a ZFS filesystem between hosts with completely different hardware before. I pulled the six disks out of one computer running Solaris, plugged them into another computer running ZFSonLinux (not caring at all about which disk went where), and did a zpool import. All of my data was there. Different chipset, different controller, different everything. As long as you have all the required disks you're golden.
 
  • Like
Reactions: T_Minus

CreoleLakerFan

Active Member
Oct 29, 2013
I've moved a ZFS filesystem between hosts with completely different hardware before. I pulled the six disks out of one computer running Solaris, plugged them into another computer running ZFSonLinux (not caring at all about which disk went where), and did a zpool import. All of my data was there. Different chipset, different controller, different everything. As long as you have all the required disks you're golden.
This... existing ZFS pools are recognized by different systems. I've moved a pool between hosts running OmniOS/Napp-it and NexentaStor and it was immediately recognized.
 
  • Like
Reactions: T_Minus