Samsung 840 SSDs as ZIL/L2ARC?


Thatguy

New Member
Dec 30, 2012
45
0
0
Thinking about adding some SSDs to my pools for fun and to hopefully increase performance a little bit.

Are Samsung 840s a good choice? I was thinking about grabbing four, making two RAID 1 pairs, and assigning each pair as a ZIL/cache.

I'd rather not shell out $999+ for a STEC drive. I don't think I need quite that much performance.

The array is currently four 12-disk RAIDZ3 vdevs of 2TB drives, plus another pool of sixteen 3TB drives in a RAIDZ2.

The array spends most of its time doing sequential reads.
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
For L2ARC they will be fine. One thing to note with any caching technology is that new data will not be in the cache. On the first read of a file, its blocks are written into the cache and will be accelerated the next time you read that file/block. This normally helps frequently accessed random I/O. There is also an aging algorithm, so if you are doing lots of reads of files that are larger than your caching device, you may get no use from it.
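
One more thing you can do if big sequential files just churn the cache: the ARC/L2ARC policy is a per-dataset property. A rough sketch, with "tank/media" standing in for whatever dataset holds the big files:

Code:
# see the current cache policy for the dataset
zfs get primarycache,secondarycache tank/media

# keep only metadata from this dataset in the L2ARC, not file data
zfs set secondarycache=metadata tank/media

# or keep this dataset out of the L2ARC entirely
zfs set secondarycache=none tank/media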

For ZIL, which is all writes, I wouldn't use them. Putting the ZIL on an SSD would help with NFS IOPS.

What are you using to talk to the server? SMB over GbE? NFS over GbE? Either over 10GbE? Either over InfiniBand?
 

Thatguy

New Member
Dec 30, 2012
45
0
0
Do you think they'd be worse than writing directly to the drives, if I were to use them as a ZIL?

I access it primarily via NFS. The server stores media. My desktop talks to it via Samba.

The server is virtualized and it talks to other VMs via vxnet. Physically it's 1GbE until I get a 10GbE switch.
 

badatSAS

Member
Nov 7, 2012
103
0
16
Boston, MA
Guys, based on all of the unknowns with the new TLC NAND they put in the Samsung 840, I can't think of a worse (current-model) drive to buy for this.

The Samsung 830 or the Samsung 840 Pro with MLC are tried and true, known good. But the 840's limited write cycles with its new TLC NAND scare the crap out of me for anything but desktop/laptop use, where the value per dollar is the point.
 

spazoid

Member
Apr 26, 2011
92
10
8
Copenhagen, Denmark
I'm using a 120GB 840 non-Pro as L2ARC on a mirror vdev used for VMs. No issues so far, but it's only been running for a bit over a month.

I wouldn't ever use it for ZIL, but for L2ARC it'll be fine.

It does sound like you need to read up on what L2ARC is for and how it works, though. Your suggestion of mirroring your L2ARC is not a good one; there's no point. If a drive fails, all reads will just go to disk instead, and all you'll see is a performance drop. Besides, in your case it sounds like L2ARC wouldn't help at all, since your workload consists mostly of large sequential reads.
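
If you do add two SSDs for L2ARC, just add them as plain cache devices and ZFS spreads data across both on its own; as far as I know zpool won't even accept a cache vdev built as a mirror or raidz. Rough sketch, pool and device names being placeholders:

Code:
# two SSDs as separate cache (L2ARC) devices - no mirror, no RAID 0
zpool add tank cache c1t2d0 c1t3d0

# a log (ZIL) device is the one worth mirroring, since it holds real write data
zpool add tank log mirror c1t0d0 c1t1d0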
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
Do you think they'd be worse than writing directly to the drives, if I were to use them as a ZIL?

I access it primarily via NFS. The server stores media. My desktop talks to it via Samba.

The server is virtualized and it talks to other VMs via vxnet. Physically it's 1GbE until I get a 10GbE switch.
Since the ZIL is all writes, the fear is that you would wear the 840s out. They also don't have any kind of power-loss protection for their RAM cache, so if the power goes out you may lose data. Putting the ZIL on an SSD is also not going to increase performance for the Samba client. There is a lot of good information about the ZIL here.

In regards to ARC:
If you are using the pool for VM storage and running multiple VMs that are the same OS or access the same files, then those files will be cached and you will see an increase in performance. I think one or two 840s would be fine for L2ARC because there is no risk of data loss. Also, there is no need to mirror them.
 

Thatguy

New Member
Dec 30, 2012
45
0
0
Just an update:

Bought Intel 520s because I just had to get something. I've got two of the four caching currently; no earth-shattering difference at this point. I'm going to move a couple of VMs back onto that zpool and see if I get more cache hits.
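
For anyone curious, this is roughly how I'm watching the cache (illumos/Solaris-style commands, and "tank" is just a placeholder for my pool name):

Code:
# per-vdev activity, including the cache devices, refreshed every 5 seconds
zpool iostat -v tank 5

# raw ARC/L2ARC counters from the kernel stats
kstat -p zfs:0:arcstats | grep -E 'l2_(hits|misses|size)'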
 

spazoid

Member
Apr 26, 2011
92
10
8
Copenhagen, Denmark
fwiw I've got two of them in RAID 0 as a cache drive and two Force GT 120s in RAID 0 for a zlog
Just a friendly FYI - striping zlogs will not improve performance. ZIL speed is limited by the latency of the zlog, not its bandwidth. Some info can be found here, but there are better threads out there.
Striping your L2ARC is also not a good idea. Data is automatically spread across all cache devices. In a normal setup, if you have two drives and one dies, you just have half as much cache / half as many cache devices. In your setup, your RAID fails and you have no cache left.
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Oracle ZFS docs - Example 4-4 - any use, or are you trying to do something other than just remove the log device while keeping the actual data intact? I suspect you may need to unmount any ZFS mounts that use that pool to make sure the ZIL is empty and not being written to during the operation.
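
The removal itself looks like a one-liner as far as I can tell; a rough sketch, with pool and device names as placeholders:

Code:
# check how the log device shows up in the pool layout
zpool status tank

# remove a standalone log device
zpool remove tank c1t0d0

# or remove a mirrored log by the vdev name shown in zpool status, e.g.
zpool remove tank mirror-2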

Am just reading through the manuals for my own setup :).

RB
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
A dedicated SSD or DRAM ZIL device is used to improve performance with sync writes, where each write must be committed to disk before the next write can occur. This is done for ultimate data security, regardless of the performance cost.

With current ZFS versions, you can add such a ZIL log device at any time and remove it at any time. You can even import a pool with a missing log device. So this is not a problem at all.
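
A minimal sketch of what that looks like (pool and device names are placeholders):

Code:
# add a dedicated log device to a running pool at any time
zpool add tank log c1t5d0

# import a pool even if its log device is missing or dead
zpool import -m tank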
 

MACscr

Member
May 4, 2011
119
3
18
Glad I found this thread, as I almost bought one of these drives for both. So what low-cost drive would you recommend for ZIL? I was thinking about starting off simple with a single 120GB SSD for ZIL and L2ARC. The ZFS storage is going to be for VMs. I already have a 120GB Mushkin Chronos that I plan to use with one of my ZFS storage systems, though I'm buying a new one for my other storage node.
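
What I had in mind for the single SSD was just splitting it into two slices, roughly like below (device/slice names are placeholders, and I realize the log and cache would then compete for the same flash):

Code:
# small slice for the log, the rest of the SSD for cache
zpool add tank log c2t0d0s0
zpool add tank cache c2t0d0s1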
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Is it true that striping the ZIL won't improve performance? The Oracle docs for the rather speedy 7000 series ZFS appliances say: "Use Logzillas to speed up the ZIL. In case you have two or four Logzillas use the 'striped' profile to further improve performance. "

Just a friendly FYI - striping zlogs will not improve performance. ZIL speed is limited by the latency of the zlog, not its bandwidth. Some info can be found here, but there are better threads out there.
Striping your L2ARC is also not a good idea. Data is automatically spread across all cache devices. In a normal setup, if you have two drives and one dies, you just have half as much cache / half as many cache devices. In your setup, your RAID fails and you have no cache left.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
If you stripe two SSDs for ZIL usage, you double bandwidth and capacity but keep the same latency and I/O performance. So it depends on your use case whether this can improve performance.

Look at the situation.
SSDs are fast on reads but slow on writes. You can only write to empty cells, so blocks (typically 512 kB, even for smaller writes) must be erased prior to writing unless the disk is new and empty. TRIM is meant to help with this, but there is currently no TRIM support (this is why SSD I/O quite often drops to 1/10 of its initial value after some time of use).

In nearly all use cases, this behaviour is the limiting factor. The throughput of sync writes is only a fraction of the sequential performance of a single SSD, so there is no benefit in doubling throughput. This is also the reason why using one SSD for both ARC and ZIL is a performance killer; you should either disable sync or use a dedicated ZIL device.
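
The sync behaviour is a per-dataset property, so you can decide per filesystem or zvol. A short sketch, dataset names being placeholders:

Code:
# default: honour sync requests from clients (NFS, databases, ...)
zfs set sync=standard tank/nfs

# disable sync - fast, but the last few seconds of writes can be lost on power failure
zfs set sync=disabled tank/nfs

# treat every write as sync, so everything goes through the ZIL / log device
zfs set sync=always tank/vm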

The same goes for capacity. Even if you have a 10Gb connection, you can deliver about 1GB per second. ZFS collects about 5s of data and then writes it out as one large sequential write. With sync enabled, you must log those 5s of data and write them to the SSD with a commit on each block. Typically the needed size of a ZIL is about 10s of transfer, so even with 10GbE no more than about 8-10GB is needed (the size of a ZeusRAM, the most commonly used high-end ZIL).

So when is striping SSDs for ZIL helpful?
More bandwidth: I can't see a reason. Sync throughput is far below SAS2.
More capacity: If you have multiple 10Gb transfers or a local database, you can deliver more data than the capacity of a single ZeusRAM. In such a case, striping can help a lot.

What helps in typical use cases:
Use a large SSD with a supercap (to ensure the log survives a power failure). It is mostly suggested to use a small partition, but I have not compared using the whole of a large SSD vs a small partition of a large SSD (I suppose it depends on SSD internals). Or use a DRAM-based ZIL, which has far better I/O and none of the SSD degradation after some time of use. They deliver much better sync write values and are worth the money.
 