80GB S3500 for $30/bo (accepted $25ea for 10)


Marsh

Moderator
May 12, 2013
2,646
1,497
113
Thanks
Picked up 10 @ $25 each for my next build of dedicated miner nodes.
 

mmo

Well-Known Member
Sep 17, 2016
559
358
63
44
Just FYI, the seller just got negative feedback regarding this sale; not sure what's going on.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
Took 4 at $25 each as well... now I wonder if I shouldn't have hit him with $20 to see if he countered. Ah well, it's only money...
 

Marsh

Moderator
May 12, 2013
2,646
1,497
113
Seller "shipped" the item 30min after I paid for it.
SHIPPING LBL CREATED USPS AWAITS ITEM

Seller is 35 miles from me, I should get it in 2 days.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@Marsh

So, did you get yours up and tested yet?

Mine are due to arrive today, if USPS is to be believed: coast to coast, shipped in 5 days. Pretty impressive.

Hopefully they will have plenty of life left.

Question....

Intel changed the firmware for the S3500 and S3700 drives to support going from 512-byte to 4K sectors...

If using these with napp-it as a ZIL/SLOG, would keeping them at 512 be better, or should I move them to 4K? Since a SLOG doesn't really write data, only metadata, I believe the writes are rather small.

I assume the larger drives being used by napp-it and served via NFS for VM storage might benefit from 4K, though, or are VM reads/writes pretty small as well? From what I understand, Intel spent a lot of time ensuring the controller overhead for 512e had very little performance hit.

Thanks
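For what it's worth, here is a minimal sketch of how you could check what the drives currently report and control alignment on the ZFS side. This assumes smartmontools and OpenZFS; napp-it/illumos syntax may differ, and /dev/sda, /dev/sdb, and the pool name tank are placeholders:

# Show the logical/physical sector sizes the SSD currently reports
smartctl -i /dev/sda

# On OpenZFS you can force 4K alignment at pool creation regardless of what
# the drive reports (ashift=9 means 512-byte sectors, ashift=12 means 4K)
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Check the ashift an existing pool's vdevs were created with
zdb -C tank | grep ashift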
 

Marsh

Moderator
May 12, 2013
2,646
1,497
113
I received the 10 SSDs last Friday, but did not have time over the weekend to test them yet.
The packaging was terrible; the seller just put the drives in a USPS Priority Mail envelope, no padding at all.
They are just SSDs, so hopefully no damage was done.

Once I have tested the drives, I will let you know their condition.
 

Marsh

Moderator
May 12, 2013
2,646
1,497
113
I had time yesterday to test 3 of the Intel S3500 80GB SSDs.

1 drive: 13 days powered on, ~700 GB written
1 drive: 20 days powered on, ~700 GB written
1 drive: 544 days powered on, ~700 GB written

I updated the firmware, did a secure erase, and ran DiskMark. All 3 drives are healthy.
7 more SSDs to test.
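For anyone checking theirs the same way, here is a rough sketch of the SMART query involved, assuming smartmontools on Linux; /dev/sda is a placeholder and the exact attribute names can vary by drive and firmware:

# Dump SMART data; on Intel DC SSDs the interesting attributes are roughly:
#   9   Power_On_Hours          - how long the drive has been powered on
#   233 Media_Wearout_Indicator - normalized flash wear, 100 = like new
#   241 Total_LBAs_Written      - host writes (raw units vary by model)
smartctl -a /dev/sda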
 
  • Like
Reactions: dragonme

dragonme

Active Member
Apr 12, 2016
282
25
28
@Marsh

Thanks. Yes, I found similar results.

I really don't know the accepted procedure for Intel SSD health testing, but SMART looks promising, if it's to be believed.

Of my 4 drives, 3 averaged about 11,000 power-on hours and one had 450 hours.

All SMART data reads 100/100 except for 2 drives with the media wearout indicator at 99/100.

So it appears these were probably low-use log devices.

They have the updated firmware (0201370) from the factory, but were set up for 512-byte instead of 4K blocks, as near as I can tell.

As far as I know, 512 is better than 4K for ZIL/SLOG devices, since log writes are metadata only and pretty small, right?

I ran the serials through Intel's warranty checker and they didn't come up, so these are likely OEM drives from prebuilt storage arrays. I went on chat with Intel and they told me warranty coverage runs until Dec 2020. Woohoo!

So other than the packaging (there was none; 4 drives slapped loose into a padded mailer), so far so good for $25 a pop...
 
  • Like
Reactions: Marsh

dragonme

Active Member
Apr 12, 2016
282
25
28
@AVD2359

This is quite possibly the most clearly written article that covers the ZIL and explains the why and the how.

The ZIL is the ZFS intent log; you have one with or without a separate device.

The ZIL is only active for sync writes. Without a SLOG device (a separate ZIL device), the pool drives have to hold a dedicated area for the ZIL data until the incoming writes are committed...

Nex7's Blog: ZFS Intent Log

If after reading this you disagree, let me know why... ZFS is still magic, even to professionals.
 

AVD2359

New Member
Jan 27, 2017
23
4
3
70
@AVD2359
Nex7's Blog: ZFS Intent Log

If after reading this you disagree, let me know why... ZFS is still magic, even to professionals.
I don't believe that article supports the metadata-only idea. And it is consistent with information I've seen elsewhere.

A few quick specifics:
The ZIL's purpose is not to provide you with a write cache. The ZIL's purpose is to protect you from data loss. It is necessary because the actual ZFS write cache, which is not the ZIL, is handled by system RAM, and RAM is volatile.
How will ZIL (either on-pool or as discrete hardware) protect your data if only metadata is written to it?

It should be at least a little larger than this formula, if you want to prevent any possible chance of overrunning the size of your slog: (maximum possible incoming write traffic in GB * seconds between transaction group commits * 3).
If only metadata is written, why should ZIL size depend on total network throughput? Wouldn't metadata be a small fraction of the total?
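(To put rough, illustrative numbers on that formula: a single 10GbE link tops out around 1.25 GB/s, so with a roughly 5-second transaction group interval that works out to 1.25 × 5 × 3 ≈ 19 GB of SLOG, which is far more than a few seconds' worth of metadata would ever occupy. The 10GbE figure and the 5-second interval are just assumptions for the sake of the example.)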


Finally, "meta" appears in that post 3 times. Unfortunately, Nex7 uses it to mean 2 different things.

The 1st appearance ("...[VM HD] is one of the worst environments to lose a couple seconds of write data in, as that write data is potentially critically important metadata for a filesystem sitting on top of a zvol, that when lost, corrupts the whole thing.") does NOT refer to ZFS metadata but to the VM's. IOW, he's actually talking about NTFS/UFS/whatever metadata, which is just regular data to ZFS.

In the later references, "metadata" does mean ZFS metadata.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@AVD2359

You very well might be right. Like I said, you can't get 3 storage experts to agree on what actually happens under the hood with many ZFS features, although most documents I read specifically say it's an intent LOG and not a CACHE. A cache would hold the full write; an intent log holds metadata that lets the filesystem know which writes were not completed so it can REDO the original writes. I think that is how you can have these 2-year-old log devices pulled from enterprise storage pools with little to no wear on them: they just don't handle a lot of actual writes, and actually NO reads unless the power is yanked. Again, the ZIL is just the intent metadata; if the write goes through, the ZIL/SLOG is never read.

Anywho, it's magic. It just works...
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
@AVD2359

You very well might be right. Like I said, you can't get 3 storage experts to agree on what actually happens under the hood with many ZFS features, although most documents I read specifically say it's an intent LOG and not a CACHE. A cache would hold the full write; an intent log holds metadata that lets the filesystem know which writes were not completed so it can REDO the original writes. I think that is how you can have these 2-year-old log devices pulled from enterprise storage pools with little to no wear on them: they just don't handle a lot of actual writes, and actually NO reads unless the power is yanked. Again, the ZIL is just the intent metadata; if the write goes through, the ZIL/SLOG is never read.

Anywho, it's magic. It just works...
Sorry, but the statement in red makes absolutely no sense at all.

SLOG stands for Separate Intent Log, which is why it's used like "SLOG device" when you're referring to the actual device you're assigning to perform the duty of the ZFS Intent Log specifically.

If you have a SLOG device, then you've said to ZFS, "The ZIL is no longer on the pool; it's on THIS (SLOG) device."

Sync writes ALL go through the ZIL, so if you don't have a SLOG it's on-pool; if you have a SLOG, the ZIL is on the SLOG.

They're written to the SLOG, then the SLOG device DUMPS (sequentially) to the pool itself, which of course requires a read... but a read doesn't wear out the device anyway; it's the constant writing that does.

You can't install a SLOG and then say "well, if it goes through the ZIL it is never read by the SLOG"; the SLOG stores the ZIL if you have a SLOG assigned to a pool. The ZIL only accounts for the transaction group currently being written plus the incoming one, which is why you can't size your SLOG for a single transaction group, and another reason why network throughput and transaction group size are important tunable options in ZFS. (The transaction group interval may be a time setting, I forget, which you should multiply by your network speed × 2 to make sure you can account for the existing/outgoing transaction plus the incoming one. This is why, if you have a SAN with 4x 40Gb connections, your SLOG must be a lot larger to compensate for the throughput.)

ZFS uses RAM for READ and WRITE cache, and the SLOG protects the write on the way to the 'slower' media. It's not just storing metadata, from my understanding and everything I've read and researched on this matter.

@gea, does the above go along with your understanding as well?
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@T_Minus
Synchronous Writes with a SLOG
The advantage of a SLOG, as previously outlined, is the ability to use low latency, fast disk to send the ACK back to the application. Notice that the ZIL now resides on the SLOG, and no longer resides on platter. The SLOG will catch all synchronous writes (well those called with O_SYNC and fsync(2) at least). Just as with platter disk, the ZIL will contain the data blocks the application is trying to commit to stable storage. However, the SLOG, being a fast SSD or NVRAM drive, ACKs the write to the ZIL, at which point ZFS flushes the data out of RAM to slow platter.

Notice that ZFS is not flushing the data out of the ZIL to platter. This is what confused me at first. The data is flushed from RAM to platter. Just like an ACID compliant RDBMS, the ZIL is only there to replay the transaction, should a failure occur, and the data is lost. Otherwise, the data is never read from the ZIL. So really, the write operation doesn't change at all. Only the location of the ZIL changes. Otherwise, the operation is exactly the same.

As shown in the image, again the pink arrows labeled number one show the application committing its data to both the RAM and the ZIL on the SLOG. The SLOG ACKs the write, as identified by the green arrow labeled number two, then ZFS flushes the data out of RAM to platter as identified by the gray arrow labeled number three.


Make sense now?


The SLOG/ZIL only makes sync writes faster because it can ACK the write before it's flushed from RAM, without having to wait for the pool to write the separate ZIL data to platter. ZFS writes sync data twice or more, depending on pool layout.

The reason sync performance on platters is really slow is that it means a lot of doubled-up (or more) random writes plus latency, and the application issuing the sync write can't send the next write until it gets an ACK. So the ZIL lets the ACK go out early, because if power is lost the write has already been committed to the SLOG device for replay, not lost in volatile RAM.

Get it?
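For anyone following along, the mechanics on the command line are roughly this (a hedged sketch; tank and the device path are placeholders, and napp-it normally does this from its web GUI):

# Attach an SSD as a dedicated log (SLOG) device; mirror it if you worry about
# losing in-flight sync writes should the SLOG die at the same moment as power
zpool add tank log /dev/disk/by-id/ata-INTEL_SSDSC2BB080G4_XXXXXXXX

# Confirm it shows up under a separate "logs" section
zpool status tank

# Watch per-vdev traffic; sync-heavy workloads show writes landing on the log vdev
zpool iostat -v tank 1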
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
Sync writes flush from RAM, not from the SLOG device itself; that is something I wasn't aware of or forgot ;) Thanks for catching that.


"they just don't handle a lot of actual writes"

Still not true.

If the ZIL wasn't actually getting data written to it (no matter what the data is), then why would adding a SLOG device improve performance? I mean, if no data is written to the ZIL, then why care if it's on a slow pool or a fast NVMe SLOG device?

The reason the ZIL is slow on ANY pool compared to a dedicated device is that writing to the ZIL eats up IOPS: whether the pool is platters, SSD, or NVMe, you're taking some of your resources and allocating them to ZIL traffic. Throw fragmentation on top of that, and a SLOG device is something great to have, even if it may be only marginally better than on-pool ZIL once you get into higher-end/fast drives.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
Performance is improved because the app issuing the sync write doesn't have to wait 5 to 10 seconds (a tunable ZFS parameter) for ZFS to flush the write out of RAM and/or send an ACK that the ZIL (on platters) completed the write, while waiting for the RAM and platters to catch up.

The ZIL is just insurance that the write gets done. It's a journaling device, not a cache.

The speed comes from a low-latency ACK from the SSD acting as a SLOG (separate ZIL): "right, got it, I see a sync write, ACK, now send another." It simply logs the write as complete when ZFS flushes RAM and commits to platter, never once reading the ZIL/SLOG, and without making the pool drives first write the ZIL data and THEN write the data before the ACK is sent. It cuts platter bandwidth on all sync writes and speeds up ACKs by large factors.

If power gets yanked, ZFS will see the unwritten requests in the ZIL on reboot (if you used a non-volatile device) and bring the pool back to a consistent state. That is the only time the ZIL gets read; otherwise the data comes out of much faster RAM.

I'm not saying there is no data in the ZIL, but it's not doing what people think it's doing. It's not a write-cache intermediary; it's a log device, hence the name.

Watch real-time ZIL stats while the pool is under load and you will see next to nothing being done, even with sync writes in progress.

Now, I am NO expert, but I DON'T throw rocks at posters without posting source info, so stop. Just telling me I am wrong without posting a source is being an a**, and I don't appreciate it.

Just adding a separate ZIL/SLOG to a pool won't speed up anything, fix fragmentation, or do anything else you suggest unless the WRITE is POSIX-flagged as sync or you have the pool/dataset set to sync=always.



read this

Aaron Toponce : ZFS Administration, Appendix A- Visualizing The ZFS Intent LOG (ZIL)

and perhaps a host of other sources...
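To tie that back to the sync property, a quick sketch of the knobs involved (tank/vmstore is a placeholder dataset; zilstat is Richard Elling's DTrace script and may need to be installed separately):

# See whether a dataset honors sync requests (standard), forces every write
# to be sync (always), or ignores sync entirely (disabled - fast but unsafe)
zfs get sync tank/vmstore
zfs set sync=always tank/vmstore

# Watch ZIL activity in real time; with an async workload you will see
# next to nothing here, which is the behaviour described above
zilstat 1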
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@T_Minus

Here are more citations, and while the author flags several points with (verify), it checks out against other articles I have read.

This is why Solaris got to charge a fortune for ZFS support: people don't understand it, partly by design, since Sun didn't really tell people what was going on under the hood, and partly due to the complexity and the human tendency to skip the reading and research. So people mostly don't use ZFS correctly.


Pasted from another source, not mine:

The ZIL (ZFS Intent Log) is basically a transaction log (similar to those in databases, if you're familiar).

Note that ZIL contents are duplicated in RAM, and ZFS uses the RAM copy. Normal operation rarely reads contents from the ZIL; it is there for correctness and recovery, and is read when importing a pool. It is not a write buffer as such, just a non-volatile copy of the RAM copy. Yet it still matters to IOPS, because it is flushed to disk regularly.

The ZIL is used for all ZFS metadata.

The ZIL also applies to sync writes. (With some details: sync writes smaller than 64 KB have their content go into the ZIL; larger writes go to the pool and are pointed to by the ZIL. That size is tweakable. This seems to be a sensible performance consideration that has little other impact? (verify))

Non-sync write data skips the ZIL; it lives only in RAM (typically a few-second cache (verify)).

It also applies to all filesystem syscalls that apply to ZFS. (Technically, this means that all writes are mentioned in the ZIL (verify), while the data is only there for (small) sync writes (verify).)

The upshot is that recovery (after a crash / power loss) loses more async writes than sync writes. Like any filesystem. Because it's a sensible bias.

Due to ZFS's copy-on-write nature you would just see an older version of the data. The ZIL is there in part so that "accepted into the ZIL" typically means "will be seen in the pool, be it now or a bit later".
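The small-write threshold mentioned above is worth a quick illustration. A hedged sketch, assuming ZFS on Linux, where the cutoff is exposed as a module parameter; tank/vmstore is a placeholder dataset:

# Size below which a sync write's data is copied into the ZIL itself;
# larger writes are written to the pool and only referenced by the ZIL
cat /sys/module/zfs/parameters/zfs_immediate_write_sz

# The logbias property also steers this: latency (the default) favors the
# SLOG, throughput pushes sync write data straight to the pool
zfs get logbias tank/vmstore
zfs set logbias=throughput tank/vmstore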