EXPIRED $279: Oracle F320 (3.2TB) AIC


eduncan911

The New James Dean
Jul 27, 2015
eduncan911.com
I've been so frustrated with Samsung's website for enterprise SSDs for years. What exactly is this drive? There aren't any products with that part number on Samsung's site.

By some pocketbook-draining coincidence, I was just on Samsung's website reading about the Z-SSD. Insane 30 DWPD and insanely low latency.

Is that what these drives are?

I found references to a PM1725a. Is that the same thing as this Oracle card?

How do they compare to Optane's latency?

And lastly, what's the intended workload for these? Any issues using them in, say, a gaming machine, or as a ZFS SLOG for multiple pools (divided up), etc.?

There seem to be several for sale around this same price, and for a few years now, it seems. I'm trying to figure out my SLOG devices and gaming machines' drives for a few builds...
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
How do they compare to Optane's latency?
Not many things are as good as Optane, except RAM-based devices (like the Radian RMS-200 and RMS-300). STH has some articles on SLOG performance as well as primers. And unless you are going to partition it down (which means NO GUI management), you simply do not need, nor will you ever use, 3.2TB of storage for a SLOG. SLOG size is basically a function of how much data you can take in and then flush in a 5s transaction-group interval; 10GbE works out to about 8GB of SLOG or so with typical enterprise spinners.
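
To put rough numbers on that rule of thumb, here's a back-of-the-envelope sketch in Python (assuming the stock 5s OpenZFS transaction-group interval, zfs_txg_timeout; pad it if you want headroom for multiple txgs in flight):

```python
# Back-of-the-envelope SLOG sizing: the SLOG only ever needs to hold
# roughly what the network can deliver in one transaction-group interval
# (the OpenZFS default, zfs_txg_timeout, is 5 seconds).

TXG_INTERVAL_S = 5

def slog_size_gb(link_gbps: float, interval_s: float = TXG_INTERVAL_S) -> float:
    """Worst-case data landed on the SLOG in one txg interval, in GB."""
    return link_gbps / 8 * interval_s  # Gbit/s -> GB/s, times seconds

for link in (1, 10, 25, 40, 100):
    print(f"{link:>3} GbE -> ~{slog_size_gb(link):.1f} GB")
# 10 GbE -> ~6.2 GB, i.e. the "about 8 GB" rule of thumb once you pad
# for overhead and a couple of txgs in flight.
```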

However, Oracle (Sun Microsystems), which is where this drive comes from, used devices like this for caching (probably ARC and SLOG on ZFS, though proprietary caching wouldn't surprise me either). Their devices also included the Intel P3605 (1.6TB). Products include the F320 (Samsung), F160 (Intel), F800 (LSI/Seagate), F400 (LSI), etc.

BTW, the F320s are TLC; the F160 is MLC with, I suspect, lots of extra cells, since they have insane endurance. I can't remember if the F800 is MLC or SLC.

And lastly, what's the intended workload for these? Any issues using them in, say, a gaming machine, or as a ZFS SLOG for multiple pools (divided up), etc.?

There seem to be several for sale around this same price, and for a few years now, it seems. I'm trying to figure out my SLOG devices and gaming machines' drives for a few builds...
IMO these devices make dandy (mirrored) storage for VMs too, and I suspect that as single drives for desktops they'd be pretty quick, with nice capacity.

From a storage utilization/performance standpoint, using a whole F320 for SLOG would, I think, be a waste unless you chopped it up.
 

eduncan911

The New James Dean
Jul 27, 2015
eduncan911.com
And unless you are going to partition it down (which means NO GUI management), you simply do not need, nor will you ever use, 3.2TB of storage for a SLOG. SLOG size is basically a function of how much data you can take in and then flush in a 5s transaction-group interval; 10GbE works out to about 8GB of SLOG or so with typical enterprise spinners.
OK, first... GUI management?! You've piqued my interest here. Does this drive actually have a GUI? I was reading that Samsung drives, if this is indeed a PM1725a, can be divided up into dedicated VM partitions (which sounds like SR-IOV to me). Is the GUI where you divide up the card, or does the BIOS still handle those things?

And you're absolutely correct on the sizing of the SLOG devices. The point was more about the write endurance of drives like this compared to, say, Optane.

I was just about to buy 5+ Optane 900P drives for a lot more than I wanted to pay (eBay, but still too much!). They will all be ZFS SLOG devices for various server and desktop workloads. Some pools run up to 24 spinning-rust drives, plus 12x SATA 6Gb/s SSDs, etc.

I just found this article that directly compares Optane to Samsung's Z-NAND, which I think is the perfect comparison for this Great Deals thread:


What's insane is... that amount of storage... 3.2TB!! /me looks over at the pool of 12x 240GB SM883 enterprise drives, thinking that was going to be pretty cool to install with Optane in front of it. But, uh, a single $250 PCIe 3.0 NVMe Samsung drive just happens to be the same size as this huge 12x stack of SSDs...

BTW, the F320s are TLC; the F160 is MLC with, I suspect, lots of extra cells, since they have insane endurance. I can't remember if the F800 is MLC or SLC.
That Tom's Hardware article I posted above has this nugget:

Built off of a modified V-NAND design, Z-NAND currently utilizes 48 layers and functions in single level cell (SLC) mode, so each cell can only have a charge level of a 1 or a 0. MLC, TLC, and QLC NAND all have more voltage states which reduce performance and endurance. Thus, they need more powerful and complex ECC algorithms in order to prevent data read errors and are slower.

Z-NAND features page sizes as small as 2-4 KB where normal NAND pages are 8-16 KB. This, in turn, helps to provide more parallelism so that the drive can read and write smaller chunks of data at a faster pace. It also helps reduce latency.
So, they are TLC on the spec sheet, but operate in "SLC mode"?
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
OK, first... GUI management?! You've piqued my interest here. Does this drive actually have a GUI? I was reading that Samsung drives, if this is indeed a PM1725a, can be divided up into dedicated VM partitions (which sounds like SR-IOV to me). Is the GUI where you divide up the card, or does the BIOS still handle those things?
My bad. There is no GUI within this device that I know of; I was overlaying what I do onto your use case. I was speaking about TrueNAS because you said SLOG, but it could be Linux with ZFS, napp-it, etc. With TrueNAS CORE you can't (that I'm aware of) manage the partitioning of SLOG devices carved from a single NVMe device within the GUI. HOWEVER, if this device is in fact basically a PM1725a, then it may well support multiple NVMe namespaces, which will appear as multiple devices, and you can size the namespaces however you want. That will depend on whether Oracle kept that feature set in their firmware.
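
If the firmware does keep namespace management, the usual tool on Linux is nvme-cli. A rough sketch of the size math, assuming 512-byte LBAs and a hypothetical /dev/nvme0; the capacities are just examples, and whether the Oracle firmware accepts create-ns at all is exactly the open question:

```python
# Sketch: LBA counts for carving a 3.2TB drive into NVMe namespaces with
# nvme-cli. Assumes 512-byte LBAs and a controller that supports namespace
# management at all (check `nvme id-ctrl` output first).

LBA_SIZE = 512  # bytes per logical block, assumed

def lbas(size_gb: float) -> int:
    """Namespace size in GB -> count of 512-byte logical blocks."""
    return int(size_gb * 1e9) // LBA_SIZE

# Hypothetical split: a small SLOG namespace plus one big data namespace.
for name, gb in (("slog", 50), ("data", 3150)):
    n = lbas(gb)
    print(f"# {name}: {gb} GB")
    print(f"nvme create-ns /dev/nvme0 --nsze={n} --ncap={n} --flbas=0")

# Each created namespace then gets attached to the controller, e.g.
#   nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0
# and shows up as its own /dev/nvme0nX block device.
```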

FWIW, I personally use Optane 900P 280GBs as SLOG for my spinning-rust pool, and I use a Radian RMS-200 8GB as SLOG for mirrored SATA SSDs.

And you're absolutely correct on the sizing of the SLOG devices. The point was more about the write endurance of drives like this compared to, say, Optane.

I was just about to buy 5+ Optane 900P drives for a lot more than I wanted to pay (eBay, but still too much!). They will all be ZFS SLOG devices for various server and desktop workloads. Some pools run up to 24 spinning-rust drives, plus 12x SATA 6Gb/s SSDs, etc.
What about a pair or three of the 905P 960GB?

What's insane is... that amount of storage... 3.2TB!! /me looks over at the pool of 12x 240GB SM883 enterprise drives, thinking that was going to be pretty cool to install with Optane in front of it. But, uh, a single $250 PCIe 3.0 NVMe Samsung drive just happens to be the same size as this huge 12x stack of SSDs...
BTW, I believe there is a 6.4TB version of the PM1725a.

If the IOPS and transfer rate meet your needs, then yeah, do it! But get a pair, and if you are using ZFS, mirror them!

That Tom's Hardware article I posted above has this nugget:

So, they are TLC on the spec sheet, but operate in "SLC mode"?
I'm not familiar enough to say.

There are a bunch of hits when searching the forums for F320 and PM1725a. Have you looked at any of those?
 

eduncan911

The New James Dean
Jul 27, 2015
eduncan911.com
HOWEVER, if this device is in fact basically a PM1725a, then it may well support multiple NVMe namespaces, which will appear as multiple devices, and you can size the namespaces however you want.

...

BTW, I believe there is a 6.4TB version of the PM1725a...
Still trying to figure that out. Out of all of Samsung's PCIe NVMe 3.0 devices (yeah, there are like 7), the only ones that come in 3.2TB sizes are the SM1715 and PM1725, plus I found a reference to a PM1735. If they follow Samsung's standard product codes:

PM=Read Intensive
SM=Write Intensive

They are typically the exact same hardware, just with different firmware. The main difference is that Samsung halves the capacity for the SM variant while increasing the warranty to 5 years and doubling DWPD. So a 6.4TB PM1725 is exactly the same hardware as a 3.2TB SM1725, but with very high endurance due to the over-provisioning (spare capacity that allows for the high DWPD).
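
The arithmetic behind that trade is simple; here's a quick sketch (the DWPD and warranty numbers below are illustrative, not pulled from a Samsung datasheet):

```python
# Rated endurance (TBW) = exposed capacity * DWPD * warranty days.
# Halving the exposed capacity over the same raw NAND lets the vendor
# double the DWPD rating without the flash absorbing any more writes.

def tbw(capacity_tb: float, dwpd: float, years: int = 5) -> float:
    """Total terabytes written over the warranty period."""
    return capacity_tb * dwpd * 365 * years

print(tbw(6.4, dwpd=1))  # "PM-style" 6.4 TB @ 1 DWPD -> 11680.0 TBW
print(tbw(3.2, dwpd=2))  # "SM-style" 3.2 TB @ 2 DWPD -> 11680.0 TBW
# Same raw NAND, same total endurance; the hidden spare area just buys
# the smaller drive its higher DWPD rating (plus better sustained writes).
```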

What about a pair or three of the 905P 960GB?
It's 5 or 6 different machines, desktops, workstations, etc.
 

Your name or

Active Member
Feb 18, 2020
Hi,
Well, I'm still looking for a drive that can handle the output from a piece of software that writes many millions of tiles...
I've got the Radian cards, which are nice but a bit small. Is there a good replacement or expansion?
Thanks
 

eduncan911

The New James Dean
Jul 27, 2015
eduncan911.com
How would this fare split up to be used as both SLOG/ZIL and L2ARC?
You're talking my language; I've been agonizing over this setup of mine for years... lol.

When thinking about a SLOG device, the faster the fsyncs, the faster it will be for VMs, k8s (and etcd!!), NFS, and anything else that uses a lot of fsync.
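
If you want to see how much fsync latency dominates those workloads, a crude microbenchmark makes it obvious. A minimal sketch; run it with the probe file placed on the dataset you actually care about:

```python
# Crude fsync latency probe: the small-write-then-fsync loop is exactly
# the pattern etcd, databases, and NFS sync writes hit a SLOG with.
import os, time

def fsync_latency_ms(path: str = "fsync_probe.bin", iters: int = 200) -> float:
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        start = time.perf_counter()
        for _ in range(iters):
            os.write(fd, b"x" * 4096)  # one 4 KiB synchronous write
            os.fsync(fd)               # forces the ZIL (and SLOG) commit
        return (time.perf_counter() - start) / iters * 1000
    finally:
        os.close(fd)
        os.unlink(path)

print(f"avg fsync: {fsync_latency_ms():.3f} ms")
```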

Next, people want fast transfers over their 10G networks. However, if you have a SLOG device, it doesn't matter how many vdevs you have for striped throughput: for sync writes you'll be limited to however your SLOG vdev is set up and just its devices.

On a scale of 1 to 10, where 1 equates to the ultra-low latency of the (newer) Intel Optane and 10 represents the average NVMe PCIe U.2 SSD latency, this PM1725 drive scores about a 3 to 4, depending on the generation.

That's fast. That's very, very fast. And other NVMe PCIe drives cost even more money!

The last word on performance is queue depths. Read up on this. This is where the PM1725 really, really shines; it even beats the 905P Optane in a few (select, but high-IOPS-concurrency) benchmarks. Check out the video I just posted above your post.

With that said, you should think about logistics. 3.2TB is extreme overkill for a SLOG, which maxes out at around 50GB, and that's if you have a 100Gbps network.

Yes, you could slice off a chunk of 50GB for SLOG and use the rest for a massive L2ARC. But even that's a waste because L2ARC is not persisted between boots. Edit: Ah, ZFS can now persist L2ARC.

And IIRC, the last note on L2ARC would be that you could actually slow reads down with this drive as cache. Granted, you'd need one hell of a stripe across fast vdevs to exceed the PM1725. :)

---

IMO, get a $230-ish Optane 900P, which is only 280GB (they are all 280GB). Assign 50GB to your SLOG and maybe 100GB to the L2ARC. Tune as needed (or even create yet more partitions for more pools!).
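
For what it's worth, the split itself is just two partitions and two vdev additions. A sketch, with the pool name (tank) and device path as placeholders; adjust to your layout:

```python
# Sketch of the 900P split above: GPT partitions for SLOG and L2ARC,
# then added to the pool as 'log' and 'cache' vdevs. The device path
# and pool name are placeholders, not from this thread.
dev, pool = "/dev/nvme1n1", "tank"

cmds = [
    f"sgdisk -n 1:0:+50G  -c 1:slog  {dev}",  # 50 GB SLOG partition
    f"sgdisk -n 2:0:+100G -c 2:l2arc {dev}",  # 100 GB L2ARC partition
    f"zpool add {pool} log {dev}p1",          # dedicated intent-log vdev
    f"zpool add {pool} cache {dev}p2",        # L2ARC cache vdev
]
print("\n".join(cmds))
# The remaining ~130 GB stays unpartitioned for more pools, or simply as
# extra over-provisioning for the Optane.
```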

Use this PM1725a for 100% of your heaviest fsync I/O. VMs, k8s etcd, databases, container volumes, etc.
 

Mithril

Active Member
Sep 13, 2019
[...]

Yes, you could slice off a chunk of 50GB for SLOG and use the rest for a massive L2ARC. But even that's a waste because L2ARC is not persisted between boots.

[...]
I thought that changed, and that even TrueNAS CORE supports persistent L2ARC now?