SanDisk CloudSpeed Eco 1.92TB 2.5" SATA $120


mrpasc

Well-Known Member
Jan 8, 2022
Munich, Germany
It depends. If you keep doing mostly sequential writes, yes. If you start to hammer the drive with a massive random write load, it will not last that long.
The CloudSpeed Eco are "read optimised" drives.
But the average homelabber should be fine with them for some years. I think they will die from their capacitors drying out before the flash cells are gone.
 

autoturk

Active Member
Sep 1, 2022
I think you can do that, but I don't think many have experience with drives that have reached 0% yet. I have one Intel S3500 with 30% life left and it still works great.
mrpasc said:
It depends. If you keep doing mostly sequential writes, yes. If you start to hammer the drive with a massive random write load, it will not last that long.
The CloudSpeed Eco are "read optimised" drives.
But the average homelabber should be fine with them for some years. I think they will die from their capacitors drying out before the flash cells are gone.
Thank you both for the perspectives! While I have you both: I'm building a NAS with a 10G connection. I'm thinking of getting 8 of these and setting up TrueNAS SCALE in a RAID-Z1 with an Optane SLOG. I should be able to saturate the link for reads and writes, right?

I imagine this is a much more economical option than trying to go with 4x enterprise NVMe drives.
 

mrpasc

Well-Known Member
Jan 8, 2022
Munich, Germany
autoturk said:
I'm thinking of getting 8 of these and setting up TrueNAS SCALE in a RAID-Z1 with an Optane SLOG. I should be able to saturate the link for reads and writes, right?
For reads (if mostly served from the RAM ARC cache): yes.
For writes: no. With RAID-Z1 you get barely the write rate of a single one of your SATA SSDs, around 500 MB/s. And remember: a SLOG is NOT a write cache, it's just a backup of the RAM-based write cache for power loss. So with sync writes disabled (as for an SMB share) it doesn't help at all (it will not be used); for sync writes (as for an NFS share) it helps you get close to the write rate the pool would have with async writes.
AGAIN: a SLOG is NOT a write cache (even though it's named that in official iX documents). Period.
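To make that concrete, a rough sketch (assuming a hypothetical pool named tank with a dataset tank/share and an Optane device at /dev/nvme0n1; adjust the names to your setup). The sync property decides whether the SLOG is used at all:

Code:
# Show how sync writes are currently handled for the dataset
zfs get sync tank/share

# sync=standard (default): only writes the client requests as synchronous go through the SLOG
zfs set sync=standard tank/share

# sync=disabled: all writes are treated as async -- the SLOG is never touched
zfs set sync=disabled tank/share

# Attach the Optane device as a log vdev (SLOG); it only absorbs sync writes
zpool add tank log /dev/nvme0n1

None of this raises the async write throughput of the RAID-Z1 vdev itself.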
 

Markess

Well-Known Member
May 19, 2018
Northern California
autoturk said:
Thank you both for the perspectives! While I have you both: I'm building a NAS with a 10G connection. I'm thinking of getting 8 of these and setting up TrueNAS SCALE in a RAID-Z1 with an Optane SLOG. I should be able to saturate the link for reads and writes, right?
mrpasc said:
For writes: no.
Just curious, to advance the conversation: what's the best approach for @autoturk to boost write speeds? Is it the usual catch-all performance improver for ZFS... more system RAM for a larger ZFS cache?
 

Shalita

New Member
Jan 17, 2022
I ordered one of the 1.92TB drives for $46. It has 55k hours of usage but 99% life remaining.
Working perfectly so far.
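If anyone wants to read the same counters off their own drive, something along these lines should work (example device name; the exact SMART attribute names vary by vendor and firmware):

Code:
# Full SMART report: health status, power-on hours, vendor wear/life-remaining attributes
sudo smartctl -a /dev/sdX

# Just the attributes of interest
sudo smartctl -A /dev/sdX | grep -iE 'power_on|wear|life|percent'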
 

GregH

New Member
Jul 10, 2023
mrpasc said:
It depends. If you keep doing mostly sequential writes, yes. If you start to hammer the drive with a massive random write load, it will not last that long.
The CloudSpeed Eco are "read optimised" drives.
But the average homelabber should be fine with them for some years. I think they will die from their capacitors drying out before the flash cells are gone.
Pretty much my use case. It's in an enclosure replacing 2TB of USB 'portable' Toshiba spinning rust.
That said, by the time currency conversion and shipping costs were applied (~A$101),
I would have been better off sourcing one of these during Prime Day last week:

[Prime] Silicon Power P34A60 2TB PCIe Gen3 NVMe M.2 SSD $93.09 Delivered @ Silicon Power Amazon AU - OzBargain

C'est la vie.
 

SirCrest

Member
Sep 5, 2016
Texas
They now appear to be sold out. I never pulled the trigger. Maybe for the best as I can keep waiting until I'm ready to build.
 

smithse79

Active Member
Sep 17, 2014
Is it a terrible idea to get a group of these and a NetApp DS2246 to build an all-flash ZFS array?
 

mrpasc

Well-Known Member
Jan 8, 2022
Munich, Germany
I would double-check whether the DS2246 can do SATA without SATA interposers. Not really sure, but if I remember correctly it can't.
Hope others with more knowledge of NetApp shelves and SATA compatibility chime in (I only use SAS disks with them).
 

smithse79

Active Member
Sep 17, 2014
Completely forgot about that. I have a DS4246 that I believe needs the interposers for SATA (I am also currently running SAS in it).
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
I believe you can use SATA on, e.g., the DS4246 (without interposers). But the shelf will be bandwidth-capped at 1700 MB/s, even with both controllers being fed. It might be that interposers raise that SATA cap to 3400. Puzzling that the cap is 1700/3400 and not 2200/4400.

Maybe @BeTeP or @Stephan can provide some insight.

For this SSD specifically, using a vanilla SATA3 connection, I've not been able to get spec'd performance:
[Attached image: ecoII-perf.jpg]
No matter what I tried, I couldn't exceed 21K read (4KB) IOPS,
and sequential read (128KB) at jobs=8, qd=32 only got 465 MB/s.
(I didn't even mess with writes.)
Similar results on a 2nd drive.

An old Samsung 850 EVO gets 85k at j4q8, and 520 MB/s at j1q4.
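The kind of fio runs meant here would look roughly like this (example device name; flags are illustrative, not the exact invocations used):

Code:
# 4KB random read, 8 jobs, queue depth 32 (read-only, so non-destructive)
sudo fio --name=randread-4k --filename=/dev/sdX --readonly --direct=1 \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 --numjobs=8 \
    --runtime=60 --time_based --group_reporting

# 128KB sequential read, 8 jobs, queue depth 32
sudo fio --name=seqread-128k --filename=/dev/sdX --readonly --direct=1 \
    --rw=read --bs=128k --ioengine=libaio --iodepth=32 --numjobs=8 \
    --runtime=60 --time_based --group_reporting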
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
intellistream.ai
That sounds weird, I get 85,000 IOPS:

Code:
sudo fio --filename=/dev/sdi --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1 --readonly
iops-test-job: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.33
Starting 4 processes
Jobs: 4 (f=4): [r(4)][2.5%][r=314MiB/s][r=80.4k IOPS][eta 01m:57s]
Jobs: 4 (f=4): [r(4)][4.2%][r=320MiB/s][r=81.8k IOPS][eta 01m:55s]
Jobs: 4 (f=4): [r(4)][5.8%][r=320MiB/s][r=82.0k IOPS][eta 01m:53s]
Jobs: 4 (f=4): [r(4)][7.5%][r=320MiB/s][r=81.9k IOPS][eta 01m:51s]
Jobs: 4 (f=4): [r(4)][9.2%][r=320MiB/s][r=81.9k IOPS][eta 01m:49s]
^Cbs: 4 (f=4): [r(4)][10.8%][r=322MiB/s][r=82.4k IOPS][eta 01m:47s]
fio: terminating on signal 2

iops-test-job: (groupid=0, jobs=4): err= 0: pid=385030: Thu Jul 20 22:53:29 2023
  read: IOPS=81.5k, BW=318MiB/s (334MB/s)(4343MiB/13644msec)
    slat (nsec): min=1227, max=2019.7k, avg=47687.13, stdev=126115.15
    clat (usec): min=419, max=21590, avg=12505.40, stdev=1492.51
     lat (usec): min=671, max=21592, avg=12553.09, stdev=1499.67
    clat percentiles (usec):
     |  1.00th=[ 9372],  5.00th=[10159], 10.00th=[10683], 20.00th=[11207],
     | 30.00th=[11731], 40.00th=[12125], 50.00th=[12387], 60.00th=[12780],
     | 70.00th=[13173], 80.00th=[13698], 90.00th=[14484], 95.00th=[15008],
     | 99.00th=[16319], 99.50th=[16712], 99.90th=[17695], 99.95th=[17957],
     | 99.99th=[18744]
   bw (  KiB/s): min=297944, max=339128, per=100.00%, avg=326004.44, stdev=2026.27, samples=108
   iops        : min=74486, max=84782, avg=81501.11, stdev=506.57, samples=108
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=3.57%, 20=96.40%, 50=0.01%
  cpu          : usr=2.79%, sys=11.24%, ctx=760217, majf=0, minf=1064
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=1111754,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=318MiB/s (334MB/s), 318MiB/s-318MiB/s (334MB/s-334MB/s), io=4343MiB (4554MB), run=13644-13644msec

Disk stats (read/write):
  sdi: ios=1109785/0, merge=0/0, ticks=821296/0, in_queue=821297, util=99.23%
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
@UhClem Which configuration did you test exactly? I.e., which controller card in the server, which canister in the shelf, which shelf, interposer yes/no?

And you say the EVO performed adequately in that configuration while the SanDisk didn't?
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
@BackupProphet, thanks for checking. Odd that testing through a filesystem, vs. using the device directly (i.e., /dev/sdX), actually improves performance, and so dramatically. But seeing is believing (I get 74k with j4q8).

@Stephan, my first paragraph was addressing the just-prior (diverged) discussion of the DS4246 regarding SATA usability, with vs. without interposers. Then,
"For this SSD specifically, using a vanilla SATA3 connection, I've not been able to get spec'd performance ..."
was meant to introduce a back-on-topic separate issue (and maybe should have been a separate post). I'll PM you tomorrow with the gory details on the "shelf issue".
 

lhibou

Member
Jun 12, 2019
autoturk said:
Thank you both for the perspectives! While I have you both: I'm building a NAS with a 10G connection. I'm thinking of getting 8 of these and setting up TrueNAS SCALE in a RAID-Z1 with an Optane SLOG. I should be able to saturate the link for reads and writes, right?

I imagine this is a much more economical option than trying to go with 4x enterprise NVMe drives.
As others have noted, mirrored vdevs in stripes are the way to go for speed. If you're planning to run e.g. VMs on that pool, that's what you'll want to aim for.
For general file server duties where write speed isn't crucial, RAID-Z is great.
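Roughly, with eight drives and made-up device names, the two layouts would be created like this:

Code:
# Four striped mirrors: best small/sync write performance, 50% usable capacity
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh

# Single RAID-Z1 vdev: more usable capacity, but roughly single-disk write throughput
zpool create tank raidz1 sda sdb sdc sdd sde sdf sdg sdh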
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
If anyone has any of the ones with lower usage (<1PB written and not insane hours) and wants to trade, let me know how many you have :) and what parts you're interested in... I have lots of DDR4 32GB RDIMMs still to trade plus lots of other goodies, so please message me... ultimately looking for 30-50 more of these.