ZeusRam for $1500


whitey

Moderator
Jun 30, 2014
Found this today, dunno if it's a great deal but certainly a good deal at nearly 40-50% off list.

HGST STEC Zeusram SSD 8GB SAS Enterprise Class New Zeus RAM | eBay

I have one of these on my AIO vSphere/OmniOS/ZFS array and it is the single saving grace on my striped-mirror (ZFS "RAID 10") 6-disk Hitachi Ultrastar pool for NFS VMs. I run 30+ VMs on it at all times with excellent performance. A cheaper solution on a budget would of course be the Intel DC S3700, but honestly it's hard to compare those two apples to apples. This guy comes dual-ported SAS for HA environments, with lower latency, DRAM-like access times, etc.

Shamelessly stolen off a description online :-D

The ZeusRAM™ is designed to merge DRAM-speed reads and writes with the persistence of a flash backup. It is an enterprise-class, wear-resistant solid state drive (SSD) targeted at write-transaction-heavy environments that require fast write-commit capability to enhance user-application throughput on NAS and unified storage appliances. The drive is Plug-and-Play (PnP) compatible; no additional device drivers are required to install it. The SSD is recognized by PnP-compatible operating systems and PnP-aware BIOSes.


Performance

The drive can sustain read/write transfer rates of up to 500 MB/s and can perform 80,000 random read/write operations per second. Power dissipation is TBD% less than disk-based drives. The solid-state design eliminates the electromechanical noise and delay inherent in traditional rotating magnetic media.



Disk Capacity

The drive has an eight (8) gigabyte memory capacity. The high-IOPS, low-latency rate is achieved using an array of DDR3 SDRAM devices as the main processing and storage memory. The non-volatile backup memory is an array of Single-Level Cell (SLC) NAND EEPROM flash components.
 

whitey

Moderator
Jun 30, 2014
Far from scientific, but my high-level runs show roughly the following.

A dd of a 10 GB file, using this syntax for write and read, yields:

root@beastnode:/hgst1.5tb01# dd if=/dev/zero of=10gbfile bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 59.0923 s, 177 MB/s

root@beastnode:/hgst1.5tb01# dd if=10gbfile of=/dev/zero bs=1024k
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 41.6898 s, 252 MB/s
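For the curious, a slightly more careful variant of the same test is sketched below, assuming GNU dd on a Linux box and the same (hypothetical) mount point; conv=fdatasync makes dd flush to stable storage before it reports a rate, and dropping the page cache keeps the read from being served out of RAM.

# Write test: flush before dd prints its throughput number
dd if=/dev/zero of=10gbfile bs=1024k count=10000 conv=fdatasync

# Drop the page cache so the read test actually hits the pool
echo 3 > /proc/sys/vm/drop_caches

# Read test
dd if=10gbfile of=/dev/null bs=1024k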

fio stats (comparing the S3700 to my 8-disk striped-mirror pool w/ the ZeusRAM behind it)

NFS mount from VM on 10G network to zpools

Single Intel DC S3700 (3975 read IOPS / 1326 write IOPS)

testingtons: (groupid=0, jobs=1): err= 0: pid=1664: Mon Mar 2 12:49:48 2015
read : io=3838.8MB, bw=20172KB/s, iops=3975, runt=194867msec
slat (usec): min=0, max=2804, avg= 3.15, stdev= 6.67
clat (usec): min=285, max=246676, avg=10407.79, stdev=6453.85
lat (usec): min=287, max=246679, avg=10411.08, stdev=6453.88
clat percentiles (usec):
| 1.00th=[ 1012], 5.00th=[ 2128], 10.00th=[ 3440], 20.00th=[ 6112],
| 30.00th=[ 8032], 40.00th=[ 9408], 50.00th=[10432], 60.00th=[11456],
| 70.00th=[12480], 80.00th=[13632], 90.00th=[15168], 95.00th=[16768],
| 99.00th=[37120], 99.50th=[44288], 99.90th=[62208], 99.95th=[91648],
| 99.99th=[162816]
bw (KB /s): min= 4578, max=66855, per=99.46%, avg=20062.76, stdev=6531.08
write: io=1281.3MB, bw=6732.1KB/s, iops=1326, runt=194867msec
slat (usec): min=1, max=413, avg= 3.76, stdev= 5.89
clat (usec): min=807, max=238930, avg=17036.14, stdev=8335.46
lat (usec): min=809, max=238936, avg=17040.04, stdev=8335.53
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 12],
| 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 19],
| 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 24], 95.00th=[ 26],
| 99.00th=[ 50], 99.50th=[ 54], 99.90th=[ 71], 99.95th=[ 87],
| 99.99th=[ 157]
bw (KB /s): min= 1793, max=22305, per=99.44%, avg=6694.59, stdev=2197.09
lat (usec) : 500=0.01%, 750=0.22%, 1000=0.48%
lat (msec) : 2=2.76%, 4=6.37%, 10=28.63%, 20=52.32%, 50=8.79%
lat (msec) : 100=0.38%, 250=0.04%
cpu : usr=1.98%, sys=3.88%, ctx=879476, majf=0, minf=23
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=774731/w=258437/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
READ: io=3838.8MB, aggrb=20171KB/s, minb=20171KB/s, maxb=20171KB/s, mint=194867msec, maxt=194867msec
WRITE: io=1281.3MB, aggrb=6732KB/s, minb=6732KB/s, maxb=6732KB/s, mint=194867msec, maxt=194867msec

6-disk Hitachi Ultrastar 1TB + Crucial C300 120GB L2ARC and 8GB ZeusRAM ZIL (1485 read IOPS / 495 write IOPS)


testingtons: (groupid=0, jobs=1): err= 0: pid=24152: Thu Mar 5 12:38:58 2015
read : io=3838.8MB, bw=7535.6KB/s, iops=1485, runt=521640msec
slat (usec): min=0, max=1535, avg= 3.23, stdev= 3.60
clat (usec): min=333, max=434350, avg=32255.17, stdev=28671.34
lat (usec): min=336, max=434352, avg=32258.54, stdev=28671.75
clat percentiles (usec):
| 1.00th=[ 708], 5.00th=[ 2352], 10.00th=[ 5920], 20.00th=[12864],
| 30.00th=[17792], 40.00th=[21632], 50.00th=[25472], 60.00th=[29824],
| 70.00th=[35584], 80.00th=[44288], 90.00th=[63744], 95.00th=[87552],
| 99.00th=[148480], 99.50th=[173056], 99.90th=[226304], 99.95th=[242688],
| 99.99th=[284672]
bw (KB /s): min= 1663, max=44441, per=100.00%, avg=7539.10, stdev=3528.87
write: io=1281.3MB, bw=2515.2KB/s, iops=495, runt=521640msec
slat (usec): min=0, max=347, avg= 3.91, stdev= 3.52
clat (usec): min=524, max=408210, avg=32463.78, stdev=28648.89
lat (usec): min=528, max=408214, avg=32467.83, stdev=28649.41
clat percentiles (usec):
| 1.00th=[ 868], 5.00th=[ 2416], 10.00th=[ 5984], 20.00th=[12992],
| 30.00th=[18048], 40.00th=[21888], 50.00th=[25728], 60.00th=[30336],
| 70.00th=[36096], 80.00th=[44800], 90.00th=[63744], 95.00th=[87552],
| 99.00th=[146432], 99.50th=[175104], 99.90th=[228352], 99.95th=[244736],
| 99.99th=[280576]
bw (KB /s): min= 490, max=15186, per=100.00%, avg=2516.24, stdev=1205.45
lat (usec) : 500=0.01%, 750=1.04%, 1000=1.24%
lat (msec) : 2=2.23%, 4=2.74%, 10=8.19%, 20=20.16%, 50=48.25%
lat (msec) : 100=12.61%, 250=3.50%, 500=0.04%
cpu : usr=1.08%, sys=1.66%, ctx=926252, majf=0, minf=23
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=774731/w=258437/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
READ: io=3838.8MB, aggrb=7535KB/s, minb=7535KB/s, maxb=7535KB/s, mint=521640msec, maxt=521640msec
WRITE: io=1281.3MB, aggrb=2515KB/s, minb=2515KB/s, maxb=2515KB/s, mint=521640msec, maxt=521640msec

Guess I could fire up IOmeter if I were feeling adventurous, or run it through a full test suite.
 

whitey

Moderator
Jun 30, 2014
Spec Sheet:

Form Factor: 3.5-inch; 25.4 mm case height
Interface: Serial Attached SCSI 2 (SAS2); dual port
Connector: 7 primary signal; 7 secondary signal; 15 power
SAS Topology: Expander and fanout adapter support
Drive Capacity: 8 gigabytes DDR3 processing memory
Flash Capacity: 16 gigabytes SLC NAND flash backup memory
Logical Block Size: 512-byte host sector size
Dual Power Input: 5V DC ±5% or 12V DC ±10% split operating power
Access Latency: ≤ 15 µs
Interface Bandwidth: 6 Gb/s or 3 Gb/s auto-negotiation
Sequential Read: 500 MB/s
Sequential Write: 500 MB/s
Random 4K Read: 80,000 IOPS
Random 4K Write: 80,000 IOPS
EDC/ECC: 13-bit BCH over 512 bytes
Event Logging: Non-volatile event logs
 

Patriot

Moderator
Apr 18, 2011
FIO is perfectly fine... so long as you don't have it set to sync or Qdepth=1 like you appear to now.
 

whitey

Moderator
Jun 30, 2014
For reference: S3700 single-drive ZFS pool, local dd test for write/read

root@beastnode:/iohog/nfs# dd if=/dev/zero of=10gbfile bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 23.7793 s, 441 MB/s
root@beastnode:/iohog/nfs# dd if=10gbfile of=/dev/zero bs=1024k
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 26.8633 s, 390 MB/s
 

whitey

Moderator
Jun 30, 2014
FIO is perfectly fine... so long as you don't have it set to sync or Qdepth=1 like you appear to now.
Is there a queue-depth switch I'm missing then? The only thing I see that looks promising in 'man fio' is 'thinktime_blocks=int'.
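(For what it's worth, queue depth in fio is the iodepth option, and it only takes effect with an asynchronous ioengine such as libaio; with the default synchronous engine each job effectively runs at QD1. A rough sketch follows; the file path, size, and read/write mix are placeholders, not the exact job used above.)

# 4K random read/write at queue depth 32; adjust filename/size to the mount under test
fio --name=qd32test --filename=/mnt/nfs/fio.test --size=4G \
    --rw=randrw --rwmixread=75 --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 \
    --runtime=120 --time_based --group_reporting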
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
8 GB of RAM and 16 GB of SLC for $1,500... what am I missing here, how is this remotely a good deal?

You can get faster and more RAM plus an SLC drive for a lot less, set up a RAM drive, and have the same thing???

I understand if you had like 20+ of these, but just 1 or 2 seems pointless???
 

Scott Laird

Active Member
Aug 30, 2014
Presumably this is intended to be used as a log device. If you're planning on writing at full speed 24x7, then it comes closer to making financial sense. Mind you, you'd really have to work to kill even an S3700 if you only use 8 GB of its capacity. And for $1,500 you should be able to get SLC without any problem; one of the bigger Hitachi SLC drives that have been going around should be able to sustain very close to full-bandwidth SATA writes for 3 years, even without giving it 50x more spare area than it was spec'd with.

If I were looking for another SLOG device, I *might* consider it at $150 or so, but probably not even then. For the money, you'd be better off throwing in more RAM for read caching and a couple of small, fast SSDs for logging.
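As a rough sketch of that "small, fast SSD for logging" route (device path, pool name, and partition size below are all hypothetical): carve out a small partition, leave the rest of a freshly secure-erased drive unpartitioned as extra spare area, and hand the partition to ZFS as a log vdev.

# Hypothetical device /dev/sdX: one small 8 GiB partition, rest left unallocated
parted -s /dev/sdX mklabel gpt mkpart slog 1MiB 8GiB

# Hypothetical pool "tank": attach the partition as a dedicated SLOG
zpool add tank log /dev/sdX1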
 

Scott Laird

Active Member
Aug 30, 2014
You could put 1,000 NVMe SSDs behind it, but that wouldn't change the fact that it's only 6G SAS. It might be *slightly* lower latency than a good SATA/SAS SSD, but that's about it. It can't have much better bandwidth or IOPS, simply because the pipe won't fit any more. And if all you need is 8 GB, then a number of modern (or semi-modern) SSDs can give you *years* of ~500 MB/s sustained writes.

Once upon a time, getting 500+ MB/s of sustained low-latency writes was awesome and expensive, but that time has passed.
 

whitey

Moderator
Jun 30, 2014
8 GB of RAM and 16 GB of SLC for $1,500... what am I missing here, how is this remotely a good deal?

You can get faster and more RAM plus an SLC drive for a lot less, set up a RAM drive, and have the same thing???

I understand if you had like 20+ of these, but just 1 or 2 seems pointless???
Bundling these up in any sort of RAID config is not the target use case; the capacity of the drive is a dead giveaway against that theory. Instead, these are meant to be used as enterprise-level write-cache (ZIL/SLOG) devices. For most of us (and I think the point has been made in other posts) we may very well be able to get away with using an S3700 as a ZIL/SLOG device, but hands down, if money were no object, I would take ZeusRAMs over S3700s all day long. I got mine at a price you can't argue with (free), but regardless, it is an AWESOME write-cache accelerator that will, without a doubt, breathe new life into a pool of rusty magnetic disks under SUSTAINED writes. Time marches on, and HET NAND with capacitor-backed setups seems to be a good choice these days for cost-sensitive builds where "good enough" meets the requirements.
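If anyone wants to see the SLOG earning its keep, zpool's per-vdev stats make it pretty obvious during a sync-heavy workload (pool name below is just a placeholder):

# The ZeusRAM shows up under the "logs" section of the pool layout
zpool status tank

# Per-vdev I/O statistics refreshed every second; sync writes land on the log
# vdev first, then get flushed to the data vdevs
zpool iostat -v tank 1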
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
GOTCHA! Understandable.

No way to do a RAM drive for ZIL, I take it?
 

Entz

Active Member
Apr 25, 2013
Canada Eh?
You can use a RAM drive for a SLOG, but it completely defeats the purpose of having one. The idea is to have a non-volatile, ultra-low-latency place to store a copy of all sync writes until they are written to the underlying disks (vdevs). If something happens to the machine (power outage, sudden reboot, etc.), this log will be "replayed" and the rest of the data written out to disk when the system recovers. Having it in RAM and having it go poof will cause data loss and possibly corruption. You can disable this functionality altogether if that is the goal (no need for a RAM drive).
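(For reference, the knob for turning that behaviour off is the per-dataset sync property; the dataset name below is just a placeholder, and the usual caveat applies that it trades crash safety for speed.)

# Stop honouring sync semantics for this dataset -- fast, but sync writes can be
# lost on power failure or crash; "standard" restores the default behaviour
zfs set sync=disabled tank/vmstore
zfs get sync tank/vmstore
zfs set sync=standard tank/vmstore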

This is where things like the ZeusRAM excelled: 24/7/365 low-latency writes that stayed consistent no matter how much you threw at it. As mentioned, it's not as necessary now with newer SSDs, especially when over-provisioned, but it's still an interesting device nonetheless. I would expect a PCIe / SAS3-type version to exist at some point. 2 GB/s of instantly ACKed sync writes, please :D
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
The RAM-drive software I've used can keep a copy on an SSD/HDD for reboots... ??? Is that not possible with how you configure the filesystem? (Sorry if it's one of those stupid questions ;))
 

Entz

Active Member
Apr 25, 2013
Canada Eh?
The software cannot copy the data out to your SSD/HDD if there is a power loss, the system resets (kernel panic, etc.), or it locks up.