1.6 Million IOPS < 2000$


T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Well that looks like a nightmare to keep stable :-X
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Devil's advocate here, but if you wanted high IOPS, look at the Sun Oracle Intel SSD DC P3605 1.6TB Flash Accelerator F160 NVMe Card 7090698 | eBay

It's "only" $640 each. Say you get three for a server: that's maybe 60W instead of whatever that thing draws, about 1.4M read IOPS and 165K write IOPS, 7.8 GB/s read and 4.8 GB/s write, and you'd have 4.8TB, so twice the capacity.
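As a back-of-envelope check on those figures, here is a quick sketch that divides the quoted three-card totals back down to implied per-card numbers (illustrative only, derived from the post, not official spec-sheet values):

```python
# Quoted aggregate figures for three P3605 cards (from the post above).
drives = 3
totals = {
    "read_iops": 1_400_000,
    "write_iops": 165_000,
    "read_gbps": 7.8,
    "write_gbps": 4.8,
    "capacity_tb": 4.8,
}

# Implied per-card numbers: totals divided evenly across the three drives.
per_drive = {k: v / drives for k, v in totals.items()}

print(per_drive["capacity_tb"])  # 1.6 TB per card, matching the listing
```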

I mean, cool, but I'd prefer cheaper NVMe these days. That's a lot of internal drives to fail on you.
 

PaulR

New Member
Nov 4, 2017
TL;DR: buy new(ish) NVMe instead.

I worked on this product at Sun/Oracle - in fact I validated all the performance testing for spec sheets.

It was a good product in its day, but it is too old and complex, even for $1900. BTW, it had a list price of USD $180k
at the time of FCS (First Customer Shipment).

It's an Andy Bechtolsheim design.

It's a SATA SSD JBOD with the same logic block stamped out four times. Each block is built around an LSI 36-port
expander, with 20 ports facing the SSDs and the remaining 16 ports host-facing on 4 x 4-lane SFF-8088 connectors.
80 SSDs (Fmods, or Flash modules, as we called them) is the max capacity - maybe the vendor is tossing in
16 loose Fmods. We borrowed the form factor from a JEDEC SO-DIMM, with the pinouts rearranged to carry SATA.

By zoning the SAS expanders, you could configure how each block's 20 SSDs were assigned to the host-facing SFF-8088 ports.
Because they were SATA Fmods, they were not dual-ported, so each could only be assigned to one host SFF-8088 port.
Common zonings were 4 x 5 Fmods or 2 x 10 Fmods, but mostly I expect they were left unzoned. Better hope any
unit you buy is unzoned, because SAS zoning on these requires Sun CAM (Common Array Manager) software.
We had a dual-port offering with dual-port-capable SAS Fmods planned, but it was canned due to low demand.
I ran the SAS Fmod version in the lab for many years as a load bank for SAS HBAs.

Each of the four blocks implements power-loss protection via a proprietary Energy Storage Module (ESM) loaded with supercaps,
cold-pluggable from the front panel and with a design life of 5 years IIRC. Where would you source the right caps to re-cap
these? Not impossible, but probably specialty procurement. And how good is your soldering? Because you can expect to need to re-cap the ESMs.

There are pins on the SO-DIMM Fmod that report the condition of the ESMs; if they fail, writes to those Fmods will
be slow.

The only plus I can think of is that the Fmods are Samsung SLC, and in those days we would only OEM 10 DWPD SLC,
heavily over-provisioned (32GB flash, 24GB user-accessible). Marvell provided the SATA controller (indeed the
whole Fmod FRU).

I dunno how many folks can use 2M IOPS, but I suspect most apps respond better to low latency. The F5100 was great
in its day, but advances in technology (NVMe) blow it away.

Buy NVMe instead. Intel rocks.

HTH
Paul

P.S. Just a very satisfied Intel Flash customer, no disclosure required. Former Sun/Oracle veteran employee.