5.4PB in 4U? Seagate shows off a 60TB SSD at FMS 2016

  • Thread starter Patrick Kennedy

DaveP

Member
Jul 30, 2016
5.4PB in 4U is pretty damn impressive. And since they put it in a JBOD, you'd probably never stress out an individual disk too much, so heat might not even be much of a concern.

@gigatexal, just because a single disk can be huge or fast or both, that doesn't negate the need for SANs. Storage manageability and flexibility is not something to just throw out. Those 90 disks can just as easily go into a SAN where they become immensely more useful.
 

Deslok

Well-Known Member
Jul 15, 2015
This isn't that far of a leap, actually. It's a 3.5-inch drive, and some quick back-of-napkin math shows roughly a 3.6x increase in usable volume; at the same density, 15.6 TB * 3.6 gets you to 56.16 TB. Samsung could probably release something in the same class with very little R&D if asked to.
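If anyone wants to check that napkin math, here's a rough sketch. The form-factor dimensions are just the nominal outer envelopes (my assumption; the internal volume actually available for NAND is smaller), and the 15.6 TB baseline is the 2.5" figure used above:

```python
# Rough back-of-napkin check: how much more volume a 3.5" package offers
# over a 2.5" 15 mm package, and what that implies at the same NAND density.
# Dimensions are nominal form-factor envelopes (assumption), in millimetres.

dims_3_5 = (146.0, 101.6, 26.1)   # 3.5" drive: length, width, height
dims_2_5 = (100.5, 69.85, 15.0)   # 2.5" 15 mm drive

def volume_mm3(dims):
    length, width, height = dims
    return length * width * height

ratio = volume_mm3(dims_3_5) / volume_mm3(dims_2_5)   # ~3.7x, near the 3.6x above
projected_tb = 15.6 * ratio                           # 2.5" baseline from the post
print(f"volume ratio ~{ratio:.1f}x -> ~{projected_tb:.0f} TB in a 3.5\" package")
```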
 

gigatexal

I'm here to learn
Nov 25, 2012
Portland, Oregon
DaveP said:
5.4PB in 4U is pretty damn impressive. And since they put it in a JBOD, you'd probably never stress out an individual disk too much, so heat might not even be much of a concern.

@gigatexal, just because a single disk can be huge or fast or both, that doesn't negate the need for SANs. Storage manageability and flexibility is not something to just throw out. Those 90 disks can just as easily go into a SAN where they become immensely more useful.

I know, I was being flippant, but I think with enough built-in redundancy, enough spares on hand, and some failover setup, like two of these in HA, you could avoid the vendor tie-in of a branded SAN.
 

PigLover

Moderator
Jan 26, 2011
How about a built-in ARM chip and 10Gbase-T instead of SAS-3? You'd have the mother of all stand-alone Ceph OSDs.

Seagate is already signed on to Kinetic, so it's not a totally wild idea. If they'd just get off their "closed" view of what it means to have an "open" standard.
 

DaveP

Member
Jul 30, 2016
gigatexal said:
I know, I was being flippant, but I think with enough built-in redundancy, enough spares on hand, and some failover setup, like two of these in HA, you could avoid the vendor tie-in of a branded SAN.

Absolutely. There are a lot of options out there for rolling your own solution instead of the exorbitantly priced branded SANs, and there's no reason the homegrown system can't be very, very fast too. FreeNAS even does OK for that job right now. I'm looking forward to tinkering with the final Server 2016 release when it comes out next month. The updates to Storage Spaces, the new block-based replication functions, shared-nothing storage clustering (Storage Spaces Direct), etc. all point to even Windows being a decent storage head-end in the near future.
 
Apr 13, 2016
Texas
Fab allocation of flash is going to be a fun thing to watch. In enterprise realms, we're already seeing significant backlog on some high-end NVMe devices, simply due to NAND allocations going to certain consumer companies that have a fruit as their name. I'm really glad Seagate went with SAS for this first, as it is a real tell on the target market where they think the solution makes sense. Dual-port SAS solutions are vastly more prevalent (and proven/hardened) than the nascent dual-port NVMe solutions.

I again wonder aloud: when is an endpoint device "too big"? :)
 

Scott Laird

Active Member
Aug 30, 2014
Entertaining, but they don't really make a ton of sense today, except maybe in really exceptional situations. Assuming a similar price per GB to Samsung's 16T drives (and no discounts), you'd be paying over $3M for a 90-drive 4U server. With 12 Gbps SAS, it'd take most of a day to rebuild each drive. You wouldn't really want a SAS expander with drives that can *each* fill 100% of the pipe under normal use, but you can't really fit 90 12G SAS drives into anything that looks like a normal server without oversubscribing something pretty dramatically. At these prices, the server is ~free compared to the drives, and the rack space is even cheaper, so there's no real reason not to spread things out more and limit yourself to 16-32 drives per server, unless you *really* need 5P in a single local filesystem.

It's cool that you *could* do it, though.

Rationally speaking, 60T is just too big for a 12G link--at 1100 MB/sec, that's over 15 hours. We're having problems with 3-hour hard drives today when it comes to RAID rebuilds and the like. At 550 MB/sec, SATA is about 2T/hour. That'd make SAS3 around 4T/hour. A 4x NVMe link would be 6-9T/hour. Find me a 16x PCIe 4.0 NVMe 60T drive, and we'll talk :).
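Roughing out those transfer times, assuming nominal sustained interface rates (the NVMe line below assumes a Gen3 x4 drive at roughly 2.5 GB/s, which is my guess, not a spec):

```python
# Time to read or rewrite a full 60 TB drive at various interface speeds.
# Speeds are approximate sustained rates (assumptions), decimal units.

CAPACITY_TB = 60
SPEEDS_MB_S = {
    "SATA 6G  (~550 MB/s)": 550,
    "SAS3 12G (~1100 MB/s)": 1100,
    "NVMe x4 Gen3 (~2500 MB/s)": 2500,
}

for name, mb_s in SPEEDS_MB_S.items():
    tb_per_hour = mb_s * 3600 / 1e6          # MB/s -> TB/hour
    hours = CAPACITY_TB / tb_per_hour
    print(f"{name}: ~{tb_per_hour:.1f} TB/hour, ~{hours:.1f} h for {CAPACITY_TB} TB")
```

Even on the fast end of that range you're still looking at hours per drive, which is the whole rebuild-time problem in a nutshell.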
 
Apr 13, 2016
Texas
EE Times note from today, this time on Micron's 3D XPoint demo, along with Toshiba NAND roadmap updates. (@Patrick, did you get to talk to the Micron folk?)

Micron demos 3D XPoint in drives | EE Times

I continue to be terrified by statements like:
Separately, Toshiba now has working chips for QLC, a version of flash supporting four bits/cell. The dense designs could enable 100 TByte SSDs and beyond, initially using a PCIe Gen 3 interface.

One such drive could replace a dozen hard drives while offering significantly lower power consumption and higher performance. Toshiba plans to shift the designs to PCIe Gen 4 in 2019.

Too much data in single points of failure (from a device perspective). At a solution level, we need to see many more Ceph-like offerings, where an individual node is a FRU (field-replaceable unit) and therefore disposable, but I don't see those solutions happening in a "simple for the masses to deploy" manner fast enough.
 

Patrick

Administrator
Staff member
Dec 21, 2010
FMS has terrible wifi and no press wifi, which is pushing me a bit behind on updates.

QLC was requested in Facebook's talk yesterday, so odds are we will see it sooner rather than later.
 

ATS

Member
Mar 9, 2015
PigLover said:
How about a built-in ARM chip and 10Gbase-T instead of SAS-3? You'd have the mother of all stand-alone Ceph OSDs.

Seagate is already signed on to Kinetic, so it's not a totally wild idea. If they'd just get off their "closed" view of what it means to have an "open" standard.

Hasn't that died already? I know that at least HGST appears to have completely dropped it. It seems interesting at first, until you consider that it likely draws significantly more electrical power, each node has extremely low CPU performance, and there are many high-performance, low-power CPUs available that can handle multiple drives easily (some burning less power than a 10-24 port 10G switch).
 

i386

Well-Known Member
Mar 18, 2016
Germany
Nimbus Data announced an enterprise 3.5" SSD with 100 TB capacity and a SATA 3 interface (100,000 IOPS @ 4K random read or write, 500 MB/s throughput in sequential workloads).
(It would take ~59 hours to fill this SSD @ 500 MB/s :eek:)
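Quick check of that fill-time figure, using decimal TB/MB (binary units land closer to the ~59 h quoted); the random-write rate below is derived from the quoted 100,000 4K IOPS:

```python
# Sequential vs. 4K random fill time for a 100 TB SATA SSD, decimal units.

capacity_bytes = 100e12
seq_bytes_s = 500e6                    # 500 MB/s sequential (quoted)
rand_bytes_s = 100_000 * 4096          # ~410 MB/s at 100K 4K IOPS (quoted)

print(f"sequential fill: ~{capacity_bytes / seq_bytes_s / 3600:.0f} h")   # ~56 h
print(f"4K random fill:  ~{capacity_bytes / rand_bytes_s / 3600:.0f} h")  # ~68 h
```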
 

Evan

Well-Known Member
Jan 6, 2016
At those capacities I can't see anything other than NVMe making any sense.
Cool just the same, 60TB @ 15 watts is very impressive!

(And for an enterprise, spending millions on multi-petabyte all-flash SANs is now the norm, I think, so I don't see SANs going away for a lot of things just yet.)
 

niekbergboer

Active Member
Jun 21, 2016
Switzerland
I'd expect this to be an order of magnitude more expensive than that; given that this is "Enterprise", and looking at the price per TB at that level, I'd expect this to be a couple of hundred grand.
 

Evan

Well-Known Member
Jan 6, 2016
I think $50k is not that far off the mark at release time, actually; that's close to $1 per GB. Look at a 4TB enterprise SSD costing $2,000 or so, meaning 50c per GB.
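Spelling that arithmetic out (the per-GB prices are just the rough figures from this thread, not quotes), $50k for 60 TB works out to about $0.83/GB, sitting between today's ~50c/GB enterprise pricing and the $1/GB mark:

```python
# Rough cost of a 60 TB drive at the per-GB price points mentioned above.

capacity_gb = 60_000
for usd_per_gb in (0.50, 0.83, 1.00):
    print(f"${usd_per_gb:.2f}/GB -> ${usd_per_gb * capacity_gb:,.0f} per drive")
```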