EU Fusion-IO 3.2TB - A condition, 20+ available - £184.20.


LukeP

Member
Feb 12, 2017
Silly question I know, but what are you doing with these? A 970 Pro or Optane is faster these days, right?
 

acquacow

Well-Known Member
Feb 15, 2017
Silly question I know, but what are you doing with these? A 970 Pro or Optane is faster these days, right?
I run about 30 VMs per ioDrive in my home environment, where I have OpenStack and OpenShift labs spun up for test installs/etc. Plus my normal utility VMs running IDM/Satellite/Active Directory and RHEL/CentOS/Windows utility VMs.

I have a 3-node hyper-converged RHV/oVirt cluster with Gluster running underneath, all on ioDrives.

I'd destroy a cheaper SATA SSD or overheat an M.2 SSD really quickly with some of these automated builds.
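For anyone curious what that kind of automated-build churn looks like as an I/O pattern, below is a minimal sketch (mine, not acquacow's actual tooling) of a sustained random-write load in Python. The target path, block size, and duration are made-up placeholders, so point it at a scratch file on a device you don't care about.

Code:
import os, random, time

# Hypothetical scratch file on the device under test - adjust before running.
TARGET = "/mnt/iodrive/scratch.bin"
FILE_SIZE = 8 * 1024**3      # 8 GiB working set
BLOCK_SIZE = 64 * 1024       # 64 KiB random writes, roughly VM-image churn
DURATION = 60                # seconds

buf = os.urandom(BLOCK_SIZE)
written = 0
deadline = time.time() + DURATION

# Plain buffered I/O plus a final fsync is enough to illustrate sustained
# write pressure; a real benchmark would use O_DIRECT or a tool like fio.
with open(TARGET, "wb") as f:
    f.truncate(FILE_SIZE)
    while time.time() < deadline:
        f.seek(random.randrange(0, FILE_SIZE - BLOCK_SIZE, BLOCK_SIZE))
        f.write(buf)
        written += BLOCK_SIZE
    f.flush()
    os.fsync(f.fileno())

print(f"wrote {written / 1024**3:.1f} GiB in {DURATION}s "
      f"(~{written / DURATION / 1024**2:.0f} MiB/s sustained)")

Hours of that is exactly where a consumer drive's SLC cache and thermal limits start to show, which is the point being made above.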
 

Markus

Member
Oct 25, 2015
I run about 30 VMs per ioDrive in my home environment, where I have OpenStack and OpenShift labs spun up for test installs/etc. Plus my normal utility VMs running IDM/Satellite/Active Directory and RHEL/CentOS/Windows utility VMs.

I have a 3-node hyper-converged RHV/oVirt cluster with Gluster running underneath, all on ioDrives.

I'd destroy a cheaper SATA SSD or overheat an M.2 SSD really quickly with some of these automated builds.
So a single disk with XFS and then GlusterFS over those 3?
 

jaysa

New Member
May 25, 2018
Silly question I know, but what are you doing with these? A 970 Pro or Optane is faster these days, right?
Am using two of these puppies for prototyping in AI and model building, etc.
Two setups rolled out into operations so far.
PHBs won't invest in R&D and need convincing, so this (and the power bill) goes on at home ...

Because the suites are mostly open source, the I/O is not efficient.
An Optane was faster, but the datasets are now too big.
Mine run with a 120mm fan at 7V pushing air along the slots - keeps them cool enough under load.
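If you want numbers rather than guesswork on whether a 7V fan is enough, something like the generic sysfs reader below will print whatever temperature sensors the kernel exposes through hwmon. This is not a Fusion-io tool, and whether an ioDrive registers an hwmon sensor at all depends on the driver; the vendor's own fio-status utility is the authoritative way to read the card's sensors.

Code:
import glob, os

# Walk the standard Linux hwmon sysfs tree and print every temperature input.
# Values in tempN_input files are millidegrees Celsius.
for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
    try:
        with open(os.path.join(hwmon, "name")) as f:
            name = f.read().strip()
    except OSError:
        name = os.path.basename(hwmon)
    for temp in sorted(glob.glob(os.path.join(hwmon, "temp*_input"))):
        with open(temp) as f:
            millideg = int(f.read().strip())
        print(f"{name} {os.path.basename(temp)}: {millideg / 1000:.1f} C")

Run it once at idle and once mid-build and you'll see quickly whether the 120mm fan is keeping up.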
 

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
Silly question I know, but what are you doing with these? A 970 Pro or Optane is faster these days, right?
For things like games (LMAO saying this on STH) or warm storage, these are awesome.

If I didn't already have two SM863s and a 1TB 860 Evo, I would have bought one of those ioDrives.
 

acquacow

Well-Known Member
Feb 15, 2017
Are these read or write cache drives?
I mean, if you have software that can use them as a cache, you can use them as either.

By default they are basically just an SSD, so ideally you'd use them as a storage device.

We did have software we used to sell/give out called directCache that let you use ioDrives as a cache device for various filesystems in Linux and Windows, but it died a long time ago.
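directCache itself is long gone, but the idea is easy to picture: park hot blocks on the fast device and fall through to the slow one on a miss. Below is a toy read-through cache in Python, purely an illustration of the concept rather than anything resembling directCache's actual design; on Linux today you'd reach for dm-cache or bcache instead of hand-rolling this.

Code:
from collections import OrderedDict

class ReadThroughCache:
    """Toy block cache: an in-memory LRU stands in for the fast device;
    misses fall through to the slow backing file."""

    def __init__(self, backing_path, block_size=4096, max_blocks=1024):
        self.backing = open(backing_path, "rb")
        self.block_size = block_size
        self.max_blocks = max_blocks
        self.blocks = OrderedDict()          # LRU order: block number -> bytes
        self.hits = self.misses = 0

    def read_block(self, blkno):
        if blkno in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(blkno)   # mark as most recently used
            return self.blocks[blkno]
        self.misses += 1
        self.backing.seek(blkno * self.block_size)
        data = self.backing.read(self.block_size)
        self.blocks[blkno] = data
        if len(self.blocks) > self.max_blocks:
            self.blocks.popitem(last=False)  # evict least recently used
        return data

Writes are the part a real product has to get right (write-through vs. write-back, crash consistency), and a toy like this conveniently ignores them.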
 

josh

Active Member
Oct 21, 2013
I mean, if you have software that can use them as a cache, you can use them as either.

By default they are basically just an SSD, so ideally you'd use them as a storage device.

We did have software we used to sell/give out called directCache that let you use ioDrives as a cache device for various filesystems in Linux and Windows, but it died a long time ago.
Ah, I just thought the reason for needing the additional bandwidth from PCIe was so that it could be a cache drive. It never crossed my mind to use it as storage, since you only have so many PCIe slots per mobo.
 

acquacow

Well-Known Member
Feb 15, 2017
Ah, I just thought the reason for needing the additional bandwidth from PCIe was so that it could be a cache drive. It never crossed my mind to use it as storage, since you only have so many PCIe slots per mobo.
Oh, if you needed more slots, we'd usually use an expansion chassis and we could put 200TB in a 3U external enclosure that does ~100GB/sec. You could easily connect multiple of these to a single host as well if you wanted, or 4 different hosts to one unit and split up the cards internally between each host.

[images: PCIe expansion chassis]

And then non-Fusion-io, at SanDisk, we had this thing called InfiniFlash that was SAS-connected JBOD flash: 64x8TB for 1/2 PB of storage in 3U.

[images: InfiniFlash unit]

Empty chassis:

[image]

Makes a 45Drives box look a bit wimpy =)

-- Dave
 

josh

Active Member
Oct 21, 2013
Oh, if you needed more slots, we'd usually use an expansion chassis and we could put 200TB in a 3U external enclosure that does ~100GB/sec. You could easily connect multiple of these to a single host as well if you wanted, or 4 different hosts to one unit and split up the cards internally between each host.

And then non-Fusion-io, at SanDisk, we had this thing called InfiniFlash that was SAS-connected JBOD flash: 64x8TB for 1/2 PB of storage in 3U.

Makes a 45Drives box look a bit wimpy =)

-- Dave
I discovered a whole new world today and now I want one :eek:
Honestly though, what are you using so much SSD space for?
 

acquacow

Well-Known Member
Feb 15, 2017
I discovered a whole new world today and now I want one :eek:
Honestly though, what are you using so much SSD space for?
Well, we'd usually drop huge databases onto them, or use them as storage for enterprise virtualization. They make great data recorders for sensor networks and also make great Ceph/GPFS cluster storage.

What are these external enclosures? Do you have any links for them?
One Stop Systems makes the external PCIe enclosures.
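For the sensor-network data-recorder use case mentioned above, the workload is basically append-only writes with periodic durability points, which is a pattern flash handles very well. A minimal sketch of that pattern follows; the path, fields, and batch size are placeholders, not anything from a real deployment.

Code:
import json, os, random, time

LOG_PATH = "/mnt/iodrive/sensors.jsonl"   # placeholder path on the fast device
FLUSH_EVERY = 100                         # fsync in batches, not per sample

def fake_sample(sensor_id):
    # Stand-in for a real sensor read.
    return {"ts": time.time(), "sensor": sensor_id, "value": random.gauss(20.0, 2.0)}

with open(LOG_PATH, "a") as log:
    for i in range(1000):
        log.write(json.dumps(fake_sample(i % 8)) + "\n")
        if i % FLUSH_EVERY == 0:
            log.flush()
            os.fsync(log.fileno())        # durability point; tune batch size to taste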