> You'll be 6ft under before you wear one of these out at home...
Mine came with 89.66% of its endurance remaining. It'll end up in the trash long before I burn through the rest of its PBW!
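For a sense of what that 89.66% means in practice, here's a rough back-of-the-envelope sketch. The rated endurance and the daily write volume below are made-up home-lab numbers, not specs for any particular card:

```python
# Rough endurance estimate for a used ioDrive.
# NOTE: rated_endurance_pbw and daily_writes_tb are assumptions for illustration,
# not figures for any specific model.
rated_endurance_pbw = 17.0      # hypothetical rated petabytes-written for the card
percent_remaining = 89.66       # endurance remaining as reported by the drive
daily_writes_tb = 1.0           # assumed average home-lab write volume per day (TB)

remaining_pb = rated_endurance_pbw * (percent_remaining / 100.0)
remaining_tb = remaining_pb * 1000.0
years_left = remaining_tb / daily_writes_tb / 365.0

print(f"Remaining endurance: {remaining_pb:.2f} PB written")
print(f"At {daily_writes_tb} TB/day, that's roughly {years_left:.0f} years of writes")
```

Even with a fairly pessimistic rated endurance, a home workload takes decades to chew through what's left.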
> silly question i know but what are you doing with these? a 970 pro or optane is faster these days right?
I run about 30 VMs per ioDrive in my home environment, where I have OpenStack and OpenShift labs spun up for test installs, etc., plus my normal utility VMs running IdM/Satellite/Active Directory and RHEL/CentOS/Windows.
> I run about 30 VMs per ioDrive in my home environment, where I have OpenStack and OpenShift labs spun up for test installs, etc., plus my normal utility VMs running IdM/Satellite/Active Directory and RHEL/CentOS/Windows.
So single disk with XFS and then GlusterFS over those 3?
I have a 3-node hyper-converged RHV/oVirt cluster with Gluster running underneath, all on ioDrives. I'd destroy a cheaper SATA SSD, or overheat an M.2 SSD, real quickly with some of these automated builds.
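For anyone curious how that stack goes together, here's a minimal sketch of the single-XFS-brick-per-node, replica-3 Gluster layout described above. The hostnames, device path, and volume name are placeholders, and it assumes the three nodes are already probed into a trusted pool:

```python
#!/usr/bin/env python3
"""Sketch: one XFS filesystem per ioDrive, then a replica-3 GlusterFS
volume spanning the three nodes. All names below are placeholders."""
import subprocess

NODES = ["node1", "node2", "node3"]   # assumed hostnames
DEVICE = "/dev/fioa"                  # ioDrive block device (first card)
BRICK = "/bricks/vmstore"             # arbitrary brick mount point
VOLUME = "vmstore"                    # arbitrary volume name


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def prepare_brick():
    """Run on each node: format the card with XFS and mount it."""
    run(["mkfs.xfs", "-f", DEVICE])
    run(["mkdir", "-p", BRICK])
    run(["mount", DEVICE, BRICK])


def create_volume():
    """Run on one node (peers already probed with `gluster peer probe`)."""
    bricks = [f"{node}:{BRICK}/brick" for node in NODES]
    run(["gluster", "volume", "create", VOLUME, "replica", "3", *bricks])
    run(["gluster", "volume", "start", VOLUME])


if __name__ == "__main__":
    prepare_brick()
    create_volume()
```

In practice you'd probably drive this with Ansible or just run the commands by hand, but the ordering is the same: format and mount the bricks on every node, then create and start the volume from any one of them.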
> silly question i know but what are you doing with these? a 970 pro or optane is faster these days right?
I'm using two of these puppies for prototyping in AI and model building, etc.
> silly question i know but what are you doing with these? a 970 pro or optane is faster these days right?
For things like games (LMAO saying this on STH) or warm storage these are awesome.
> Are these read or write cache drives?
I mean, if you have software that can use them as a cache, you can use them as either.
> I mean, if you have software that can use them as a cache, you can use them as either.
Ah, I just thought the reason for needing the additional bandwidth from PCIe was so that it could be a cache drive. It never crossed my mind to use it as storage, since you only have so many PCIe slots per mobo.
> Ah, I just thought the reason for needing the additional bandwidth from PCIe was so that it could be a cache drive. It never crossed my mind to use it as storage, since you only have so many PCIe slots per mobo.
By default they are basically just an SSD, so ideally you'd use them as a storage device.
We did have software we used to sell/give away called directCache that let you use ioDrives as a cache device for various filesystems in Linux and Windows, but it died a long time ago.
Oh, and if you needed more slots, we'd usually use an expansion chassis: we could put 200TB in a 3U external enclosure that does ~100GB/sec. You could easily connect multiple of these to a single host if you wanted, or connect four different hosts to one unit and split the cards internally between them.
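To illustrate the earlier read-vs-write-cache question: whether a fast device acts as a read cache, a write-through cache, or a write-back cache is a policy in the caching software, not a property of the card. Here's a toy sketch of the idea (it has nothing to do with directCache's actual internals; the class and method names are invented):

```python
# Toy cache in front of a slow store. The "fast device" and "slow device"
# are stand-in dicts; the point is the policy, not the storage.
class CachedStore:
    def __init__(self, backing: dict, write_back: bool = False):
        self.backing = backing      # stand-in for the slow device
        self.cache = {}             # stand-in for the fast ioDrive
        self.dirty = set()          # blocks not yet flushed (write-back only)
        self.write_back = write_back

    def read(self, key):
        if key not in self.cache:          # miss: fetch from the slow store
            self.cache[key] = self.backing[key]
        return self.cache[key]             # hit: served from the fast device

    def write(self, key, value):
        self.cache[key] = value
        if self.write_back:
            self.dirty.add(key)            # defer the slow write
        else:
            self.backing[key] = value      # write-through: hit both at once

    def flush(self):
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()


disk = {"block0": b"old"}
cache = CachedStore(disk, write_back=True)
cache.write("block0", b"new")
print(disk["block0"])   # still b'old' until flush()
cache.flush()
print(disk["block0"])   # b'new'
```

flush() is where write-back gets both its speed and its risk: writes acknowledge fast, but anything still dirty is lost if the fast device dies before it flushes.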
> Oh, and if you needed more slots, we'd usually use an expansion chassis: we could put 200TB in a 3U external enclosure that does ~100GB/sec. You could easily connect multiple of these to a single host if you wanted, or connect four different hosts to one unit and split the cards internally between them.
I discovered a whole new world today and now I want one.
[photos]
And then non-Fusion-io: at SanDisk we had this thing called InfiniFlash, which was SAS-connected JBOD flash. 64 x 8TB for 1/2PB of storage in 3U.
[photos]
Empty chassis:
[photo]
Makes a 45-drive box look a bit wimpy =)
-- Dave
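Quick density math on that InfiniFlash box, using just the figures quoted above:

```python
# Density check from the stated configuration: 64 drives x 8TB in 3U.
drives, tb_per_drive, rack_units = 64, 8, 3

total_tb = drives * tb_per_drive
print(f"Total: {total_tb} TB ({total_tb / 1000:.1f} PB)")           # 512 TB ~= 0.5 PB
print(f"Density: ~{total_tb / rack_units:.0f} TB per rack unit")    # ~171 TB/U
```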
> I discovered a whole new world today and now I want one.
Well, we'd usually drop huge databases onto them, or use them as storage for enterprise virtualization. They make great data recorders for sensor networks and also make great Ceph/GPFS cluster storage.
Honestly though, what are you using so much SSD space for?
> what are these external enclosures? do you have any links for them?
One Stop Systems makes the external PCIe enclosures.