EU UK Cheap Fusion-io £150 3.2TB and £400 6.4TB PCI-e SSDs

Fairlight

New Member
Oct 9, 2013
21
3
3
Hmm, I came here just to browse the threads, as this place is such a gold mine of a resource (I don't post much), and then I see this thread. I'm seriously tempted by one of the 6.4TB cards, but I was hoping to drop it into an R820 and to be honest I have no idea if it would fit. I'll have to check, but I just wanted to thank you guys for such outstanding threads on this site.
 

Fairlight

New Member
Oct 9, 2013
21
3
3
Thanks - I'm big time tempted to use one of these as a VMware datastore. I notice only the B grades are left now.
 

YardBouncer

always yield to the hands-on imperative
Jul 13, 2019
50
28
18
UK
@acquacow is a human being of the first god damn water! He used to work for Fusion-io and has helped us all get them going.

Unless your R820 is already fully loaded, it'll fit for sure. From the R820 manual:
7 PCIe slots:
• Two x16 half-length, full-height
• One x8 half-length, full-height
• Three x8 half-length, half-height
• One x8 half-length, full-height for the RAID controller

only the B grades are left now.
They've been selling like shite off a shovel since this thread; wish I was on commission!


Oh well, it seems to be stable now, I tried different Ryzen Timing Calculator timings.
Phew. Zero parts fixes are always the best ones.
I was keen on AMD from the Athlon XP days through to Opterons till the i7 Xeons came out (Westmere?). Been Intel/Nvidia ever since.
The only company I actively hate is Seagate, whom I loathe with a cold passion. Unreliable crab lice that they are.
It's possible I'd be using an AMD chip now if the Precision or Z series used them.
Put my own machines together for a couple of decades till I found Xeon + ECC workstations affordable. No bluescreens EVER.
Not as much fun but these days you don't have to squeeze every 0.1% out of a system, even SFF desktops are fast now.
 
  • Like
Reactions: Fairlight

Fairlight

New Member
Oct 9, 2013
21
3
3
You'd have made a small fortune for sure! Thanks for posting it up!

Great - I should be good to go then, as other than the RAID controller there is nothing in it; it's a fairly "new" system (the 16SFF bay model).

Oh wow, I didn't know @acquacow used to work for them, that's great. To be honest I've used one of these cards before, quite a while ago, for a SQL Server deployment, and the thing was seriously rapid - I was really impressed with it. I had forgotten all about them until I saw this thread.

I'll let you know how I get on.

Cheers guys
 
  • Like
Reactions: YardBouncer

YardBouncer

always yield to the hands-on imperative
Jul 13, 2019
50
28
18
UK
Yeah, @acquacow is a living leg end. Knows everything about these cards and is very helpful.

If you only have the RAID card in you've the choice of 3 slots - luxury!

R820 looks very nice, learned something new from looking at the specs too.
I always thought that quad CPU boxes had to use the E7-xxxx Xeons, but the R820 data sheet says E5-46xx.

Got no real experience with quads, most of mine have been dual.
I had a quad Opteron box years ago but swapped it due to the noise & power consumption.
Plus most of the demands I put on CPUs are single threaded so fewer cores that run faster suit me.

The computational geometry kernel at the core of, I think, every solid modelling CAD system was written in the late 1970s and is inherently serial.
It's an incredible piece of work.
I think it was a German team that did it; now it's licensed out to all the different solid modelling software makers.
And the poor Germans are still recovering. What a headcracker that must have been! I'm in awe of them.

Says a lot that nobody else managed to make their own in all this time.
I read that there is an attempt to make a multithreaded version happening soon but I'm not holding my breath.
I think some problems are serial by nature; sometimes one thing HAS to follow another.
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
Dual-socket boxes are faster than quad-socket anyway for most workloads because of the added NUMA overhead. We used to run circles around Oracle and MS SQL setups on 4- and 8-socket HP DL580s and 980s with just a standard DL380.

You don't need a lot of CPU to do database I/O.
 
  • Like
Reactions: YardBouncer

alex_stief

Well-Known Member
May 31, 2016
884
312
63
38
I always thought that quad CPU boxes had to use the E7-xxxx Xeons but the R820 data said E5-46xx
IIRC the Xeon E5-4xxx lineup was quad-socket capable, but skimped on the QPI links. They only got 2 instead of 3, so there was no way to directly connect every CPU to every other, leading to an additional hop to the furthest NUMA node.
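The extra hop is easy to see if you sketch the topology: 2 QPI links per socket can only form a ring of four, while 3 links allow a full mesh. A quick, purely illustrative Python sketch (BFS over both topologies):

```python
from collections import deque

def hops(adjacency, src, dst):
    """Breadth-first search: minimum number of QPI hops from src to dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adjacency[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))

# E5-46xx style: 2 QPI links per socket -> 4 sockets in a ring
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
# E7 style: 3 QPI links per socket -> full mesh
mesh = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}

print(max(hops(ring, 0, d) for d in range(4)))  # 2 hops to the far corner
print(max(hops(mesh, 0, d) for d in range(4)))  # 1 hop to every other socket
```

So on the ring, memory on the diagonally opposite socket is always two hops away; on the mesh, everything is one hop.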
 
  • Like
Reactions: YardBouncer

YardBouncer

always yield to the hands-on imperative
Jul 13, 2019
50
28
18
UK
IIRC the Xeon E5-4xxx lineup was quad-socket capable, but skimped on the QPI links. They only got 2 instead of 3, so there was no way to directly connect every CPU to every other, leading to an additional hop to the furthest NUMA node.
That makes complete sense. A nasty AND cunning hack. I know E7s cost a lot more, so if your application can stand the extra latency it's a compromise worth making.

I wonder if the salesmen pointed that out or just said 'here's some extra cheap quad boxes we made cos we're clever'.

Dual socket boxes are faster than quad socket anyways for most work loads because of the added NUMA overhead. We used to run circles around Oracle and MS SQL setups on 4 and 8 socket HP DL580s and 980s with just a standard DL380.
Interesting. So the motive for scaling up physical sockets would be either per-box licensing (which I know is not common any more) or more VMs per box? My understanding is that DL380s and other dual-socket models far outsell the bigger ones. With the way core counts and RAM stick sizes have been going, I can see 4- and 8-way boxes being relegated to quite niche applications.

Or homelab e-peen? I know I've always wanted to play with an 8-socket machine.
Not really sure what I'd use it for, but that's hardly the point :)
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
As the title suggests.
Grade B indicates 70-99% endurance remaining.

My 3.2TB example arrived very quickly and was excellently packed. Firmware was up to date.
Excellent physical condition, looked new.
Both types are OEM SanDisk branded, i.e. no custom firmware etc.
Not sure if this counts as a bargain but I was impressed with it. No affiliation BTW.

Details on my card can be seen here at post 256:
https://forums.servethehome.com/ind...-iodrive-2-1-2tb-reference-page.11287/page-13

£150 (£180 inc VAT) SanDisk Fusion-io 3.2TB MLC ioScale PCI-e SSD F11-002-3T20-CS-0001 - Grade B

I bought one a couple of days ago. 76.4% endurance remaining, i.e. ~15PB left, and 100% reserves intact.
100% endurance examples are £200 (£240 inc VAT)

Fusion-io 3.2TB MLC ioScale Accelerator Card - Grade B
  • PCIe Gen 2, x4 electrical / x8 mechanical interface
  • 3.2TB Storage
  • 1.2GB/s write speed
  • 1.4GB/s read speed
  • Full Size Bracket
  • Part Number F11-002-3T20-CS-0001
==========================================

£400 (£480 inc VAT) SanDisk Fusion-io 6.4TB PCIe SSD SDFACCMOS-6T40-SF1 - Grade B
100% endurance examples are £500 (£600 inc VAT)

SanDisk Fusion-io 6.4TB PCIe Application Accelerator - Grade B
  • PCIe Gen 2, x8 Interface
  • 6.4TB Storage
  • 2.1GB/s write speed
  • 2.7GB/s read speed
  • Full Height, Full Length
  • SanDisk PN: SDFACCMOS-6T40-SF1
==========================================
Damnit. I think I’m in for one.
 

YardBouncer

always yield to the hands-on imperative
Jul 13, 2019
50
28
18
UK
They are well worth it, superb cards.
Do be aware they have upped the price since I first posted that.
 

YardBouncer

always yield to the hands-on imperative
Jul 13, 2019
50
28
18
UK
I think the card would be amazing for my database testing.
Would be a perfect use for one. Most of us seem to be using them as PC/workstation general storage as if they were just large cheap SSDs.
Like using a tank that can go 200mph in order to go to the local shop for a pint of milk.

Drivers are needed; in fact they are a large part of the card's special sauce.
The cards can't be used as a boot device.
There are several links to the drivers upthread; you have to register with WD to get them (Fusion-io was bought by SanDisk, which was then bought by WD).

The Windows drivers seem to work with most versions of Windows; @acquacow, who used to work for them, knows the exact details.
I'm using Server 2008 R2 drivers with Win7 x64 and everything works perfectly.

Have no personal experience with Linux; I imagine it would depend on what distro & version you are using?
I think someone was using one with CentOS somewhere in this thread.
Give the forum search a go, there are a good few other threads where people discuss Fusion-io cards.

Get one, you won't regret it.

To save you time I've attached a copy of the Linux drivers manual.
I'm unsure if it's the most recent one; it's just what I found in a zip of stuff I grabbed from WD.
 

Attachments

Fairlight

New Member
Oct 9, 2013
21
3
3
You guys are right on the quad-socket performance viewpoint, but I just wanted one because I like faffing about with tech like this that's quite niche. I also wanted to consolidate 3x DL380p G8s I was previously running, and the extra cores help.

I'm currently running ESXi 6.7 on it, which is virtually fully automated now via GitLab CI/CD --> Terraform, Ansible, Packer, Docker. Works really well.

I have more compute power than I could ever use, so it's only natural I should try to attain the same with IOPS and network bandwidth too (though I am checking to see if I can use one of these cards as a straight datastore, not cache). Like YardBouncer mentioned with the tank - that's my bracket too.
 
Last edited:
  • Like
Reactions: YardBouncer

YardBouncer

always yield to the hands-on imperative
Jul 13, 2019
50
28
18
UK
so its only natural
Absolutely. Man after my own heart.

I like faffing about with tech
Me too. It's the only way to learn properly. I learned electronics the same way: constant fiddling and breaking stuff, then repairing it again :)

attain the same with IOPs and network bandwidth
IOPS are the easy bit. It's buckin' bandwidth that's torturing me these days. 10GbE adapters and switches are still far too expensive for me.
I tried making a NAS a few years ago but gave up on it in favour of local storage because of speed. More than one drive saturates 1GbE, which is very annoying.
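The saturation point is easy to put numbers on. Rough Python arithmetic, where the ~117MB/s usable ceiling and ~180MB/s per-drive figure are my own assumptions for a modern 3.5" drive:

```python
# 1 GbE is 125 MB/s raw; ~117 MB/s is a realistic ceiling after
# Ethernet/TCP overhead. Both figures here are rough assumptions.
link_mb_s = 117
drive_mb_s = 180          # assumed sequential throughput of one modern 3.5" HDD

drives_to_saturate = link_mb_s / drive_mb_s
print(f"{drives_to_saturate:.2f} drives saturate 1GbE")   # less than one drive!

# 10 GbE moves the ceiling up tenfold:
print(f"{10 * link_mb_s / drive_mb_s:.1f} drives to saturate 10GbE")
```

In other words, a single modern spinner can already outrun gigabit on sequential transfers, which is exactly the NAS frustration above.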

Keep looking at Dell R710 or HP DL380 deals, but it's the same story - starved of bandwidth.
The RAID card in my workstation is an HP P812 with a built-in expander.
I keep looking at those MSA60 SAS enclosures (I'm limited to 3.5" drives for space and price reasons).
That deals with the speed issue but means having a loud 19" case close by.
I know SAS cables can span a decent distance but there's still a limit (20'?).

Big fat helium SAS drives inside my workstation are probably what I'll go with once I can afford 6x 12TB.
Then I can make a low-power, high-capacity NAS for backups, where speed is less of an issue.
RAID-Z and all those other funky modern RAID types hate running in VMs, so I loop back to thinking about a Dell R710 or HP DL380 again.

All that stuff is on the long finger till I scrape together £1k for internal He drives though.
They'll have to be mirrored - I can't imagine the resilvering time for 12TB drives in RAID6...
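The resilver worry is justified. A rough Python estimate, assuming a best-case ~150MB/s sustained rebuild rate (my assumption; real arrays under load rarely hold it):

```python
# Time to rebuild one 12 TB drive at an assumed sustained rebuild rate.
capacity_tb = 12
rebuild_mb_s = 150        # optimistic; contention usually drags this down

seconds = capacity_tb * 1e6 / rebuild_mb_s   # TB -> MB, then divide by rate
hours = seconds / 3600
print(f"~{hours:.0f} hours best case")       # that's the floor, not the norm
```

Call it a day of degraded-array anxiety per drive in the best case; mirrors rebuild by straight copy, which is one reason to prefer them here.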

It's a disease, I tells ya. Glad to hear I'm not the only one 'suffering'.
 
  • Like
Reactions: Fairlight

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
I grabbed a Netgear XS708T and a few Aquantia 10GbE NICs last year. I have Fusion-io and NVMe stuff in every server, so my transfers are quite quick no matter what I'm doing.
 
  • Like
Reactions: YardBouncer

lukts30

New Member
Aug 4, 2018
6
1
3
Seems to be a known problem that at least Debian 10 / Proxmox 6, and probably kernels 5.0 and newer in general, do not work.

[TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox
"We have checked on your query with our engineering team related to ioScale 2 support for Debian 10 OS.

Unfortunately ioMemory at this stage in its life cycle has no plans to add support for Debian 10 “Buster” as it is considered a major OS update. Currently, we are only adding support for minor OS updates of OS’s already supported.

Kindly let me know if you have further query on this case."
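Given that cut-off, it's worth checking the running kernel before attempting a driver build. A small Python sketch; the (4, 19) ceiling is my assumption based on the reports above, not an official figure, so adjust it to whatever WD's release notes actually support:

```python
import platform

# Assumed ceiling from the Proxmox thread: 4.x kernels work, 5.0+ do not.
MAX_SUPPORTED = (4, 19)

def kernel_ok(release: str, ceiling=MAX_SUPPORTED) -> bool:
    """Return True if the kernel's major.minor is at or below the ceiling."""
    major, minor = (int(x) for x in release.split(".")[:2])
    return (major, minor) <= ceiling

# e.g. "4.19.0-6-amd64" -> ok, "5.4.0-100-generic" -> unsupported
running = platform.release()
print(running, "->", "ok" if kernel_ok(running) else "likely unsupported")
```

Nothing official, just a quick sanity gate before you sink an evening into a DKMS build that was never going to compile.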
 

YardBouncer

always yield to the hands-on imperative
Jul 13, 2019
50
28
18
UK
Nice work @acquacow.
That's the direction I'm headed; I'll probably get there by the time 100GbE is where everyone else lives :)
There will always be a bottleneck somewhere.

Seems to be a known problem
Thats a bummer.

Always liked the idea of Linux but never had the mental bandwidth to tackle it; too many other interests & projects.
I do use LinuxCNC for motion control with an FPGA card though.
It has a weird, properly real-time patched kernel, but luckily it's mostly set up; you 'only' have to configure the software & hardware side of it.
My interests have always been where the meat meets the metal.
'Machinery may move without warning' stickers make me happy.