Enterprise SSD "small deals"


Cruzader

Well-Known Member
Jan 1, 2021
953
944
93
15.36TB ruler format $500

(caution with seller)
I was tempted by their previous listing, but from a seller that fresh it feels a bit risky.
(Reusing the same pictures with the same serials when the first listing already sold just adds to that sense of risk for me.)

It did pique my interest in ruler format prices in general though.
A fair number of $600-750 15.36TB listings have been sitting for a long time and would probably accept some okay offers.
 

ca3y6

Well-Known Member
Apr 3, 2021
778
762
93
Same pictures for both listings. That's a big red flag, unless they were a big seller using stock photos, which doesn't seem to be the case (or unless the previous sale was cancelled, which is a possibility).
 
Last edited:

kapone

Well-Known Member
May 23, 2015
1,890
1,267
113
Well Cheap, Good, Fast, pick 1 or 2 at most.

So Cheap it is, Good I don't know, Fast probably not :p .
That is a Fusion IO SX350 card. It'll do >3GB/s easy. Granted, not as fast as the latest NVMe etc, but 6.4TB for $150 or less? I'm almost tempted to buy them.

p.s. They work great even in the latest Proxmox.
 
  • Like
Reactions: nexox

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
That is a Fusion IO SX350 card. It'll do >3GB/s easy. Granted, not as fast as the latest NVMe etc, but 6.4TB for $150 or less? I'm almost tempted to buy them.

p.s. They work great even in the latest Proxmox.
Well somebody bought them. Seller rejected my Offer of 90 USD / Piece :rolleyes: .
 

Markess

Well-Known Member
May 19, 2018
1,240
871
113
Northern California
Well somebody bought them. Seller rejected my Offer of 90 USD / Piece :rolleyes: .
I was tempted, but trying to do a little research before jumping.

I set up the home NAS with SSDs to keep noise down (a handful of 1.92TB Samsungs I got for $40 each when prices dipped). But it was so popular with the family… it's “full”. So I have to get something else.

Little cooling fans on a PCIe card shouldn’t be too bad, right?
 

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
I set up the home NAS with SSDs to keep noise down (a handful of 1.92TB Samsungs I got for $40 each when prices dipped). But it was so popular with the family… it's “full”. So I have to get something else.
Those are some incredible Prices :oops: .

Little cooling fans on a PCIe card shouldn’t be too bad, right?
If you have the Space for it and can give up the next PCIe Slot, sure. Or centrifugal Fan. Or Side Chassis fan depending on the Chassis Model.
 
  • Like
Reactions: Markess

Markess

Well-Known Member
May 19, 2018
1,240
871
113
Northern California
Those are some incredible Prices :oops: .


If you have the Space for it and can give up the next PCIe Slot, sure. Or centrifugal Fan. Or Side Chassis fan depending on the Chassis Model.
Price was good, but not "incredible" for when I bought them 2 years ago. Back then, $45-50 for single pieces seemed pretty common and I offered for 5.

Yeah, if I went the AIC route for expansion, I'd need to switch to ATX/mATX. Ports are maxed out on the current ITX based NAS, so no way to add capacity there unless I went fresh with larger SATA SSDs. I have a couple suitable chassis and boards I could repurpose from the homelab for a "new" NAS. So, I think keeping the current pool and adding additional storage is my best bet to save $$$. Just need to decide what to do :p
 

kapone

Well-Known Member
May 23, 2015
1,890
1,267
113
I'd need to switch to ATX/mATX.
Those are the exact reasons I'm redoing my SAN nodes. Needed more flash, but was out of slots. So...



10x pci-e 3.0 x8 slots...The chassis (Chenbro NR40700) will need a bit of surgery to fit this monster, but should give me enough I/O for years to come. The Chenbro already has ~1PB of spinning rust, but more flash is needed (who doesn't need more flash!).
 

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
Those are the exact reasons I'm redoing my SAN nodes. Needed more flash, but was out of slots. So...



10x pci-e 3.0 x8 slots...The chassis (Chenbro NR40700) will need a bit of surgery to fit this monster, but should give me enough I/O for years to come.
Those Motherboards (Sandy/Ivy Bridge Generation) are quite cheap at least:


Well, Flash is also SAS/SATA. I guess you WANTED (some of it at least) to be NVMe ;).

Of course we also need to remember that HBAs have limited PCIe Bandwidth, so if you are doing high-IO Stuff you'll need several HBAs in Parallel anyway, even though 1 HBA might be able to handle all the Flash Disks Connection-wise.

For Instance the LSI 9306-24i handles 24 Drives but can "only" do 64 Gbps in one Direction (128 Gbps in both Directions), so above ~10 SATA Drives (64 Gbps / 6 Gbps per Drive) or ~5 SAS Drives (12 Gbps each) you will bottleneck on Writes - although I'd argue that in real Life it might NOT be such a big Problem in the End.
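That napkin Math can be sketched quickly (a throwaway Script of mine; the 64 Gbps Figure is the nominal PCIe 3.0 x8 Rate, and the per-drive Numbers are the SATA III / SAS3 Link Rates, not measured Throughput):

```python
# Back-of-the-envelope HBA bottleneck check (assumed nominal link rates).
hba_gbps = 64    # PCIe 3.0 x8 uplink, one direction, ~64 Gbps nominal
sata_gbps = 6    # SATA III per-drive link rate
sas_gbps = 12    # SAS3 per-drive link rate

# How many drives can run at full link rate before the uplink saturates
print(hba_gbps // sata_gbps)  # 10 SATA drives
print(hba_gbps // sas_gbps)   # 5 SAS drives
```

Beyond those Counts the Drives still work, of course - they just share the Uplink.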

The Chenbro already has ~1PB of spinning rust, but more flash is needed (who doesn't need more flash!).
1PB of Spinning Rust :oops::oops::oops: ???

EDIT 1: That Chenbro Chassis seems to be super-expensive and difficult to find though (it's EOL apparently).

Officially it only supports 7xPCI Slots though, so I'm curious how you are going to make that work.

Even the Fractal Define 7 XL "only" has 9 PCI Slots (plus 3 transversal ones) and supports these "big" Motherboards, so I don't think it will work in this one either:
  • E-ATX
  • EE-ATX
  • SSI-CEB
  • SSI-EEB

EDIT 2: the Xeon E5 v3/v4 Model, the Supermicro X10DRX, has 11 PCIe Slots even :oops:

EDIT 3: also the Motherboard you mentioned and the Picture show 11 PCIe Slots. Why did you say 10x Slots, is one of them not working?
 
Last edited:

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
2,282
841
113
Those are the exact reasons I'm redoing my SAN nodes. Needed more flash, but was out of slots. So...



10x pci-e 3.0 x8 slots...The chassis (Chenbro NR40700) will need a bit of surgery to fit this monster, but should give me enough I/O for years to come. The Chenbro already has ~1PB of spinning rust, but more flash is needed (who doesn't need more flash!).
god that would be so rad to run LLMs on. get a shitton of 3090s, 512GB of DDR3, a pair of 2697V2 or 2687W v2's and go crazy
 
  • Like
Reactions: kapone

kapone

Well-Known Member
May 23, 2015
1,890
1,267
113
Well Flash is also SAS/SATA. I guess you WANTED (some at least) NVMe
True. Should have been more specific.

you'll need several HBAs in Parallel anyways
True. Hence the giant monster motherboard.

LSI 9306-24i for 24 Drives but can "only" do 64 gbps
I'm already using two Adaptec 8 series in the chassis for the spinning rust, one per expander/24 drives.

10 SATA Drives or ~ 5 SAS Drives you will bottleneck
Nah. That's fantasy stress-testing stuff. Real life... a single PCIe 3.0 x8 HBA is more than enough for 24x spinning rust, even the latest drives.
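Quick sanity check on that (a rough sketch; the per-drive and link figures are my assumed ballpark numbers, not benchmarks):

```python
# Rough check: aggregate HDD throughput vs a PCIe 3.0 x8 HBA (assumed figures).
pcie3_x8_gbs = 7.9   # ~usable GB/s for PCIe 3.0 x8 after protocol overhead
hdd_gbs = 0.28       # ~280 MB/s outer-track sequential for a current large HDD
drives = 24

aggregate = drives * hdd_gbs
print(f"{aggregate:.2f} GB/s needed, {pcie3_x8_gbs} GB/s available")
```

Even with every drive streaming sequentially at once (which real workloads basically never do), 24 HDDs land around 6.7 GB/s, under the link's ~7.9 GB/s.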

1PB of Spinning Rust
Yup. And that's replicated (well, not all, about half to 2/3) to the other SAN node. Both are identical.

EDIT 1: That Chenbro Chassis seems to be super-expensive and difficult to find though
True. Bought three of them years back. They couldn't give them away at the time. I think it was $200/ea shipped and came with a nice motherboard.

Officially it only supports 7xPCI Slots though, so I'm curious how you are going to make that work
I don't need all 11 slots to have "openings". The Fusion IO SX350, the Adaptec HBA etc are all internal only. They just need to be secure enough in the chassis. It's only the NICs etc that need the slot openings.

so I don't think it will work in this one either:


the Xeon E5 v3/v4 Model, the Supermicro X10DRX, has 11 PCIe Slots even
It does. And they're open ended even! But...that motherboard is like $500...this was $27...less than a lunch.

also the Motherboard you mentioned and the Picture shows 11 PCIe Slots. Why did you say 10x Slots, is one of them not working
No, sorry. It does have 11 slots, except one of them is PCIe 2.0 x4 from the PCH. I don't even count those slots. blah...
 
  • Like
Reactions: nexox

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
god that would be so rad to run LLMs on.
Probably better to get the X10DRX at that Point: 300-350 USD instead of 50 USD for the X9DRX, but potentially at least double the RAM and CPU Power. You'll need the GPUs anyways, and budget-wise the Increase in Motherboard Cost isn't that much when looking at the Total Cost of the System at that Point.

But ... I Googled for half an Hour and couldn't find any reasonable SSI-MEB or HPTX Case nowadays. There are a few Threads around, but they are about Cases that were built over a Decade ago and were quite a Niche Market (i.e. low Availability nowadays):
 

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
Nah. That's fantasy stress testing stuff. Real life...a single pcie 3.0 x8 HBA is more than enough for 24x spinning rust, even the latest ones.
I should have been more specific. 5 SAS SSDs or 10 SATA SSDs ;)

Definitely NOT spinning Rust though; for those I can see wiring 24 Drives to a single HBA without sacrificing Performance.

Yup. And that's replicated (well, not all, about half to 2/3) to the other SAN node. Both are identical.
Holy ****. They must have cost a Fortune!


True. Bought three of them years back. They couldn't give them away at the time. I think it was $200/ea shipped and came with a nice motherboard.
:oops::oops::oops::oops::oops:


I don't need all 11 slots to have "openings". The Fusion IO SX350, the Adaptec HBA etc are all internal only. They just need to be secure enough in the chassis. It's only the NICs etc that need the slot openings.
Fair enough, but is there "free Space" once there are no more exposed PCIe Slots?


o_O


It does. And they're open ended even! But...that motherboard is like $500...this was $27...less than a lunch.
Nah, 300-350 USD right now, probably down to 200-250 USD or so with some Negotiation.


No, sorry. It does have 11 slots except one of them is pcie-2.0 x4 from the PCH. I don't even look at these slots. blah...
:p

The Paradox is that Supermicro kept this bad Habit in the X10 E5 v3/v4 Generation as well: DMI x4 2.0, when even the X11 E3 v5/v6 Boards have DMI x4 3.0 :rolleyes: ...
 

kapone

Well-Known Member
May 23, 2015
1,890
1,267
113
Probably better the X10DRX at that Point, 300-350 USD instead of the 50 USD for the X9DRX, but potentially at least double the RAM and CPU Power. You'll need the GPUs anyways but I mean budget wise the increase in Motherboard Cost isn't that much when looking at the Total Cost of the System at that Point.

But ... I Googled for half an Hour but couldn't find any reasonable SSI-MEB or HPTX nowadays, there are a few Threads around, but it's about Cases that were built over a decade ago and were quite a Niche Market (i.e. low availability nowadays):
The Chenbro NR40700 motherboard tray is essentially a full-width (~17") 3U open chassis, since the power supplies sit "below" it, making the whole system 4U.



At one point, I had a stack of HGST SAS SSDs mounted next to the motherboard... :)
 
  • Like
Reactions: luckylinux