Enterprise SSD "small deals"


luckylinux

Well-Known Member
Mar 18, 2012
The Chenbro NR400700 motherboard tray is essentially a full width (~17") 3U open chassis, since the power supplies sit "below" it, making the whole system 4U.



At one point, I had a stack of HGST SAS SSDs mounted next to the motherboard... :)
I guess I would end up with Split Systems for HDD and NVMe.

But IF (do NOT tempt me :p ) I were to do something like that, I'd probably just put the Motherboard in an Open Frame on an Acrylic Plate or something, just to have the NVMe Drives via PCIe. A Chassis like the SuperChassis 213XAC-R1K05LP goes for 1200-1600 EUR, so nope o_O.

I don't want to hijack the Thread.

Nice Build though ;)
 

kapone

Well-Known Member
May 23, 2015
I'd probably just put the Motherboard in an Open Frame on an Acrylic Plate or something
The thing is that the 48x spinning rust will still need to be connected to it. And the Chenbro is compatible with full height cards. To split this up, it'd end up taking 8U of rack space per SAN node...
 

luckylinux

Well-Known Member
Mar 18, 2012
The thing is that the 48x spinning rust will still need to be connected to it. And the Chenbro is compatible with full height cards. To split this up, it'd end up taking 8U of rack space per SAN node...
I said **I** would, also because I don't have access to that cheap Chassis :p .

That being said, I definitely have more experience with NAS than SAN. The single Point of Failure of the Network (even with dual NICs, dual Switches + LAGG + Spanning Tree Protocol etc.) makes it highly impractical for storing VMs over iSCSI/NVMe-oF, unless it's part of a CEPH Cluster or similar, which typically requires more than 5-7 Nodes, preferably Symmetric / Balanced (no BIG.little o_O).
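To put rough numbers on why the node count matters, here's a back-of-the-envelope sketch (all figures below are assumptions, not real hardware):

```python
# Back-of-the-envelope Ceph sizing (assumed numbers, not a real cluster).
# With 3x replication, usable space is roughly raw/3, and enough free space
# must remain to re-replicate a failed node's data onto the survivors.

nodes = 5             # hypothetical node count
raw_per_node_tb = 20  # hypothetical raw capacity per node, in TB
replicas = 3

raw_total = nodes * raw_per_node_tb
usable = raw_total / replicas

# After one node fails, the remaining nodes must hold everything again,
# so keep the fill level below (nodes - 1) / nodes with some extra headroom.
safe_fill = (nodes - 1) / nodes - 0.10

print(f"Raw: {raw_total} TB, usable at 3x replication: {usable:.1f} TB")
print(f"Safe fill level to survive a one-node failure: {safe_fill:.0%}")
```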

I haven't even started with that (NVMe) so I probably have a few Concepts that are somewhat wrong.
 

kapone

Well-Known Member
May 23, 2015
1,881
1,264
113
Probably better to go with the X10DRX at that Point, 300-350 USD instead of the 50 USD for the X9DRX, but potentially at least double the RAM and CPU Power. You'll need the GPUs anyway, but budget-wise the increase in Motherboard Cost isn't that much when looking at the Total Cost of the System at that Point.

But ... I Googled for half an Hour and couldn't find any reasonable SSI-MEB or HPTX Chassis nowadays. There are a few Threads around, but they're about Cases that were built over a decade ago and were quite a Niche Market (i.e. low availability nowadays).
The other thing about the X10 series (which I don't like) is that it's still PCIe 3.0. If I'm gonna spend money to upgrade...I want better than that. My next upgrade will most likely be an Epyc-based system with (hopefully) PCIe 5.0 slots.

Until then...el-cheapo X9 series is it. :)
 

kapone

Well-Known Member
May 23, 2015
1,881
1,264
113
I said **I** would, also because I don't have access to that cheap Chassis :p .

That being said, I definitely have more experience with NAS than SAN. The single Point of Failure of the Network (even with dual NICs, dual Switches + LAGG + Spanning Tree Protocol etc.) makes it highly impractical for storing VMs over iSCSI/NVMe-oF, unless it's part of a CEPH Cluster or similar, which typically requires more than 5-7 Nodes, preferably Symmetric / Balanced (no BIG.little o_O).

I haven't even started with that (NVMe) so I probably have a few Concepts that are somewhat wrong.
It's really not. CEPH has its place, but an active-active SAN does make things fairly easy. I'm using Starwind SAN on these nodes (on Proxmox as the base/bare metal) and the rest of the Proxmox cluster is completely run over iSCSI (for boot and a few other things) and will run NVMe-oF shortly (which is why this upgrade is taking place).

But even with that, a cluster-aware file system is really not that critical in my case, because almost all compute in my application writes to the Postgres databases (and I'm moving to active-active multi-master for that as well). It's only the Postgres servers that need ultra-fast, low-latency disk access.
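For reference, the NVMe-oF side from an initiator node boils down to something like this; a rough sketch wrapping the stock nvme-cli commands, with a made-up address and NQN (Starwind's own tooling may handle this differently):

```python
# Rough sketch of attaching an NVMe-oF namespace from an initiator node,
# wrapping the standard nvme-cli commands. Address, port and NQN are made up.
import subprocess

TARGET_IP = "10.10.10.1"   # hypothetical SAN node address
TARGET_PORT = "4420"       # default NVMe-oF service port
TRANSPORT = "rdma"         # or "tcp" without RDMA-capable NICs

# Discover the subsystems the target exports...
subprocess.run(["nvme", "discover", "-t", TRANSPORT,
                "-a", TARGET_IP, "-s", TARGET_PORT], check=True)

# ...then connect to one of them (NQN taken from the discovery output).
subprocess.run(["nvme", "connect", "-t", TRANSPORT,
                "-n", "nqn.2014-08.org.example:storage1",  # hypothetical NQN
                "-a", TARGET_IP, "-s", TARGET_PORT], check=True)
```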
 

luckylinux

Well-Known Member
Mar 18, 2012
It's really not. CEPH has its place, but an active-active SAN does make things fairly easy. I'm using Starwind SAN on these nodes (on Proxmox as the base/bare metal) and the rest of the Proxmox cluster is completely run over iSCSI (for boot and a few other things) and will run NVMe-oF shortly (which is why this upgrade is taking place).

But even with that, a cluster-aware file system is really not that critical in my case, because almost all compute in my application writes to the Postgres databases (and I'm moving to active-active multi-master for that as well). It's only the Postgres servers that need ultra-fast, low-latency disk access.
Without a Cluster, if your SAN goes down, won't each and every VM on every cluster Node crash/panic since it won't be able to write any longer?

Are you using the Free Version?

Management via PowerShell :rolleyes: . It's true PowerShell can also be run on GNU/Linux, but I have a bad Feeling about trying that. I haven't used Windows at Home for several Years.
 

kapone

Well-Known Member
May 23, 2015
Without a Cluster, if your SAN goes down, won't each and every VM on every cluster Node crash/panic since it won't be able to write any longer?

Are you using the Free Version?

Management via PowerShell :rolleyes: . It's true PowerShell can also be run on GNU/Linux, but I have a bad Feeling about trying that. I haven't used Windows at Home for several Years.
That's why it's active-active... :) Two SAN nodes, real-time replication between them and iSCSI multipath. If one goes down...the cluster doesn't even hiccup.

Yes, I'm using the free version, but you're a bit behind the curve. The KVM version of Starwind SAN has no restrictions... It allows HA LUN creation via the web GUI... :) No Windows needed (although they do use Wine internally in the VM...which is ...well, meh.)
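If you want to see why one node dropping doesn't matter, check the path state on an initiator; a minimal sketch using the standard open-iscsi and multipath-tools commands:

```python
# Minimal sanity check on an initiator: each LUN should show one path per
# SAN node, so losing a node only drops one path and I/O keeps flowing.
import subprocess

# Active iSCSI sessions (expect one per SAN node portal).
subprocess.run(["iscsiadm", "-m", "session"], check=True)

# Multipath topology: every LUN should list two paths, both "active ready".
subprocess.run(["multipath", "-ll"], check=True)
```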
 

luckylinux

Well-Known Member
Mar 18, 2012
That's why it's active-active... :) Two SAN nodes, real-time replication between them and iSCSI multipath. If one goes down...the cluster doesn't even hiccup.

Yes, I'm using the free version, but you're a bit behind the curve. The KVM version of Starwind SAN has no restrictions... It allows HA LUN creation via the web GUI... :) No Windows needed (although they do use Wine internally in the VM...which is ...well, meh.)
Thanks for the Explanation :). I actually have 3 x Supermicro SC216, so up to 3 x 24 SATA/SAS SSD Bays there to put to good use :).

Plus the Supermicro X10DRi with 3 x Intel P4608 6.4TB (2 x 3.2TB each) that I just assembled last Week. I'd need another good Deal on those 6.4TB Drives soon :).

Of course everything will be Network-bottlenecked like crazy in my Case; a 10gbps Switch is the Max I've got. And 100gbps Switches are too Noisy for my House, I believe...
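Rough numbers on how bad the mismatch is (the drive figure is an assumption, roughly what a P4608-class NVMe drive should manage, not a measurement):

```python
# Rough bandwidth math: 10GbE line rate vs. one NVMe drive.
# The drive throughput below is an assumption, not a measured value.
link_gbps = 10
link_gbs = link_gbps / 8       # ~1.25 GB/s best case, before protocol overhead
nvme_read_gbs = 3.0            # assumed sequential read for one NVMe drive, GB/s

print(f"10GbE ceiling: ~{link_gbs:.2f} GB/s")
print(f"Single NVMe drive: ~{nvme_read_gbs:.1f} GB/s "
      f"(~{nvme_read_gbs / link_gbs:.1f}x the link)")
```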
 

ca3y6

Well-Known Member
Apr 3, 2021
Of course everything will be Network-bottlenecked like crazy in my Case; a 10gbps Switch is the Max I've got. And 100gbps Switches are too Noisy for my House, I believe...
Where will you be consuming that storage from? Might be worth doing server-to-server 56gbps or 100gbps connections; all you need is 3 dual-port cards, and each server connects directly to the other two.
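Concretely, the switchless 3-node mesh just means one point-to-point link per server pair, each on its own tiny subnet; a sketch with made-up hostnames and addresses:

```python
# Switchless 3-node full mesh: one point-to-point link per server pair,
# each pair on its own /30. Hostnames and subnets below are made up.
from itertools import combinations

servers = ["node-a", "node-b", "node-c"]   # hypothetical hostnames

# 3 pairs -> 3 links; every node ends up using both ports of its dual-port card.
for i, (a, b) in enumerate(combinations(servers, 2)):
    base = f"10.99.{i}"
    print(f"{a} <-> {b}: {base}.0/30  ({a} = {base}.1, {b} = {base}.2)")
```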
 

kapone

Well-Known Member
May 23, 2015
Of course everything will be Network-bottlenecked like crazy in my Case; a 10gbps Switch is the Max I've got. And 100gbps Switches are too Noisy for my House, I believe...
Yup. A good network fabric is essential, especially if you're doing RDMA... Still cleaning up the wiring and hooking up the rest of it, but I switched out the ICX 6610 (which was doing a bunch of 10g ports) for a Mellanox SX6036 (36 x 40gbps ports, and it'll do 56gbps on each port with the right cables and NICs).



40/56g DAC cables are thick....and heavy...should have gone with AOC cables...well, maybe someday.
 

luckylinux

Well-Known Member
Mar 18, 2012
Where will you be consuming that storage from? Might be worth doing server-to-server 56gbps or 100gbps connections; all you need is 3 dual-port cards, and each server connects directly to the other two.
The short Answer is: mostly (> 70%?) in the same Room.

I've got 4 x ConnectX-4 Dual Port 100gbps I could use for Point-to-Point, but definitely not for many Hosts.

I also have an InfiniBand Switch & InfiniBand ConnectX-2 HCAs (40gbps, I believe) that I bought a long Time ago and never used, because the Fans on that Switch are absolutely horrendous.

EDIT 1: The Switch is a Voltaire 4036, if anybody was wondering. I'm still pretty sure I disassembled the Fan Cage to try to silence the Beast around 10 Years ago and never managed to move forward with that Project o_O.
 

luckylinux

Well-Known Member
Mar 18, 2012
That's why it's active-active... :) Two SAN nodes, real-time replication between them and iSCSI multipath. If one goes down...the cluster doesn't even hiccup.

Yes, I'm using the free version, but you're a bit behind the curve. The KVM version of Starwind SAN has no restrictions... It allows HA LUN creation via the web GUI... :) No Windows needed (although they do use Wine internally in the VM...which is ...well, meh.)
Just to be sure: which one of these are you using?
- Virtual SAN Free (VSAN Free)
- Virtual HCI Appliance Free

It looks like I need to register with all my (full) Contact Details either Way :rolleyes:.