WTB: LSI (Broadcom) SAS 6160 switch (or similar)


HaRD

New Member
Apr 20, 2021
5
4
3
Hello,
quite a long shot for this kind of stuff, but if anyone has an LSI (Broadcom) SAS 6160 switch for sale for reasonable money, I am definitely looking for one.

Thank you in advance!
 
  • Like
Reactions: Samir

dbTH

Member
Apr 9, 2017
152
60
28
I had an LSI SAS 6160 switch in my lab over 10 years ago. This is a discontinued product, and the firmware available from Broadcom/LSI is also over 10 years old. I’m wondering why you are now interested in such an obsolete product, which I believe was not widely used in large-scale production environments.

By the way, a few eBay sellers are selling it, with the lowest price being about $325. Perhaps it is not worth that price nowadays.
 

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,523
1,637
113
49
HSV and SFO
I had an LSI SAS 6160 switch in my lab over 10 years ago. This is a discontinued product, and the firmware available from Broadcom/LSI is also over 10 years old. I’m wondering why you are now interested in such an obsolete product, which I believe was not widely used in large-scale production environments.
I looked into what this does--it's not necessarily obsolete since you can still get new ones or equivalents from other manufacturers. Its use case is just niche, as it's intended more for SANs--still a part of homelabbing imo.
 

HaRD

New Member
Apr 20, 2021
5
4
3
I had an LSI SAS 6160 switch in my lab over 10 years ago. This is a discontinued product, and the firmware available from Broadcom/LSI is also over 10 years old. I’m wondering why you are now interested in such an obsolete product, which I believe was not widely used in large-scale production environments.

By the way, a few eBay sellers are selling it, with the lowest price being about $325. Perhaps it is not worth that price nowadays.
If I had a money printer, I would not need to bother with an "obsolete product" and would just buy myself a whole datacenter of the latest servers for my homelab, lmao.

On a serious note: I will have 3 disk arrays (I currently own 2 HP disk shelves - can't remember the model; they are connected to an HPE ProLiant DL380p Gen8 - and plan to buy 1 Dell PowerVault MD1200, into which I'll move the drives from the R720) that I would like to be accessible from all the servers in my homelab (5 servers in total; only the HPE DL20 Gen9 will not need access), and I figured a SAS switch would be the easiest way to go about it. Sure, I could buy 10+ Gbps SFP+ switches instead, but that would not solve the issue of accessing data on those disk shelves if one of the servers they are attached to had to go offline for any reason.

Also, since my servers are practically 10 years old (R720, R820, R630, SuperMicro CSE-815, DL20 Gen9, plus one more SuperMicro system I'm building, which will admittedly be the newest, on the 2nd Gen Xeon Scalable architecture), I am practically using period-appropriate hardware, as the newer stuff is too expensive for me.

If you have other ideas on how to give all the servers in my homelab access to the data on the drive arrays, I am all ears. Even better if it can be done more cheaply.
 
  • Like
Reactions: Samir

ericloewe

Active Member
Apr 24, 2017
323
149
43
30
This seems like a very convoluted and expensive (up-front and power-wise) way of attaching disks to servers. All for a little extra flexibility and consolidation into fewer disk enclosures?
 
  • Like
Reactions: Samir

HaRD

New Member
Apr 20, 2021
5
4
3
Well, I'm also looking for a way to enable HA for a few VMs, which - true - are not exactly mission critical (it's a homelab, not a pro DC), but I have people paying for them, so downtime is undesired - and sharing storage via a SAS switch across multiple servers just seems like the easiest way to do this. Power-wise I am good, since I have essentially free electricity (the reason for that is an entirely different topic).

So it is not just about consolidating disk enclosures. I did not even mention that there are some issues with the R720, which currently hosts the majority of my VMs and data - I'm about to swap the motherboard in that server, so the current instability issues there are quite serious.

Let's just say that HA and storage sharing are currently my main concerns, with other reasons behind them as well.
 
Last edited:
  • Like
Reactions: Samir

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,523
1,637
113
49
HSV and SFO
This seems like a very convoluted and expensive (up-front and power-wise) way of attaching disks to servers. All for a little extra flexibility and consolidation into fewer disk enclosures?
It's actually not--it's being done exactly as intended. The whole point of these SAS switches was to completely bypass running block storage over the network and then serving it out to clients since that doubles network traffic and reduces storage speed. By using an SAS switch, you get full drive speeds and any server needing access has direct access (DAS) vs restricted access via a particular server (NAS). I'm actually surprised these are not found more in older homelabs.
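To illustrate the DAS point: drives zoned to a host through a SAS switch just show up as ordinary local block devices, with a SAS address visible in sysfs. A minimal sketch (Linux only, standard library only; it simply reads /sys, so the devices it prints depend on your box):

  # List block devices and, where exposed, their SAS addresses.
  # Disks reached through a SAS switch/expander appear here like any
  # locally attached disk - there is no network block layer in between.
  from pathlib import Path

  for dev in sorted(Path("/sys/block").iterdir()):
      sas_addr = dev / "device" / "sas_address"
      if sas_addr.exists():
          print(f"{dev.name}: SAS-attached, address {sas_addr.read_text().strip()}")
      else:
          print(f"{dev.name}: no SAS address exposed (NVMe, virtio, USB, ...)")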
 
  • Like
Reactions: HaRD

nabsltd

Well-Known Member
Jan 26, 2022
547
389
63
The whole point of these SAS switches was to completely bypass running block storage over the network and then serving it out to clients since that doubles network traffic and reduces storage speed.
Storage and client should be separate networks, so even though you have to transfer the data over both, it doesn't really "double" any network traffic.

Today, a good storage server can serve storage to clients over the network better than a client could provide that storage for itself over an SAS switch. The storage server can use caching that is tricky (or maybe even impossible) to set up in a way that multiple clients can take advantage of. And the storage server helps prevent the issues that come up when raw storage is exposed to multiple clients: the switch itself likely doesn't have any access control, so you have to trust the clients not to access the wrong disks.
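On the "trust the clients" point, the usual homelab safeguard is to make each host refuse to touch a shared disk that already carries a filesystem or RAID signature. A rough sketch, assuming blkid is installed and the script runs as root; the device path is illustrative:

  # Refuse to initialize a shared disk that already has a signature on it.
  # blkid prints the detected filesystem/RAID type, or nothing if the disk looks blank.
  import subprocess
  import sys

  DEVICE = "/dev/sdX"  # illustrative; point this at the shared disk

  result = subprocess.run(["blkid", "-o", "value", "-s", "TYPE", DEVICE],
                          capture_output=True, text=True)
  existing = result.stdout.strip()
  if existing:
      sys.exit(f"{DEVICE} already contains {existing!r} - refusing to touch it")
  print(f"{DEVICE} looks blank, safe to initialize from this host")

It is only a guard against accidents, of course - it does nothing against a host that simply writes to the raw device.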
 
  • Like
Reactions: Samir

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,523
1,637
113
49
HSV and SFO
Storage and client should be separate networks, so even though you have to transfer the data over both, it doesn't really "double" any network traffic.

Today, a good storage server can serve storage to clients over the network better than a client could provide that storage for itself over an SAS switch. The storage server can use caching that is tricky (or maybe even impossible) to set up in a way that multiple clients can take advantage of. And the storage server helps prevent the issues that come up when raw storage is exposed to multiple clients: the switch itself likely doesn't have any access control, so you have to trust the clients not to access the wrong disks.
These are very, very well thought out devices. I didn't really understand them well until I read up on them and how they operate. They have their own out-of-band management and are not some sort of dual-host setup or SAS expander where you would run into the problems you mention.
 
  • Like
Reactions: HaRD

dbTH

Member
Apr 9, 2017
152
60
28
It has been a long time now. Let me recall what’s needed to make it work:

  1. You definitely need SAS drives for them to be recognized by different hosts (a quick way to check what you have is sketched at the end of this post).
  2. Yes, there is out-of-band management, but it uses an old Java-based SDM-GUI and has no CLI.
  3. You need a compatible SAS expander chipset on the storage array. More importantly, you must know the Zone Manager password for the SAS expander.
@HaRD, an SAS switch is not a very flexible SAN setup, even for a homelab. In my opinion, building Ceph storage for block device access or just building a NAS would be better.
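Re point 1: here is a quick way to check whether the drives you already own are actually SAS (dual-ported) rather than SATA sitting behind an expander. A rough sketch, assuming smartmontools 7.x for its JSON output and run as root; the device list is illustrative:

  # Report whether each drive speaks SAS (SCSI) or SATA (ATA).
  import json
  import subprocess

  DEVICES = ["/dev/sda", "/dev/sdb"]  # illustrative; adjust to your shelves

  for dev in DEVICES:
      out = subprocess.run(["smartctl", "-i", "-j", dev],
                           capture_output=True, text=True)
      info = json.loads(out.stdout)
      proto = info.get("device", {}).get("protocol", "unknown")
      model = info.get("model_name", "unknown model")
      print(f"{dev}: {model} - protocol {proto}")  # 'SCSI' means SAS, 'ATA' means SATA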
 
  • Like
Reactions: Samir

HaRD

New Member
Apr 20, 2021
5
4
3
I am sorry, but Ceph or ZFS (the latter especially because of the RAM requirements) is not really a solution in my case, for various reasons - it's not like I have not considered them. So ultimately I am looking for a hardware solution, and a SAS switch came up as the best and easiest fit for my purposes; the second-best option was to build everything over a 25 Gbps network, but when I looked at the prices, well .....

So - ultimately I am fine with limited flexibility.

And lastly - building another NAS (though I am building one from leftovers from my previous gaming/workstation configuration) would not really solve anything, since I already own like ~90% of all the server hardware (the servers themselves plus the disk arrays).
 
  • Love
Reactions: Samir

BackupProphet

Well-Known Member
Jul 2, 2014
1,155
732
113
Stavanger, Norway
intellistream.ai
40-56 Gbps networking is cheap: around 10 USD for a Mellanox ConnectX-3, 100 USD for an SX6036, and 10 USD for a cable that supports 56 Gbps.

ZFS doesn't require much RAM; you can run it just fine with only 1 GB. Ceph, however, does consume a shit ton of system resources.
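If anyone wants to try ZFS in that little RAM, the usual knob is capping the ARC. A minimal sketch that just prints the module option line to use (the 1 GiB cap is illustrative; on Linux the line normally goes into /etc/modprobe.d/zfs.conf):

  # Print the module option that caps the ZFS ARC (its in-RAM cache).
  # zfs_arc_max is given in bytes.
  ARC_CAP_GIB = 1  # illustrative cap, matching the "1 GB" claim above

  arc_max_bytes = ARC_CAP_GIB * 1024 ** 3
  print(f"options zfs zfs_arc_max={arc_max_bytes}")
  # -> options zfs zfs_arc_max=1073741824
  # Put that line in /etc/modprobe.d/zfs.conf and reload the zfs module (or reboot).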
 
  • Like
Reactions: Samir