Maximum Distance Between Workstation and RAID Server


Ultra

New Member
Jan 2, 2016
Hi guys,

we are looking to move our 8-disk RAID 0 (which is currently inside a workstation) into a 24-bay server chassis, so we can extend the RAID 0 to 24 drives and increase the maximum I/O of the array.

Read/write speed is the most important factor for us - we deal with very large video/image files. We have backup solutions in place, so don't worry about data loss in case of failure, etc.

Now, we'll be using an LSI 9361 in combination with a SAS expander (probably Intel), which will then connect to the SAS backplanes in the server chassis.

Due to (possible) noise we want to keep the RAID server as far away as possible from the actual workstation. The LSI and the Intel expander are connected via SFF-8087 SAS cables... I believe the max distance per spec is 10 meters (33 ft), although some say not to use more than 8 meters...

So one option is to keep the LSI 9361 in the workstation and put the SAS expander in the server (and then use 10 m or 8 m SAS cables), but we cannot find 10 m or 8 m SAS cables online... we finally found one supplier and the cable costs more than US$1,000, which is idiotic...

Where can we find long SAS cables at a reasonable price?

Or, what are other options to move the RAID server as far away as possible from the actual workstation without introducing a bottleneck and ultimately losing RAID speed?

Again, the only reason we're introducing the external RAID server is speed, so it does not make sense to lose that...

Thanks and happy new year!
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
First off, 24 drives in a RAID 0 is pretty risky. It may be fast, but you have a high risk of data loss. You may want to look at some sort of protection, be it RAID 10 or something similar.

That said, I found some 10m cables on eBay for ~$50

eBay
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
For longer distances you really need to consider moving to optical interconnects. If you are just a bit over the 10 m, you can probably rig up something with a switch/repeater in the middle and get 16-20 meters between the two points, but it may end up more expensive than optical at the end of the day.
Without knowing your IOPS target and I/O workload patterns, I would say you have three classes of options:
  • Fibre Channel - Somewhat older standard, but you can get 16Gb FC hardware and it is all designed for "enterprise" workloads and scale (DB servers, SAN storage, etc.). The big issue here is the difference between the initiator and target side: easy if you use a vendor's SAN array for the target side, but it can be a pain if you roll your own.
  • 10Gb Ethernet or 10/40Gb IB - Upgrade your storage chassis to a proper storage server and do a point-to-point link with 10G or 40G.
  • Spend big $$$ on Active Optical Cables for SAS. From quick searching these seem damn expensive; you are probably better off spending the money on upgrading to a server on the storage side. ex: HD MiniSAS (SFF-8644) - HD MiniSAS (SFF-8644) AOC (Active Optical Cable) 12Gb/s optical AOC and Optical Interconnect | Mini-SAS HD Active Optical Cable | FCI
If your working set is a few TB, then pairing SSDs in the workstation with a 24+ drive chassis for bulk storage, connected by IB or Ethernet, might be a good plan. I do something similar for processing hundreds of video streams in parallel and chunking them out into hour+ blobs for archive on the huge storage server's RAID array for faster access.

If you have multiple workstations accessing the same storage pool, you should upgrade to a full storage server, with the speed at the remote location that can handle the noise, and spend the money on the network between your workstations and the storage server.
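For rough comparison, here are theoretical per-link ceilings for those options in a back-of-envelope sketch; the FC and SAS figures assume the usual encoding overhead, and real-world throughput will be lower:

```python
# Approximate usable bandwidth per link for the interconnect options above.
# These are theoretical ceilings, not benchmarks; protocol overhead lowers them further.
links_mb_s = {
    "16Gb Fibre Channel (per link)": 1600,         # ~1.6 GB/s usable after 64b/66b
    "10GbE (per port, raw)": 10_000 / 8,           # 1.25 GB/s
    "40GbE / QDR IB (per port, raw)": 40_000 / 8,  # 5 GB/s
    "SAS 6Gb/s x4 wide port": 4 * 600,             # 2.4 GB/s usable after 8b/10b
    "SAS 12Gb/s x4 wide port": 4 * 1200,           # 4.8 GB/s usable
}

for name, mb_s in links_mb_s.items():
    print(f"{name}: ~{mb_s / 1000:.2f} GB/s")
```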
 

Ultra

New Member
Jan 2, 2016
@Chuckleb: thank you for the link! I searched on eBay yesterday and did not find anything... don't worry about RAID 0 data safety, we've got that covered.

@lucidrenegade: yes, that seems to be the way to go. Thank you.

@Blinky 42: Thank you for the overview. We need 24 Gbps+. The current 16G FC is too slow for our needs. 40G Ethernet sounds good, but looking at the Mellanox Ethernet card prices on Amazon, they are very expensive, and at this point we only need less than 8 m of extra distance, so a SAS cable solution is much cheaper...

I have a follow up question:

Do these 8087-to-8088 adapters/converters reduce data throughput? If so, by how much? Or do they literally just re-route the wiring between the two different plugs?

Anything I need to look out for when buying the 8087-to-8088 converters? Any recommendations for cards from specific vendors (LSI, etc.)?

Thanks!!!
 

Naeblis

Active Member
Oct 22, 2015
Folsom, CA
Ummmmmm. Why???? Just buy NVMe drives and be done with it. More IOPS, less hassle. Let me explain why:

1st: expanders kill IOPS, by 10-35%. And with one RAID card you will max out at about 6 GB/s no matter how many drives you have.
2nd: OK, so no expanders then; next problem: you will need multiple RAID cards to do anything more than 200-300k IOPS, and that is using SSDs. If you are looking for sequential performance, you can only get about 3.5 GB/s with one RAID card. 24 drives = 3 RAID cards.
3rd: the investment: 24 SSDs, 3 RAID cards, 6 long SAS cables, and an enclosure = $5k at a minimum.

What do you get for that? 10 meters between you and the noise, 12 TB of SSD space, and 8-10 GB/s for ~$7k (less if you use smaller SSDs). If you are thinking HDDs, then sequential performance is not what matters; space is.


Four 2 TB data-center-quality NVMe drives would give you the 10-12 GB/s sequential performance and much more in IOPS than SATA SSDs, and would only set you back $6-8k.

Or get 4x Intel 750 (1.2 TB each) and 2 Supermicro cards for less than $4k. This would give you similar sequential performance and much more IOPS, but less space.
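A rough sequential-throughput tally using the ballpark figures above; the per-card rate is the estimate from this post, and the assumed per-NVMe rate is a round number rather than a benchmark:

```python
# Rough sequential totals for the two builds sketched above.
raid_card_seq_gb_s = 3.5   # ~max sequential per RAID card, per the figures above
raid_cards = 3             # 24 SSDs spread across 3 cards, no expander
nvme_seq_gb_s = 2.5        # assumed per-drive sequential read for a data-center NVMe drive
nvme_drives = 4

print("24 SSDs on 3 RAID cards: ~", raid_cards * raid_card_seq_gb_s, "GB/s")  # ~10.5 GB/s
print("4 x NVMe drives:         ~", nvme_drives * nvme_seq_gb_s, "GB/s")      # ~10 GB/s
```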
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
40Gb is not too bad price-wise: $620 for cards & modules + ~$125 for 20 m of OM4 8-strand MTP fiber for a 40Gb point-to-point link.

The QSFP+ modules are available new for $130
Mellanox MC2210411-SR4 Compatible 40GBASE-SR4 QSFP+ Transceiver - Fiberstore, and even cheaper on eBay

$180 for single port Connect-X3
IBM Mellanox ConnectX-3 VPI QSFP FDR14/40Gbe HCA 00W0039 - 00W0037 MCX353A-FCBT

or $250 for a dual port
Mellanox MCX354A-FCBT FDR 56.6Gb/s INFINIBAND + 40GbE HCA CARD CONNECTX-3 CX354A

Also gives you the ability to expand beyond one workstation connected to the same storage pool if you switch to Ethernet.
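Presumably the $620 is two single-port cards plus two transceivers; a quick tally of the prices quoted above:

```python
# Totalling the point-to-point 40Gb link from the parts listed above.
connectx3_single_port = 180   # IBM/Mellanox ConnectX-3, single port
qsfp_sr4_transceiver = 130    # 40GBASE-SR4 QSFP+ module
om4_fiber_20m = 125           # 20 m of OM4 8-strand MTP fiber

cards_and_modules = 2 * connectx3_single_port + 2 * qsfp_sr4_transceiver
print("Cards + modules:", cards_and_modules)                   # $620
print("Complete link:  ", cards_and_modules + om4_fiber_20m)   # ~$745
```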

The 8087-to-8088 adapters I have used are all passive; they just change the connector.
 

Naeblis

Active Member
Oct 22, 2015
Folsom, CA
OK, I just saw you wanted 24 GB/s. So: forget expanders, and you are looking at very expensive SAS drives; I am not sure on price. For $12-15k you could get a server set up like in my F2uSoe posts with 2 Mellanox cards ($150 each on eBay), then 2 more cards in your workstation, and use 4 active optical cables to get 100 m between you and the noise. OpenSM is required. You will only get 18-20 GB/s. However, with a switch you can add additional workstations.
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
He's shooting for 24 Gb/s, not GB/s. This is doable over SATA drives and within the limits of a SAS card. He'll have to balance cost vs. performance. His bottlenecks will be disk first, then the SAS card; disk meaning probably not having enough disks.
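A quick units check; the per-drive figure is an assumed ballpark for a 7200 rpm SATA drive, not a measurement:

```python
# 24 Gb/s (gigabits) is 3 GB/s (gigabytes); how many spinning disks does that take?
target_gb_s = 24 / 8          # 3.0 GB/s
hdd_seq_mb_s = 150            # assumed sustained sequential read per SATA HDD
drives_needed = target_gb_s * 1000 / hdd_seq_mb_s
print(f"{target_gb_s} GB/s needs roughly {drives_needed:.0f} drives at {hdd_seq_mb_s} MB/s each")
# -> about 20 drives, so 24 disks is in the right ballpark
```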
 

Naeblis

Active Member
Oct 22, 2015
Folsom, CA
He's shooting for 24 Gb/s, not GB/s. This is doable over SATA drives and within the limits of a SAS card. He'll have to balance cost vs. performance. His bottlenecks will be disk first, then the SAS card; disk meaning probably not having enough disks.
Oh, well that makes a difference. If 2 TB is enough for the temp usage, then one NVMe drive at ~$2k is much cheaper than 24 drives + enclosure and cables.
 

Patrick

Administrator
Staff member
Dec 21, 2010
So here's my thought. If you are at 24 drives and looking for tons of performance, at some point you are going to want to add a second and maybe third machine. There are basically three paths to take:
  1. The big box - fit all 24 drives into a single enclosure.
  2. The far-flung DAS - what you are trying to do: building a DAS box that sits far away.
  3. The networked approach - turn the DAS box into a network box.
With 1 you do not have to worry about the complexity of distance, but you do have to deal with noise. It is often easier to make these things quiet than to go to a long-cable DAS solution. With 2 you are trying to work with big cables; if you look at the enterprise storage vendors, every one of them tries to use cabling as short as possible. With 3 you spend a bit more up front, but you end up with a solution that can scale.

As a bit of background, when I started getting more into storage, I did #1 first. I then started moving towards #2. Every time I end up at #3.

I might be inclined to get: Mellanox MCX354A-QCBT / MCX354A-TCBT / 95Y3455 FDR/QDR INFINIBAND CONNECTX-3 VPI and run FDR IB between the two. When you get a second workstation you can get another card and cable and be OK with two. Distance-wise, 33 m is not an issue with optical interconnects. Build the DAS as a NAS/SAN instead and give yourself an easy upgrade path later. The other advantage is that you can use more HBAs/RAID cards and remove the expanders.

Apologies for getting a bit off topic, but I have gone through this cycle.
 

Ultra

New Member
Jan 2, 2016
Hi guys,

thank you all for the great input. It really helps put everything in perspective for future setups.

Alright, fortunately the current requirement is much "easier" than some of the scenarios described here ;-) I need to deliver a hard drive read speed of at least 2.3 GB/s (preferably higher) to a single workstation, and I need at least 20 TB of space on that RAID.

I was looking at an 8-disk SSD RAID first. Advantages: easily 2.3 GB/s, almost no noise, an all-internal solution. Disadvantages (compared to my other solution): cost, limited write cycles, and capacity.

24 x 2 TB disks on the Intel expander will probably deliver ~3 GB/s (maybe a tad less), I will have enough storage space, and I can use just one 8087/8088 SAS 2.0-spec 24 Gbps cable to bring it back to the LSI, which has two x4 ports, so I still have another port open for another RAID, etc.
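As a sanity check on that single cable, here is a rough link-budget sketch. It assumes ~600 MB/s usable per 6 Gb/s lane after 8b/10b encoding and ~130 MB/s sustained per spinning disk, both round numbers rather than spec quotes:

```python
# Link budget for one SAS 2.0 (6 Gb/s) x4 wide port back to the LSI 9361.
lanes = 4
usable_mb_s_per_lane = 600    # ~600 MB/s payload per 6 Gb/s lane after 8b/10b
link_ceiling_gb_s = lanes * usable_mb_s_per_lane / 1000
print("Single x4 SAS 2.0 link ceiling: ~", link_ceiling_gb_s, "GB/s")   # ~2.4 GB/s

disks, per_disk_mb_s = 24, 130    # assumed sustained sequential read per HDD
print("Aggregate disk read:            ~", disks * per_disk_mb_s / 1000, "GB/s")  # ~3.1 GB/s
# The single x4 cable, not the disks, would be the ~2.4 GB/s ceiling;
# a second x4 cable to the expander would roughly double that.
```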

With this approach, all I would currently need is one long 8087/8088 cable (and the converters) and that's it. Cheap and done.

For the future, I will look into the 40G setup. That makes absolute sense to me. But for right now, it would mean I'd need a mobo, CPU, and RAM for the server just to plug the 40G Ethernet card into, and then the rest of the 40G setup... all unnecessary for now with a single workstation connection.

Regarding the server chassis, I was looking at the Norco 4U rack-mount 24-bay. People have reported it uses SAS 1.0 on the 6 backplanes...

Questions, since I saw it was tested here on STH:

(1) There was a backplane revision; do you know if this introduced SAS 2.0?

(2) What does SAS 1.0 on the backplane mean in terms of data throughput? Is the entire backplane limited to 3 Gbps, or is each of the 4 ports on the backplane limited to 3 Gbps, which would put the max data throughput at 12 Gbps per backplane?

(3) Any other recommendations for a server chassis for 24-28 disks? I know Supermicro has nice ones, but they cost several times as much... Somebody mentioned the Lian Li PC-D8000 case... it looks fantastic IMO but is more of a DIY hassle... thoughts?

Thanks!!!
 

cesmith9999

Well-Known Member
Mar 26, 2013
The 24-bay Norco uses 6 separate 4-bay backplane boards. For best throughput I would recommend using 3 HBAs instead of 1 HBA and a SAS expander (if you have the PCIe slots). The newer cases reportedly support 12 Gb/s SAS.

You do not mention which OS you are using/prefer. In any case, I would do 16-20 SATA drives and 4-8 SSDs to help with read and write caching.

And when you say large files, what sizes are you talking about?

Chris
 

Ultra

New Member
Jan 2, 2016
I don't have the PCIe slots, but besides, how would I merge all 3 HBAs into one single RAID? The main objective is to get max speed with one large RAID 0... what am I missing?

This particular RAID is used to play DPX files back in real time at 24 fps; the DPX frames are ~96 MB each, so we need 24 x 96 MB/s, about 2.3 GB/s, which 24 spinning SATA III drives should do...

Currently (the idea is that) the server chassis only houses the 24 drives, the SAS expander, and a PSU. No mobo, CPU, or RAM needed. One long 8087/8088 cable goes back to the workstation, connecting to the LSI 9361. If I see that the single cable connection does not really deliver 2.3 GB/s, which a 24 Gbps connection with 24 SATA III drives should definitely do, I could connect another cable to the expander, since the 9361 has two x4 ports. I'd then have a 48G connection to the expander, which should easily carry what the 24 HDDs can deliver, probably above 3 GB/s...
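Worked out, the playback numbers above look like this; the per-drive share assumes reads spread evenly across the RAID 0:

```python
# Read bandwidth needed for 6K DPX playback, per the figures above.
frame_mb = 96          # ~96 MB per DPX frame
fps = 24
required_mb_s = frame_mb * fps
print("Required read rate:", required_mb_s, "MB/s")            # 2304 MB/s, i.e. ~2.3 GB/s

drives = 24
print("Per-drive share:   ", required_mb_s / drives, "MB/s")   # 96 MB/s per drive
# ~96 MB/s sequential per drive is well within reach of a modern SATA III HDD.
```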

The OS on the workstation is Windows 8.1. The LSI MegaRAID software handles the RAID controller...
 

gea

Well-Known Member
Dec 31, 2010
DE
You currently use a DAS solution with local storage. This may be the fastest option.
If you can reduce your needs (and it's hard to believe that you need the 24 Gb/s continuously),
my suggestion would be:

- use a local NVMe for real-time-critical workloads, with up to 1 GB/s or more read and write (Intel 750, Samsung 950 Pro NVMe)
- use a 10G NAS/SAN for your normal data.
With 10 GbE you can get up to 600 MB/s write and 900 MB/s read even with shared SMB storage.
FC/iSCSI may be a little better, but such storage is dedicated to a single client and is more complicated/expensive.

For several workstations, you can use dedicated 10G links or trunk two links,
but the most cost-effective option is an ultrafast local NVMe plus 10G shared storage.

If you use ZFS for shared storage, you get the data security and RAM-cache support as a bonus,
and you can avoid the expander since ZFS can RAID over simple HBAs.
(An expander may limit performance, and I would not suggest one with cheap SATA disks.)

see my performance tests at
Shared video editing storage on Solaris (OSX and Windows)
 

Ultra

New Member
Jan 2, 2016
Gea,

As I posted, the internal RAID we already have delivers 1+ GB/s...

We need 2.3+ GB/s read performance to play back 6K DPX image sequences in real time at 24 fps, continuously, throughout the day.

Also, video editing is usually not done on 6K files (it is done on much smaller proxies), although we did edit the movie in 6K... but that was off the RAW files, with a different setup and requirements... 10G does not cut it for DPX sequences of this size.
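For reference, a quick sketch of the gap between a single 10 GbE link and this workload (raw line rate, before any SMB/protocol overhead):

```python
# Why one 10 GbE link falls short of continuous 6K DPX playback.
required_gb_s = 2.3            # continuous read rate for 6K DPX at 24 fps
ten_gbe_raw_gb_s = 10 / 8      # 1.25 GB/s raw; real SMB throughput is lower still
print("10GbE raw ceiling:", ten_gbe_raw_gb_s, "GB/s")
print("Shortfall:        ", round(required_gb_s - ten_gbe_raw_gb_s, 2), "GB/s")
```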

Thanks.
 

gea

Well-Known Member
Dec 31, 2010
DE
Then your best option is DAS storage over SAS:

- use a local, silent external SAS case with SSDs directly connected to the RAID controller,

- or respect the length restrictions of SAS (max 10 m between the RAID controller and the expander in the external box).

For several clients or greater distances (around 20 m), a SAS switch may be another option:
6160 SAS Switch
 