Direct-attach 10GbE with 3 or more points? Fibre Channel?

A post in another thread inspired me to ask a question I've had in the back of my mind for a while...

I know 10-gig cards have come down in price, but I'm less sure about switches and cabling, especially if I want to interoperate with the slower-but-newer 2.5-gig and 5-gig Ethernet gear.

Is it viable to do direct-attached 10GbE if you need to link more than two points, assuming everything needs a fast link to one central server in a star topology? I assume you'd just run one card per attachment point in the server to make this work? How would files be sent between workstations through the server, or is that difficult? (Option B is just a separate 1-gig Ethernet network for workstation-to-workstation traffic.)

Is anything from the Fibre Channel world usable for a star topology of multiple workstations (3 to 6) to one central server? Are there any deals to be had there, or do hassle and admin headaches make it a poorer choice than Ethernet? (I'm told even 25-gig Ethernet cards are affordable now too.) PS: I don't know ANYTHING about Fibre Channel, or whether setting it up is anything like setting up an Ethernet network.

The specific use case is a 'Jellyfish'-like video server, so multiple workstations can hit fast shared SSDs holding Adobe Premiere project files in 4K RAW or higher. 2-3 users/workstations is normal; 5-6 is the soft limit I'd plan for. If someone tells me this idea couldn't work past 5 users due to PCIe lane limits, or card limits like needing one card per attached user, that's fine; I'm just exploring the idea, curious whether or how it might work. I'd prefer to use older-ish Xeon hardware (Sandy/Ivy era) for the server insofar as possible, just throwing a lot of cheap cores and DDR3 cache RAM at it; if that's unviable speed-wise, though, tell me. It's not that I'd refuse to buy newer hardware, it's that I'd rather put the same hobby-till-it's-not money toward nicer cameras and lenses if older hardware will work.
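
For a sense of scale, here's the kind of back-of-envelope math I'm working from (a minimal Python sketch; the bitrates and the 90%-usable-link figure are my assumptions, since 4K RAW varies wildly by codec, bit depth, and frame rate):

```python
# Back-of-envelope: how many editing streams fit on one 10GbE link?
# All bitrates below are assumptions -- real 4K RAW varies wildly by
# codec, bit depth, and frame rate (REDCODE vs CinemaDNG vs ProRes, etc.).

GBIT = 1_000_000_000  # bits per second

stream_rates = {
    "ProRes 422 HQ (UHD ~30p)": 0.75 * GBIT,   # ~750 Mbit/s, assumed
    "4K RAW (conservative guess)": 2.0 * GBIT,  # ~2 Gbit/s, assumed
}

usable_link = 10 * GBIT * 0.90  # assume ~90% of 10GbE after protocol overhead

for name, rate in stream_rates.items():
    print(f"{name}: ~{usable_link / rate:.1f} simultaneous streams per link")
```

If each workstation gets its own dedicated 10GbE link, the per-user side looks fine; the harder question is whether the server's storage can feed 5-6 of those links at once.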
 

kpfleming
It's certainly possible. You could put dual-port 10GbE SFP+ cards in the server, getting you two ports per PCIe slot; with an EPYC-based board you'd have plenty of lanes for that. If you can afford them, you could even get quad-port cards and have four ports per slot.
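
To put rough numbers on the slot/lane budget (a sketch; the x8-lanes-per-card figure is an assumption, so check your specific NIC):

```python
import math

# Sketch of the slot/lane budget for a star of direct-attached workstations.
# Assumes dual- or quad-port SFP+ NICs at PCIe 3.0 x8 each (assumption).

workstations = 6
ports_per_card = 2   # dual-port SFP+; set to 4 for quad-port cards
lanes_per_card = 8   # typical for 10GbE NICs, but check your card

cards = math.ceil(workstations / ports_per_card)
print(f"{cards} cards, {cards * lanes_per_card} PCIe lanes for NICs")
# A Sandy/Ivy Xeon E5 has 40 PCIe 3.0 lanes per socket, so three dual-port
# cards (24 lanes) fit with room left for an HBA; EPYC's 128 lanes are ample.
```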

Each port would then connect to a workstation via DAC or AOC; each server-workstation link would be its own IP subnet, and traffic between those subnets (if needed) would be routed by the server.
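
As a concrete sketch of that addressing scheme (the interface names and 10.10.x.x subnets are made-up examples, and it assumes a Linux server configured with `ip` and `sysctl`):

```python
# Sketch: one /30 point-to-point subnet per server<->workstation link.
# Interface names and address ranges are examples only -- substitute your own.

# e.g. three dual-port cards in slots 1-3 -> six server-side interfaces
links = [f"enp{slot}s0f{port}" for slot in (1, 2, 3) for port in (0, 1)]

for i, iface in enumerate(links):
    print(f"# link {i}: server {iface} <-> workstation {i}")
    print(f"ip addr add 10.10.{i}.1/30 dev {iface}")
    print(f"#   workstation side: 10.10.{i}.2/30, default route via 10.10.{i}.1")

# Enable forwarding on the server so workstations can reach each other:
print("sysctl -w net.ipv4.ip_forward=1")
```

The /30 per link keeps each point-to-point segment in its own subnet, and enabling ip_forward makes the server route workstation-to-workstation traffic, at the cost of that traffic crossing two links.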