Fibre or copper for NAS access


peter_cass

New Member
Jul 15, 2023
1
0
1
Hi everyone!

I am aware that this may be a controversial topic that has been discussed before, but some of the existing threads include arguments that are probably no longer relevant (e.g., power consumption of first-generation NICs) or do not apply to my specific scenario (e.g., no plans to change to 10Gbit/s switching). So here goes:

We have a NAS (Synology DS1821+) which is used for central data storage via 1Gbit/s LAN by a multitude of users - there is no intent to change this. However, one of our servers needs access to a lot of the data stored on this NAS, which is why we thought to upgrade to 10Gbit for this single connection (going up to 25Gbit does not seem warranted since we use SATA drives in the NAS). The distance between server and NAS is just a few meters, so we would not need the reach benefits fiber provides. Costs should be kept as low as reasonable.
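As a back-of-envelope sanity check on the "25Gbit is not warranted" assumption, here is a rough throughput estimate. The per-drive figure is a hypothetical typical value for modern SATA HDDs, and RAID/filesystem overhead is ignored, so this is only an upper-bound sketch:

```python
# Rough upper bound: can a SATA HDD array in an 8-bay NAS saturate a 10GbE link?
# Assumption: ~200 MB/s sequential throughput per SATA HDD (typical, hypothetical).
drives = 8                              # DS1821+ has 8 bays
per_drive_mb_s = 200                    # assumed sequential MB/s per drive
array_mb_s = drives * per_drive_mb_s    # best-case aggregate: 1600 MB/s

link_10g_mb_s = 10_000 / 8              # 10 Gbit/s = 1250 MB/s, ignoring protocol overhead
link_25g_mb_s = 25_000 / 8              # 25 Gbit/s = 3125 MB/s

print(f"array best case: {array_mb_s} MB/s")
print(f"10GbE ceiling:   {link_10g_mb_s:.0f} MB/s")
print(f"25GbE ceiling:   {link_25g_mb_s:.0f} MB/s")
```

Under these assumptions a full sequential read from all eight drives could slightly exceed a 10GbE link, but real-world RAID and SMB/NFS overhead usually bring it back under, which matches the reasoning above.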

- Would you suggest copper or fiber in this context?

- We initially planned to use a Mellanox ConnectX-4 on the server, but even off-brand copper SFP+ modules are way more expensive than I thought - conversely, it is the fiber version of Synology's NICs that is more expensive :D

- We could also purchase a 10Gbit NIC with RJ45 for the server: Do you recommend an Intel version? I have heard that Intel is leaving the networking market (though that may have been just for switches), so I am a little worried about long-term support. There are also versions of Intel NICs from different manufacturers (HP, Dell, etc.)... are these 100% compatible with any server (running Windows Server 2022) or just with servers of that manufacturer?

Just let me know what you think :).
 

i386

Well-Known Member
Mar 18, 2016
4,251
1,548
113
34
Germany
I would go with SFP+ (or the newer versions); it "scales" a lot better than 10GBASE-T ethernet:
up to 3m I would use DAC cables (also made of copper); used DAC cables from Cisco are "dirt cheap" (:D) on ebay
for >3m I would use fiber with short-reach optical transceivers (good for up to 300m); original Cisco short-reach multimode transceivers can be found on ebay pretty cheap, and in lots the price can be as low as 10$/€ per transceiver. the fibers are pretty cheap even new (amazon, ebay, fs.com and many more places)

for nic brands:
I like to reuse my nics when possible, and Mellanox nics are often supported on QNAP & Synology NAS and can be used on "normal" hardware for RDMA etc.
if you need good driver support on as many platforms as possible, then Intel is the best choice

10 vs 25gbe:
if the price is the same or pretty similar, go with the faster one. even on systems with spinning rust, data cached in RAM can be served at faster throughput rates. (I do this at home with an hdd-based fileserver with 128gb ram and a 100gbe nic connected to a 40gbe switch -> cached data is sent at 3.8GByte/s :D)
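The 3.8 GByte/s figure above lines up well with the 40gbe switch being the bottleneck; a quick sanity check on the arithmetic:

```python
# Sanity check: 40 Gbit/s raw line rate vs. the reported ~3.8 GByte/s of cached reads.
line_rate_gbit = 40
raw_gbyte_s = line_rate_gbit / 8            # 5.0 GByte/s, ignoring all protocol overhead
observed_gbyte_s = 3.8                      # figure reported in the post above
efficiency = observed_gbyte_s / raw_gbyte_s # fraction of line rate actually achieved

print(f"raw ceiling: {raw_gbyte_s} GByte/s, observed: {observed_gbyte_s} GByte/s "
      f"-> {efficiency:.0%} of line rate")
```

~76% of raw line rate is plausible once ethernet, TCP and SMB/NFS framing overhead are accounted for, so the cached-read numbers are consistent with the 40gbe link being saturated.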

oem branded nics:
they work fine in the common operating systems, but the SFP+ versions can be picky about DAC cables and optical transceivers.
 

tinfoil3d

QSFP28
May 11, 2020
883
407
63
Japan
I can't really give an unbiased comment on this (I run primarily SFP+ and above, and the push for it was driven not just by speed requirements but because thunderstorms in my area are sometimes devastating, so I want to keep the computers, especially the important ones, as electrically isolated as possible - I tend to use AOC for those). But for 10gbe and just one connection, use whatever is cheaper. If you were talking about moving your entire office of tens of computers or more to 10gbe, then this amount of consideration would be valid. Just one? Go with the cheapest option. Figuring out all the funny things about fiber, transceiver modules, NICs and their compatibility is challenging at best for a newcomer. I've been through that stage; it took a considerable amount of time, and I still learn new things to this day because I don't get to touch the really high-end DWDM interconnect optics, for example.
 

rtech

Active Member
Jun 2, 2021
308
110
43
10G copper is quite the space heater, and you said one server will access the data a lot. Consider this before making the upgrade.

Example: Intel X550-T2
Power consumption: 6.6 W @ 100 Mbps / 9.5 W @ 1 Gbps / 17.4 W @ 10 Gbps
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
Lol, no - 1 or 2 ports is not going to make a significant difference.
Do whatever you want for a couple of ports, unless you are planning for a whole business or data center.