Optical fiber: a bottleneck?


Andreas1138

New Member
Apr 2, 2016
Hello there!

I am working with a client whose company occupies several floors. The server is on an upper floor, and some PCs are on another floor. There is an optical fiber connection from that floor up to the server floor.

The connection is quite simple: there is a switch where all the PCs are connected. From this switch, a network cable goes to an inverter, which is connected to an optical fiber cable that runs between the floors to the server room, where it is connected to another inverter. From that inverter, another network cable runs to the main switch, where the server is connected as well.

There are about 15 computers connected to the lower floor switch.
Since the net is quite slow, do you think there is a bottleneck, given that the optical fiber is basically connected to a single 1Gb port?
It seems that the switch has SFP connections at 2 Gb, but I have to check that.

To make myself clear, I prepared a map of my client's network that you can check.

Thank you for your time :)
 


T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
When you say 'net' do you mean the LAN or to the Internet?

What's your internet connection?

What's your definition of 'quite slow'?
 

Andreas1138

New Member
Apr 2, 2016
Hey, T_Minus, sorry for not being precise.

By net, I mean the LAN. The network is slow because file transfers from a PC to the server are not fast compared to a direct connection to the primary switch where the server is connected.

I know that there are other possible causes, but I want to find out if the internal network connections can be improved.

Thank you for your time :)
 

TuxDude

Well-Known Member
Sep 17, 2011
Seems like the question is "Is it possible that 15 clients sharing a 1G link to a server are maxing out the link?" - answer is yes. The fact that the shared 1G link is optical instead of electrical is irrelevant.
 

cheezehead

Active Member
Sep 23, 2012
Midwest, US
A 1Gb link can be saturated by a single desktop with a 1Gb interface. If users don't "really need" a gig link, and it's a managed switch, then you could look at disabling gig on the ports feeding the desktops.
 

Scott Laird

Active Member
Aug 30, 2014
A 10Gb link can be saturated by a single desktop with a 10Gb interface. It just depends on what you're trying to do. Practically speaking, though, most desktop workloads don't actually send a whole lot of network traffic most of the time. Remember, 1 Gb Ethernet is still 100 MB/sec--much slower than an SSD, but still fast enough that a 10 MB file would only spend 100ms in transit.
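To put some rough numbers on that, here is a back-of-envelope sketch (the ~100 MB/s usable-throughput figure is the approximation from the paragraph above, not a measurement):

```python
# Back-of-envelope transfer times. Gigabit Ethernet is 125 MB/s raw;
# after protocol overhead, ~100 MB/s of usable throughput is a fair
# rule of thumb. All figures here are rough estimates.

def transfer_seconds(size_mb: float, throughput_mb_per_s: float) -> float:
    """Time to move size_mb megabytes at the given throughput."""
    return size_mb / throughput_mb_per_s

# A 10 MB Office file over gigabit: about 0.1 s in transit.
print(f"{transfer_seconds(10, 100):.2f} s")  # 0.10 s
# The same file over 100 Mbps (~10 MB/s usable): about 1 s.
print(f"{transfer_seconds(10, 10):.2f} s")   # 1.00 s
```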

In general, fiber is much more capable than copper cable. With enough money, you can jam over 1 Tbps onto a single strand of single-mode fiber, while Cat 6 runs out around 10 Gbps. Practically speaking, though, there's no real performance difference between 1 Gbps over fiber and 1 Gbps over copper. The big advantage of fiber inside buildings is that it isn't electrical, so grounding problems can't put ~100 volts onto the link between devices. In commercial settings, you'll usually want to run fiber between floors, and always between buildings, to avoid this sort of problem.

I'm not sure what an "inverter" is in this context; to me that usually means something that produces AC power from DC. I assume you're actually looking at some sort of media converter box. My first question: are you sure it's a 1 Gbps device? There are standards for 100 Mbps over fiber, too.

Can you take some specific performance measurements across the various links?
 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada
What make/model of switches and converters are you using, and are they the same devices at both ends of the link? 1Gbps isn't the worst I have ever seen, but for 15 users all using the Internet and all sharing files from a server, I would say a single 1Gbps link is pushing it by today's standards in terms of what users will expect. What is the link speed to the server? I suggest you re-draw your network map and label the devices so everyone can see what equipment is in use. Do you have any kind of baseline documentation that you can compare against?
 

bds1904

Active Member
Aug 30, 2013
There is no doubt you are saturating the link between switches. It sounds like you need a complete upgrade, switches on both ends, no media converters and a 10Gb link (or 2) to the server. Possibly a completely new server too since that could also be the bottleneck.

On a new network install in a business setting with 10+ clients on a switch connecting to another switch, I never run less than 2x 10Gb connections in LACP. I also never run less than 12 strands of fiber from the MDF to any IDF; if the IDF terminates 100-300 drops, I run 24 strands. I also never, ever run multi-mode fiber. 9/125 OS2 fiber has been around for years and years, and you can run 40Gb+ over it with no issues. It just won't go out of style. I've run 40Gb over single-mode fiber that was installed in 1988.

Retrofits get the same treatment as long as the fiber is available or I can run more fiber. If I can't do either of those then they get the CWDM treatment most of the time. (Read more on CWDM: Wavelength-division multiplexing - Wikipedia, the free encyclopedia )

2x 10Gb gives you the speed you need, balances the load between the 2 links (with multiple clients accessing resources on the "core" switch), and gives you transceiver redundancy. In all my years of networking, the thing I see fail the most is an SFP/SFP+/XFP/etc. They usually fail because they run hot.
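As a rough sketch, a 2x 10Gb LACP uplink on a Cisco-style switch might look like the following (the interface names and channel number are hypothetical; other vendors have equivalent syntax):

```
! Hypothetical example only -- adjust interface names for your hardware
interface range TenGigabitEthernet1/0/1 - 2
 channel-group 1 mode active    ! "active" = negotiate with LACP
!
interface Port-channel1
 switchport mode trunk
```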

Long story short: I suggest your client hire an IT company and get ready to spend $7,000-$15,000. You don't want to be the guy messing it up; it could cost you your reputation or more.
 

Scott Laird

Active Member
Aug 30, 2014
I'm not going to argue against a real IT staff and a network upgrade, but there are a *lot* of office workloads where even 100 Mbps is still plenty, even shared across multiple users. Loading and saving <10 MB Office files doesn't need a lot of bandwidth, and data entry barely uses any network at all.

The first two things I'd check in this case are the speed of the optical link and the duplex settings between the switches and media converters. If someone hard-coded the duplex setting on only one end of a link, it can do magically bad things to the network: pings are usually okay, and SSH is mostly fine, but anything that transfers a non-trivial amount of data is dog slow.

Compare ping with the default (~64 byte) size, 1300 byte requests, 2000 byte requests, and 60k byte requests. If default and 1300 bytes are good but 2000 is sometimes flaky and 60k barely works, then check duplex. If 60k works, then compare the RTT between 1300 and 60k; that should tell you roughly how fast your network is. 1 Gb Ethernet should add around 1 ms. If it's much over that, then it's slower than 1 Gbps.
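The RTT arithmetic can be sketched as follows (a rough estimator that ignores fragmentation overhead and queuing delay; the sample RTTs below are made up for illustration):

```python
def estimated_link_mbps(rtt_small_ms: float, rtt_large_ms: float,
                        small_bytes: int = 1300,
                        large_bytes: int = 60000) -> float:
    """Estimate link speed from the RTT difference between two ping sizes.

    A ping echoes its payload back, so the extra bytes cross the link
    twice. Ignores fragmentation overhead and queuing delay.
    """
    extra_bits = (large_bytes - small_bytes) * 8 * 2  # there and back
    extra_seconds = (rtt_large_ms - rtt_small_ms) / 1000.0
    return extra_bits / extra_seconds / 1e6  # megabits per second

# Made-up sample: 0.3 ms RTT at 1300 bytes, 1.3 ms at 60k bytes,
# i.e. ~1 ms of extra serialization -> close to 1 Gbps.
print(f"{estimated_link_mbps(0.3, 1.3):.0f} Mbps")  # 939 Mbps
```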
 

bds1904

Active Member
Aug 30, 2013
I'm not going to argue against a real IT staff and a network upgrade, but there are a *lot* of office workloads where even 100 Mbps is still plenty, even shared across multiple users. Loading and saving <10 MB Office files doesn't need a lot of bandwidth, and data entry barely uses any network at all.
My exact point. They could be using the link for next to nothing and have slow speeds because of a failing server and/or failing storage. Server upgrade: $3,000-5,000 + labor; network upgrade: $4,000-8,000 + labor. Could be one or the other, or both.

No offence to anyone, but if you are here asking where to start when a business is involved, you should not be making any recommendation to the business other than "hire an IT firm". I've seen many people lose their jobs or business relationships because they made the mistake of getting involved.

Even troubleshooting the issue for the client could make you look like an a** when they do hire an IT guy. What if you were wrong & the IT firm fixes things? Even if the IT firm doesn't talk trash about you, you could still be wrong and looked at negatively for offering your opinion instead of admitting you didn't know.

In a business IT setting having the wrong answer is always worse than not having an answer.
 

cheezehead

Active Member
Sep 23, 2012
Midwest, US
The power of the internet: it's free advice... as with anything, it's up to you to decide what is valuable information and what is not.

Per the Fiber over Copper discussion
- No EMI issues
- Long Distance
- Better latency vs copper with longer cables
- Smaller diameter cabling

Copper over Fiber
- Cheaper

I know of a lot of businesses still running 100Mb switches everywhere, and even a few still running 10Mb hubs. Do they all work? Yes, depending on what your needs are. 100Mb edge ports + gig uplink ports are an easy way to prevent overloading the gig link, and this is still done in many orgs. If your daily transfers are in MB, then gig becomes optional... however, if I need to push a 100GB image, 100Mb is useless to me.
 

keoki

New Member
Jun 2, 2016
So I ran a small team that operated the network gear for a chain of 10 hospitals and 65 clinics. We had about 15,000 clinical users, and the network structure was gig in the core and gig between the hospitals. There were 4 core switches (6509s) in every hospital serving the building, and 4 more serving the hospital datacenter. On the hospital floors were stacks of 100Mb switches that had gig fiber uplinks back to the core. We had 1,100 switches, 300 firewalls, a lot of VoIP (wired and wireless), as well as about 500 wireless APs. There were no bottlenecks. The majority of the traffic came from visitors using the free Internet; that was at least 75% of the traffic on the network.

Most of the clinical workstations were virtual desktops. The data center had banks of VMware servers running virtual PCs for the clinical staff. The staff could walk up to any PC anywhere, and when they logged in they would be routed to their virtual desktop in the VM pools. We had desktop terminals all over the place, and rolling wireless carts with large batteries in the base for mobile computing. No matter where you were, logging into any of these machines would connect you to your own desktop, with all of your applications still running as they were when you walked away from the previous terminal. Some of the open applications were windows into other virtual systems hosted by EMR providers at other sites, so there was heavy use of virtualization. Most of the clinical data was remote-desktop-type traffic, and still it was the minority of the traffic. Obviously the VMware servers had gig connections to the SANs and to the network.

The network ports around the hospital, as well as the wireless, were all controlled by a system that configured your connection after your PC was identified to the network. If you were unknown, you got a captive portal that only took you to the Internet. If you had a computer assigned by the hospital, you were connected to your assigned VLAN, regardless of what port you plugged into. So the network configuration was complex in only a subtle way; in general it was really quite simple, and self-configuring. The only problems we ever had were when the VLAN database server was down, which caused new connections to fail, but that was rare. Obviously DHCP was also critical, but DHCP servers almost never go down.

The cross-town gig links between the hospitals averaged peaks in the 100M range. The SANs that synced across the cross-town links did so in real time, so there was never a buildup of latent transfers. We did a lot of "heavy" things like moving medical imaging around, and only some of the largest images (2 or 3 terabyte CT scans) had any bandwidth issues; those images had dedicated paths and were served mostly from local drives.

But fiber is never a bottleneck just because it is fiber. If your fiber connections are too slow, that is a different issue. A fiber gig connection is a gig connection; if that is not fast enough, you can go to a 10gig connection. But in the diagram you drew, your design is fine. Fiber is the best way to travel distances that are too far for copper. I think by inverter you mean transceiver.

Sure, it is possible for that to be a bottleneck if the loads are high, but if you have people sharing files, the odds of two people saturating the link at the same time are low. In my hospital example, we had 15,000 people who were not able to saturate anything. The average user is not likely to saturate links on a network as small as yours. Sure, anyone can come up with a dozen ways I am wrong, but the reality is that most network connections are idle most of the time, regardless of what the users think they are doing. Most PCs can't saturate a 1G link with a common workload. And networks naturally share bandwidth in a fair manner.

Yes, there can be really busy networks, like if those 15 workstations were video editing stations and the server was a shared SAN holding the video data. On the other hand, if it is a video streaming server, 15 users would not saturate it at all. So a lot depends on the application. But in really big environments, average stats on the load from each user show mostly idle time. It is the guys playing on the internet who tend to consume most of the bandwidth.
 

wildchild

Active Member
Feb 4, 2014
I'm not going to argue against a real IT staff and a network upgrade, but there are a *lot* of office workloads where even 100 Mbps is still plenty, even shared across multiple users. Loading and saving <10 MB Office files doesn't need a lot of bandwidth, and data entry barely uses any network at all.

The first two things I'd check in this case are the speed of the optical link and the duplex settings between the switches and media converters. If someone half-hardcoded duplex settings then it can do magically bad things to the network. Pings are usually okay, and SSH is mostly fine, but anything that transfers a non-trivial amount of data is dog slow.

Compare ping with the default (~64 byte) size, 1300 byte requests, 2000 byte requests, and 60k byte requests. If default and 1300 bytes are good but 2000 is sometimes flaky and 60k barely works, then check duplex. If 60k works, then compare the RTT between 1300 and 60k; that should tell you roughly how fast your network is. 1 Gb Ethernet should add around 1 ms. If it's much over that, then it's slower than 1 Gbps.
Fully agree..

@ts.. try to monitor the switch port where the inverter is connected, and get yourself an SNMP tool to graph the bandwidth.

I've seen people advise just putting in 10G, which is silly without even knowing how much of the link you are actually using.

If you're not continually seeing more than 85% usage, I would try to see if you can remove the inverters and replace them with 1Gb GBICs.
I've seen odd stuff happening with those inverter boxes.
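To turn two octet-counter samples (e.g. SNMP's ifHCInOctets, or the switch CLI's byte counters) into a utilization figure, the arithmetic is just a delta over time. A sketch with made-up sample values:

```python
def utilization_percent(octets_t0: int, octets_t1: int,
                        interval_s: float, link_mbps: float = 1000) -> float:
    """Percent utilization from two octet-counter samples interval_s apart."""
    bits = (octets_t1 - octets_t0) * 8
    mbps = bits / interval_s / 1e6
    return 100.0 * mbps / link_mbps

# Made-up sample: 6,375,000,000 octets moved in 60 s on a 1 Gbps link
# works out to 850 Mbps, i.e. 85% -- right at the threshold above.
print(f"{utilization_percent(0, 6_375_000_000, 60):.0f}%")  # 85%
```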
 

Andreas1138

New Member
Apr 2, 2016
I admit the topic isn't clear at all; English is not my primary language and I messed up some terms. Fortunately I found you guys, willing to help :)

First mistake: I don't know why I wrote "inverter"; what I meant was transceiver, one like this. o_O
Second mistake: talking again with the client, I found out that there are more than 15 computers connected: there are about 25 PCs, all connected to one switch. They reach the server floor over that fiber cable (via transceiver). At first I thought the other floors ran directly to the main switch.
Third mistake: I wrote that a fiber cable can be a bottleneck, but that's not the issue.

So, we have 25 PCs connected to the same switch. This switch is connected to another switch via fiber cable. But that's not all: there is a transceiver that converts the signal from the optical fiber cable to an Ethernet cable (1Gb).
To simplify: because of this transceiver, to me it is like having 2 switches, one where all the PCs are connected and one connected to the server, with basically another Ethernet cable between them. We have an optical fiber cable, but it behaves like an Ethernet cable because of the transceiver. Is that right?

@TuxDude
I think you are right, you explained the topic better than I did :)

@cheezehead
I can manage the switch, maybe I can allocate different resources, thanks :)

@pricklypunter
I am still studying the network, but I hope to get more info from the client about how the network was built, so I can draw a better network diagram. Thanks for your input.

@bds1904
That's what I think too. I am not in charge of that part; I wanted to ask what you guys think, even though I know a network can be really complicated, since there are other devices connected to it. I was hired to take care of the server, install the new PCs they bought, and install the software that runs from the server. Since this network is quite slow compared to other clients', I wanted to talk to you to clear up my doubts.
Thanks for the link, I'll study that.

@Scott Laird
I'll do some tests and try to understand what kind of configuration was made when the electrician ran the cables years ago (I wasn't there yet). What I have noticed so far (I haven't had time to do many tests yet) is that other PCs, directly connected to the switch that goes to the server, are faster. I tried moving a file (a 4 GB ISO image) to a NAS connected on the same floor as the server. From the main switch it took minutes; from the other switch it was going to take hours, and I didn't finish the upload. This was during working hours.
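For a rough sanity check on those transfer times (the throughputs below are assumptions for illustration, not measurements): a 4 GB file at gigabit speed should take well under a minute, so "hours" points at something far below 1 Gbps, consistent with a duplex mismatch or a 100 Mbps (or worse) hop.

```python
def minutes_for_gb(size_gb: float, mbps: float) -> float:
    """Minutes to transfer size_gb gigabytes at mbps megabits per second."""
    return size_gb * 8000 / mbps / 60  # 1 GB ~ 8000 Mbit (decimal units)

print(f"{minutes_for_gb(4, 1000):.1f} min at 1 Gbps")   # 0.5 min
print(f"{minutes_for_gb(4, 100):.1f} min at 100 Mbps")  # 5.3 min
print(f"{minutes_for_gb(4, 10):.0f} min at 10 Mbps")    # 53 min
```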

@keoki
Thanks for your experience, it was really interesting. I like the way you configured the network there :)

@wildchild
That's a nice test to do, thanks for your input. I'll do some tests soon, I hope :)

@ all
I appreciate your concern, but as I wrote to bds1904, I was hired to take care of the server and the PCs. Using the ntop function on the firewall (Zeroshell), I made sure the clients weren't using a lot of bandwidth (even if we have yet to find the bottleneck), and I checked all PCs for viruses and malware. I also connected all the network ports of the server for better bandwidth management (there are multiple VMs installed). I have to upgrade the firewall and the OS installed on the server, but my job ends there. However, I am always curious, and since I am the youngest at my company, I want to study other fields as well (even if they are not our expertise).

I was suspicious of the transceiver because when I test the network on the main switch everything is faster, but going through the other switch everything is slower.

Thank you for your replies and sorry for the mess :)
 

maze

Active Member
Apr 27, 2013
Have you done a simple interface counter check for errors? It might be as simple as a bad cable/transceiver..
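On a Linux host, the kernel's per-interface counters give a quick first look at errors and drops (a sketch; "lo" below is just a placeholder, substitute the actual uplink interface on your machine):

```python
from pathlib import Path

def error_counters(iface: str) -> dict:
    """Return the error/drop counters for a network interface on Linux."""
    stats = Path(f"/sys/class/net/{iface}/statistics")
    return {p.name: int(p.read_text())
            for p in stats.iterdir()
            if "err" in p.name or "drop" in p.name}

# Anything steadily increasing here (rx_errors, rx_crc_errors, dropped)
# points at a bad cable, transceiver, or duplex mismatch.
print(error_counters("lo"))
```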