Quanta LB4M 48-Port Gigabit Switch Discussion


lnxpro

New Member
Mar 1, 2016
15
5
3
45
Log into the web interface (I've only done this in switch mode, so I can't comment on router mode).
Once logged in to the web interface, go to:
System >
SNTP >
Server Configuration >
Choose Create on the drop-down
Enter the IP of the local NTP server (usually your router or AD server if you've configured it that way)
IPv4
Port: 123
Priority: 1
Version: 3
Submit

I then went to SNTP > Global Configuration
Client Mode: Unicast
Port: 123
6, 6, 5, 1 for the remaining fields in order (if the UI matches other FASTPATH-style switches, these are the unicast poll interval, broadcast poll interval, poll timeout, and poll retry)
Submit.

It then takes the switch a little while to actually trigger the NTP query and update its clock. The only thing that I have not been able to configure is my time zone, so it always sets its time to UTC rather than my current TZ (EST = UTC-5). So if someone figures out how to specify the time zone, that should be the final step needed.
Thanks. I followed those and got a success on the status page a few minutes later. Now if someone can figure out how to set the time zone that would be awesome :)
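
As a quick sanity check before pointing the switch at a server, you can confirm that the local NTP source actually answers on UDP 123. Here is a minimal SNTP query sketch in Python; the server IP is hypothetical, so substitute your router or AD server:

Code:
import socket
import struct
import time

NTP_SERVER = "192.168.0.1"  # hypothetical local NTP server (router/AD box)
NTP_DELTA = 2208988800      # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_query(server, port=123, timeout=5.0):
    """Send a minimal SNTPv3 client request and return the server's Unix time."""
    # First byte 0x1B = LI 0, version 3, mode 3 (client); rest of the 48-byte packet is zero
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(48)
    # Transmit Timestamp seconds field sits at bytes 40-43 of the reply
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_DELTA

print("Server time (UTC):", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(sntp_query(NTP_SERVER))))

Note that the protocol itself only carries UTC; any time zone offset has to be applied by the device, which is presumably why the switch displays UTC when no offset setting is exposed.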
 

charlie

Member
Jan 27, 2016
58
3
8
Budapest, HU
You can pick between switching (L2) or routing (L3) firmware. Routing has a GUI, and some versions of the switching firmware also have a GUI. The routing firmware only has one version AFAIK, 5.13.12.14. In terms of performance, switching will be the fastest, but routing mode seems fine for my use case.
Hi,

What is your experience with routing performance?
 

eroji

Active Member
Dec 1, 2015
276
52
28
40
Hi,

What is your experience with routing performance?
I did not do any empirical tests, as I mentioned previously, but for my homelab use case with a 150/150 WAN connection, I did not notice any slowdowns.
 

Döhler

New Member
Mar 16, 2016
1
0
1
39
Hello,

I also have an LB4M. I got into the menu with PuTTY and activated the Java interface, but I cannot connect to the interface with Java. The server is connected to the console port and nothing is connected to the switch ports. DHCP is handled by a home server.
 

Nnyan

Active Member
Mar 5, 2012
142
42
28
Just finished reading this thread, and I hope I didn't miss this, but has anyone tested the routing firmware with inter-VLAN routing?
 

sthsep

Member
Mar 7, 2016
72
10
8
I also read the thread. Am I right that I need a console cable to access the switch after flashing, because all other ports are disabled? Or does the mgmt port get an IP from DHCP?
 

Torbjørn Sandvik

New Member
Jun 10, 2015
24
1
3
I'm getting a lot of log entries. All I have done on the switch is enable admin mode on all ports, to get them active.
And I get this log message on all active ports:
MAR 19 20:57:35 192.168.0.250-1 UNKN[147405856]: rlim_api.c(1112) 24475 %% invalid tunnel intfid (37)
 

Matt C

New Member
Jan 26, 2016
6
0
1
50
OK... So far the switches run more or less great, I just want to ensure I have things set up right... The company is too cheap to spring for any version of vSphere that supports vDS (distributed switches), so I am stuck with standard vSwitches. Reading the VMware best practices, I see I can still link aggregate on the VM side using IP HASH teaming mode and IP-SRC-DST on the switch side (which is obviously referring to Cisco)... I know the aggregation type has to be static.

What is the Quanta equivalent of Cisco's IP-SRC-DST? The choices are:

- Src MAC, VLAN, EType, incoming port
- Dest MAC, VLAN, EType, incoming port
- Src/Dest MAC, VLAN, EType, incoming port
- Src IP and SRC TCP/UDP Port Fields
- Dest IP and Dest TCP/UDP Port Fields
- Src/Dest IP and TCP/UDP Port Fields.
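
For intuition about what those options mean, here's a toy model of how a static LAG picks a member link from different header fields. This is a simplified XOR-mod hash for illustration only; the actual hash in the Quanta ASIC, and in ESXi's IP-hash teaming, is vendor-specific:

Code:
import ipaddress

N_LINKS = 2  # member ports in the static LAG

def mac_hash(src_mac):
    # "Src MAC" style: every frame from one NIC pins to one link
    return int(src_mac.replace(":", ""), 16) % N_LINKS

def ip_hash(src_ip, dst_ip):
    # "Src/Dest IP" style: the link can differ per IP pair
    return (int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))) % N_LINKS

print(mac_hash("00:50:56:aa:bb:01"))      # same link for all traffic from this MAC
print(ip_hash("10.0.0.11", "10.0.0.21"))  # lands on link 0 here...
print(ip_hash("10.0.0.11", "10.0.0.22"))  # ...and link 1 here, depending on the pair

The point being: for both ends of a static LAG to distribute flows the same way, you generally want the switch hashing on the same fields the host's teaming policy does.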
 

eroji

Active Member
Dec 1, 2015
276
52
28
40
How about LAG/LACP with FreeNAS for NFS serving? I'd like to find out if this is possible. I currently have 2 host servers and 2 NICs in the FreeNAS box.
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
Someone stronger in networking than I am can interject without hurting my feelings here.

But my understanding is that LAG/LACP just creates a round-robin-ish style of connections, based on IP, MAC, hash, port, etc. At no point will two devices talking to each other use more than one link. So if you have a file server talking to a backup server over 4 links, only 1 will be used. But if you have 20 clients connecting, you'll likely get 5 per link, providing up to 4 Gbit in aggregate.
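
A quick sketch of that pinning behavior, using the same kind of simplified XOR-mod hash (real switches hash differently, but the pinning property holds):

Code:
import ipaddress
from collections import Counter

N_LINKS = 4
server = int(ipaddress.ip_address("192.168.0.10"))  # hypothetical file server

# 20 hypothetical clients: each (client, server) pair always hashes to the
# same single link, so one conversation never exceeds one link's speed,
# but many conversations spread out across the LAG.
clients = [int(ipaddress.ip_address("192.168.0.%d" % (100 + i))) for i in range(20)]
print(Counter((c ^ server) % N_LINKS for c in clients))  # roughly 5 clients per link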

NFS doesn't do any sort of MPIO or round robin. The best you could do is have two IPs on both the host and the NAS and mount two datastores, one on each IP (subnet). But you'd only ever really get 2 Gbit total when both links are maxed out, and neither datastore would ever see > 1 Gbit.

The question you really should be asking is whether you care about throughput and why you're fighting to get it. You likely care more about IOPS and latency, which don't need LAG/LACP.

VMware uses 4 KB blocks, so over a single 1 GbE link you can in theory pull around 30,000 IOPS (1 Gbit/s ≈ 125 MB/s, and 125 MB/s ÷ 4 KB ≈ 30,000).

More to the point, that would be what the VM and host can get from the storage. If there's a file server on it, is it going to have 4 links out to the clients? Otherwise the only benefit is internal disk-to-disk transfer, at best.

Anyone got any corrections? I'm sick, there's a lot of DayQuil going on here, and I'm on a little screen ;).
 

eroji

Active Member
Dec 1, 2015
276
52
28
40
I understand that perfectly. These NFS mounts are purely shares for VMs, and I prefer to have a single IP/hostname to mount with for the sake of config management. So if I can enable LACP using 2 NIC ports, that would be perfect, plus I'd get some more total throughput.
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
I'm going to go ahead and suggest you don't understand it perfectly.

You can't do what you want to do. Even if it did work, it's overcomplicated.

You will need to have multiple IPs for your mounts so that the IP-hash LACP will select different paths. Otherwise, no matter what you do, it's going to pin all traffic on the first matching link.

This is, by far, the reason I much prefer iSCSI and MPIO with round robin. It DOES do what you want.

Chris Wahl over at wahlnetwork.com has some EXCELLENT blog posts about this.

You would only ever, at best, get 1 Gbit to each datastore separately. You would never achieve 2 Gbit to a single datastore, regardless of how many NICs you bind in an LACP group.
 

eroji

Active Member
Dec 1, 2015
276
52
28
40
I'm not using NFS for datastores. I do have iSCSI MPIO over 2x 10G links for each host. I'm only mounting the NFS volume within the guests running on each ESXi host, and there are 2 host servers. I also know that any single link mounting the NFS volume will not exceed line speed, with or without LACP. So that's all established. I'm only asking, hypothetically: is it possible to set up LACP on the FreeNAS end, export the volume over that LACP link, and have multiple guest VMs on both host servers mount the volume and access it at a theoretical 2x 1G aggregate speed? If my assumption is totally wrong, then I stand corrected.
 

kroem

Active Member
Aug 16, 2014
248
43
28
38
Anyone run these without fans? I mean, fans are for fully populated switches, right? :)
 

DavidRa

Infrastructure Architect
Aug 3, 2015
329
152
43
Central Coast of NSW
www.pdconsec.net
I posted separately, but perhaps a dupe here would be useful. I have a dead PSU (a Delta DPSN-300DB D) and no more spares. I see many DPSN-300DB units on eBay, some with specification D, others F, H, and J. Does anyone know if they're all interchangeable? Delta's website is specification-free, so it's no help.
 

spyrule

Active Member
Probably your best bet is to send an email to Delta and simply ask them. I've done some quick looking and there isn't any described difference between the model numbers (same output specs, etc.). The other option is to reach out to an eBay vendor like DigitalMind2000 (http://stores.ebay.ca/pcserversandpartsinc/) and see if they have a spare PSU for that switch. The only reason I suggest them is that I've purchased several of these switches from them, so I'd guess they have extras.