Search results

  1.

    Quanta LB6M (10GbE) -- Discussion

    With 'mtu 9000' set in /etc/network/interfaces, jumbo frames also needed to be enabled on the switch (see the jumbo-frame sketch after these results). That fixed all our issues. Note this was not a multicast issue. In addition, NFS clients could not access NFS mounts. So, to enable mtu 9000 on the switch: telnet@quanta-1#configure terminal...
  2.

    Quanta LB6M (10GbE) -- Discussion

    Note: after setting up the Brocade firmware - as you predicted - we have the same issue. Before doing that we'd tried a lot of things, so we will of course continue to debug until it's fixed. Also, thank you for the Brocade TurboIron firmware. These are great switches, now with excellent software...
  3.

    Quanta LB6M (10GbE) -- Discussion

    Will, I tried using the stock firmware - there was a cluster communications issue I could not solve. I tried setting up IGMP snooping and multicast, searched, tried settings, etc. Anyway, I'll change to Brocade as I have documentation for setting up IGMP, etc. I knew you were just dying to know that.
  4.

    Quanta LB6M (10GbE) -- Discussion

    Also, this was the network config; the only difference from the normal one used is the address:
    #iface enp129s0f0 inet manual
    #iface enp129s0f1 inet manual
    #auto bond2
    #iface bond2 inet static
    #    address 10.11.12.2
    #    netmask 255.255.255.0
    #    slaves enp129s0f0 enp129s0f1
    #...
  5.

    Quanta LB6M (10GbE) -- Discussion

    Thank you for the reply. It could very well have been a configuration issue. It was almost working: the multicast tests worked [very fast] and the Ceph drives mounted. I may give it another try (see the multicast-test sketch after these results). I'll check logs and try to debug. I did have...
  6.

    Quanta LB6M (10GbE) -- Discussion

    I tried using the native firmware with the Ceph cluster. There were cluster communication issues; after an hour I gave up. Next I will install the Brocade TurboIron firmware.
  7.

    Quanta LB6M (10GbE) -- Discussion

    New question - we just received two Quanta LB6M that we will use for a Ceph storage network. The network does not need VLANs or routing, etc. I've read that a lot of people change the firmware to something else. For our use, are there speed, reliability, or other good reasons to...
  8.

    Quanta LB6M (10GbE) -- Discussion

    Searched and found that a connection is needed (see the bonding sketch after these results). We've done this before - it was at the end of one of those really long days when I asked. So for anyone reading my question in the future: https://www.kernel.org/doc/Documentation/networking/bonding.txt - section 11.2, High Availability in a Multiple...
  9.

    Quanta LB6M (10GbE) -- Discussion

    With active/backup and non-stacking switches, does a patch cable need to connect the two switches? The two switches are for an isolated storage network.
  10.

    Quanta LB6M (10GbE) -- Discussion

    thanks, active/backup is all we need.
  11.

    Quanta LB6M (10GbE) -- Discussion

    I assume then that a Linux bond is not possible using two non-stacked switches. Hopefully I am wrong?
  12.

    Quanta LB6M (10GbE) -- Discussion

    Hello tjk - thank you for the fast response. You wrote 'LB8M'; we have LB6M, which I assume is the same - not stackable.
  13.

    Quanta LB6M (10GbE) -- Discussion

    Are these switches stackable? Looking to set up LAGs between switches for Linux network bonding.
  14.

    [solved] Quanta LB6M connection to Mellanox ConnectX-4 /5

    Hello Terry Wallace. It is for future-proofing. I based my decision on the following: the refurbished X5s are reasonably priced, and the latest Mellanox IB Linux driver does not support ConnectX-2. See release notes at...
  15.

    [solved] Quanta LB6M connection to Mellanox ConnectX-4 /5

    I spoke to Mellanox pre-sales support, and standard SFP+ cables will work. So we will purchase ConnectX-5 cards.
  16.

    [solved] Quanta LB6M connection to Mellanox ConnectX-4 /5

    We have ordered two Quanta LB6M switches. These will be used in an existing 7-node Proxmox cluster using 65 400GB SSDs (see the ceph.conf sketch after these results).
      data:
        pools:   2 pools, 1088 pgs
        objects: 137.66k objects, 498GiB
        usage:   1.46TiB used, 24.1TiB / 25.5TiB avail
    Disk I/O is very low. Our goal is to...
  17.

    solarflare or Chelsio

    Also, for Ceph I use a cron job to check for slow requests (see the cron sketch after these results). On average they occur for 15 seconds per day. I'll notice them when doing CLI work - there will be a lag. These can affect data file writes, especially on a system that needs to run on Linux 2.6. The slow requests normally occur...
  18.

    solarflare or Chelsio

    We are using the Intel driver/modules which come with the latest Proxmox/Debian. I'll check into using the latest stable drivers from Intel. Thank you for the suggestion.
  19.

    solarflare or Chelsio

    Hello, to increase our Proxmox Ceph storage cluster's reliability, I'm looking to upgrade the Ceph network from Intel 10G Ethernet to SFP+. We have an issue with slow requests, much like this person had: "Ceph, SolarFlare and Proxmox – slow requests are blocked". The issue has been going on...
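
A few of the results above reference configs only in passing; rough sketches follow. For the mtu 9000 fix in result 1: on the host, 'mtu 9000' is just one more line in the bond stanza in /etc/network/interfaces (it appears in the sketch for results 8-9). On the switch, the commands below are a sketch that assumes the Brocade TurboIron firmware these posts flash onto the LB6M, where jumbo support is a global setting that only takes effect after saving and reloading:

    telnet@quanta-1#configure terminal
    telnet@quanta-1(config)#jumbo
    telnet@quanta-1(config)#write memory
    telnet@quanta-1(config)#exit
    telnet@quanta-1#reload

The reload is what actually applies the jumbo setting, so expect the storage network to drop briefly.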
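
The multicast test mentioned in result 5 is presumably the usual Proxmox/corosync check with omping; a sketch, with node1/node2/node3 as placeholder hostnames, run on all nodes at roughly the same time:

    omping -c 10000 -i 0.001 -F -q node1 node2 node3

Noticeable packet loss here usually points back at IGMP snooping/querier settings on the switch, which is what result 3 was wrestling with on the stock firmware.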
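
For the active/backup question in results 8-9: per section 11.2 of the kernel bonding document linked in result 8, with a single active-backup bond you cable one slave to each switch and connect the two switches with a patch cable (the inter-switch link), so the active path can always reach peers hanging off the other switch. A sketch of the /etc/network/interfaces stanza, reusing the interface names and address from result 4; the bond-* keys are the current Debian ifenslave syntax (result 4 used the older 'slaves' spelling):

    iface enp129s0f0 inet manual
    iface enp129s0f1 inet manual

    auto bond2
    iface bond2 inet static
        address 10.11.12.2
        netmask 255.255.255.0
        # jumbo frames, per result 1
        mtu 9000
        # one port cabled to each switch, patch cable between the switches
        bond-slaves enp129s0f0 enp129s0f1
        # active-backup needs no LACP or switch stacking
        bond-mode active-backup
        # link monitoring interval (ms) so failover actually triggers
        bond-miimon 100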
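
Result 16 is cut off, but since the new switches carry the storage traffic for an existing Proxmox/Ceph cluster, the setting that points Ceph at a given network lives in ceph.conf. A sketch only; the 10.11.12.0/24 subnet is borrowed from result 4 and may not be what that cluster actually uses:

    # /etc/pve/ceph.conf on Proxmox -- sketch only, subnet is an assumption
    [global]
        # monitor and client traffic
        public network = 10.11.12.0/24
        # OSD replication traffic; often a separate subnet on larger clusters
        cluster network = 10.11.12.0/24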
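
Result 17 mentions a cron job that checks for Ceph slow requests. A minimal sketch of one way to do that, assuming a Ceph release whose health output still mentions 'slow requests' and logging to a file rather than mailing; the path and schedule are arbitrary:

    # /etc/cron.d/ceph-slow-requests -- sketch only
    # Every 5 minutes, append any 'slow request' health lines to a log file.
    */5 * * * * root ceph health detail | grep -i 'slow request' >> /var/log/ceph-slow-requests.log

Piping the grep output to mail(1) instead of the log file turns this into an alert.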