Advice on SAN switch


mbk

New Member
Sep 2, 2013
Hello experts,

I'm building a compute cluster out of retired R610 and R710 servers.
Many of them have dual QLogic 2460 Fibre Channel cards, and I have also picked up a bunch of cables.

InfiniBand is probably the best network for a compute cluster / HPC, but it is doubtful that I will ever get the funds to add InfiniBand.

Right now I have a 1 Gbit network with an HP 2510-48G and an Intel quad-port Gbit NIC for storage/frontend in the cluster. Since the cluster is going to be around 40 compute nodes, I will probably have problems pushing both inter-node communication and shared storage (NFS shares) through 1 Gbit. It would probably help a lot to build a Fibre Channel target with e.g. Linux SCST, perhaps on a dedicated OS such as https://code.google.com/p/enterprise-storage-os/
I could use one of the R710s for this.
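For reference, a minimal SCST setup on a box like that R710 might look roughly like the sketch below. This is only a sketch: the WWN, the backing device, and the device name are placeholders, and I'm assuming the qla2x00t driver, which is what SCST uses to put QLogic HBAs into target mode.

```
# /etc/scst.conf — minimal sketch (WWN and backing device are placeholders)

# vdisk_blockio exports a local block device as a SCSI disk
HANDLER vdisk_blockio {
        DEVICE disk01 {
                filename /dev/md0    # backing RAID array (assumed)
        }
}

# qla2x00t drives the QLogic HBA in target mode
TARGET_DRIVER qla2x00t {
        TARGET 50:01:43:80:06:5b:90:24 {   # WWN of the local HBA port (placeholder)
                enabled 1
                LUN 0 disk01
        }
}
```

You'd then apply it with scstadmin and zone the initiators to that target port on the switch.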

It seems Fibre Channel switches are really difficult to spec, since they seem to need licenses for active ports, trunking, etc., plus GBIC modules. So I'm afraid I'll buy a dud that is missing some license. If somebody here is willing to give me some advice on buying a Fibre Channel switch, I would really appreciate it.
E.g. would it make more sense to buy 2x32-port switches, or should I go for a 48- or 64-port one?
Any one that I should avoid?
Something that is a safe bet?

I also have a question: is it possible to bond/aggregate interfaces on the target/storage server, e.g. find a couple of dual-port QLE2462s and get an effective 4x4 Gb of bandwidth, or is this not possible?

Any help is appreciated, and if you live nearby I'll gladly trade beers for advice :)
 

phroenips

New Member
Jul 14, 2013
Disclaimer: I work in the storage industry, but I'm not very familiar with the specific licenses required.

Just like ethernet switches, getting a higher port count in a single chassis is better. With 2x32 port switches, you'll be using valuable ports for ISLs, and depending on workload and throughput, it may also be a bottleneck.

Are you planning to create a single fabric? Or dual fabrics?

The ability to utilize multiple FC ports for a single LUN will depend on the storage controller (Linux SCST in your example...I'm not very familiar with it), and multipathing software for each of the hosts/nodes.
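On the host side, the usual piece is dm-multipath. A rough sketch of what that looks like on a Linux node (the blacklist defaults and grouping policy here are just one reasonable choice, not a recommendation for your specific array):

```
# /etc/multipath.conf — minimal sketch
defaults {
        user_friendly_names yes
        path_grouping_policy multibus   # spread I/O across all FC paths to a LUN
}
```

Then load and inspect it:

```shell
modprobe dm-multipath
multipath -ll    # should list each LUN once, with a path per HBA port
```

Whether the target end actually serves one LUN out of multiple ports concurrently is the part that depends on SCST.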

As an alternative, assuming each of the nodes has multiple ethernet ports, could you not segregate your inter-node communication and storage on different LANs? You could segregate them physically if you really want to, or just use VLANs. It will likely be much less expensive and less complicated, especially if you don't have much Fiber Channel experience.
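To make the VLAN idea concrete, here is roughly what it looks like on one node with iproute2 (interface names, VLAN IDs, and addresses are made up; the switch ports would need matching tagged-VLAN config):

```shell
# Storage traffic on VLAN 10, inter-node traffic on VLAN 20,
# both tagged over the same physical NIC (or use separate NICs to go physical)
ip link add link eth1 name eth1.10 type vlan id 10
ip link add link eth1 name eth1.20 type vlan id 20
ip addr add 10.0.10.5/24 dev eth1.10   # storage network
ip addr add 10.0.20.5/24 dev eth1.20   # compute network
ip link set eth1.10 up
ip link set eth1.20 up
```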
 

mrkrad

Well-Known Member
Oct 13, 2012
What about the HP 8/20q they sell for $295 on eBay with 8 ports activated (8 Gbps)? I suspect it's just another brand being badged as HP. I mean, 8 ports of 8 Gbps is pretty slick for $295 a switch. With two switches you could set up a redundant fabric pretty cheaply using what you have, and upgrade to 8 Gbps as money permits.
 

mbk

New Member
Sep 2, 2013
Just like ethernet switches, getting a higher port count in a single chassis is better. With 2x32 port switches, you'll be using valuable ports for ISLs, and depending on workload and throughput, it may also be a bottleneck.
That is my thought as well: higher port count is better. But I raised it mainly because of cost. Is it much cheaper to go for 2x32 ports than 1x64 ports?

Are you planning to create a single fabric? Or dual fabrics?
A single fabric shared by all machines, with e.g. a GFS2 filesystem or similar.
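One thing to keep in mind with GFS2 over a shared FC LUN: it needs the cluster stack (corosync + DLM) running for locking, and one journal per node that mounts it. The creation step looks roughly like this (cluster name, filesystem name, and LUN path are placeholders):

```shell
# 40 journals for 40 nodes; lock_dlm requires the cluster stack to be up first
mkfs.gfs2 -p lock_dlm -t mycluster:gfs01 -j 40 /dev/mapper/mpatha
mount -t gfs2 /dev/mapper/mpatha /shared
```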

The ability to utilize multiple FC ports for a single LUN will depend on the storage controller (Linux SCST in your example...I'm not very familiar with it), and multipathing software for each of the hosts/nodes.
I probably need to play around with SCST before finding out if it is possible.

As an alternative, assuming each of the nodes has multiple ethernet ports, could you not segregate your inter-node communication and storage on different LANs? You could segregate them physically if you really want to, or just use VLANs. It will likely be much less expensive and less complicated, especially if you don't have much Fiber Channel experience.
I think you have a good point here. The compute nodes currently have 3 free onboard NICs, so it is certainly possible.

The problem is that I would need another switch. A 48-port gigabit switch is fairly cheap (I paid ~1000 USD for the 2510-48G), but the real issue is the uplink to the file server: if all 40 nodes are reading the same files, I would need to aggregate some links to avoid completely saturating the server, and that eats ports and does not scale linearly. An option is a switch with a 10GbE uplink, but such a switch costs 3x the 2510-48G, plus I'd need a 10GbE card and cable.

So cost-wise it may be cheaper to use the existing Fibre Channel cards, since I already have cards and cables and only need the switch (and can probably source some GBICs for the project). A used Fibre Channel switch seems like the less expensive solution.
But yes, Fibre Channel is probably more problematic to set up. On the other hand, the latency and bandwidth seem better, so maybe it is worth the trouble.
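If I do go the Ethernet route, the aggregation on the file-server side would be Linux bonding, roughly like this (interface names are assumed, and 802.3ad mode needs a matching LACP trunk configured on the switch):

```shell
# Bond the four ports of the quad NIC with LACP
modprobe bonding mode=802.3ad miimon=100
ifenslave bond0 eth1 eth2 eth3 eth4
ip addr add 10.0.10.1/24 dev bond0
ip link set bond0 up
```

The caveat is that LACP hashes per flow, so any single node still sees at most 1 Gbit; the bond only helps aggregate throughput across many nodes, which is exactly the "does not scale linearly" problem.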
 

mrkrad

Well-Known Member
Oct 13, 2012
The whole 10GbE -> 1GbE step-down is a problem. You would have to throttle the 10GbE ports heavily to prevent buffer overruns. Congestion like this is common with iSCSI or NFS and causes horrific latency.

FC uses credit-based flow control, where the window size is negotiated up front to prevent buffer overruns. It is arguably better even without deep buffers.
 

mbk

New Member
Sep 2, 2013
5
0
0
I have just realized that the 64-port switch is a no-go: the machines are in two adjacent racks, and there is no feasible way to route ~20 fiber cables from one rack to the other.
So now I am looking for 2x32-port SAN switches. I can get the GBICs for free, so I just need to find the switches. The Brocade 4100 or 5000 series seem to be what I'm looking for. There seem to be some really good deals on these, but the licenses are often not specified.
If I build my own FC target machine and connect the two switches with some aggregated fibers, would I need the following licenses:
- 32-port active license
- trunking license

Would I need more than this?
 

mrkrad

Well-Known Member
Oct 13, 2012
I would be leery of Brocade; their stuff tends to only like Brocade GBICs.
 

mbk

New Member
Sep 2, 2013
I would be leery of Brocade; their stuff tends to only like Brocade GBICs.
Okay. I read somewhere this became a problem starting with the 8 Gbps switches; they have a chip that checks whether the GBIC is Brocade-branded.

Do you have any suggestions for alternative gear? I have no experience with fibre, so I can use all the help I can get.
 

mrkrad

Well-Known Member
Oct 13, 2012
HP is the same way (H3C not so much, but ProVision definitely!)

Brocade has always been "Brocade NIC, Brocade DAC/SFP, Brocade switch" imo. It is why they are so cheap (the CNA is dirt cheap for 10GbE, but the cost of optics and DAC cables is stupid).

I can get gov/edu DAC cables for $10-20 each (1m to 5m) which is stupid cheap.

If you stick to Emulex (LightPulse) you will be very happy.