Would get expensive soon, but I wonder if you could build a "switch" with 4 of these in a box. Guessing 80 Gbps would be too much for an x86-based switch, but interesting.

It's not completely inconceivable (nor would it be very hard). You would have to find a motherboard with more than the average number of PCIe x8 slots. It really wouldn't take that much CPU to keep up with 80 Gbps - an E3 could do it pretty easily, though you are probably looking at E5-series CPUs in order to get more than two x8 slots without using a PCIe switch chip (or maybe that could be a use for the X8SIA board and X5550 chips I am trying to sell...). You could build it using readily available software (RouterOS) if the drivers will recognize this card.
Congratulations on post #300! Would just be interesting to see how bad it is.
The problem with this idea would be latency. While you could keep up with the raw throughput, you'd never do it without adding a bit of packet latency. Without going into detail, you would really be building a layer-3 bridge (a router with the same subnets on more than one interface) and not a switch. You'd be adding a ms or more of latency to every packet. Doing this for a GigE "switch" might work pretty well, but adding additional inter-packet latency for a 10GbE application is unlikely to have satisfying results.
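For concreteness, the "switch in a box" idea at its simplest is just a software bridge spanning all eight 10GbE ports; a minimal sketch of that on plain Linux (port names are hypothetical, and the latency caveat above still applies, since every frame is forwarded in software):

# create a software bridge and add each 10GbE port to it
ip link add name br10g type bridge
for port in eth2 eth3 eth4 eth5 eth6 eth7 eth8 eth9; do
    ip link set "$port" master br10g
    ip link set "$port" up
done
ip link set br10g up

RouterOS would express the same thing through its own bridge configuration rather than these commands.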
Of course, the project is just not all that interesting when you have a Juniper EX-2500 in your rack...

Could you not just use direct attach copper SFP+ cables between your EX-2500 and this card?
Do these work just like 10GbE cards or are these 10Gb IB? How is the power?

Yes, these have SFP+ connectors on them and are made to work with other 10GbE networking gear. The spec states 14.9 W with SR optics.
This guy's listing is still active, and this is a great price for 10GbE SFP+ cards. I got mine and installed it into one sled of a C6100 running Proxmox VE (basically Debian). My Solaris-based ZFS server has an Intel X520 NIC, and everything is connected through a Juniper switch. Without doing any tuning (9000-byte jumbo frames on):
60 second netperf from client to server: 9891.16 Mbits/sec (pretty darn close to 10G)
60 second netperf from server to client: 9877.13 Mbits/sec (negligible difference)
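For context, throughput numbers like these typically come from an invocation along these lines (the address and interface name are placeholders, not the actual setup):

# on the ZFS server: start the netperf daemon
netserver
# on the Proxmox client: jumbo frames on the 10GbE port, then a 60-second TCP stream test
ip link set dev eth2 mtu 9000
netperf -H 10.0.0.2 -l 60 -t TCP_STREAM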
Bonnie++ from the Proxmox box using an NFS mount:
Version 1.03d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxmox-node 96576M 68845 98 551492 34 153044 15 68486 94 424345 19 213.3 0
Kinda hard to read in the forum font... but basically 551.5 MBytes/sec sequential write and 424.3 MBytes/sec sequential read (the pool benches at 600 MBytes/sec write and 1,000 MBytes/sec read directly on the server).
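For anyone wanting to try something similar, a rough sketch of the kind of invocation involved (the export path and mount point are illustrative guesses; the -s value simply matches the 96576M size shown above):

# mount the ZFS server's export on the Proxmox box, then benchmark through it
mount -t nfs zfsserver:/tank/bench /mnt/bench
bonnie++ -d /mnt/bench -s 96576 -u root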
I'll post comparisons to Intel-based 10GbE cards later.

Very impressive. Maybe not full 10GbE on bonnie++, but certainly better than you'd get even with multipathed 1GbE ports.