Basic 10Gb setup for two PCs


Aluminum

Active Member
Sep 7, 2012
As usual, I ****ing hate Windows

I figured I would test out linking my two Windows boxes, since the ZFS box is not getting messed with until I pick my next OS. (Modern games and recording protected TV still require Windows, sigh.)

-Installed cards and cable
-Using WinOF 3.0.0 from like 3 years ago, since that's the last version that supports my hardware... I'm having serious buyer's remorse now.
-Updated both cards to firmware 2.9.1000
-Links are lit up, but Windows is convinced they are disconnected
(went nuts for 2 hours trying to figure it out via the obscure-Google-query lottery)
-Started opensm (see the note after this list)
-Windows sees 32 Gbps "ethernet", so like a fool I go "yay!" and get ready to fire up some file sharing between SSDs for fun.
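A note on that opensm step: an InfiniBand port stays down until a subnet manager is running somewhere on the fabric, and with two cards cabled back-to-back there is no switch to provide one — which is why the links were lit but Windows showed them disconnected. A minimal sketch, assuming this WinOF build ships opensm.exe (the path and service name vary by install):

Code:
:: run OpenSM on ONE of the two boxes; the other just joins the fabric
opensm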


Set up the interfaces, IPv4 only for now (netsh equivalent after the list):

PC1
172.16.0.1
255.255.255.0
no gateway, no dns, etc

PC2
172.16.0.2
255.255.255.0
no gateway, no dns, etc
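For reference, a hedged netsh equivalent of that setup, run from an elevated prompt on PC1; the adapter name "IPoIB" is a placeholder for whatever netsh interface show interface reports:

Code:
:: static IPv4, no gateway, no DNS (same on PC2 with 172.16.0.2)
netsh interface ipv4 set address name="IPoIB" static 172.16.0.1 255.255.255.0
netsh interface ipv4 show addresses "IPoIB"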


So I wasted my Friday night and here is the result:
PC2 can ping PC1
PC1 cannot ping PC2

Checked all these settings about 5000 times.
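One-way ping like that is the classic signature of the firewall on the box that won't answer: inbound echo requests are dropped by default on networks Windows classifies as public. A hedged rule that would allow echo on PC2 if the firewall stays on:

Code:
:: elevated prompt on the box that won't answer pings
netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow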

Did I mention I ****ing hate Windows?


PS: I turned off the winfail firewall for both IPoIB interfaces anyway, just in case it slows anything down.
 

mrkrad

Well-Known Member
Oct 13, 2012
Reminds me of the NetXen cards I have. Too much of a PITA to bother with, so they rot.
 

Aluminum

Active Member
Sep 7, 2012
Cards work fine, it's just Windows being braindead.

Long story short, Windows insists on having a gateway set even for a point-to-point connection with static IPs, and even then it still needs manual metrics. Don't trust the GUI settings at all: it might think it's on the internet via interface A while the part of Windows that actually moves packets thinks something else.

Anything beyond one LAN is an alien concept to Microsoft; they botch it so badly compared to Unix. (My pfSense box has 7 interfaces plus VLANs and has never had a config problem.) I guess they still haven't learned since tacking networking on with duct tape in the '90s.
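The part of Windows that actually moves packets is the routing table, so that's what to check instead of the GUI. A hedged sketch for seeing and pinning the metrics ("IPoIB" is again a placeholder interface name):

Code:
:: what the stack really thinks: IPv4 routes and their metrics
route print -4
:: pin the IPoIB interface metric instead of letting Windows auto-assign one
netsh interface ipv4 set interface "IPoIB" metric=10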
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
When adding an IP over IB network in parallel with an Ethernet network on Windows, I have a simple formula that has always worked for me:

1) Configure your standard Ethernet network first - IP address, gateway, DNS, whatever.
2) Add your InfiniBand card and configure it differently:
a) IP address - use a different subnet. I use 192.168.1.* for servers and 10.11.12.* for the IB subnet.
b) Gateway - leave blank.
c) DNS - leave blank. I use IP addresses rather than hostnames to address the storage nodes, which is just fine in a lab environment.

Edit: Also, make sure that the IB interfaces are not labeled as "public" interfaces; move them to "private" if they are. If they are public, the default firewall rules will block traffic.

 

Aluminum

Active Member
Sep 7, 2012
Regular LAN is 192.168/24 with pfSense doing all the gateway/DNS/DHCP lifting; I used 172.16/16 static IPs with nothing else for these.

The final thing that actually solved it all was using secpol.msc to treat all "unidentified networks" as private instead of public. Wonderfully designed Windows forces anything with no gateway/DNS/DHCP/voodoo into a limbo land, and even if you turn off the firewall for those interfaces it won't let most of the OS "see" that network.
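For anyone searching later, the click-path (from memory, so the exact labels may differ slightly between Windows versions):

Code:
:: Run secpol.msc, then:
::   Security Settings > Network List Manager Policies > Unidentified Networks
::   > Location type: Private
secpol.msc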

Meanwhile in Unix reality, if I set something up and it shows up in ifconfig as eth/em/ath/ib/etc{#whatever}, that interface is completely usable and follows the configuration I gave it.

Installing cards, drivers, spooling fiber and taping it up so it won't get tangled: 15 minutes
Turning hairs gray while sorting out windows: 5+ hours
32Gb ethernet: priceless
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Are you getting RDMA?

 

Aluminum

Active Member
Sep 7, 2012
Nope, these are ConnectX cards, not ConnectX-2. The Windows boxes are just 7 (CableCARD media center) and 8 anyway, no Server. I'm running away from Microsoft at full speed as it is, only stuck there for CableCARD DRM and 3D drivers for now.

I wanted dual-port cards; I'm going to do a p2p link-up with 3 computers, since a switch is out of my "fun" budget for this year at least.
 

donedeal19

Member
Jul 10, 2013
Small update

I finally received my cables, now on to networking two PCs. I have the same issues, and I agree Windows is dumb. I spent about two hours reading and trying to get the cards to show as connected... now it shows as an unidentified network. Maybe a couple of grey hairs now. My problem is the same as already explained, except I'm still stuck on "unidentified network". I missed a step somewhere.

This is crazy, I could replicate the same issues in the same order, ha ha. The next thing is that Server 2012 is RDMA-enabled and Windows 8 is not, both using the same firmware and driver. I think I will write myself a step-by-step so I don't forget how I got it working.
 

mrkrad

Well-Known Member
Oct 13, 2012
You may need to hack the registry to allow unidentified networks to pass firewall traffic, if the firewall is enabled. If not, your card or your cable is bad. Just went through this with a deaf NIC.
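For the registry route: network categories live under the NetworkList key, one profile GUID per known network. A hedged sketch (the GUID is machine-specific, and the secpol.msc route mentioned below turned out cleaner):

Code:
:: list known network profiles; Category: 0=Public, 1=Private, 2=Domain
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles" /s
:: flip one profile to Private ({GUID} is a placeholder from the query above)
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles\{GUID}" /v Category /t REG_DWORD /d 1 /f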
 

donedeal19

Member
Jul 10, 2013
I did get everything running so far. The speeds are below what I have seen posted around; I'm only seeing about 5Gbit each way. The client PC is an old socket 775 motherboard, an IP35 Pro, with the card in the top x16 PCIe slot. If I'm correct it doesn't have PCIe 2.0, which is OK for now as it's all a learning process.

ib_write_bw gives me 1250.73 MB/s peak (1247.66 average) on the client PC, and about 3100 MB/s peak on my server PC; I also ran iperf for tests. Not sure what I can change to get better results. Settings: left the hw options unchecked, chose IB, and used a single port on each PC. No RDMA just yet; I have not looked into why it's not available on W8.
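For reference, a sketch of how those numbers are usually produced with the perftest-style tools (hedged: exact binary names vary between WinOF builds):

Code:
:: on the server PC, start the bandwidth listener
ib_write_bw
:: on the client PC, point at the server's IPoIB address
ib_write_bw 172.16.0.1
:: iperf equivalent over IPoIB: run "iperf -s" on one end, then
iperf -c 172.16.0.1 -P 4 -w 256k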

What kind of speed should I be seeing given the PCIe 1.0 limit? Anything else I can do? As it stands it's not faster than a pair of SSDs in RAID 0.
 

Aluminum

Active Member
Sep 7, 2012
No registry hack needed, just run secpol.msc.

It took reading a lot of desperate Windows troubleshooting forums to find the correct solution ;)



As for the PCIe 1.x bus: each lane runs at 2.5 GT/s, and 8b/10b encoding leaves 2 Gbit/s of usable data per lane, so a x8 card tops out around 16 Gbit/s of payload (20 Gbit/s raw).
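Back-of-envelope, under the usual 8b/10b assumption:

Code:
:: PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding -> 2.0 Gbit/s usable per lane
:: x8 link: 8 x 2.0 Gbit/s = 16 Gbit/s ~= 2 GB/s, less packet overhead -> ~1.6-1.8 GB/s
:: so ~1250 MB/s from ib_write_bw in a PCIe 1.x x8 slot is about the expected ballpark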

Getting less than full speed from 10+ Gb-class connections seems to be par for the course without lots of driver/OS/network-stack/software changes, some of which you can't really fix until new versions come out (e.g. mainstream file-sharing protocols).
 

donedeal19

Member
Jul 10, 2013
Hi, thanks to you guys here.

I took the time to play with IB and a RAM disk. While setting it up I was using old hardware, and the speeds were not impressive at all. Then I tested with a Z77 setup and was getting the same results... I learned that the second slot was PCIe 2.0 x4. I swapped the cards around, fired up iperf, and it was still slow. I ran ib_write_bw on both PCs and they scored about 3200. I fired the RAM disk back up, ran a few benchmarks, and saw 3200 MB/s writes and 2900 MB/s reads. I was shocked to see such fast speeds and low latency. I forgot to keep the screenshots but will post some later.

I will have some questions, but I'm waiting on new hardware to rule out slowness.
I think it will be hard to get any real throughput doing basic file transfers, though. I was thinking four 256GB SSDs in some kind of RAID to start, and expanding off that. Each PC would have 24GB of RAM and could dedicate most of it to a RAM disk.

How would you build up your hard drive subsystem to get, say, 1500MB/s out of your IB setup?
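As a rough sizing sketch (hedged: the per-drive numbers are assumptions, not measurements):

Code:
:: target: ~1500 MB/s sequential out of the array
:: assume ~450-500 MB/s sequential per decent SATA SSD (DC S3700 class)
:: RAID 0 of 4 drives: 4 x ~450 MB/s = ~1800 MB/s theoretical
:: minus overhead -> roughly 1400-1600 MB/s, so four drives is the right ballpark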
 

donedeal19

Member
Jul 10, 2013
So, just to post a quick benchmark. I have gathered a lot of results using different Windows OSes, I just don't have the time to sort through all of them.
Server 2012 for these; will update the client PC with more memory. If I can figure out how to take a screenshot of the desktop, I can post more of the different tests I used.

I am still trying to come up with a disk subsystem that won't break the bank. The Intel DC S3700 looks to be a drive to work with. Lots of choices, though, and I'm not sure where I'm heading with this. I guess I'll have to make a new thread for that.
If anyone has screenshots of their disk transfer speeds to share, it would help me make a choice. Thanks again.

Will update when I figure out how to get these screenshots.

 