50+TB NAS Build, vSphere Cluster and Network Overhaul


PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Silly question: did you plug in both power supplies on the 8024? On my Juniper switch - which looks shockingly similar when opened up - the fans will only go into 'normal' mode if both PSUs are plugged in. If you only plug in one then the switch thinks it is experiencing a power fault and spins up the fans to max rpm to 'protect' itself.

With the fans running slower it is still loud, but tolerable.

Just a thought...
 

nry

Active Member
Feb 22, 2013
312
61
28
No, only one! Could be a good shout, that; pretty sure they did spin down a little though.

Will give it a test either later or tomorrow as it's getting a little late now to be powering that on and waking the neighborhood up!
 

nry

Active Member
Feb 22, 2013
312
61
28
Just done a quick test with the following on the 10GbE front.

Ubuntu VM on Node 0 with Intel X520-DA2 NIC
Connected to Dell 5524

Node 1 running Ubuntu server with X520-DA2 NIC
Connected to Dell 5524

MacBook connected to Dell 5524 to run tests

iperf result: 9.09 Gbits/sec

Code:
root@node1-ubuntu:~# iperf -c 10.0.21.2
------------------------------------------------------------
Client connecting to 10.0.21.2, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[  3] local 10.0.21.20 port 60386 connected with 10.0.21.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.6 GBytes  9.09 Gbits/sec
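For anyone wanting to repeat this, the far end just needs iperf sitting in server mode before you kick off the client; something like:

Code:
# on the receiving box (10.0.21.2 in my case)
iperf -s
# then from the sending box
iperf -c 10.0.21.2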
And the current mess I seem to have found myself in :rolleyes:

 
Last edited:

nry

Active Member
Feb 22, 2013
312
61
28
And for anyone interested

The total power consumption of the following when idle is 45w

APC 1U Switched PDU with all ports on (roughly 8w)
HP ProCurve 1800-24G (19w previously)
Dell 5524

Which means the Dell is using around 18w!!!
 

nry

Active Member
Feb 22, 2013
312
61
28
Not too sure on that one, to be honest; my plan is for all iSCSI traffic to go through the 8024F anyway.

Going to experiment a little and see what each one can do, as I am still unsure how to set this all up. Could do with another 10GbE SFP NIC for that though!

Regarding my previous post on power usage, I plugged in the 8024F alongside the following:

APC 1U Switched PDU with all ports on (roughly 8w)
HP ProCurve 1800-24G (19w previously)
Dell 5524 (assuming 18w roughly)

For a grand total of 134w, so the 8024F idle with one PSU in (removed the 2nd) seems to be about 90w :eek:

I don't plan on leaving that on 24/7; there's absolutely no need, seeing as it will be used for either development purposes or accessing data on the NAS.

I've drawn up in Paint the plan I had in my head for how the two networks would interface, but now that I see it drawn out it's probably a bad idea. My thinking behind the below is that there would be an iSCSI VLAN on the 24/7 network and a corresponding one on the part-time network, with either node0 or node1 routing between the two (rough idea of the routing bit sketched below).
But the more I think about it, this is a little nuts!
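If I did go down that route, the routing box would basically just need a VLAN interface on each network and IP forwarding turned on. A minimal sketch of what that might look like on a Linux node, assuming it does the routing (VLAN IDs, interface names and subnets below are made up for the example):

Code:
# load 802.1Q support and tag a VLAN interface per network (IDs/subnets are examples only)
modprobe 8021q
ip link add link eth0 name eth0.21 type vlan id 21    # iSCSI VLAN on the 24/7 network
ip link add link eth1 name eth1.22 type vlan id 22    # iSCSI VLAN on the part-time network
ip addr add 10.0.31.1/24 dev eth0.21
ip addr add 10.0.32.1/24 dev eth1.22
ip link set eth0.21 up
ip link set eth1.22 up
# allow the node to route between the two VLANs
sysctl -w net.ipv4.ip_forward=1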

(The Supermicro node is the one I haven't purchased yet, but I'm very tempted!)



Not too sure of the best way to do this; after seeing the Dell's power usage I am tempted to lose the HP ProCurve and somehow have the 5524 as my primary 24/7 switch.

Think the best approach is going to be to set things up a few different ways and see what works best.
Looking like my goal of 100w maximum running 24/7 is going out of the window quickly! :(


EDIT:

I am also thinking about getting 3x additional 240GB OCZ SSDs for Node 0 and running them in RAID10 alongside my original one.
This would then be my primary VM store for all hosts. Need to find a suitable RAID card for this though.
Then have the larger, slower RAID10 on my NAS server using 6x Hitachi 7K3000 disks (should be 8 soon, once my warranty replacements come through :cool: )
 
Last edited:

nry

Active Member
Feb 22, 2013
312
61
28
Just tested the 8024F switch with the out-of-the-box config.

Going from a VM on the ESXi box with an X520-DA2 > Dell 5m DAC > 8024F switch > Dell 5m DAC > X520-DA2 on Node 1

9.39 Gbit/s :)

Now, anyone got any idea why my HP X240 DAC (JD096B) cables refuse to work with the 8024F switch?
They work fine with the 5524 switch :confused:

Was kind of hoping they would just work because I have been offered a bunch of 3m ones really cheap!

EDIT: If I plug the HP cable from the Dell 8024F into my HP ProCurve 1800-24G SFP socket, it works fine too, albeit at 1GbE obviously...
So the cables clearly work; the 8024F just refuses them when the other end is my 5524 switch or an X520 card :/
 
Last edited:

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Might read the details on the cable. I've got a bunch of cables; if you think they're just not compatible I can trade you, but if they're defective :( no dice.

Might want to see what the switch thinks with the log or transceiver info
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I've seen a lot of switches that will reject an active SFP/SFP+ module that isn't the right "brand". HP switches are famous for only working with HP-branded SFP/SFP+ modules (or knock-offs that spoof the IDs). Intel NICs will only work with Intel-branded SFP/SFP+ modules.

But I've never seen a switch or NIC that will reject a passive DAC cable before. AFAIK, the switch can't even read any brand info at all from a passive DAC...
 

nry

Active Member
Feb 22, 2013
312
61
28
Cheers for the input, will have a look in the logs when I get home tonight.
If they work at 1G speed, there must be something off somewhere.

Might have to check for software updates too!
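Might also have a look at what the X520 itself reads off the cable's EEPROM from the Linux side; something along these lines should do it, assuming the 10GbE port shows up as eth2 (interface name is just an example):

Code:
# driver/firmware info for the X520 port (eth2 is an example name, check ip link first)
ethtool -i eth2
# dump the SFP+/DAC module EEPROM as seen by the NIC
ethtool -m eth2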
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Uh yeah, passive DACs have EEPROMs too!! I've got 2910s and 2920s that won't take anything but a B or C level cable. The X240 is an H3C (Huawei-3Com) cable, so the ProCurve 2910s/2920s aren't feeling them at all.

Even though a DAC is just two pairs of wire (4 wires), there is still a crypto chip for HP.

The only problems I've had were at 5 meters; some NICs were sensitive.

BTW, you do know to always check your cards with a high-power magnifier or scanner? About 50% of the cards on eBay have physical damage.

As I said before, the whole networking market is a price-fixing scam and everyone is in on it. When was the last time 8-year-old technology stayed this expensive for no reason? $1100 new, $100 used... think about it.
 

nry

Active Member
Feb 22, 2013
312
61
28
I do check cards for obvious damage, but never too closely to be honest; I usually put a card in a testing box, and if it works and doesn't blow up then it's probably safe for use in the main systems. Might not be the best approach but it seems to have worked for the last few years :p

Well, had a look at the logs and the 8024F says the following for the X240 DAC when the other end is connected to either the Dell 5524 or an X520-DA card:

Code:
	AUG 15 03:19:19	DRIVER	Invalid Transceiver present in the 0/11 (slot/port)
Connecting the other end to the HP ProCurve instead:

Code:
Notice	AUG 15 03:21:36	TRAPMGR	SFP inserted in Te1/0/5
Notice	AUG 15 03:21:34	TRAPMGR	Te1/0/5 is transitioned from the Forwarding state to the Blocking state in instance 0
Notice	AUG 15 03:21:34	TRAPMGR	Link Up: Te1/0/5

Looking at the version I could probably do with updating it

Code:
 Images currently available on Flash

unit  image1       image2       current-active     next-active
----- ------------ ------------ ----------------- -----------------

1     5.0.1.3      A.1.1.9      image1             image1
Whereas the latest appears to be 5.1.1.7a17.

And in the release notes for 5.1.0.1 we have:
"Support for Additional transceivers/optics"
Which hopefully will solve my problem :)

Need to investigate a little more before proceeding with anything; not even sure how to update it currently!
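From what I can gather it's a TFTP copy into the inactive image slot and then flipping the next-active image over, roughly along these lines (server IP and filename are made up, and the syntax is from memory, so I'll check it against the 8024F CLI guide before trying it):

Code:
! TFTP server IP and firmware filename below are made up for the example
copy tftp://10.0.20.5/powerconnect_8024_v5.1.1.7.stk image
boot system image2
reload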
 
Last edited:

nry

Active Member
Feb 22, 2013
312
61
28
Well, updated to the latest firmware... still no luck with the HP DAC :(

Code:
	AUG 15 04:02:17	DRIVER	Invalid Transceiver present in the 0/6 (slot/port)
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Sounds like you need to invoke the lifetime warranty on the ProCurve ;)

I'd be glad to help you. Since I have X240s (1 meter) and HP ProCurves (lots), I can easily RMA them, or trade you for something else.

Would be glad to help a fellow STH'er
 

nry

Active Member
Feb 22, 2013
312
61
28
Thing is, the cables do work from the 5524 to an X520 NIC; it's just the 8024F switch not playing along.

Thanks for the offer, but as you're on the other side of the world I think postage and any import taxes wouldn't make it worthwhile. My best bet is to just keep an eye out on eBay for some Dell/Cisco cables (similar to the working one I have).
 

nry

Active Member
Feb 22, 2013
312
61
28
Networking

My possible solution to make the 8024F switch a little bit quieter...



Not too sure how well this will work, or how well it will cope with a lower fan RPM; will have to test and find out.

Node 0

Some more toys arrived

An M1015 RAID card and another OCZ 240GB SSD. Going to see how RAID1 performs and maybe look at RAID10 with these SSDs. I would like 4x Samsung 840 Pro 256GB, but seeing as I already have OCZ drives, I may as well stick with them for now.
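If I end up building the array from the OS rather than the controller BIOS, MegaCli should be able to do it; a rough sketch for the RAID1 pair below (the enclosure:slot numbers are made up, so I'd pull the real ones from -PDList first, and the binary may be MegaCli64 depending on the install):

Code:
# list physical disks to get the real enclosure:slot IDs (the ones below are made up)
MegaCli -PDList -aAll
# create a RAID1 virtual drive from the two SSDs
MegaCli -CfgLdAdd -r1 [252:0,252:1] -a0
# confirm the new logical drive
MegaCli -LDInfo -Lall -aAll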

 
Last edited:

nry

Active Member
Feb 22, 2013
312
61
28
Spoke to Dell regarding the 8024F and the HP DACs; support couldn't find any way of getting them to work. Given up hope on them, so I plan on just keeping an eye out for more Dell DACs on eBay.

Anyway got some new toys

3x 15m fibre
3x 5m fibre
2x Intel 10Gb/s SFP+ Modules
2x Intel X520-DA2 PCIe cards with low profile bracket
32GB ECC DDR3 1600MHz RAM for Node 0 :D



 
Last edited:

nry

Active Member
Feb 22, 2013
312
61
28
Workstation 10GbE

So I have been spending some money again :rolleyes:

Finally have 10GbE on my iMac. I did go ever so slightly budget for this one, but considering it was all new kit or manufacturer-refurbished with warranty, and the seller of the PCIe card said I could return it for a full refund if it wasn't compatible, I couldn't complain!

Sonnet Echo Express SE. I did want a dual-slot one so I could put a USB3 card in, but at an extra £146 for that version I figured I would live without USB3 :p



Myricom 10G-PCIE2-8B2-2S and 2x 10G-SFP-SR modules



Card in place



And in its new temporary home :)



Installation was so easy: put everything together, plugged it in, installed the Myricom driver, done. Probably spent more time writing up this post!

Initial tests though, I am slightly disappointed. Using the following setup:
Myricom PCIe
Myricom SFP Module
5m LC-LC fiber 50/125 cable
Intel SFP module
Dell 8024F

Code:
➜ ~ iperf -c 10.0.21.105
------------------------------------------------------------
Client connecting to 10.0.21.105, TCP port 5001
TCP window size: 129 KByte (default)
------------------------------------------------------------
[  4] local 10.0.21.190 port 60854 connected with 10.0.21.105 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  4.21 GBytes  3.61 Gbits/sec
➜ ~ iperf -c 10.0.21.105 -t 30
------------------------------------------------------------
Client connecting to 10.0.21.105, TCP port 5001
TCP window size: 129 KByte (default)
------------------------------------------------------------
[  4] local 10.0.21.190 port 60855 connected with 10.0.21.105 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-30.0 sec  12.4 GBytes  3.56 Gbits/sec
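Next job is to work out whether it's just a tuning thing before blaming the hardware; going to try a bigger TCP window and a few parallel streams, something like this (values are just a starting point, not tested yet):

Code:
# bump the TCP window and run 4 parallel streams for 30 seconds
iperf -c 10.0.21.105 -w 512k -P 4 -t 30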
 
Last edited:

nry

Active Member
Feb 22, 2013
312
61
28
Node 0

Finally got round to installing the M1015 RAID controller into Node 0 today.

Wasn't really expecting any performance boost with 2x OCZ Agility 3 in RAID1. Would have thought a single OCZ drive would do slightly better than this though! Maybe I should have gone for Samsung 840 Pros.

All tests were done inside an Ubuntu VM, which could possibly be a bottleneck?

Single SSD
Code:
root@ubuntu:~# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   19440 MB in  2.00 seconds = 9727.86 MB/sec
 Timing buffered disk reads: 958 MB in  3.00 seconds = 318.81 MB/sec
root@ubuntu:~#

/dev/sda:
 Timing cached reads:   19546 MB in  2.00 seconds = 9781.33 MB/sec
 Timing buffered disk reads: 962 MB in  3.01 seconds = 319.98 MB/sec
root@ubuntu:~#

SSDs in RAID1

Code:
root@ubuntu:~# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   19466 MB in  2.00 seconds = 9741.47 MB/sec
 Timing buffered disk reads: 950 MB in  3.00 seconds = 316.36 MB/sec
root@ubuntu:~# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   19460 MB in  2.00 seconds = 9738.49 MB/sec
 Timing buffered disk reads: 950 MB in  3.00 seconds = 316.45 MB/sec
root@ubuntu:~#
I have tried to use bonnie but that is a learning curve on its own, so I may have to leave it for a future time.
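In the meantime a quick dd run with direct I/O should at least take the VM's page cache out of the picture; something like the below (file path and sizes picked arbitrarily):

Code:
# sequential write, bypassing the page cache (path/size are arbitrary)
dd if=/dev/zero of=/mnt/ssd/ddtest bs=1M count=4096 oflag=direct
# drop caches, then read it back with direct I/O
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/ssd/ddtest of=/dev/null bs=1M iflag=direct
rm /mnt/ssd/ddtest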

Any suggestions to improve performance would be more than welcome :D
 
Last edited: