50+TB NAS Build, vSphere Cluster and Network Overhaul


vikingboy

New Member
Jun 17, 2014
Did you make sure flow control is enabled too? 10GbE will overrun a slower receiver and cause drops, which will massively affect throughput.
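
For reference, on the Linux side pause-frame flow control can be checked and toggled with ethtool (the interface name eth0 is just a placeholder):

Code:
# show current flow control (pause frame) settings
ethtool -a eth0
# enable receive and transmit pause frames
sudo ethtool -A eth0 rx on tx on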

I decided to go with the Small Tree card, primarily because of the support I think they can offer in optimising the cards for my intended use.
I've got the newer Thunderbolt 2 / late '13 MBP here and was thinking of picking up some other cards to do some back-to-back perf tests, to see if there are any noticeable differences.
 

nry

Active Member
Feb 22, 2013
I've tried with and without flow control, and with varying MTUs.
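
If anyone wants to replicate the MTU side of that on OS X, it's set per interface (en0 here is just an example, and the switch ports need jumbo frames enabled too):

Code:
sudo ifconfig en0 mtu 9000
# confirm it took
ifconfig en0 | grep mtu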

Would be good to see your results with the TB2. Have you gone for the Small Tree enclosure or just the NIC? Whereabouts in the UK are you based? Could maybe arrange to send you one of my Myricom cards for testing (just hope I'd get it back :eek:)
 

nry

Active Member
Feb 22, 2013
OS X 10GbE

Been trying to improve 10GbE performance on my iMac this evening. I've been working with a number of VMs and large datasets over the last few weeks, and even a slight speed boost on file transfers would have been nice; I just haven't had the time to experiment until now.

Found some pretty informative posts for OS X Mavericks here: Performance Tuning the Network Stack on Mac OS X Part 2 | Rolande's Ramblings
And just in case his blog ever disappears, his suggested settings are:

Code:
kern.ipc.somaxconn=2048
net.inet.tcp.rfc1323=1
net.inet.tcp.win_scale_factor=4
net.inet.tcp.sendspace=1042560
net.inet.tcp.recvspace=1042560
net.inet.tcp.mssdflt=1448
net.inet.tcp.v6mssdflt=1412
net.inet.tcp.msl=15000
net.inet.tcp.always_keepalive=0
net.inet.tcp.delayed_ack=3
net.inet.tcp.slowstart_flightsize=20
net.inet.tcp.local_slowstart_flightsize=9
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.icmplim=50
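These take effect immediately when set with sysctl -w, but won't survive a reboot; as far as I know, the usual way to persist them on Mavericks is /etc/sysctl.conf, which is read at boot. A minimal sketch:

Code:
# apply a single setting right away (root required)
sudo sysctl -w kern.ipc.somaxconn=2048
# persist the same setting across reboots
echo "kern.ipc.somaxconn=2048" | sudo tee -a /etc/sysctl.conf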
Thunderbolt Bridge

First up I returned to the Thunderbolt bridge between my iMac and MacBook Pro. This gets very mixed reviews on the internet, with some users reporting less than gigabit speeds! But I figured it was worth spending a little time on, seeing as I quite often need to copy VMs and the like across from my NAS to the laptop when I'm working away.

Applied the above performance tweaks and ran some tests.

iperf results were pretty disappointing even across a range of configurations, typically 2.1-2.2Gbits/sec.

Set up a 6GB RAM disk on both machines and created a blank 5.37GB file. Copying it over Apple's AFP sharing service took 23 seconds, which works out to roughly 234MB/s, or about 1.9Gbits/sec, so that pretty much matches my iperf results.
Tried the same test using SMB but the copy simply failed due to an unexpected error!
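
For anyone wanting to repeat the RAM disk setup on OS X, the stock tools can do it; the ram:// size is in 512-byte sectors, so 6GB is 12582912 sectors:

Code:
# create and mount a 6GB HFS+ RAM disk at /Volumes/RAMDisk
diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://12582912)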

Figured I'd try the Blackmagic speed test utility over SMB.



While these are not quite 10Gb/s speeds, I have a feeling I'm not going to see much better over Thunderbolt. I do wonder if the older-generation TB1 is the limiting factor here.

10GbE Network on OS X

Revisiting the performance issues on my iMac's connection to the NAS: iperf is only returning about 3.6Gbits/sec, which is around 429MB/s. That's almost enough to max out my SSD, so to be honest I'm more than happy with it.

Testing one of my iSCSI volumes gives the following. Not the best write performance!



Copying a 50GB image file from my newly upgraded Ubuntu NAS, now running 14.04 LTS. I upgraded for the native SMB3 support, only to find out OS X is still limited to SMB2!
I see varying speeds from the NAS to my local machine: as low as 90MB/s and as high as 292MB/s (note these were only measured with a combination of iStat Menus and Activity Monitor on OS X, and nmon on the NAS).

I know this probably isn't the most accurate way of measuring it, but it shows the inconsistent speeds.
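
A slightly more repeatable read test would be timing the transfer directly, e.g. with dd against the mounted share (the path and filename here are placeholders); OS X's dd prints an average bytes/sec figure when it finishes:

Code:
dd if=/Volumes/nas/50gb.img of=/dev/null bs=1m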



And copying the file back to the NAS



Seems to be around the 280-290MB/s mark in each direction.

Also tried copying a 10GB file from the NAS to a RAM disk: same speeds. Going to try installing Windows 8.1 on a box tomorrow and see how well that performs using SMB3; I'm expecting the 500-600MB/s mark I'd usually see with sequential server-to-server transfers!
 

nry

Active Member
Feb 22, 2013
Windows 8.1 Benchmark
Pulled out one of my Xeon boxes (E3-1245, 32GB DDR3-1600 and a dual-port Intel X520-DA2) and installed Windows 8.1 to run some tests.

First up was iperf with no performance tweaks besides 4 parallel streams for 30 seconds; jumbo frames are not enabled either!

Code:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.5.15 port 5001 connected with 10.0.5.202 port 50204
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-30.2 sec  5.48 GBytes  1.56 Gbits/sec
[  4]  0.0-30.2 sec  5.49 GBytes  1.56 Gbits/sec
[  6]  0.0-30.2 sec  5.49 GBytes  1.56 Gbits/sec
[  7]  0.0-30.2 sec  5.47 GBytes  1.56 Gbits/sec
[SUM]  0.0-30.2 sec  21.9 GBytes  6.24 Gbits/sec
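For reference, the client side of that run would have been just the stock command with four parallel streams, along the lines of:

Code:
iperf -c 10.0.5.15 -t 30 -P 4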
Then tried with my usual -t 30 -w 768k -l 256k, which brought things up to over 8Gbits/sec:

Code:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.5.15 port 5001 connected with 10.0.5.202 port 50211
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-30.0 sec  29.1 GBytes  8.33 Gbits/sec
[  5] local 10.0.5.15 port 5001 connected with 10.0.5.202 port 50217
[  5]  0.0-30.0 sec  29.5 GBytes  8.43 Gbits/sec
Figured I'd do some Samba performance testing, so I set up a 24GB RAM disk and gave it a quick benchmark in ATTO to see what silly speeds it achieves.
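
I won't go into which Windows RAM disk driver to use; as one option, the free ImDisk tool can create and format one from an elevated prompt (R: is just an example drive letter):

Code:
imdisk -a -s 24G -m R: -p "/fs:ntfs /q /y"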



Unfortunately I couldn't run ATTO against a network disk, so I just used Windows' reporting of file transfer speeds while also monitoring nmon on the NAS.

First up is copying a 17GB MKV file from my RAID10 volume, consisting of 6x Hitachi 3TB 7.2K drives, which under previous testing on the server does up to around 700MB/s.

Pretty steady 640MB/s here!



Next up, copying from my pretty full RAID6 volume, consisting of 8x Hitachi 3TB 5.4K drives, which under previous testing on the server does around 650MB/s.

Slightly lower than I was expecting, but the performance of this volume has never been too brilliant.


Then copied a Blu-ray rip, which performed closer to what I was expecting.



Not even going to bother measuring IOPS, as that's not the purpose of these drives; I use them as quick dumping grounds for VMs and backups.

Until I hear more on vikingboy's OS X results, that's probably the end of this round of testing, as I'm happy with the performance of everything but my workstation.
 

vikingboy

New Member
Jun 17, 2014
Thanks for the update. My cards and cables are due tomorrow, so fingers crossed I'll have some numbers for you at the weekend. I'll try to replicate your tests for comparison.
 

nry

Active Member
Feb 22, 2013
Looking forward to seeing your results; really curious how well TB2 performs compared to the older generation.

As I haven't got much work to do today, I figured I'd experiment a little more.

OS X SMB performance

Reading various threads, improving SMB performance within OS X seems to be a lost cause!
Another issue which drives me a little insane: when connecting to any Samba server, the first file transfer freezes for about a minute roughly 30 seconds in. This only happens on OS X and seems to be a commonly reported bug in Apple's wonderful SMB2 stack.

Also one user reported disabling delayed_ack helped.

Code:
sudo sysctl -w net.inet.tcp.delayed_ack=0
This seems to improve iperf results by about 1Gbit/s, as I was only seeing around 4.6Gbits/sec before:

Code:
➜  ~  iperf -c 10.0.5.15 -t 30 -w 768k -l 256k
------------------------------------------------------------
Client connecting to 10.0.5.15, TCP port 5001
TCP window size:  769 KByte (WARNING: requested  768 KByte)
------------------------------------------------------------
[  4] local 10.0.5.200 port 62903 connected with 10.0.5.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-30.0 sec  19.0 GBytes  5.44 Gbits/sec
➜  ~  iperf -c 10.0.5.15 -t 30 -w 768k -l 256k -P 4
------------------------------------------------------------
Client connecting to 10.0.5.15, TCP port 5001
TCP window size:  769 KByte (WARNING: requested  768 KByte)
------------------------------------------------------------
[  4] local 10.0.5.200 port 62954 connected with 10.0.5.15 port 5001
[  5] local 10.0.5.200 port 62955 connected with 10.0.5.15 port 5001
[  7] local 10.0.5.200 port 62957 connected with 10.0.5.15 port 5001
[  6] local 10.0.5.200 port 62956 connected with 10.0.5.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-30.0 sec  4.74 GBytes  1.36 Gbits/sec
[  5]  0.0-30.0 sec  4.69 GBytes  1.34 Gbits/sec
[  7]  0.0-30.0 sec  4.69 GBytes  1.34 Gbits/sec
[  6]  0.0-30.0 sec  4.74 GBytes  1.36 Gbits/sec
[SUM]  0.0-30.0 sec  18.9 GBytes  5.40 Gbits/sec
Figured I'd give the Blackmagic speed test another go against my Samba share on the RAID10 volume.



Transferring files between my RAM disk and the same volume gives the same results using Apple's SMB2, so somehow I've managed to improve write performance but not read performance. Going to try setting up a RAM disk on the NAS to take the RAID array out of the equation.
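
On the Ubuntu side that's easy with tmpfs (size and mount point are placeholders); the mount point can then be exported as a Samba share:

Code:
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=10g tmpfs /mnt/ramdisk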

From reading various forums, some users suggest forcing the older SMB1 stack by connecting to the server using cifs://1.2.3.4
My transfer speeds that way were around 135MB/s.
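
For anyone wanting to try this, it's done from Finder's Connect to Server dialog (the address and share name below are placeholders):

Code:
# Finder → Go → Connect to Server (Cmd-K), then enter:
cifs://10.0.5.15/media     # forces the SMB1 stack
smb://10.0.5.15/media      # default, negotiates SMB2 on Mavericks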

Quick test again on my iSCSI volume (on the same RAID array). I don't find this too reliable though, as I'm using a very old version of globalSAN on my desktop, and I don't really want to pay for iSCSI software when I'm not sure I need it!

 

nry

Active Member
Feb 22, 2013
Time for some updates! A few months ago I went on a bit of a spending spree, buying myself an Xbox One, an Apple TV, an 8x8 HDMI matrix and an almost top-spec MacBook Pro Retina (only the 512GB SSD). :)



Matrix, Xbox and Apple TV installed in 'rack'



With regards to the rest of the setup, I haven't upgraded or changed much recently as everything had been working pretty well, up until a couple of weeks ago, when everything started going wrong!
  • 50mm fans failing in my primary server
  • Running out of memory on primary server
  • Running low on storage on my primary server
  • Areca card in NAS getting stuck on 'Waiting for firmware' during boot 75%~ of the time
  • Areca card dropping drives from my RAID10 array
  • Grinding fans in the NAS, no trapped cables
  • Lack of storage with fast IOPS for DB servers
  • Concern over lack of backups of media on the NAS
I'm hoping that's all the issues for now, as that's quite a bit to be dealing with. To start with, I plan to switch out the small short-depth 2U case I have for the 2U with 6 drive bays (I'll need to investigate buying another 2U case for the computer currently in there). That case can hold 3x 80mm fans, which I hope won't fail as quickly as the 50mm ones.
Also bought another 16GB of Kingston ECC memory, brand new for £100, and some Arctic F8 PWM fans; I did consider Noctua, but these are a fraction of the price for the same spec on paper.
Also managed to snag a brand new Synology NAS with a new 6TB WD Red for £150 on eBay!

 

nry

Active Member
Feb 22, 2013
It does, but unfortunately it hasn't got HDBaseT, so I don't use that aspect of it and have separate HDBaseT extenders instead.

Wyrestorm do a range with HDBaseT, but they come with a 'nice' price tag too ;) This bright orange box was pocket change in comparison!
 

Chuntzu

Active Member
Jun 30, 2013
Nice matrix. I am doing the same thing with a separate matrix and HDBaseT extenders.
 

nry

Active Member
Feb 22, 2013
Forgot to post this a few weeks back: Node 0, the primary server, has been moved over to a larger case with 3x Arctic F8 PWM fans. I have another fan to fit; I just haven't had the time.



Full spec:
  • Xcase 206 HS
  • Antec Earthwatts 380W
  • Supermicro X9SCM-F
  • Xeon E3-1220v2
  • 4x Kingston DDR3 ECC
  • Intel X520-DA2
  • Samsung Evo 840 1TB
  • WD Red 6TB
  • 4x Arctic Cooling F8 PWM fans
It now runs much quieter and I have plenty of storage space :)
 

nry

Active Member
Feb 22, 2013
Second update. I got a pretty good deal (at least I think so) on a Supermicro 6026TT-HTRF; a slight impulse purchase, but I'm hoping it will come in useful.





All 4 nodes have 2x L5520 CPUs; 3 have 48GB RAM (12x 4GB) and the 4th has 24GB (12x 2GB).

It sounds like a small jet when booting but is fairly quiet once idle. I was trying to find the thread on here about fan options for the Dell C6100 but couldn't; wondering if the same fans would suit this case.

These are the current fans



Some other photos...







Noticed some of the heatsinks are installed so that air can't flow through the fins easily, which probably won't help keep temperatures down.
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Noticed some of the heatsinks are installed so that air can't flow through the fins easily, which probably won't help keep temperatures down.
That is weird... I would definitely pull those heatsinks off, clean them, apply some new thermal paste, and put them back on so the fans can blow air across all the heatsink fins.
 

nry

Active Member
Feb 22, 2013
312
61
28
Not sure I have that much thermal paste left! Was thinking about redoing them all while I'm at it.
 

nry

Active Member
Feb 22, 2013
312
61
28
So after a good hour I have replaced the thermal paste on all 8 CPUs and refitted the heatsinks in the correct orientation.

Decided to take photos of one of the nodes as I did it; this was one of the better ones.


CPUs all cleaned



Arctic Silver 5 applied



Heatsinks installed correctly



And the whole lot :)

 