Xpenology DSM 5.0 with Mellanox 10gb


Marsh

Moderator
I didn't want to hijack MiniKnight's thread, so I am starting a new one.

My home lab setup
Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
AMD A4-3200, MSI A75 motherboard, 4GB mem, 6 x 3TB Seagate 7200rpm in raid 5, 1 Mellanox ConnectX (not ConnectX-2) 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

Let me start by saying that I believe benchmark testing is like sugar: it gets you high but isn't useful on its own. Its real purpose is comparison.

I ran the ATTO disk benchmark because it is easy.
Map network drive with 10gb link
Transfer size 128KB to 8192KB
Total Length 2GB (too small a file size to be realistic, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with Queue Depth 4

128KB write 419688 (419MB/s), read 1125212 (1125MB/s)
256KB write 414MB/s, read 1103MB/s
512KB write 492MB/s, read 1136MB/s
1024KB write 390MB/s, read 1125MB/s
2048KB write 452MB/s, read 1096MB/s
4096KB write 456MB/s, read 1084MB/s
8192KB write 444MB/s, read 1079MB/s
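
Side note on units, since ATTO's raw readout confuses people: the raw figures appear to be KB/s, so the MB/s values in parentheses are just the raw numbers divided by 1000. A minimal Python sanity check on the 128KB row above:

```python
# ATTO reports raw throughput in KB/s; the MB/s figures in parentheses
# above are just the raw values divided by 1000.
raw_write_kbps = 419_688   # 128KB-transfer write, raw ATTO value
raw_read_kbps = 1_125_212  # 128KB-transfer read, raw ATTO value

print(f"write: {raw_write_kbps // 1000} MB/s")  # -> write: 419 MB/s
print(f"read:  {raw_read_kbps // 1000} MB/s")   # -> read: 1125 MB/s
```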
 

Marsh

Moderator
ATTO benchmark does not test or show IOPS.
I could run some IOMeter tests. HD Tune does not test a mapped network drive, so I would have to use an iSCSI disk, and then it would not be an apples-to-apples comparison.

I don't pay too much attention to benchmarks or tweaking. The builds are so cheap that I just use more servers. I have a bunch of Mellanox 40gb cards sitting on the shelf because I am happy with the low-end Xpenology server performance. The key for me is an SSD raid plus a 10gb network link.

In my lab, I can run 6 to 25 Linux or Windows VMs against an Xpenology box with only 4GB of memory without issue.
I typically run the Xpenology server that hosts the working VMs on raid zero (2 x 240GB SSD) or raid 10 with 4 x SSD; the "master VM template" resides on raid 5 hard disks for backup. I back up / replicate the VMs I care about to the hard disk raid. Most of the time the VMs are deployed with an automated deployment method, so rebuilding is fast.

I have stopped buying and using raid cards nowadays; I use the SATA ports on the motherboard, and most consumer motherboards give me 6 ports (6 x 3TB or 6 x 4TB). If I want more, I'll use a Supermicro or Intel enterprise motherboard that gives me 12 to 20 SATA ports.

I do have 2 HP X1600s (P212 controller with 6 x 3TB and 2 SSDs) running Windows 2012R2 Storage Spaces with tiering in case I need more IOPS.

I'll post some IOMeter tests later; let me know what you would like to see.
 

Marsh

Moderator
Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
AMD A4-3200, MSI A75 motherboard, 4GB mem, 6 x 3TB Seagate 7200rpm in raid 5, 1 Mellanox ConnectX (not ConnectX-2) 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

Map network drive with 10gb link
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf

| TEST NAME | Avg Resp. Time (ms) | Avg IOs/sec | Avg MB/sec | % CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 1.96 | 29882 | 933 | 0% |
| RealLife-60%Rand-65%Read | 177.01 | 335 | 2 | 0% |
| Max Throughput-50%Read | 3.68 | 15739 | 491 | 0% |
| Random-8k-70%Read | 217.06 | 275 | 2 | 0% |
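
For context on what each row is doing: MB/s and IOPS are tied together by the access size, so you can back the block size out of the table. A rough Python check, assuming this is the commonly shared OpenPerformanceTest.icf (which, as far as I know, uses 32KB blocks for the Max Throughput tests and 8KB for the RealLife/random ones):

```python
# Implied block size per test: MB/s ~= IOPS * block_size, so
# block_KB ~= MB/s * 1000 / IOPS. Rounding in the MB/s column makes
# the small rows fuzzy, but the 32KB / 8KB pattern shows through.
tests = {
    "Max Throughput-100%Read":  (29882, 933),
    "RealLife-60%Rand-65%Read": (335, 2),
    "Max Throughput-50%Read":   (15739, 491),
    "Random-8k-70%Read":        (275, 2),
}

for name, (iops, mbps) in tests.items():
    print(f"{name}: ~{mbps * 1000 / iops:.0f} KB per I/O")
# -> ~31 KB, ~6 KB, ~31 KB, ~7 KB
```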

Next, I'll post results with 2 x old 128GB SATA2 SSDs in raid zero.
 

Marsh

Moderator
Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G630 CPU, ECS H61 motherboard, 4GB mem, 2 x OCZ Vertex 2 128GB SSDs in raid zero, 1 Mellanox ConnectX (not ConnectX-2) 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

Map network drive with 10gb link
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf

| TEST NAME | Avg Resp. Time (ms) | Avg IOs/sec | Avg MB/sec | % CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 2.11 | 27347 | 854 | 0% |
| RealLife-60%Rand-65%Read | 5.82 | 10172 | 79 | 0% |
| Max Throughput-50%Read | 3.29 | 17904 | 559 | 0% |
| Random-8k-70%Read | 6.79 | 8739 | 68 | 0% |
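
To put this next to the previous post's hard disk numbers, the same RealLife test jumped from 335 IOPS on the 6 x 7200rpm raid 5 to 10172 IOPS on two old SATA2 SSDs. Quick math:

```python
# Random IOPS, RealLife-60%Rand-65%Read, from the two IOMeter tables:
hdd_raid5_iops = 335    # 6 x 3TB 7200rpm in raid 5 (previous post)
ssd_raid0_iops = 10172  # 2 x OCZ Vertex 2 in raid zero (this post)

print(f"~{ssd_raid0_iops / hdd_raid5_iops:.0f}x the random IOPS")  # ~30x
```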
 

Marsh

Moderator
Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G630 CPU, ECS H61 motherboard, 4GB mem, 2 x OCZ Vertex 2 128GB SSDs in raid zero, 1 Mellanox ConnectX (not ConnectX-2) 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

I ran the ATTO disk benchmark because it is easy.
Map network drive with 10gb link
Transfer size 128KB to 8192KB
Total Length 2GB (too small a file size to be realistic, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with Queue Depth 4

128KB write 575090 (575MB/s), read 1084813 (1084MB/s)
256KB write 582MB/s, read 1130MB/s
512KB write 604MB/s, read 1150MB/s
1024KB write 585MB/s, read 1161MB/s
2048KB write 564MB/s, read 1151MB/s
4096KB write 586MB/s, read 1130MB/s
8192KB write 581MB/s, read 1177MB/s

Next up: 4 x 256GB newer-generation SATA3 SSDs in raid zero.
 

Patrick

Administrator
Staff member
This looks like a lot of fun. It helps quite a bit with driver compatibility too, I would guess.

Have you tried at higher queue depths?
 

Marsh

Moderator
I tried the max queue depth in the ATTO benchmark; it did not make a bit of difference.
For the next ATTO benchmark test, I'll bump it up to a max of 10.
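
For what it's worth, here is a rough Little's Law sketch of why extra queue depth stops helping once the link itself is the bottleneck. The ~1180MB/s figure is an assumed, eyeballed value from the 128KB ATTO reads earlier in the thread:

```python
# Little's Law: outstanding_IOs = IOPS * avg_latency. At 128KB transfers
# a saturated 10gb link moves roughly 1180 MB/s, i.e. ~9200 IO/s.
block_kb = 128
link_mbps = 1180                    # assumed link-limited throughput
iops = link_mbps * 1000 / block_kb  # ~9219 IO/s

for qd in (4, 10):
    latency_ms = qd / iops * 1000   # latency implied by keeping QD in flight
    print(f"QD{qd}: ~{latency_ms:.2f} ms per I/O at the same ~{iops:.0f} IO/s")
# Throughput stays pinned at the link; only per-I/O latency grows.
```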
More later
 

Marsh

Moderator
Change: new hardware configuration for the Xpenology server; added an ATTO disk benchmark run with Queue Depth 10.

Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 2 x Seagate 600 Pro 256GB SSDs in raid zero, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

I ran the ATTO disk benchmark because it is easy.
Map network drive with 10gb link
Transfer size 128KB to 8192KB
Total Length 2GB (too small a file size to be realistic, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with Queue Depth 4

128KB write 918604 (918MB/s), read 1172045 (1172MB/s)
256KB write 1060MB/s, read 1178MB/s
512KB write 1061MB/s, read 1144MB/s
1024KB write 1071MB/s, read 1184MB/s
2048KB write 948MB/s, read 1183MB/s
4096KB write 986MB/s, read 1177MB/s
8192KB write 898MB/s, read 1179MB/s

Overlapped I/O with Queue Depth 10
128KB write 962854 (962MB/s), read 1169MB/s
256KB write 1063MB/s, read 1187MB/s
512KB write 1053MB/s, read 1181MB/s
1024KB write 1065MB/s, read 1184MB/s
2048KB write 1071MB/s, read 1186MB/s
4096KB write 988MB/s, read 1184MB/s
8192KB write 917MB/s, read 1182MB/s
 

Marsh

Moderator
May 12, 2013
2,647
1,498
113
Change: new hardware configuration for the Xpenology server; added an ATTO disk benchmark run with Queue Depth 10.

Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 2 x Seagate 600 Pro 256GB SSDs in raid zero, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

Map network drive with 10gb link
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf

| TEST NAME | Avg Resp. Time (ms) | Avg IOs/sec | Avg MB/sec | % CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 1.32 | 33675 | 1052 | 0% |
| RealLife-60%Rand-65%Read | 1.33 | 39714 | 310 | 0% |
| Max Throughput-50%Read | 2.17 | 25816 | 806 | 0% |
| Random-8k-70%Read | 1.23 | 42256 | 330 | 0% |

From the test results: reads saturated the 10gb link with 2 x SATA3 SSDs in raid zero, while writes are limited to a single SSD's write speed.
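
As a sanity check on "saturated": 10gb raw is 1250MB/s, and after typical Ethernet/IP/TCP header overhead you land right around the ~1180MB/s these reads show. Back-of-envelope math, assuming a standard 1500-byte MTU (jumbo frames would shave the overhead a bit):

```python
# 10GbE line-rate math, assuming a standard 1500-byte MTU.
raw_mbps = 10e9 / 8 / 1e6          # 1250 MB/s before any overhead

# Per frame: 1460B of TCP payload out of 1500B (20B IP + 20B TCP headers),
# plus ~38B of Ethernet preamble/header/FCS/inter-frame gap on the wire.
payload_mbps = raw_mbps * 1460 / (1500 + 38)

print(f"raw:         {raw_mbps:.0f} MB/s")      # 1250 MB/s
print(f"TCP payload: {payload_mbps:.0f} MB/s")  # ~1187 MB/s
```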

Next up, 4 x SSD in raid zero
 

Marsh

Moderator
Changed: 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid zero

Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid zero, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

I ran the ATTO disk benchmark because it is easy.
Map network drive with 10gb link
Transfer size 128KB to 8192KB
Total Length 2GB (too small a file size to be realistic, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with Queue Depth 10

128KB write 1127997 (1127MB/s), read 1101004 (1101MB/s)
256KB write 1136MB/s, read 1128MB/s
512KB write 1144MB/s, read 1178MB/s
1024KB write 1149MB/s, read 1166MB/s
2048KB write 1157MB/s, read 1175MB/s
4096KB write 1150MB/s, read 1177MB/s
8192KB write 1154MB/s, read 1169MB/s
 

Marsh

Moderator
Changed: 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid zero

Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid zero, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

Map network drive with 10gb link
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf

| TEST NAME | Avg Resp. Time (ms) | Avg IOs/sec | Avg MB/sec | % CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 1.45 | 32736 | 1023 | 0% |
| RealLife-60%Rand-65%Read | 1.03 | 51880 | 405 | 0% |
| Max Throughput-50%Read | 1.80 | 30595 | 956 | 0% |
| Random-8k-70%Read | 1.00 | 52148 | 407 | 0% |

From the result: 4 x SSD raid zero did not scale up random writes well; quick scaling math below.
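
Rough scaling math from the two raid-zero IOMeter runs. One plausible culprit for the weak scaling is the mixed SSD models, since a stripe's performance tends to be dragged toward its slowest member:

```python
# Random-8k-70%Read IOPS: 2 SSDs vs 4 SSDs in raid zero (tables above).
iops_2ssd = 42256
iops_4ssd = 52148

print(f"{iops_4ssd / iops_2ssd:.2f}x from doubling the drives")  # 1.23x, not 2x
```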

Next up, 4 x SSD in raid 5
 

Hank C

Active Member
That's incredible IOPS and MB/s throughput. Does the software Synology support SSD cache?
 

Marsh

Moderator
No SSD cache, just some cheap old hardware from years ago.
It is a no-brainer to install and configure a new Xpenology host (5 to 10 minutes of work). It will run on almost any hardware.
I am pretty sure everyone on this forum has old hardware and old SSDs laying around.
I have been building an Xpenology configuration with 2 HDs in raid zero, 2 SSDs in raid zero, and 1 Mellanox EN 10gb card in mini-ITX format. It lets me run a three-node ring.
I can move the tiny Xpenology host anywhere, thus saving money on a 10gb switch.
I am still looking for a cheap 10gb switch.
 

Marsh

Moderator
Changed: 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid 5
Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid 5, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

I ran the ATTO disk benchmark because it is easy.
Map network drive with 10gb link
Transfer size 128KB to 8192KB
Total Length 2GB (too small a file size to be realistic, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with Queue Depth 10

128KB write 886240 (886MB/s), read 1184092 (1184MB/s)
256KB write 976MB/s, read 1175MB/s
512KB write 926MB/s, read 1181MB/s
1024KB write 971MB/s, read 1174MB/s
2048KB write 962MB/s, read 1143MB/s
4096KB write 1005MB/s, read 1110MB/s
8192KB write 956MB/s, read 1116MB/s
 

Marsh

Moderator
Changed: 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid 5
Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid 5, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

Map network drive with 10gb link
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf

| TEST NAME | Avg Resp. Time (ms) | Avg IOs/sec | Avg MB/sec | % CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 1.38 | 33578 | 1049 | 0% |
| RealLife-60%Rand-65%Read | 1.57 | 34685 | 270 | 0% |
| Max Throughput-50%Read | 1.83 | 30229 | 944 | 0% |
| Random-8k-70%Read | 1.51 | 35913 | 280 | 0% |

Next up, iSCSI testing
 

Marsh

Moderator
May 12, 2013
2,647
1,498
113
Changed: 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid 5, using iSCSI file mode

Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid 5, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

I ran the ATTO disk benchmark because it is easy.
using iSCSI file mode
Transfer size 128KB to 8192KB
Total Length 2GB (too small a file size to be realistic, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with Queue Depth 10

128KB write 479531 (479MB/s), read 897318 (897MB/s)
256KB write 592MB/s, read 957MB/s
512KB write 596MB/s, read 1158MB/s
1024KB write 584MB/s, read 1161MB/s
2048KB write 491MB/s, read 1087MB/s
4096KB write 554MB/s, read 1073MB/s
8192KB write 550MB/s, read 1080MB/s

The test results show the iSCSI protocol is slower than Windows SMB, but I think it is still good enough to use.
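
Putting a number on "slower", here is the 512KB ATTO row against the SMB run on the same raid 5 array a couple of posts up:

```python
# 4 x SSD raid 5, ATTO 512KB transfers: SMB mapped drive vs iSCSI file mode.
smb_write, smb_read = 926, 1181      # MB/s, SMB run (two posts up)
iscsi_write, iscsi_read = 596, 1158  # MB/s, iSCSI file mode (this post)

print(f"iSCSI write: {iscsi_write / smb_write:.0%} of SMB")  # ~64% of SMB
print(f"iSCSI read:  {iscsi_read / smb_read:.0%} of SMB")    # ~98% of SMB
```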
 

Marsh

Moderator
Changed: 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid 5, using iSCSI file mode

Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid 5, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

using iSCSI file mode
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf

| TEST NAME | Avg Resp. Time (ms) | Avg IOs/sec | Avg MB/sec | % CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 1.88 | 23269 | 727 | 0% |
| RealLife-60%Rand-65%Read | 1.88 | 23748 | 185 | 0% |
| Max Throughput-50%Read | 2.67 | 17871 | 558 | 0% |
| Random-8k-70%Read | 1.70 | 25400 | 198 | 0% |
 

Marsh

Moderator
Changed: 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid zero, using iSCSI file mode

Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid zero, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

I ran the ATTO disk benchmark because it is easy.
using iSCSI file mode
Transfer size 128KB to 8192KB
Total Length 2GB (too small a file size to be realistic, but ATTO won't go higher than 2GB)
Force Write Access and Direct I/O
Overlapped I/O with Queue Depth 10

128KB write 743929 (743MB/s), read 894314 (894MB/s)
256KB write 898MB/s, read 1134MB/s
512KB write 898MB/s, read 996MB/s
1024KB write 861MB/s, read 1115MB/s
2048KB write 873MB/s, read 1162MB/s
4096KB write 869MB/s, read 1152MB/s
8192KB write 878MB/s, read 1128MB/s
 

Marsh

Moderator
Changed: 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid zero, using iSCSI file mode

Server side
Xpenology DSM 5.0-4458 Update 2 with Gnoboot (the newer Nanoboot does not support the Mellanox card)
Intel G1830 CPU, ECS H87H3-M3 motherboard, 8GB mem, 4 SSDs (2 x Seagate 600 Pro 256GB, 1 x Intel 530 240GB, 1 x SanDisk Ultra Plus 256GB) in raid zero, 1 Mellanox ConnectX-2 10gb card

Testing client side
Fresh Windows 2012R2 bare-metal install, no tweaks (only the Mellanox driver's default install settings)
i5-2500K, MSI Z68A motherboard, 8GB mem, 1 local 2.5" 5400rpm OS boot disk, 1 Mellanox ConnectX-2 10gb card
Mellanox driver version MLNX_VPI_WinOF-4_60_All_win2012R2_x64
Mellanox network cards were direct connected, without a switch.

using iSCSI file mode
iometer-1.1.0-rc1-win64.x86_64 with OpenPerformanceTest.icf

| TEST NAME | Avg Resp. Time (ms) | Avg IOs/sec | Avg MB/sec | % CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 1.91 | 22774 | 711 | 71% |
| RealLife-60%Rand-65%Read | 1.20 | 33930 | 265 | 60% |
| Max Throughput-50%Read | 2.18 | 17931 | 560 | 65% |
| Random-8k-70%Read | 1.21 | 34651 | 270 | 62% |

Good Night....