HGST SAS SSD SSD1000MR


Bjorn Smith

Well-Known Member
Sep 3, 2019
Hi,

I am considering buying a couple of these (1TB)

https://documents.westerndigital.co...sas-series/data-sheet-ultrastar-ssd1000mr.pdf

But I am unsure whether it's too old tech to be of any real use. They came out in 2013, but they are MLC, so they should be plenty reliable.

I plan to use them as a datastore for VMs, replacing an NFS datastore backed by P4510's over a 40Gbit network.

But if performance is going to tank going from NFS over the LAN to local SAS SSDs, then of course I would not do it.

Thanks for any advice.
 

Rand__

Well-Known Member
Mar 6, 2014
Well, what's your remote share's performance?
And local disk or local RAID? Quite a difference there ;)
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
Well - it's a mirror of P4510's over a 40Gbit/s network, so IOPS are high. NFS via ESXi is not really that great, but when I mount via iSCSI I can read/write at wire speed. IOPS are good on these disks.

I know a local RAID of SAS disks will not be as good unless I get quite a few of them, but I am more concerned about a potential loss in IOPS, since that is what really matters when running VMs - for me at least. I would run at least a mirror, possibly RAID 1+0.
 

Rand__

Well-Known Member
Mar 6, 2014
First, that makes me wonder what you're hosting the P4510's on - you usually get a lot of extra latency going remote, so performance is rarely consistently high once you're out of the cache area...

I doubt you will be able to match their performance with a low number of local SAS drives if they really work as well as you say (which I have not seen in my tests, so I am quite interested).
But it sounds as if you might want to virtualize your NFS/iSCSI box, so the NVMe drives become local and you can turn off the second box...
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
I have my P4510's in a Xeon E5-2673 v3 box with 256GB RAM running FreeNAS - I know it's not perfect, but it gets the job done.

I have tried virtualizing my SAN and I don't like it, since it makes my ESXi hosts dependent on a VM. Every time I did that and then had to do maintenance on the virtualized SAN, ESXi had a hard time reconnecting to it - sometimes ESXi is stupid like that. That is why I moved the SAN to a separate machine.

But anyway - we are drifting into a different discussion. I want to talk about the SAS SSDs, not my Intel P4510's.
 

Rand__

Well-Known Member
Mar 6, 2014
Of course.
But in order to determine whether SAS can fulfill your needs, you need to establish a baseline to compare against.
Now you say that baseline is 40G line rate, which is ~4GB/s, with your old setup.
First I wondered how (since that's a lot); second, I asked about your usage pattern because you will not be able to reach the performance of two 100% utilized NVMe drives with 2 (or 4) 100% utilized local SAS drives.
The only chance you have is to run significantly more SAS drives, and that only works if your usage pattern allows for it.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
Ok,
I know I will never get the same bandwidth, i.e. transfer rate, from just two SAS SSDs - I was wondering more about IOPS. The P4510's have very high IOPS but get penalized by the access over the network; even though it's sub-millisecond, it will probably never beat the access time of local SSDs. In that sense the potential IOPS against local disks are always higher than against "remote" ones, because of the network latency - even if it's tiny.

My P4510's have:
  • Random Read (100% span): 637,000 IOPS
  • Random Write (100% span): 81,500 IOPS

The SAS SSDs have:
  • Read IOPS (max, random 4K): 145,000
  • Write IOPS (max, random 4K): 20,000
So if I just compare spec-sheet IOPS, I would need approximately four times as many SAS SSDs - but I am hoping that going local instead of remote reduces that somewhat, because of the removed network latency.
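Back-of-envelope with the datasheet numbers above: 637,000 / 145,000 ≈ 4.4 for reads and 81,500 / 20,000 ≈ 4.1 for writes, so four-plus SAS SSDs per P4510 just to match the spec sheets.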

The specs do not say whether the max IOPS figure is for a single thread or multiple threads - I am guessing multiple.

Usually SSDs have an order of magnitude lower IOPS single-threaded than multi-threaded. I know some of the newer NVMe drives are not that bad, but these are old, so I would assume the single-threaded IOPS would not even reach 15k/2k. For the P4510's it is probably better than one order of magnitude below max, but I don't know.

So again - my newer drives might give me higher IOPS, both because they are faster in general and because they probably hold up better at low thread counts, while the local SAS SSDs would win on latency - which helps IOPS, but perhaps not enough given how much slower these SAS SSDs are.

Comparing bandwidth, I would need three times as many SAS SSDs - but bandwidth is not my primary concern. IOPS are, since virtual machines crave IOPS; I am not running a sequential read/write service in a VM.

I guess the right way to do this would be to load up a VM and run a benchmark that shows what kind of IOPS I actually get.

Then I would have real numbers to compare against the SAS SSDs - but it would still be a guess how many I would need, since specs != real performance.
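Something along these lines should be closer to the spec-sheet style 4K random IOPS than a 128k test - just a sketch using diskspd's documented flags, with the target path and write mix as placeholders:
Code:
:: 4K random, 30% writes, 32 outstanding I/Os per thread x 4 threads, caching disabled, latency stats
diskspd.exe -c10G -d60 -r -b4k -o32 -t4 -w30 -Sh -L D:\iotest.dat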
 

Rand__

Well-Known Member
Mar 6, 2014
At least you'd have a more realistic set of requirements then. If that's triple the maximum theoretical performance of the SAS SSDs, you know you can leave it be; if it gets close, it might make sense to test with a pair of SSDs, local vs. remote, to see the impact before spending a lot of cash on new ones.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
Ok - I have tested a bit:

My previously quoted performance was measured via iSCSI, also from a Windows VM, using CrystalDiskMark - so those numbers are different and not really comparable.

Setup:

Windows VM with 16 cores, hosted in ESXi on an NFS datastore backed by my mirror of P4510's in another machine, with a 40Gbit/s network in between. Windows file cache turned off.

using diskspd.exe -c10G -d60 -s -o32 -b128k -L
(queue depth 32, sequential reads, 128k block size; thread count varied per run)

One thread
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
86712254464 |       661562 |    1377.94 |   11023.50 |    2.902 |     0.126
4 threads
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
 97835679744 |       746427 |    1554.70 |   12437.57 |    8.444 |   108.991

8 threads
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev 
-----------------------------------------------------------------------------------------------------
93093232640 |       710245 |    1479.34 |   11834.68 |   18.926 |     1.483
Random Read/write tests:
diskspd.exe -c10G -d60 -r -o32 -Sh -b128k -w40 -L

1 thread
Code:
 bytes               |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
77441794048 |       590834 |    1230.62 |    9844.96 |    3.250 |     1.652
4 threads
Code:
 bytes               |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
86493102080 |       659890 |    1374.45 |   10995.60 |   11.638 |    14.581
8 threads
Code:
 bytes               |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
82535120896 |       629693 |    1311.56 |   10492.46 |   24.395 |    15.338
ESXi datastore hosted via iSCSI - queue depth 32, sequential reads, 128k block size
One thread, sequential
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
97242841088 |       741904 |    1545.28 |   12362.21 |    2.588 |     1.026
One thread, sequential, queue depth 1
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
100770775040 |       768820 |    1601.33 |   12810.62 |    0.078 |     0.218
4 threads, sequential, queue depth 1
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
146684903424 |      1119117 |    2330.96 |   18647.69 |    0.214 |   0.209
8 threads, sequential, queue depth 1
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
155370782720 |      1185385 |    2469.44 |   19755.53 |    0.404 |   0.312
For comparison - a datastore on an Intel Optane 900P local to the ESXi host; I usually only use this for swap.

4 threads, sequential, queue depth 1
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
172875579392 |      1318936 |    2747.15 |   21977.18 |    0.181 |   0.066
8 threads, sequential, queue depth 1
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
166860816384 |      1273047 |    2651.58 |   21212.61 |    0.376 |   0.156
8 threads, sequential, queue depth 32
Code:
 bytes                |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev
-----------------------------------------------------------------------------------------------------
131612409856 |      1004123 |    2091.44 |   16731.54 |   13.387 |   1.279
All in all, decent numbers for my not-so-organized test - not quite what the spec sheets promise, but I think the network takes a lot of the IOPS.

Having the datastore via iSCSI is definitely faster, both in raw transfer speed and in latency. Local NVMe is naturally faster still, and having the P4510's locally would probably be very fast.

I don't have any SAS SSDs, so I guess I would need to buy a couple of smaller ones and see how they perform before deciding to pony up for bigger ones.
 

Rand__

Well-Known Member
Mar 6, 2014
There are some pretty nice numbers here - is that with sync standard/enabled or disabled?

You now need to decide which of these tests most realistically represents your desired workload (or day-to-day operations).
At home that is rarely the impressive 8-jobs-QD32 result; more likely the 4/8 threads at QD 1-4...

If you want, I can lend you some HUSSM1604's - they are just sitting idle on the desk at the moment (I have 8) - if you pay shipping both ways.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
The numbers are with sync=standard - but remember that writes from ESXi via NFS are always sync; that is one of the problems with ESXi and NFS datastores. ESXi insists that all writes are synchronous, which makes NFS writes "slow" compared to iSCSI.

What is the model number for those SSDs? I don't find anything when I search for HUSSM1604.
 

Rand__

Well-Known Member
Mar 6, 2014
That's why I asked - you could have forced async at the ZFS level.
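For reference, on the FreeNAS side that is just the per-dataset sync property - the dataset name below is only a placeholder, and sync=disabled of course trades crash safety for speed:
Code:
# check the current setting
zfs get sync tank/vmstore
# ignore sync requests entirely, or go back to the default
zfs set sync=disabled tank/vmstore
zfs set sync=standard tank/vmstore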

But these numbers are higher than my Optanes, I think... I need to look up my old numbers to check, since I moved those drives off to vSAN...
They are HUSMM1640ASS204's - sorry for the typo.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
Aah okay,

I will think about your generous offer to lend me your SSDs.

vSAN - how is that working for you? Are you using the vSAN built into vSphere, or solarwinds via virtual machines?

I have been considering it myself, but I was unsure whether the performance would be worth it - and with my storage on my FreeNAS I have more options in terms of backup.

Right now I am using Veeam plus replication of my FreeNAS to another FreeNAS box.

But perhaps vSAN will make more sense with my new setup on my VRTX box. I was looking into local storage performance because ESXi hosts using the VRTX's shared PERC can actually share the same LUN as a "SAN" - so there is no need to migrate data when a host fails over, since the data is already shared.

But my gut feeling is that it will be expensive to get the same kind of performance and restore capability as I have now with my data on a separate server.
 

Rand__

Well-Known Member
Mar 6, 2014
vSAN as in VMware vSAN.
Its beauty is the ease of use and the integration into the whole stack (policies, permissions, management, reporting and of course built-in HA, which is the main reason I keep it).
Its performance is nothing to write home about... or only to complain about. I have opened and commented on numerous threads about it (e.g. https://forums.servethehome.com/index.php?threads/vsan-3node-real-numbers.27193/) and ended my experiments with this one: https://forums.servethehome.com/ind...-up-or-the-history-of-my-new-zfs-filer.28179/.

So if you're looking for performance for home usage - don't do vSAN.
Local disks, on the other hand, have a chance of being faster, but you lose flexibility.
That's why I originally suggested moving the NVMe drives locally, since I thought you only had one box (the only scenario where I'd consider local drives) - but the shared PERC may be a solution specific to your VRTX hardware.

Shoot me a pm if you want the drives.