Only getting 500MB/s copying between SAS3 SSDs?


cgtechuk

Member
Dec 27, 2016
80
1
8
Hi All,

Building a new Homelab server in a Jonsbo N3

Specs are:

Jonsbo N3 Case
CWWK Q670 Motherboard
Intel i5 12500T
128GB DDR5
LSI 3008-8i SAS3 HBA
2 x NetApp / Samsung PM1633a SAS3 SSDs (patched, so no 32k bug)

Currently doing some tests on an Ubuntu Live image before setting up Proxmox fully, and running

sudo dd if=/dev/sda of=/dev/sde bs=1G count=10 oflag=direct iflag=direct status=progress

I am only getting 500MB/s

ubuntu@ubuntu:~$ sudo dd if=/dev/sda of=/dev/sde bs=1G count=10 oflag=direct iflag=direct status=progress
10737418240 bytes (11 GB, 10 GiB) copied, 21 s, 517 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 20.7646 s, 517 MB/s


I have also confirmed using the smartctl tool that the drives are connected in 12Gb/s mode (single path, not multipath, given they are using a breakout cable onto the HBA):


sudo smartctl -x /dev/sda |grep "link"
negotiated logical link rate: phy enabled; 12 Gbps
negotiated logical link rate: phy enabled; unknown

sudo smartctl -x /dev/sde |grep "link"
negotiated logical link rate: phy enabled; 12 Gbps
negotiated logical link rate: phy enabled; unknown

Any ideas what I am doing wrong here?
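For reference, here is my back-of-envelope for what a single SAS3 lane should be able to carry, assuming 8b/10b line encoding and ignoring protocol overhead (so the real ceiling is a bit lower):

```shell
# Theoretical per-lane data ceiling for a SAS3 link
line_rate_gbps=12                              # negotiated link rate
payload_mbs=$(( line_rate_gbps * 1000 / 10 ))  # 8b/10b: 10 wire bits per data byte
echo "${payload_mbs} MB/s"                     # prints "1200 MB/s"
```

So 517 MB/s is well under half of what even one lane can move.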
 

cgtechuk

Member
Dec 27, 2016
80
1
8
IIRC the 1633s are not all that fast; the 1643 is a step up
Yeah, I had seen that elsewhere, but they should still be able to get above 500MB/s. These are unformatted block devices at the moment, so there is no data on them; I was expecting higher. If I run the same test across two Dell (Micron) SATA SSDs I get maybe 100MB/s less, which is what I would expect, but the SAS figures seem off.
 

nexox

Well-Known Member
May 3, 2023
1,823
881
113
dd isn't really a benchmark, unless your workload involves running dd I wouldn't worry about its performance.
 

cgtechuk

Member
Dec 27, 2016
80
1
8
dd isn't really a benchmark, unless your workload involves running dd I wouldn't worry about its performance.
Ah, this is fair. If I run the disk benchmark option in Disks in the Ubuntu GUI it shows 1GB/s, but I guess that's RAM to disk and not disk to disk?

Is there anything else I could use to make the numbers more representative of actual performance?
 

nexox

Well-Known Member
May 3, 2023
1,823
881
113
Is there anything else I could use to make the numbers more representative of actual performance?
Most benchmarks focus on a single volume, because read/write to/from RAM should tell you enough to figure out the maximum disk-to-disk rate; plus, if you're doing actual file copies, they will go disk to RAM, then RAM to disk. If you want something more sophisticated than the Ubuntu built-in benchmark, there's fio, but it is very configurable, and thus it's easy to generate a bunch of benchmark numbers that have little to do with your real-world usage.
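For example, a minimal fio job for a sequential read straight off the block device might look like this (the filename and sizes here are placeholders, not from your setup; adjust to your device, and note a write job against the raw device would be destructive):

```ini
; seqread.fio -- hypothetical example job file
[global]
ioengine=libaio
direct=1
time_based=1
runtime=30

[seqread]
filename=/dev/sda    ; adjust to your device
rw=read
bs=1M
iodepth=16
```

Run with `sudo fio seqread.fio`; swap `rw=read` for `rw=randread` and `bs=1M` for `bs=4k` to see the IOPS side instead.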
 

cgtechuk

Member
Dec 27, 2016
80
1
8
Most benchmarks focus on a single volume, because read/write to/from RAM should tell you enough to figure out the maximum disk-to-disk rate; plus, if you're doing actual file copies, they will go disk to RAM, then RAM to disk. If you want something more sophisticated than the Ubuntu built-in benchmark, there's fio, but it is very configurable, and thus it's easy to generate a bunch of benchmark numbers that have little to do with your real-world usage.
Thanks. So given that the benchmarks within the Ubuntu tool are showing over 1GB/s RAM to disk, I guess what you are saying is that everything is fine? :-D I will give fio a look and see what it says.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
513
314
63
NH, USA
dd is a good tool for determining the approximate max sequential speeds (read/write) for SAS/SATA drives (within any constraints imposed by the controller used).
For read:
Code:
dd if=/dev/sdX of=/dev/null bs=4M count=256 iflag=direct [skip=Nk]
For write:
Code:
dd if=/dev/zero of=/dev/sdX bs=4M count=256 oflag=direct [seek=Nk]
(obvious)Note: the write test is data-destructive.

I'd expect your 1633a drives to get ~950-1050 MB/s for r&w.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,748
1,121
113
artofserver.com
By doing the disk-to-disk copy, you are constraining the read performance by the write performance of the other drive. It's only going to read as fast as it needs to write, even if it can read faster.

Test read and write separately. If you want to use dd to just get sequential I/O, read from disk and write to /dev/null. Or for writes, read from /dev/zero, and write to disk.
 

sko

Well-Known Member
Jun 11, 2021
401
251
63
Or for writes, read from /dev/zero, and write to disk.
These won't result in actual writes: the firmware will collate or even ignore such writes (i.e., it does not actually write to each cell), so the speeds will be completely bogus.
You'd need to use /dev/random for that test, but then with fast drives you will only test the speed of the specific /dev/random implementation of your OS. Yes, some have rather slow implementations which will choke on such a test; plus the test will exhaust the randomness pool, so don't run this on a busy host doing crypto operations (e.g. TLS).

Then again: there are much better tools to benchmark disks and testing for raw disk performance won't give you any reasonable info on your actual workload performance.
 
  • Like
Reactions: nexox and cgtechuk

cgtechuk

Member
Dec 27, 2016
80
1
8
What are the temperatures on the HBA & SSDs when you run the "tests"/"benchmarks"?
At the moment they are in an open case with fans exhausting air away from them. Not sure how to get the HBA temp, but it's got a fan running on it now; the SSDs were around 40°C from memory. Would need to set it up again.
 

sko

Well-Known Member
Jun 11, 2021
401
251
63
not sure how to get HBA temp
mprutil:

Code:
# mprutil show adapter
mpr0 Adapter:
       Board Name: SAS9300-8i
   Board Assembly:
        Chip Name: LSISAS3008
    Chip Revision: ALL
    BIOS Revision: 8.37.00.00
Firmware Revision: 16.00.01.00
  Integrated RAID: no
         SATA NCQ: ENABLED
PCIe Width/Speed: x8 (8.0 GB/sec)
        IOC Speed: Full
      Temperature: 65 C                                   <-----

[...]

(not sure if/which Linux distros actually include the mpr/mps tools)
 
  • Like
Reactions: nexox

BLinux

cat lover server enthusiast
Jul 7, 2016
2,748
1,121
113
artofserver.com
These won't result in actual writes: the firmware will collate or even ignore such writes (i.e., it does not actually write to each cell), so the speeds will be completely bogus.
You'd need to use /dev/random for that test, but then with fast drives you will only test the speed of the specific /dev/random implementation of your OS. Yes, some have rather slow implementations which will choke on such a test; plus the test will exhaust the randomness pool, so don't run this on a busy host doing crypto operations (e.g. TLS).

Then again: there are much better tools to benchmark disks and testing for raw disk performance won't give you any reasonable info on your actual workload performance.
This is all true. However, I don't get the feeling that the OP is after actual benchmark numbers; if so, there's also the requirement to warm up the drive by doing a few passes of full-drive writes/reads before actually running the benchmark, or the results will not be consistent. I think the OP is just trying to get a ballpark feel for whether their SSDs are operating within spec.
 

cgtechuk

Member
Dec 27, 2016
80
1
8
This is all true. However, I don't get the feeling that the OP is after actual benchmark numbers; if so, there's also the requirement to warm up the drive by doing a few passes of full-drive writes/reads before actually running the benchmark, or the results will not be consistent. I think the OP is just trying to get a ballpark feel for whether their SSDs are operating within spec.

This is exactly correct. I am not looking to fully benchmark; I just wanted to know that my SAS controller / drives / setup was all running within spec and there wasn't anything glaring. I have just migrated everything from ESXi to Proxmox on a new motherboard and SAS controller, and was adding flash at the same time, so I had no idea what I was supposed to be getting.
 

sko

Well-Known Member
Jun 11, 2021
401
251
63
I just wanted to know that my SAS controller / Drives / Setup was all running within specs and there wasnt anything glaring.
Then yes, your numbers are in the correct ballpark, as those Samsungs are quite slow:
[attachment: performance figures for the Samsung SAS SSDs]
 

cgtechuk

Member
Dec 27, 2016
80
1
8
Then yes, your numbers are in the correct ballpark, as those Samsungs are quite slow:
View attachment 46299
I am guessing the 1635a has the same performance as the 1633a then?

Still way quicker than spinning rust I guess, and I got four of the Samsungs at a very good price. Glad to hear that they are running as expected. I am experimenting with using these for VM datastores, as I have completely used up all the rated wear on my consumer NVMe drives, and I guess these are much more durable.
 

sko

Well-Known Member
Jun 11, 2021
401
251
63
Still way quicker than spinning rust I guess
Stream throughput is pretty much negligible for real-world loads. What sets SSDs apart is that they are capable of several orders of magnitude higher IOPS than spinning rust.
SSDs can usually sustain a relatively high throughput even with high IOPS, whereas HDDs will take a complete nosedive on random and/or concurrent I/O.
If you need higher throughput, spread the load over many drives, e.g. by putting them in mirrored vdevs with ZFS (with only two drives, this is the only viable configuration anyway...).
For workloads that require or benefit from even higher throughput, like VMs or databases under higher load, just use NVMe drives (which are cheaper than SAS anyway...).
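As a rough sketch of how that scaling works (the per-drive figure and vdev count below are assumptions for illustration, not measurements):

```shell
per_drive=1000   # assumed sequential MB/s per drive
vdevs=2          # e.g. two 2-way mirrors (four drives total)
# reads can be served from both sides of each mirror;
# writes must land on every disk of a mirror, so only the vdev count helps
read_ceiling=$(( per_drive * vdevs * 2 ))
write_ceiling=$(( per_drive * vdevs ))
echo "read ~${read_ceiling} MB/s, write ~${write_ceiling} MB/s"
```

With the two-drive single mirror discussed here, reads scale but writes stay at the single-drive rate.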
 

UhClem

just another Bozo on the bus
Jun 26, 2012
513
314
63
NH, USA
...
running

sudo dd if=/dev/sda of=/dev/sde bs=1G count=10 oflag=direct iflag=direct status=progress

I am only getting 500MB/s

Any ideas what I am doing wrong here?
Yes, I knew what you did wrong when I posted my reply (above).
And, accounting for that mistake, my conclusion was that your drives have max sequential speeds, both read and write, in the range of 950-1050 MB/s (maybe up to 1150). I.e., they're A-OK SAS3 SSDs.

[This would make a great Job Interview Question--position for a skilled Unix person.]
Anybody qualified? :) :) (I retired 25+ yrs ago.)
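One reading of the intended answer (my interpretation; the thread never spells it out): with iflag=direct and oflag=direct and a huge block size, dd alternates a full read phase and a full write phase rather than overlapping them, so the two phases' times add and the effective rate is the harmonic combination of the two drives' speeds:

```shell
r=1034; w=1034                      # assumed per-drive sequential MB/s
effective=$(( r * w / (r + w) ))    # serialized read-then-write copy
echo "~${effective} MB/s"           # consistent with the observed ~517 MB/s
```

Two ~1034 MB/s drives copied that way land right at the ~517 MB/s the OP measured.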
 
  • Like
Reactions: cgtechuk