School me on drives please...


TeleFragger

Active Member
Oct 26, 2016
Ok, I know what's going on in your head...


yeah, this post wanders all over the place - hope you can follow!!! sorry, my ADHD brain is on fire!!!


oh crap, another n00b... oh great... you're partially right. But...

I have been doing computers long enough to tell you that my favorite OS was... DR DOS 7.0...
I used an o-scope and schematics to repair IBM PS/2 and PS/ValuePoint systems...
fixed token ring cards and memory expansion cards.
I even learned early - yeah, this was pre-internet, for the people who don't know what that was...
On the 8088 and 8086 I learned which crystal drove the CPU clock, and I've been overclocking ever since... take the 4MHz crystal out and put an 8MHz one in...
hell, my favorite CPU was the Cyrix 6x86!!! LOL...


I was a benchmark whore... loved me some CheckIt... ok, found this while typing - had to look... LOL
https://winworldpc.com/product/checkit-pro/pro-1x

I was so fluent in hardware, but I lost sight of it when it started getting too hard to keep up, around the Q6600 era...
just so many CPU types, CPU sockets, memory speeds... just too much for me to keep up with...



I've been doing my main job as an Altiris Administrator since 2006, for a few companies now, and yeah, I'm out of the loop on what the cool jobs are today. I'm seeing... virtualization, web services, storage and the like...
I do my part to stay current with my old, dwindling hardware, and effectively I'm running...


Win10, Server 2016
ESXi - full lab running 2k16, AD, DNS, DHCP, etc., with Altiris and all of its solutions...
now venturing into FreeNAS


My latest posting craze has been about 10GbE on the cheap... I'm $55 in on old Mellanox CX4-style ConnectX-1 cards that I have working on Win10, ESXi, and FreeNAS 11 (put a ticket in, as they're not working on 12).


So with that all out of the way... let's get to the learning I'm after!!!

At my current job they use whatever comes in the Lenovo systems... and those are just cruddy WD Blue drives. It is what it is... but we deal with scientists and masses of data. I told my coworkers we needed to look at SSDs and was told no, too expensive. So I took it upon myself to use the Anvil storage benchmark tool and started benchmarking ALL of our drive types: the normal spinning SATA drives, then the laptops with the cruddy Intel 180GB SSDs that kept dying on us, and so on. We started getting a few SSDs in for special requests and I benchmarked those too.
With that, we now put SSDs in everything, and even the new Lenovos are getting NVMe drives!!! WOOOT!!!


So once again I'm starting back up on drive benchmarking... BUT benchmarks are just that... not real world to me.

With my 10GbE network I have learned a lot, and EniGmA1987 was there with reading material and pushed me forward.
How to use ramdrives to test your true network throughput, etc... (something like the little sketch below)
now I'm there... all tested and starting...
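For anyone following along, here's the kind of raw-network test I mean - a tiny iperf-style TCP blaster, sketched in Python. No disks involved at all, so it shows what the wire itself can do; the port and the 4GiB transfer size are just placeholders, adjust to taste.

Code:
# net_throughput.py - minimal iperf-style TCP test (sketch, not a benchmark suite)
# run "python net_throughput.py server" on one box,
# then "python net_throughput.py client <host>" on the other
import socket, sys, time

PORT = 5201            # placeholder: any free port
CHUNK = 1024 * 1024    # 1 MiB per send
TOTAL = 4 * 1024**3    # push 4 GiB so the run outlasts any burst/caching effects

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.perf_counter() - start
    print(f"received {received / 1e6:.0f} MB at {received / secs / 1e6:.0f} MB/s")

def client(host):
    payload = b"\x00" * CHUNK
    start = time.perf_counter()
    with socket.create_connection((host, PORT)) as sock:
        sent = 0
        while sent < TOTAL:
            sock.sendall(payload)
            sent += len(payload)
    secs = time.perf_counter() - start
    print(f"sent {sent / 1e6:.0f} MB at {sent / secs / 1e6:.0f} MB/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])

If this gets close to the 10GbE line rate (1250 MB/s raw) but file copies don't, the wire is fine and the bottleneck is the disks or SMB.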


I'm not getting numbers that make 10GbE worth it to me... unless I'm missing the point and it isn't about file copies, but more about, say, running multiple VMs over the 10GbE link from an iSCSI FreeNAS datastore to the ESXi host???


So what is needed to really get storage speed?
I took my Plex machine and did tons of different copies - local to ramdisks, ramdisk across 10GbE to various SSDs - and got freakishly wild, low numbers... nothing high. My server runs Server 2016 with the Essentials role, StableBit DrivePool, and the SSD Optimizer for cache (just installed that)...


During my 10GbE file copies, I tried from the Plex box through the 10GbE NIC to remote drives and got:
120GB Samsung SSD - 30MB/s transfer rate
500GB Crucial SSD - 200MB/s transfer rate


but on my ramdisk-to-ramdisk copies I get 700MB/s+
how can I achieve closer to those numbers without spending a fortune? (I've been timing copies along the lines of the sketch below.)
I have been at my wit's end trying various things and need to be schooled, as I am just missing something...
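If it helps, this is roughly how I've been timing the copies - one big sequential file, same source, different targets, so slow-disk and slow-network cases separate out. The paths are placeholders for my shares.

Code:
# copy_bench.py - time one big sequential copy to each target (sketch)
import os, shutil, time

SRC = r"R:\testfile.bin"               # placeholder: big file on the local ramdisk
TARGETS = [
    r"R:\copy.bin",                    # ramdisk -> ramdisk (local ceiling)
    r"\\server\ssd\copy.bin",          # ramdisk -> remote SSD over 10GbE
    r"\\server\pool\copy.bin",         # ramdisk -> remote DrivePool share
]

size = os.path.getsize(SRC)
for dst in TARGETS:
    start = time.perf_counter()
    shutil.copyfile(SRC, dst)          # single stream, like an Explorer copy
    secs = time.perf_counter() - start
    print(f"{dst}: {size / secs / 1e6:.0f} MB/s")
    os.remove(dst)

Worth running each target a few times; a small SSD like that 120GB Samsung can drop hard once its write cache fills, which could explain a 30MB/s number.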



examples...


ramdisk to ramdisk via 1GbE network

[screenshot: upload_2019-1-18_20-58-15.png]

ramdisk to ramdisk via 10GbE network

[screenshot: upload_2019-1-18_20-58-48.png]

CrystalDiskMark against the ramdrive itself

[screenshot: upload_2019-1-18_20-58-40.png]



So I've got better numbers there, but I just don't understand how to get them on file copies...


I do photo and video editing, and it takes a while to copy files around... thus why this would be awesome-sauce!!!

Anyway, I've got company, so gotta leave it at that... hopefully someone can help unclutter my mind...
 


MiniKnight

Well-Known Member
Mar 30, 2012
NYC
RAM is going to be faster than SATA SSDs. Those are not the fastest SATA SSDs either.

Get an Optane 905P and you'll go fast.
 

WANg

Well-Known Member
Jun 10, 2018
New York, NY
Okay, take a deep breath.

First of all, in order to understand your problem, you have to define what it is - and once you do, you have to figure out where the possible bottlenecks are.

So to start, understand that there are different types of SSDs; not every one of them is performant, and there are reasons for that.

The eMMC that comes in my Lenovo Win8 tablet is... kinda crap, but better than most of my Class 10/U1 SD cards. That's a $200 Bay Trail tablet and I can't expect it to use top-shelf storage; it's good for about 60-80 MByte/sec.
The Crucial BX300 SATA SSD in my cheap EliteBook laptop? Decent, but nothing to write home about - older lithography and not a fast SSD controller. I am lucky if I get 400 MByte/sec sustained.
The Samsung 960 Evo in my Mac Mini? Even better, but still held back by the SATA bus - 400 usually, sometimes pegging up to 500+ (600 MBytes/sec is the theoretical max).
The Apple proprietary "12+16" PCIe 2.0 x2 512GB drive in my MBP13 Retina? Better again - I'm sustaining local writes past the SATA limits (650 MBytes/sec often, 730 sometimes).
Compare that to the PM961 in my XPS15: faster still, but held back by the Dell firmware for battery-runtime, longevity, cooling, and compatibility reasons (about 2250 MBytes/sec, where 3200 MBytes/sec is possible in most cases).
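Quick bus-ceiling math to put those numbers in context (line rates and encodings are the published ones for each bus; everything real-world lands below these):

Code:
# bus_ceilings.py - theoretical per-bus throughput, before protocol overhead
# line rate (bit/s) x encoding efficiency / 8 = bytes/sec ceiling
buses = {
    "SATA III (6 Gb/s, 8b/10b)":             6e9 * 8 / 10 / 8,          # ~600 MB/s
    "PCIe 2.0 x2 (5 GT/s/lane, 8b/10b)":     2 * 5e9 * 8 / 10 / 8,      # ~1000 MB/s
    "PCIe 3.0 x4 (8 GT/s/lane, 128b/130b)":  4 * 8e9 * 128 / 130 / 8,   # ~3940 MB/s
    "10GbE (10 Gb/s line rate)":             10e9 / 8,                  # ~1250 MB/s
}
for name, rate in buses.items():
    print(f"{name}: {rate / 1e6:.0f} MBytes/sec max")

Which is why ~500 on SATA, ~730 on that Apple blade, and ~3200 on the PM961 all make sense - and why a single 10GbE pipe can never move more than about 1.2 GBytes/sec of payload, no matter what drives sit behind it.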

So, that's the SSD itself. Then you have to deal with SATA controller / NVMe chipset processing overhead, which extracts its pound of flesh. On the 2018 MBP13 that I am testing, the onboard T2 chip makes NVMe operations even faster than most other commercially available storage devices (2850 MBytes/sec), while my XPS15 was gimped since Dell didn't want to make the machine out of heat-wicking metal, for aesthetic and portability reasons. So depending on PCIe version, controller/firmware/driver efficiency, or temperature considerations, it might not run at full speed either.

Next, there is the overhead of whatever filesystem you are dealing with, and whether certain things are cached in RAM or not. Some filesystems, like ext4, are relatively simple, while others, like ZFS, have their own intricacies. For example, on an HP MicroServer G7 running FreeNAS with 4 "nothing to write home about" HGST 4TB spinning NAS drives in raidz1 (the ZFS equivalent of RAID5), I was able to cheat the theoretical transfer limit of the 4 drives (125MB/sec x 3 + parity) because ZFS's big RAM cache (the ARC) can help push smaller files out faster than 400MB/sec (tested with the good old dd command inside FreeNAS itself, copying stuff from the zpool to a RAM drive, BTW). Due to processing overhead and caching, I do not expect it to sustain that speed past a certain point - once the drives have to move the data themselves, the cache replenish rate is usually not as good as the burn rate on spindles. It might be better on SSDs, but even then SATA or SAS SSDs might not keep up with 10GbE raw transfer rates.
And there's the relatively weak processing power of that 2009-vintage Turion II Neo N40L CPU, so RAID calculations and ZFS commands don't run all that fast... but those are minor factors compared to the slow speed of the spinners. To evaluate this properly, those 3.5" SATA spinners would need to be swapped out for SATA SSDs.
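If you want to repeat that kind of test without dd, here's a rough Python equivalent - big sequential reads off the pool into memory. The path and block size are placeholders.

Code:
# seq_read.py - dd-style sequential read throughput (sketch)
import time

PATH = "/mnt/tank/testfile.bin"   # placeholder: a large file on the zpool
BLOCK = 1024 * 1024               # 1 MiB reads, like dd bs=1M

total = 0
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while True:
        chunk = f.read(BLOCK)
        if not chunk:
            break
        total += len(chunk)
secs = time.perf_counter() - start
print(f"read {total / 1e6:.0f} MB at {total / secs / 1e6:.0f} MB/s")

Run it twice and compare: the second pass comes largely out of the ARC, which is exactly the caching effect I'm describing.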

Then of course, there is the issue of network overhead.
My actual sustained file transfer rate to the NAS is currently pegged at 1Gb/sec speeds - and even then it's rate-limited by my Powerline AV1200 adapters, which run at 1/10 the speed of the two Gigabit ports they connect, thanks to some 1930s apartment wiring. The same 4 spindles also have to talk iSCSI via 10GbE to my hypervisor node, and I can't imagine the FreeNAS network stack being happy talking to both the 10GbE card and the server's embedded Realtek NIC (most of us here have some choice words about that product line, usually involving its propensity to peg out or crash servers, and our collective desire to initiate sexual relations with the mothers of the entire design team for realizing such turds in silicon form). Even on the iSCSI side, the raidz1 zpool can only sustain about 220MByte/sec to my ESXi VMs, and that's to be expected: iSCSI is not very performant, and it's sharing the zpool with an SMB share.