The price point is getting close enough that swapping out all my 2.5” spinners for SSDs is starting to make sense, and my mouth is watering.
Glad I used my better judgment and didn't expand with the ST2000LM015.
Have looked high and low for a ~20cm (or less) cable.
I have one of appropriate length without sideband, but would like to have sideband if possible.
The M14TQC hot-swap cage is literally about 2 inches from the expander, so longer cables wind up blocking fan airflow. Reconfiguring the expander...
Sorry I’ve been short on time lately and am unable to review in detail.
I realized too late that the OP has disappeared and you two have taken over the original convo. And then it went from the Optane PCIe drive to RAM disks and so forth.
What I’m saying is the stark contrast in performance does...
He said defaults... defaults would be 128k recordsize.
Regardless, you are going to take a performance hit using ZFS any way you shake it, but it’s not extreme like this unless you don’t understand how to tune it to your workload.
Since that’s a pretty wild performance hit it likely has to do...
I inquired with the ZFS on Linux folks and got the answer I was expecting:
One would need more data to see what's going on.
But default settings sticks out... if you mean 128k records...if you write 4k random sync writes over 128k records in a pool, you'll incur something like 64:1 IO...
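A back-of-the-envelope sketch of where a ratio like that comes from (my arithmetic, not from the ZFS on Linux folks): a 4k sync write landing in a 128k record forces a read of the existing 128k record plus a copy-on-write of a full new 128k record, so each 4k of application I/O becomes roughly 256k of disk I/O. Matching `recordsize` to the write size avoids the read entirely.

```python
# Rough write-amplification estimate for small random writes over large
# ZFS records. Simplified model: assumes a full read-modify-write of the
# record per small write; real behavior varies with ARC hits, compression,
# and the ZIL.
def write_amplification(write_kb: float, recordsize_kb: float) -> float:
    # Must read the old record only if the write doesn't cover it fully.
    read_kb = recordsize_kb if write_kb < recordsize_kb else 0
    # Copy-on-write always emits a whole new record.
    written_kb = recordsize_kb
    return (read_kb + written_kb) / write_kb

print(write_amplification(4, 128))  # 64.0 -> the ~64:1 figure quoted above
print(write_amplification(4, 4))    # 1.0  -> recordsize matched to workload
```

This is why tuning `recordsize` per dataset (e.g. small records for random-I/O workloads) matters so much for the benchmarks being discussed.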
This is interesting. Maybe I should hold off on those optanes and let my S3700s get a little longer in the tooth.
Are you using VMs? Would you mind spinning up Darwin with o3x for an additional datapoint?
Hi there.
I have one big ol JBOD with the following inside:
4x M28SAB which take 2 Molex connections each.
2x Intel RESCV360 which take one Molex each.
1x M14TQC
So, 11 Molex connections.
I also have a modular power supply which has 4x peripheral plugs for Molex/SATA.
The obvious idea...
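Sketching the splitter math (my guess at what the "obvious idea" is pointing toward): with 11 Molex connectors needed and only 4 peripheral cables off the PSU, each cable has to fan out to about three plugs.

```python
import math

# Connector count from the JBOD inventory above.
molex_needed = 4 * 2 + 2 * 1 + 1   # 4x M28SAB (2 each), 2x RESCV360, 1x M14TQC
psu_peripheral_cables = 4

# Worst-case plugs any one cable must supply if the load is spread evenly.
plugs_per_cable = math.ceil(molex_needed / psu_peripheral_cables)
print(molex_needed, plugs_per_cable)  # 11 connectors, 3 plugs per cable
```

In practice you'd also want to weigh how many drives hang off each cable, not just the connector count.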
Seems the SMR drives have inflated cache to compensate for lower performance.
The SMR drives, however, have really appealing densities and form factors (4TB in a 2.5” package; that’d be 96TB in a 2U 24-bay Supermicro chassis).
So, first question: are they really so bad? I’m guessing the response...
Plenty of covered toggles out there, for aircraft and the like.
https://www.amazon.com/CZC-AUTO-Aircraft-Household-Industry/dp/B07DFZ3XKK/ref=asc_df_B07DFZ3XKK/
Depends on the size of your cutout probably.
Thanks! I was close, but no cigar.
I was thinking of moving to 2x 36 ports and using the 24 port for a backup chassis anyhow (see the WD red situation below).
Presently I also have two 100GB Intel S3700s and a couple of 240GB enterprise SanDisk SSDs.
This build has lived happily for some time...
Thanks. You've been a wealth of information.
My concern was that firmware / configuration differences on the Intel branded expanders for their own servers might have altered the available functionality of the LSI chip.
If you'd be so kind would you make a recommendation for cabling my chassis...
Yes, but one of my primary questions still eludes me: why does the SAS documentation never show downstream expanders dual-linked to one another? It's almost always a dual link from controller to expander, then single-link daisy-chaining between expanders.
For a dual-link cascade example, see Figure 7.5.4 on page 46 of this document:
JBOD 2000 Manual
Seems the link speed is negotiated with the HBA primarily, so unless one expander will saturate the 2500MBps there’s no need to use dual paths between the expanders.
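That ~2500MBps figure lines up with a 4-lane SAS2 wide port. A rough check (assuming 6 Gb/s lanes with 8b/10b line coding, which leaves 80% of the raw bit rate for payload):

```python
# Approximate usable bandwidth of a SAS2 (6 Gb/s) 4-lane wide port.
lanes = 4
raw_gbps = 6.0                               # raw line rate per lane
payload_mb_s = raw_gbps * 1000 / 8 * 0.8     # 8b/10b coding -> 80% payload
wide_port_mb_s = lanes * payload_mb_s
print(payload_mb_s, wide_port_mb_s)          # 600.0 per lane, 2400.0 total
```

So a single x4 link tops out around 2400 MB/s before protocol overhead, which is why one cascaded expander rarely justifies a second link.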
Dual connections from one expander to the...