Have looked high and low for a ~20cm (or less) cable.
I have one of appropriate length without sideband, but would like to have sideband if possible.
The M14TQC hot-swap cage is literally about 2 inches from the expander, so longer cables wind up blocking fan airflow. Reconfiguring the expander...
Sorry I’ve been short on time lately and am unable to review in detail.
I realized too late that the OP has disappeared and now you both have taken up the original convo. And then it went from the Optane PCIe to ramdisk and so forth.
What I’m saying is the stark contrast in performance does...
He said defaults... the default would be a 128k recordsize.
Regardless, you are going to take a performance hit using ZFS any way you shake it, but it’s not extreme like this unless you don’t understand how to tune it to your workload.
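For what it's worth, that kind of tuning usually starts with matching recordsize to the workload's I/O size. A minimal sketch, assuming hypothetical datasets named `tank/vm` and `tank/media` (the names and values are illustrative, not from this thread):

```shell
# Small-block sync workloads (VMs, databases): shrink the recordsize
# so each random write rewrites a small record, not a 128k one.
zfs set recordsize=16K tank/vm

# Sequential/bulk storage can keep (or raise) the default.
zfs set recordsize=1M tank/media

# Verify what each dataset is actually using.
zfs get recordsize tank/vm tank/media
```

Note that recordsize only applies to blocks written after the change, so existing data keeps its old block size until rewritten.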
Since that’s a pretty wild performance hit it likely has to do...
I inquired with the zfs on linux folks and got the answer I was expecting:
One would need more data to see what's going on.
But "default settings" sticks out... if you mean a 128k recordsize: if you issue 4k random sync writes over 128k records in a pool, you'll incur something like 64:1 I/O amplification...
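That 64:1 figure falls out of read-modify-write arithmetic: each 4k sync write forces the full 128k record to be read in and a full 128k record to be written back out. A quick sanity check, using the numbers from the post above:

```shell
record=$((128 * 1024))   # 128k recordsize, in bytes
write=$((4 * 1024))      # 4k random sync write

# Read-modify-write moves the whole record twice: one read plus one write.
amp=$(( (record + record) / write ))
echo "${amp}:1"          # prints 64:1
```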
This is interesting. Maybe I should hold off on those optanes and let my S3700s get a little longer in the tooth.
Are you using VMs? Would you mind spinning up Darwin with o3x for an additional datapoint?
I have one big ol JBOD with the following inside:
4x M28SAB backplanes, which take two Molex connections each.
2x Intel RES2CV360, which take one Molex each.
So, 10 Molex connections.
I also have a modular power supply which has 4x peripheral plugs for Molex/SATA.
The obvious idea...
Seems the SMR drives have inflated cache to compensate for lower performance.
The SMR drives, however, have really appealing densities and form factors (4 TB in a 2.5” package; that’d be 96 TB in a 2U 24-bay Supermicro chassis).
So, first question: are they really so bad? I’m guessing the response...
Plenty of covered toggles out there, for aircraft and the like.
Depends on the size of your cutout probably.
Thanks! I was close, but no cigar.
I was thinking of moving to 2x 36 ports and using the 24 port for a backup chassis anyhow (see the WD red situation below).
Presently I also have two Intel S3700 100GB and a couple of 240gb enterprise Sandisk SSDs.
This build has lived happily for some time...
Thanks. You've been a wealth of information.
My concern was that firmware / configuration differences on the Intel branded expanders for their own servers might have altered the available functionality of the LSI chip.
If you'd be so kind would you make a recommendation for cabling my chassis...
Yes, but one of my primary questions still eludes me: why does the SAS documentation never show downstream expanders dual-linked to one another? It's almost always a dual link from controller to expander, then single-link daisy-chaining between expanders.
For a dual-link cascade example, see figure 7.5.4 on page 46 of this document:
JBOD 2000 Manual
Seems the link speed is negotiated by the HBA primarily, so unless one expander will saturate the 2500 MB/s there’s no need to use dual path with the expanders.
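That ceiling is simple arithmetic: a 6 Gb/s SAS-2 lane carries 600 MB/s of payload after 8b/10b encoding, and a wide port bundles four lanes. A rough back-of-envelope check (assuming a 4-lane 6 Gb/s wide port, which lands close to the ~2500 MB/s figure above):

```shell
line_rate_mbit=6000                         # 6 Gb/s per SAS-2 lane, in Mb/s
lane_mb=$(( line_rate_mbit * 8 / 10 / 8 ))  # 8b/10b encoding overhead, then bits -> bytes: 600 MB/s
wide_port_mb=$(( lane_mb * 4 ))             # 4-lane wide port
echo "${wide_port_mb} MB/s"                 # prints 2400 MB/s
```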
Dual connections from one expander to the...
From what I have read online dual link will only benefit when using SAS drives, not SATA from a performance perspective.
I believe this is a semantics problem again.
Dual path = cable redundancy
Dual domain = hba, expander and cable redundancy
Both can equate to bandwidth gains.
Here is what I’ve found from Intel’s JBOD 2000 storage chassis user manual. Obviously these are set up with redundant connections, but it looks like the same principles can apply.
Seems clear, despite contrary forum posts, that the 36- and 24-port expanders support 32 and 20 disks respectively...
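Those disk counts are just the expander PHYs minus the uplink, assuming one 4-lane wide port is reserved for the connection back to the HBA (or upstream expander). A quick check under that assumption:

```shell
uplink=4  # one 4-lane wide port reserved for the upstream connection
for phys in 36 24; do
  echo "${phys}-port expander: $(( phys - uplink )) drives"
done
# prints:
#   36-port expander: 32 drives
#   24-port expander: 20 drives
```

Each additional wide port used for cascading or dual-linking would subtract another 4 from the drive count.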
PS, what's odd is that Intel doesn't seem to have a real utility for the 'G' connector on the RES2CV240 in their complete suite of products. They don't seem to ever mention its purpose.
Except to say in some instances it's used for mounting? Seems like a lot of trouble for a mounting...
I have a 4U chassis which can fit 36x 2.5" drives (4x 8 + 1x 4). I'll be using SATA drives, but may migrate to SAS.
I'll be using ZFS to manage the array. So, the 1x 4 will house the SSDs and hot spares.
I presently have an Intel RES2CV360 and RES2CV240 which are laid out like this:
From what I...