EMC KTN-STL3 15 bay chassis


nexox

Well-Known Member
May 3, 2023
1,518
731
113
Unlike most, I want to speed the fans up, not slow them down. I have a KTN-STL3 full of 6 TB rust, and the drives are running at 44-45C; the drives in an SC846 in the same rack are running 35-36C. Is there a way to raise the fan speed apart from pulling one power supply so the other goes into full afterburner?
Just trigger any of the failure conditions people hit when trying to mod the fans: disconnect a tach wire from one fan, or disconnect the PWM signal from all the fans.
 

BusError

Member
Jul 17, 2024
45
9
8
For an HBA I picked the Dell-branded ones, Dell 02PHG9 (full-height bracket) or Dell 0T93GD (same card, half-height bracket), simply because they have oversized heatsinks compared to most other 93xx-series cards I've seen. A small detail, but these cards can run quite hot, so...
 

pablob

New Member
Aug 7, 2024
1
0
1
I'm new here; I came just to learn about these DAEs, as I got one super cheap and I'm looking to use it to replace an aging self-contained NAS box. I was looking to populate it with these SATA disks: WD Ultrastar DC HC520 HUH721212ALE600 0F29590 12TB SATA 3.5 HDD — ServerPartDeals.com. The site description mentions that they have an "ISE Power Disable Pin". What does this mean, and is it relevant to using them in this enclosure?
 

Fiberton

New Member
Jun 19, 2022
23
5
3
I came back here just to say, as I've been periodically re-reading these for various reasons... Fiberton: your advice has been totally accurate. Every single time. Thanks for that.
No worries, glad I could help. I have six of them running like a top and never have issues. The thing to always remember is to start the enclosures first and let them settle, then fire up your server. Other than that they run fine. They can also use 230 V, not just 120 V; just get a 230 V PDU and C13-to-C14 power cables.
 

BrassFox

Member
Apr 23, 2023
35
12
8
Based on your advice I've been accumulating 10 TB SAS drives (data center pulls) and you were right: these things are far superior to any damn SATA drive. I have 5x 8 TB and 12x 10 TB SAS drives now. I wish I knew about these a few years back, before accumulating so many SATAs. I think many of the SATAs will become my cold backups. I suppose they are still fine for Plex media too; after one long slow write they can just wait around to be asked for a read and do fine. Of course I am running a mix of drive types now, and the EMC shelf doesn't seem to care. Running the mix did break my LSIUtil settings for negotiated link speeds, though, so it's back to pulling/replacing a few drives, one by one, at every reboot to get the link speeds back. Interestingly the shelves rarely if ever downgrade link speeds on the SAS drives, only on a few of the SATAs, as had been the usual case before I found LSIUtil and its settings. Having a mix of link speeds (even just one slow one) seems to make them all run slow.

One sure way to make it work at reboot time: unclip and partially pull all the drives, spin up the shelves, boot the server, then insert the drives one by one with a ~30 second pause. That might be easier on the power supplies too, since they don't have to cold-spin 15 drives at once. I noticed the shelf does not seem to sequence spin-up, which makes sense given that these are built to never be turned off. Fifteen drives going from zero to 5400/7200 RPM all at once is by far the biggest power load the shelf will ever experience, and it would get that on a normal startup too.
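For anyone checking link speeds from the Linux side rather than through LSIUtil, here's a rough sketch (an illustration, not something from this thread) that walks the kernel's SAS transport class entries and flags phys that came up below 6.0 Gbit. The /sys/class/sas_phy path is the standard one for Linux SAS HBAs; treat the exact strings it prints as kernel-dependent.

```python
#!/usr/bin/env python3
"""Flag SAS phys that negotiated below 6.0 Gbit (Linux host with a SAS HBA assumed)."""
from pathlib import Path

SAS_PHY_DIR = Path("/sys/class/sas_phy")

def report_linkrates() -> None:
    if not SAS_PHY_DIR.is_dir():
        print("No SAS phys found; is this a Linux host with a SAS HBA?")
        return
    for phy in sorted(SAS_PHY_DIR.iterdir()):
        rate_file = phy / "negotiated_linkrate"
        if not rate_file.is_file():
            continue
        rate = rate_file.read_text().strip()  # e.g. "6.0 Gbit", "3.0 Gbit", "1.5 Gbit"
        flag = "  <-- downgraded?" if rate.startswith(("1.5", "3.0")) else ""
        print(f"{phy.name}: {rate}{flag}")

if __name__ == "__main__":
    report_linkrates()
```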

Right now I am down to two running shelves (both full) and I have a third that I had been keeping mostly for cold backups. Now I want to take five or six 10 TB SAS drives, rig up a Ryzen board to it with a SAS3 LSI card, and take that deep dive into ZFS. It is daunting. I am not very experienced with Linux, so I'm not sure exactly where to begin. I read a lot of ZFS horror stories out there, too. But a few posts back you gave someone advice and I will likely start there. Ultimately I would like to migrate all my storage to ZFS, and as I (barely) understand things, I'll need to begin with a minimum of 5-6 drives that are all the same size. Then I can add to the pool in a similar fashion (5 or 6 drives at a time). If all that works out, my hope is to migrate everything away from my current setup in 50 TB blocks, but I'm still pretty leery of borking some cryptic ZFS setting and losing my shit. I've been running StableBit DrivePool on Win 10 boxes for years now, without issue. But the performance of these SAS drives (even as currently somewhat hobbled, with queue depth set low for SATA and no multipath link because Microsoft sucks), combined with the promise of zero corruption from ZFS, is causing me to venture into new territory.
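To make the "start with 5-6 matching drives, add 5-6 more later" plan concrete, here's a minimal sketch of how that usually looks with OpenZFS on Linux. The pool name and the /dev/disk/by-id paths are placeholders, it only covers the happy path, and it isn't a recommendation on raidz level.

```python
#!/usr/bin/env python3
"""Sketch: create a raidz2 pool from six drives, then grow it by one more raidz2 vdev."""
import subprocess

POOL = "tank"  # placeholder pool name
FIRST_VDEV = [f"/dev/disk/by-id/scsi-EXAMPLE_{i}" for i in range(1, 7)]    # placeholder disk IDs
SECOND_VDEV = [f"/dev/disk/by-id/scsi-EXAMPLE_{i}" for i in range(7, 13)]  # placeholder disk IDs

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the pool as a single six-drive raidz2 vdev (any two drives can fail).
run(["zpool", "create", POOL, "raidz2", *FIRST_VDEV])

# Later, add a second raidz2 vdev of matching drives to grow capacity.
# Raidz vdevs cannot be removed once added, so double-check the device list first.
run(["zpool", "add", POOL, "raidz2", *SECOND_VDEV])

run(["zpool", "status", POOL])
```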

Is there some benefit to running them on 230 Volts? I kept to 115 mainly because that's what I have handy for UPSes. The server nerds are awfully proud of 230V UPS units.
 

BrassFox

Member
Apr 23, 2023
35
12
8
...even as currently somewhat hobbled, with queue depth set low for SATA
Now that I think on that, I wonder if it's wise to bump queue depth on the SAS drives before making the SAS multipath work.

This is what I am talking about (below).

Queue Depth Comparison.png

Most LSI cards come with QD set to 256, but I found that to be troublesome and to throw errors with SATA drives in these shelves. Setting it to 32 was troublesome too; I set it to QD=31 and all my errors went away. But running SAS drives like that, as I am now (and especially SAS drives on just one path), must be slowing them down. They are still far quicker than SATA drives, though.
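For what it's worth, on Linux you can at least see (and cap) the queue depth the OS is actually using per disk without touching the HBA's NVRAM. A rough sketch below, assuming the drives show up as sdX and that 31 is the target you want; this is a runtime tweak, not the same thing as the persistent LSIUtil setting.

```python
#!/usr/bin/env python3
"""Show each sdX queue depth; optionally cap it (run with --apply as root)."""
from pathlib import Path
import sys

TARGET_QD = 31  # the value reported above to be stable with SATA drives in these shelves

def main(apply_changes: bool) -> None:
    for dev in sorted(Path("/sys/block").glob("sd*")):
        qd_file = dev / "device" / "queue_depth"
        if not qd_file.is_file():
            continue
        current = int(qd_file.read_text())
        print(f"{dev.name}: queue_depth={current}")
        if apply_changes and current > TARGET_QD:
            # Needs root; some devices refuse values outside what they support.
            qd_file.write_text(str(TARGET_QD))
            print(f"  -> capped to {TARGET_QD}")

if __name__ == "__main__":
    main("--apply" in sys.argv)
```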

I also wonder: if I were to plug the SAS drives straight into the LSI card via breakout cables, maybe the multipath mystery just goes away. Perhaps without an expander, the HBA card recognizes both paths per device and reports them to the PC as one? Not sure.

I do have a nice Threadripper with ECC RAM just sitting and collecting dust. There's a lot of PCIe bandwidth there, and I could be running multiple HBA cards, so maybe that becomes my ZFS box. With or without the EMC shelves, or perhaps both. I could keep cold backups on the shelves and thus not run those most of the time. I have a lot of choices to work through here besides the spooky ZFS setup thing. That TR is probably a crazy amount of horsepower for a NAS box, but it could be doing other stuff at the same time too, and I can also tune it down for power economy.
 

BusError

Member
Jul 17, 2024
45
9
8
No worries, glad I could help. I have six of them running like a top and never have issues. The thing to always remember is to start the enclosures first and let them settle, then fire up your server. Other than that they run fine. They can also use 230 V, not just 120 V; just get a 230 V PDU and C13-to-C14 power cables.
Perhaps I'm lucky, but I've been using mine the other way around! It's running mdraid on Linux: I power it on, mdraid discovers the drives, I mount and use them (the array is for backups), then I unmount, stop the array, and power the rack off. I've done that plenty of times and it works like a charm...
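For anyone copying that routine, here's roughly what the "unmount, stop the array, then power the shelf off" step looks like when scripted. /mnt/backup and /dev/md0 are placeholders for whatever the actual setup uses.

```python
#!/usr/bin/env python3
"""Sketch: flush, unmount, and stop an mdraid array before powering off the shelf."""
import subprocess

MOUNTPOINT = "/mnt/backup"  # placeholder mount point
MD_DEVICE = "/dev/md0"      # placeholder md array

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["sync"])                        # flush any pending writes
run(["umount", MOUNTPOINT])          # release the filesystem
run(["mdadm", "--stop", MD_DEVICE])  # cleanly stop the array
print("Safe to power the shelf off now.")
```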
 

nexox

Well-Known Member
May 3, 2023
1,518
731
113
I also wonder: if I were to plug the SAS drives straight into the LSI card via breakout cables, maybe the multipath mystery just goes away. Perhaps without an expander, the HBA card recognizes both paths per device and reports them to the PC as one?
If you want to eliminate multipath, just don't plug a SAS cable into one of the controllers on the shelf; then you get a single expander and a simple topology. There aren't going to be two paths if you directly connect drives unless you find a really special breakout cable. It could exist, but I haven't seen one; you pretty much need a backplane for multipath.
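If you do cable both controllers and want to see the two paths from Linux, one quick check is to group the sd devices by WWID: both paths to a dual-ported SAS drive should report the same identifier, and dm-multipath is what folds the pair back into one device. A sketch, assuming a reasonably recent kernel that exposes /sys/block/sdX/device/wwid.

```python
#!/usr/bin/env python3
"""Group block devices by WWID to spot drives visible over two paths."""
from collections import defaultdict
from pathlib import Path

paths_by_wwid = defaultdict(list)

for dev in sorted(Path("/sys/block").glob("sd*")):
    wwid_file = dev / "device" / "wwid"
    if wwid_file.is_file():
        wwid = wwid_file.read_text().strip()
        paths_by_wwid[wwid].append(dev.name)

for wwid, devs in sorted(paths_by_wwid.items()):
    label = "two paths" if len(devs) > 1 else "single path"
    print(f"{wwid}: {', '.join(devs)} ({label})")
```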
 

BrassFox

Member
Apr 23, 2023
35
12
8
If you want to eliminate multipath, just don't plug a SAS cable into one of the controllers on the shelf; then you get a single expander and a simple topology.
Yes, I am certainly aware of this. I'd rather not run them hobbled like that, however. I ran two links on a drive shelf with a blank SAS drive installed to play around, and no amount of messing made Win 10 Pro do anything besides recognize the drive as two drives. I don't know what happens if you try to store something on it like that. Nothing good. Apparently you can transplant MPIO drivers from Windows Server into Win 10 and get it to work, but it loses the mod upon reboot. That's the best it gets, since MS goes to great lengths to not allow multipath on any of their desktop OSes, and the workaround at every boot is too much hassle (and risk) for me. One day the machine decides to just reboot, because Windows likes to do that, and then all your data is borked. Pass...

There aren't going to be two paths if you directly connect drives unless you find a really special breakout cable, it could exist, but I haven't seen one, you pretty much need a backplane for multipath.
They exist. https://www.amazon.com/gp/product/B010CMW6S4.

temp pic.jpg


I bought a pair of these a while back and will report back about what they do after I get around to playing with them. I am already 99% sure they won't work on any non-Server variety of MS Windows, but I may try it anyway just to find out. If the HBA card processes the two links into one device before showing the disk to the OS, then it should work. I do not believe they can do that, but perhaps it's possible with some cryptic LSIUtil or SAS2Flash fiddling, which is always so much fun. I would not want to go through that every time I want to add or subtract a drive from any particular system anyway. I think I need to make the big leap into this TrueNAS / Proxmox / Linux / ZFS business to get where I want to be.
 

BrassFox

Member
Apr 23, 2023
35
12
8
Perhaps I'm lucky, but I've been using mine the other way around! It's running mdraid on Linux: I power it on, mdraid discovers the drives, I mount and use them (the array is for backups), then I unmount, stop the array, and power the rack off. I've done that plenty of times and it works like a charm...
This works as long as you remember to unmount them all before powering down the shelf. In fact, this is how I get all my drives up to full link speed when a few negotiate 3.0 or the dreaded 1.5 speeds: unmount (or offline, in Win 10) and pull the slow-link drive, count to ten, and plug it back in. But don't power them down without the unmount/offline first, or you risk corrupting something.
 

BrassFox

Member
Apr 23, 2023
35
12
8
Those breakout cables are only going to connect a single port of each drive, no multipath.
I believe that would typically be true if you just plug them in out of the box and do nothing else. However, there are LSI card BIOS settings for "half" or "wide" bandwidth, and the stock setting is "half bandwidth." I do not know if I can get it to work dual-path, or if it can work at all, but if/when I try I will report the result. It doesn't much matter for Win 10 if it works but then reports each SAS drive as two drives anyway.
 

Fiberton

New Member
Jun 19, 2022
23
5
3
Is there some benefit to running them on 230 Volts? I kept to 115 mainly because that's what I have handy for UPSes. The server nerds are awfully proud of 230V UPS units.
You do not need to run it on 230 V, but many enterprise setups burn less power when using higher voltage.
 

Fiberton

New Member
Jun 19, 2022
23
5
3
Now I want to take five or six 10 TB SAS drives, rig up a Ryzen board to it with a SAS3 LSI card, and take that deep dive into ZFS. It is daunting. I am not very experienced with Linux, so I'm not sure exactly where to begin.
If you are going to use Ubuntu, use Ubuntu Server unless you want a GUI. You have to install ZFS on Ubuntu yourself. It will take some time, but you will get it.
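For the Ubuntu Server route, the install plus a basic health check is only a couple of commands. A minimal sketch, with "tank" as a placeholder pool name and everything run as root.

```python
#!/usr/bin/env python3
"""Sketch: install OpenZFS on Ubuntu Server, scrub a pool, and print overall health."""
import subprocess

POOL = "tank"  # placeholder pool name

def run(cmd: list[str]) -> str:
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# One-time install of the OpenZFS userland and kernel module.
run(["apt-get", "install", "-y", "zfsutils-linux"])

# A scrub re-reads and checksums every block; "status -x" reports overall health.
run(["zpool", "scrub", POOL])
print(run(["zpool", "status", "-x"]))
```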
 
  • Like
Reactions: BrassFox

BrassFox

Member
Apr 23, 2023
35
12
8
Those breakout cables are only going to connect a single port of each drive, no multipath.
After looking into this more (without actually trying them), it seems that this is correct.
They do make "wide-channel" breakout cables that look very similar, but they're difficult to find, and these ain't it.
 

nitrosont

New Member
Nov 25, 2020
20
9
3
After reading a lot about this JBOD I'm tempted to buy one.
One question I still ask myself: does this JBOD support any size of disk? If, for example, 30 TB HDDs come out, would those be supported as well? Or is there some kind of firmware limit on disk size?
 

BrassFox

Member
Apr 23, 2023
35
12
8
After reading a lot about this JBOD I'm tempted to buy one.
One question I still ask myself: does this JBOD support any size of disk? If, for example, 30 TB HDDs come out, would those be supported as well? Or is there some kind of firmware limit on disk size?
I'm not sure how anyone could accurately answer for all future possibilities, but all indications are that, with the correct interposers, these shelves can run any size of SATA or SAS drive. I have a couple of 16 TBs in mine, mixed in with 8s, 10s, 12s, and 14s, with both SAS and SATA drives in the mix now too. They all run just fine.

Low negotiated link speeds when a shelf is full or nearly full of drives appear to be the only persistent annoyance. That is 100% solvable when running all SATA drives, but it is a somewhat cryptic fix; with a SAS/SATA mix the problem is so far unsolved (for me, anyway). I have not tried all SAS drives in a shelf yet, but it's probably the same as all SATA (link speed can be successfully dictated). These shelves seem to run faster/better as 12-drive shelves than 15-drive, with four drives and a blank in each of the three 5-wide bays. I have not run benchmarks to truly know that; it's a subjective observation.

These units are fairly old SAS2 tech though, on the slower (and cheaper) side of what is out there. They will not be "good" forever. Which is true of anything. I would not expect to still be running these shelves ten years from now. But maybe.
 
  • Like
Reactions: nitrosont

nexox

Well-Known Member
May 3, 2023
1,518
731
113
I have not tried all SAS drives in a shelf yet, but it's probably the same as all SATA (link speed can be successfully dictated).
I have a unit full of SAS drives. I haven't used it a whole lot, but all the disks and expander links have always come up at 6G for me.
 

BrassFox

Member
Apr 23, 2023
35
12
8
I have a unit full of SAS drives. I haven't used it a whole lot, but all the disks and expander links have always come up at 6G for me.
That's what I figured would be the behavior with all SAS drives. I wonder whether the card or the shelf's expander dictates this behavior. Meaning: if one shelf is all SATA, another shelf is all SAS, and both are cabled to the same HBA, what happens then?

I have the card set via LSIUtil to negotiate 6G for all, which worked fine with all SATA; mixing drive types seems to break that. My SAS drives do seem to do better: usually they all pop up at 6, and one or two SATA drives get less. I have seen a couple of SAS drives settle in at 3 after boot in a mix, which is fine (still faster than the drives themselves can go), but SAS will rarely if ever get the dreaded 1.5.

I don't reboot a lot, but it seems about 1/3 of the time I will have one SATA come up at 1.5 now. This can be difficult to shake; I usually power cycle the thing if I get one of those. It's as if the shelf remembers the drive; no number of pull-and-reinstall attempts seems to gain you a faster speed. Once the dreaded 1.5 appears, it gets sticky.