EMC KTN-STL3 15-bay chassis


nitrosont

New Member
Nov 25, 2020
20
9
3
I'm not sure how anyone could accurately answer about all future possibilities, but all indications are that with the correct interposers, these shelves can run any size SATA or SAS drive. I have a couple of 16 TB drives in mine, mixed in with 8s, 10s, 12s, and 14s, with both SAS and SATA drives in the mix now too. They all run just fine.

Low negotiated link speeds when a shelf is full or nearly full of drives appear to be the only persistent annoyance. That is 100% solvable when running all SATA drives, but it is a somewhat cryptic fix. With a SAS/SATA mix, however, the problem is so far unsolved (for me, anyway). I have not tried all SAS drives in a shelf yet, but it's probably the same as all SATA (link speed can be successfully dictated). These shelves seem to run faster/better as 12-drive shelves than as 15-drive shelves, with four drives and a blank in each of the three 5-wide bays. I have not run benchmarks to truly know that; it's a subjective observation.

These units are fairly old SAS2 tech though, on the slower (and cheaper) side of what is out there. They will not be "good" forever. Which is true of anything. I would not expect to still be running these shelves ten years from now. But maybe.
Thank you very much for this detailed answer. I didn't know about the advantage of filling the shelf with only 12 drives. That's not really something I need to consider right now, though, since I'm only dealing with 5 drives. But good to know that it might be wise to populate four in a row and then leave one slot free.
 

BrassFox

Member
Apr 23, 2023
35
12
8
Thank you very much for this detailed answer. I didn't know about the advantage of filling the shelf with only 12 drives. That's not really something I need to consider right now, though, since I'm only dealing with 5 drives. But good to know that it might be wise to populate four in a row and then leave one slot free.
Well, sort of. Don't do slots 1/2/3/4, a blank, then 5, and leave the other ten slots empty, though. Spread the love around a bit more... here is why.

What I was saying was that I noticed my shelves are a lot happier with 12 drives, in three groups of four, than they are with 15 drives. I measure "happy" by whether I need to fuss around after booting to get good negotiated link speeds, plus some subjective observations of performance. I made no effort to measure performance, but I can see it in how long it takes to do its pool check after boot. I can also tell very quickly when I have one or more drives at a slow link speed, without measuring anything. If just one drive gets the dreaded 1.5 link speed, it slows down the whole show. Link speed 3 does not (either 3 or 6 is fine).
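If you want a quick way to see those negotiated rates, here is a minimal sketch, assuming a Linux host where the kernel's SAS transport class exposes the phys under /sys/class/sas_phy/ (paths and phy names vary by system):

```python
#!/usr/bin/env python3
# Print the negotiated link rate of every SAS phy the HBA exposes.
# Values read back as "1.5 Gbit", "3.0 Gbit", "6.0 Gbit", etc.
from pathlib import Path

SAS_PHY_DIR = Path("/sys/class/sas_phy")

for phy in sorted(SAS_PHY_DIR.iterdir()):
    rate = (phy / "negotiated_linkrate").read_text().strip()
    flag = "  <-- slow link!" if rate.startswith("1.5") else ""
    print(f"{phy.name}: {rate}{flag}")
```

Run it after boot and anything stuck at 1.5 stands out immediately.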

So here is how these things work. You must think about how the signals are passed progressively from part to part along the way: from a single drive, to interposer, to expander, to the SAS cable, to the HBA, and then to the PC. Then do what you can (if anything) at each juncture to make stuff happen faster. In the case of physical drive placement, you can do something:

Each of the two SAS expanders has its own single SFF-8088 cable input (not getting into the daisy-chain scenario here, as I avoid it, and not dual-channel SAS either, as I am running single-channel). After the SAS cable, within the shelf itself, there are three channels of five drives.
The three five-wide bays are separated by vertical metal plates. These are the expander channels.
So three expander channels total, five drives each, is how you get to 15 drives per shelf.

Any time you are running fewer than 14 drives per shelf, you will want to optimize performance by distributing those drives as evenly as possible between the three expander channels. It keeps things as speedy as possible.

In your case, with five drives:
Two drives in each of the first two channels, with the fifth drive living by itself in the third channel. I know the SAS system starts counting slots at zero (it counts them as #00-#14), but I am going to use human counting here.
So you would install your drives as follows:
Drive in slot 01, slot 02, skip 03/04/05. Drive in slot 06, slot 07, skip 08/09/10. Last drive in slot 11, skip 12/13/14/15.
When you buy a sixth drive, put that in slot 12 (for six drives, two per expander channel).

Add three more drives? Distribute them evenly: slots 03, 08, and 13 (nine drives, three per channel). And so on. In this way you are maximizing the performance, and the benefits are noticeable. What I was saying before is that after 12 drives (in three groups of four) the expander seems to be getting pretty busy, as I noticed a decline in performance when I finally fully populated my shelves with 15 drives. Plus the link speed thing started popping up.
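If it helps to see the placement rule written out, here is a small sketch of the same round-robin idea, assuming three channels of five slots and the human 1-15 numbering used above:

```python
#!/usr/bin/env python3
# Round-robin slot placement across the three expander channels,
# matching the examples above (5 drives -> 1,2,6,7,11; 9 adds 3,8,13).
CHANNELS = 3
SLOTS_PER_CHANNEL = 5

def slot_plan(n_drives: int) -> list[int]:
    if not 0 <= n_drives <= CHANNELS * SLOTS_PER_CHANNEL:
        raise ValueError("this shelf holds 0-15 drives")
    slots = []
    for i in range(n_drives):
        channel = i % CHANNELS      # walk channels 1, 2, 3, 1, 2, 3, ...
        depth = i // CHANNELS       # how deep into each channel we are
        slots.append(channel * SLOTS_PER_CHANNEL + depth + 1)
    return sorted(slots)

for n in (5, 6, 9, 12):
    print(n, "drives ->", slot_plan(n))
```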

Also: populate your empty slots with empty caddies. Otherwise the fans suck most of the air through the empties (path of least resistance) and not so much across the drives (not cooling them as well). My shelves came with 2.5" to 3.5" adapters in all the caddies, so I installed those in the blank caddies when I had empty slots, and added some tape (packing or masking tape) to the 2.5"/3.5" adapters to slow down the airflow. Basically I simulated a 3.5" drive. That way, all 15 slots get an even shot at the air those raging fans pull in.

If you do not do this, the air will take the easiest path and mostly bypass the drive-populated slots. I can post a pic of my caddy "tape mod" if you'd like. I added the interposers to the empty caddies too; I figured this is the safest way to store them. I still managed to misplace one of my 45 interposers (dammit), but it will turn up.

I am about to fire up my third shelf again with six drives to start, and play with ZFS. So I will use 2/2/2 drives in the channels and install 3/3/3 empty caddies with air blockers.

I recently figured out a way to get air filters into the face plates. That, coupled with some well-placed sound insulation foam in the rear of my case, has finally made these units pretty damn quiet. It took me years of putzing around with various ideas, trying to get to quiet. I am finally there.
Which is a shame, because I also finally figured out how to "spoof" the power supply fans and then control them manually. I may still do that just because, but I do not need it anymore. I use cryptominer fan spoofers. Only certain ones work: they must report back a tach speed on the yellow wire that exactly matches the call speed on the blue fan wire. With fan spoofers that can do that, no more shutdowns. But then you must both power and modulate your fans yourself.

Right now I have two shelves stacked on top of each other and fully populated. Sitting in a 74°F room they all run at 79°F, about 5 degrees warmer than ambient, without any air intake filter. With my new intake filters (they choke it a bit) they run at 86-88°F when active, roughly 12-14 degrees above ambient. I need the air filtration because cats. With this new air filter config, which blocks noise from the front and slows down the air out the rear just a bit, my shelves are super silent. So I don't really need the fan spoofer mod anymore, and I don't want the drives running much warmer than this anyway. They are whisper quiet now; I can barely hear them!

For anyone about to complain about why I am using F rather than C, read this:
Kelvin is for how a molecule feels.
Celsius is for how water feels.
Fahrenheit is for how people feel. I am human.

Anyway, if I yank a drive and leave a slot wide open, the air goes there. The other drives will rise in temp by about 5 degrees, or worse under heavy activity. I've seen this effect, so what I describe about filling in the blanks with unused caddies is valid.
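If you want to watch those drive temps from the host, here is a rough sketch, assuming smartmontools is installed and the drives show up as /dev/sd*; the output parsing is approximate and may need tweaking for particular drive models:

```python
#!/usr/bin/env python3
# Poll drive temperatures via smartctl and print them in C and F.
import glob
import subprocess

def drive_temp_c(dev: str):
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Current Drive Temperature" in line:   # typical SAS output
            return int(line.split(":")[1].split()[0])
        if "Temperature_Celsius" in line:         # SATA attribute 194
            return int(line.split()[9])           # raw value column
    return None

for dev in sorted(glob.glob("/dev/sd?")):
    t = drive_temp_c(dev)
    if t is not None:
        print(f"{dev}: {t} C ({t * 9 / 5 + 32:.0f} F)")
```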

Good luck, have fun tinkering
 

BrassFox

Member
Apr 23, 2023
35
12
8
If you are going to use Ubuntu, use Ubuntu Server unless you want a GUI. You have to install ZFS on Ubuntu yourself. It will take some time, but you will get it.
Hey Fib,

So after much reading and pondering... I am thinking that for my first foray into ZFS storage (which, if successful, I would migrate nearly everything into, adding more storage in groups of a half-dozen drives) I will build a dedicated PC for ZFS: an older Ryzen 7 I have lying around, on a B550 board, with 32 GB of ECC RAM, running only TrueNAS Core.

Nothing else, just ZFS storage for that build. With the EMC shelves, of course. One to begin with, but all three eventually, or maybe more. I have enough slush storage capacity to shuffle around about 70 TB of drives and crap at a time while migrating. This build would have either two HBAs (I like hot spares), or one HBA and a 10G network card, with a light GPU (like the 1050 Ti I have gathering dust) running off the second NVMe slot. Weird, I know... but it works just fine as long as you don't try to play Fortnite on it. Hopefully it also works on TrueNAS Core.

I would keep that build as the ZFS / NAS storage box, separate and alone, and resist all temptations to run anything else on it. Its first array would consist of six 10 TB SAS drives.
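For illustration only, here is roughly what that first six-drive pool could look like as a raidz2 from the command line on a Linux ZFS box (TrueNAS Core would normally build the pool through its web UI); the pool name, layout, and device paths below are hypothetical placeholders, not a recommendation:

```python
#!/usr/bin/env python3
# Sketch: assemble the zpool create command for a six-disk raidz2 pool.
# Device paths are hypothetical -- substitute your own stable disk IDs
# so the pool survives slot/enumeration changes.
import subprocess

DISKS = [f"/dev/disk/by-id/wwn-0x5000c500aaaa000{i}" for i in range(6)]

cmd = ["zpool", "create",
       "-o", "ashift=12",        # 4K sectors on large modern drives
       "tank", "raidz2", *DISKS]

print("would run:", " ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment once the disk list is verified
```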

Most of my other projects will get relegated to run together on a second server PC build, a Ryzen Threadripper with 64 GB of ECC RAM and gobs of PCIe lanes for whatever I want, running Proxmox.

My hardware choices are based upon what I have lying around. I would otherwise use the Dell R610 I have, which has gobs of RAM in it, but the PCIe 2.0 slots on that server are a real bummer for this. I am moving up to LSI 93XX cards, and even my trusty 9206-16Es need PCIe 3.0. So the Dell server is not good for ZFS due to lousy PCIe 2.0, and the Threadripper seems like too much horsepower for a storage-only appliance. I could run Proxmox and ZFS on the TR, but I am trying to minimize my odds of failure here. Hence the new B550 build; the old Dell stays on the shelf. It is pretty old tech now, anyway.

Sound like a decent ZFS starting point/plan to you?
 

nitrosont

New Member
Nov 25, 2020
20
9
3
Again: many thanks for your experience and for sharing it!

Well, sort of. Don't do slots 1/2/3/4, a blank, then 5, and leave the other ten slots empty, though. Spread the love around a bit more... here is why.

What I was saying was that I noticed my shelves are a lot happier with 12 drives, in three groups of four, than they are with 15 drives. I measure "happy" by whether I need to fuss around after booting to get good negotiated link speeds, plus some subjective observations of performance. I made no effort to measure performance, but I can see it in how long it takes to do its pool check after boot. I can also tell very quickly when I have one or more drives at a slow link speed, without measuring anything. If just one drive gets the dreaded 1.5 link speed, it slows down the whole show. Link speed 3 does not (either 3 or 6 is fine).

So here is how these things work. You must think about how the signals are passed progressively from part to part along the way: from a single drive, to interposer, to expander, to the SAS cable, to the HBA, and then to the PC. Then do what you can (if anything) at each juncture to make stuff happen faster. In the case of physical drive placement, you can do something:

Each of the two SAS expanders has its own single SFF-8088 cable input (not getting into the daisy-chain scenario here, as I avoid it, and not dual-channel SAS either, as I am running single-channel). After the SAS cable, within the shelf itself, there are three channels of five drives.
The three five-wide bays are separated by vertical metal plates. These are the expander channels.
So three expander channels total, five drives each, is how you get to 15 drives per shelf.

Any time you are running fewer than 14 drives per shelf, you will want to optimize performance by distributing those drives as evenly as possible between the three expander channels. It keeps things as speedy as possible.

In your case, with five drives:
Two drives in each of the first two channels, with the fifth drive living by itself in the third channel. I know the SAS system starts counting slots at zero (it counts them as #00-#14), but I am going to use human counting here.
So you would install your drives as follows:
Drive in slot 01, slot 02, skip 03/04/05. Drive in slot 06, slot 07, skip 08/09/10. Last drive in slot 11, skip 12/13/14/15.
When you buy a sixth drive, put that in slot 12 (for six drives, two per expander channel).

Add three more drives? Distribute them evenly: slots 03, 08, and 13 (nine drives, three per channel). And so on. In this way you are maximizing the performance, and the benefits are noticeable. What I was saying before is that after 12 drives (in three groups of four) the expander seems to be getting pretty busy, as I noticed a decline in performance when I finally fully populated my shelves with 15 drives. Plus the link speed thing started popping up.
The way you describe it, it makes sense. I'll keep that in mind and follow your advice! I'm buying more caddies / interposers ASAP.

Also: populate your empty slots with empty caddies. Otherwise the fans suck most of the air through the empties (path of least resistance) and not so much across the drives (not cooling them as well). My shelves came with 2.5" to 3.5" adapters in all the caddies, so I installed those in the blank caddies when I had empty slots, and added some tape (packing or masking tape) to the 2.5"/3.5" adapters to slow down the airflow. Basically I simulated a 3.5" drive. That way, all 15 slots get an even shot at the air those raging fans pull in.

If you do not do this, the air will take the easiest path and mostly bypass the drive-populated slots. I can post a pic of my caddy "tape mod" if you'd like. I added the interposers to the empty caddies too; I figured this is the safest way to store them. I still managed to misplace one of my 45 interposers (dammit), but it will turn up.

I am about to fire up my third shelf again with six drives to start, and play with ZFS. So I will use 2/2/2 drives in the channels and install 3/3/3 empty caddies with air blockers.

I recently figured out a way to get air filters into the face plates. That, coupled with some well-placed sound insulation foam in the rear of my case, has finally made these units pretty damn quiet. It took me years of putzing around with various ideas, trying to get to quiet. I am finally there.
Which is a shame, because I also finally figured out how to "spoof" the power supply fans and then control them manually. I may still do that just because, but I do not need it anymore. I use cryptominer fan spoofers. Only certain ones work: they must report back a tach speed on the yellow wire that exactly matches the call speed on the blue fan wire. With fan spoofers that can do that, no more shutdowns. But then you must both power and modulate your fans yourself.
I hadn't thought about it, but the way you describe it, it makes total sense. In my use case the shelf is more of a backup - I think it'll be fired up just once a week or so. But then I'll be transferring a good amount of data, so the temps and the performance will matter to me!

Regarding the "fan spoofing" - could you explain that a bit more? I understand, the fans goe haywire if one PSU is missing / unplugged / broken. Do you install small PCBs to fake the tach speed?

And yes, if you could post some pictures of your mods and setup, I'd be very interested.

You mentioned that you also install the interposers in the empty caddies. Wouldn't it save a little power not to install them in empty slots? I would assume that not powering the interposers might reduce the power consumption a tiny bit?

For anyone about to complain about why I am using F rather than C, read this:
Kelvin is for how a molecule feels.
Celsius is for how water feels.
Fahrenheit is for how people feel. I am human.
Since I'm from Europe (metric system), I don't agree, but I can relate ;) I wouldn't dare to complain! ;)
And it's not too hard to convert to °Celsius for me :cool:

Good luck, have fun tinkering
I always have! :D
 

roberth58

Member
Nov 5, 2014
37
7
8
62
Florida
I recently figured out a way to get air filters into the face plates. That, coupled with some well-placed sound insulation foam in the rear of my case, has finally made these units pretty damn quiet. It took me years of putzing around with various ideas, trying to get to quiet. I am finally there.
Which is a shame, because I also finally figured out how to "spoof" the power supply fans and then control them manually. I may still do that just because, but I do not need it anymore. I use cryptominer fan spoofers. Only certain ones work: they must report back a tach speed on the yellow wire that exactly matches the call speed on the blue fan wire. With fan spoofers that can do that, no more shutdowns. But then you must both power and modulate your fans yourself.
Any chance you could provide more info/images regarding the fan spoofers, air filters, and sound insulation? I am running 3 units with 31 drives in TrueNAS, and in the summer in Florida they are getting hot. Ambient is 27°C and the drives run from 32 to 40°C. Turning up the fans and adding sound insulation would help.

thanks
 

Fiberton

New Member
Jun 19, 2022
23
5
3
Hey Fib,

So after much reading and pondering... I am thinking that for my first foray into ZFS storage (which, if successful, I would migrate nearly everything into, adding more storage in groups of a half-dozen drives) I will build a dedicated PC for ZFS: an older Ryzen 7 I have lying around, on a B550 board, with 32 GB of ECC RAM, running only TrueNAS Core.

Nothing else, just ZFS storage for that build. With the EMC shelves, of course. One to begin with, but all three eventually, or maybe more. I have enough slush storage capacity to shuffle around about 70 TB of drives and crap at a time while migrating. This build would have either two HBAs (I like hot spares), or one HBA and a 10G network card, with a light GPU (like the 1050 Ti I have gathering dust) running off the second NVMe slot. Weird, I know... but it works just fine as long as you don't try to play Fortnite on it. Hopefully it also works on TrueNAS Core.

I would keep that build as the ZFS / NAS storage box, separate and alone, and resist all temptations to run anything else on it. Its first array would consist of six 10 TB SAS drives.

Most of my other projects will get relegated to run together on a second server PC build, a Ryzen Threadripper with 64 GB of ECC RAM and gobs of PCIe lanes for whatever I want, running Proxmox.

My hardware choices are based upon what I have lying around. I would otherwise use the Dell R610 I have, which has gobs of RAM in it, but the PCIe 2.0 slots on that server are a real bummer for this. I am moving up to LSI 93XX cards, and even my trusty 9206-16Es need PCIe 3.0. So the Dell server is not good for ZFS due to lousy PCIe 2.0, and the Threadripper seems like too much horsepower for a storage-only appliance. I could run Proxmox and ZFS on the TR, but I am trying to minimize my odds of failure here. Hence the new B550 build; the old Dell stays on the shelf. It is pretty old tech now, anyway.

Sound like a decent ZFS starting point/plan to you?

Should be fine. Sorry about the late response. You may eventually want to move up to something like a 730XD.
 

odditory

Moderator
Dec 23, 2010
385
75
28
Also: populate your empty slots with empty caddies. Otherwise the fans suck most of the air through the empties (path of least resistance) and not so much across the drives (not cooling them as well). My shelves came with 2.5" to 3.5" adapters in all the caddies, so I installed those in the blank caddies when I had empty slots, and added some tape (packing or masking tape) to the 2.5"/3.5" adapters to slow down the airflow. Basically I simulated a 3.5" drive. That way, all 15 slots get an even shot at the air those raging fans pull in.
Quoting this for importance. If you have open slots, the drives have almost no airflow.

On one occasion after adding one of these shelves, additional caddies had not yet arrived, but I had enough for 12 x 20 TB disks (groups of 4, with the fifth slot blank in each). During an array init I noticed drive temps uniformly at 51-53°C. I then placed a foam insert into each of the three blank drive slots, and temps dropped uniformly to 34-35°C, meaning the airflow is engineered very well when the device is configured as intended. Not that this is shining a light on anything previously unknown, but the degree of difference is striking.
 

BigPines

New Member
Dec 3, 2024
2
0
1
Newbie here. I bought this thinking it was just a normal SAS enclosure. I have an Areca ARC-1882IX-24 and I have loaded the EMC KTN-STL3 up with 22 TB Western Digital Red Pros. I connected to the bottom expander and got no love. All it does is crash my computer. :( The interposers that came with it are mostly 303-286-003C-00, but I have four 303-115-003C-003D. Am I barking up the wrong tree here?
 

bonox

Active Member
Feb 23, 2021
101
28
28
Do you get the same result with just one drive installed? I've never seen that first interposer part number before, but try one drive of each interposer type as a test.

More info on what "crash my computer" means would also be helpful. There's nothing about the shelf that would cause an OS to simply die.
 

BigPines

New Member
Dec 3, 2024
2
0
1
I tried using only the four 303-115-003C-003D interposers that I had, because I read those were verified to work with SATA, and I got the same results. The card did see the enclosure sometimes but could never attach any of the drives. I have two of these units and I tried them both.

I didn't want to get into the weeds about what it does to my computer, because I don't think it is important to getting it working, but it freezes the Areca ARC-1882IX-24 so that it no longer responds on its web interface. This appears to be unrecoverable and requires a reboot to fix. Because the RAID card stops responding, eventually I can't access any disks attached to that card, and since I have VMs and other processes running on those disks, the computer eventually becomes unresponsive or unusable until I reboot it.

I will try one drive at a time with each type of interposer. I am told I need 303-116-003D interposers to work with SATA drives. Those seem to be very difficult to find.
 

Fiberton

New Member
Jun 19, 2022
23
5
3
I tried using only the four 303-115-003C-003D interposers that I had, because I read those were verified to work with SATA, and I got the same results. The card did see the enclosure sometimes but could never attach any of the drives. I have two of these units and I tried them both.

I didn't want to get into the weeds about what it does to my computer, because I don't think it is important to getting it working, but it freezes the Areca ARC-1882IX-24 so that it no longer responds on its web interface. This appears to be unrecoverable and requires a reboot to fix. Because the RAID card stops responding, eventually I can't access any disks attached to that card, and since I have VMs and other processes running on those disks, the computer eventually becomes unresponsive or unusable until I reboot it.

I will try one drive at a time with each type of interposer. I am told I need 303-116-003D interposers to work with SATA drives. Those seem to be very difficult to find.
SAS drives work perfectly. I have never tried any SATA drives; folks say they work. I have 90 SAS drives in my enclosures and they are still running like a top.
 

LowKey

New Member
Jan 2, 2025
1
1
3
I got my hands on an EMC KTN-STL3 today that was about to be scrapped. Disks naturally weren't included due to privacy laws, but having no previous knowledge of the device and finding no manuals or other support documentation available online, it was my absolute pleasure to run into this 14-page thread!

Just wanted to thank everyone who has contributed such valuable information so far, and I hope I'll be able to do likewise once I get this up and running. I was originally filled with so many questions (will it take drives bigger than 2 or 4 TB, how the channels are split, how loud it is, etc...), and pretty much everything I could think to ask got answered in reading through the 270 or so posts here!

Every time a model number was mentioned, I went and checked, and to my great pleasure I found that my unit has the 071-000-553 PSUs (3rd Gen. VE), the E13 revision of the 303-108-000E controllers, and all 14 of the caddies I got with it are the SAS/SATA hybrid type (100-563-430) with the 303-116-003D interposers. Now I just need to get myself one more to complete the set.

Since this came as a bit of an unexpected surprise, I still have to get myself a Dell PERC H810 card and some SFF-8088 cables (or just the one, since I'm going to be running SATA drives), and obviously a bunch of disks... But every project starts somewhere, right?
 

macrules34

Active Member
Mar 18, 2016
427
30
28
41
I was thinking about using an X-Brick (an EMC XtremIO cluster containing two service processors, one DAE filled with SSD drives, and battery backup). Since you can't use the X-Brick software without an expensive license, I was thinking that I could install TrueNAS on the service processors. Would I be able to use TrueNAS to do failover (having two paths to the storage and servers)?