EMC KTN-STL3 15 bay chassis


abel68

New Member
Jan 22, 2025
4
0
1
Hey all, hoping someone might be able to help. I recently picked up a secondhand EMC KTN-STL3 off eBay; unfortunately it came with 303-116-003D interposers in all the caddies (I only have SAS drives, and my understanding is those only work with SATA drives). So I bought a handful of 303-115-003D interposers and put them in the caddies along with 2x 10TB HGST HDDs and 3x 3TB Seagate HDDs.
The enclosure is connected by an 8088-to-8088 cable to an 8088-to-8087 adapter card, through to an LSI 9211-8i card in IT mode.
The server itself is just an i7 9700 with unraid, but every time I try to interact with the hard drives in the enclosure they disappear from the OS and the lights on the caddies go out. Unplugging the enclosure from the power and back in makes the drives reappear but the same thing happens. I've tried running Ubuntu live off a USB and the same thing happens.
I was trying to reformat the drives to 512-byte sectors via the terminal, but the drives just drop every time. Would really appreciate some help!
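For anyone attempting the same 512-byte reformat: the usual tool is sg_format from the sg3_utils package. A minimal sketch, with a hypothetical device name and a helper (`fmt512`, my name) that defaults to printing the command rather than running a destructive, hours-long format:

```shell
#!/bin/sh
# Sketch: build the sg_format invocation used to low-level reformat a
# SAS drive to 512-byte sectors (sg3_utils). A real format destroys all
# data and can take hours per drive, so default to a dry run.
fmt512() {
  cmd="sg_format --format --size=512 $1"
  if [ "${DRY_RUN:-1}" -eq 1 ]; then
    echo "$cmd"   # dry run: show what would be executed
  else
    $cmd          # DRY_RUN=0: actually run the format
  fi
}
# fmt512 /dev/sg3   (hypothetical device; set DRY_RUN=0 to really run it)
```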
did you manage to solve the problem?
 

floydcohen

New Member
Jun 6, 2022
3
0
1
Can anyone with 15 SATA drives in their enclosure do a quick sequential-read benchmark (dd iflag=direct)?
My setup can't push past around 960 MB/s: ZFS (all raidz types, raid0), mdadm raid0/10/5, parallel individual disk transfers.
Before I buy another cable (the only thing I can think of), I just want to see whether there's a problem at all, or if this is the best I can get with this chassis on one cable. Or if there's an issue with my LSI 9207-4i4e. From what I know, this should be 24 Gbps raw, or roughly 2.4 GB/s of usable throughput after 8b/10b encoding. Another number in this thread that stuck out is the 1.9 GB/s ZFS scrub, but I'm not sure if that was dual-path/SAS drives, whereas I'm getting about half that.
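For reference, the wide-port ceiling works out like this (a quick arithmetic sketch; the 8b/10b encoding factor of 10 wire bits per data byte is why 24 Gbps raw is nearer 2.4 GB/s than 3 GB/s):

```shell
#!/bin/sh
# Sanity-check the x4 6Gbps wide-port ceiling, accounting for 8b/10b
# encoding (10 bits on the wire per data byte).
lanes=4
gbps_per_lane=6
raw_gbps=$((lanes * gbps_per_lane))       # 24 Gbps raw on the wire
usable_mbps=$((raw_gbps * 1000 / 10))     # /10 wire-bits-per-byte -> 2400 MB/s
echo "${raw_gbps} Gbps raw, ~${usable_mbps} MB/s usable"
```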

Current setup:
Fedora 41 (also tested under Windows 11 with Storage Spaces / CrystalDiskMark)
Dell Precision 3620 Workstation. Also tried HP EliteDesk 800 G3, Lenovo IdeaCentre 5 (10th-gen Intel i3), ASUS Prime B560M-A AC (10th-gen i5).
LSI 9207-4i4e (crossflashed 9217-4i4e), P20 firmware, IT mode, in a PCIe 3.0 x16 slot
EMC 038-003-787 8088 to 8088 cable
EMC KTN STL3 with both power supplies on
15 SATA drives with interposers. All drives 2TB 7200rpm Hitachi Ultrastar 7K3000 (HUA723020ALA640).

tuned-adm profile throughput-performance

dd iflag=direct of=/dev/null bs=64M status=progress if=20GBtest.bin
21052913719 bytes (21 GB, 20 GiB) copied, 21.8693 s, 963 MB/s

If I run six parallel direct reads (dd iflag=direct of=/dev/null bs=64M if=/dev/sd[a-f] &),
iostat shows roughly 150 MB/s per drive.
With 9 parallel direct dd runs: roughly 110 MB/s per drive.
With 15 parallel direct dd runs: roughly 60 MB/s per drive.
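The parallel runs above can be wrapped in a small helper so each target's dd summary prints as it finishes; a sketch under my own naming (`bench_read` and `FLAGS` are not from the thread, and the device names are examples):

```shell
#!/bin/sh
# Read every given device/file sequentially in parallel and print each
# dd summary line. Set FLAGS="iflag=direct" for raw-device runs.
bench_read() {
  for d in "$@"; do
    ( dd if="$d" of=/dev/null bs=64M ${FLAGS:-} 2>&1 | tail -n 1 ) &
  done
  wait   # block until every background dd has finished
}
# Example (hypothetical device names):
#   FLAGS="iflag=direct" bench_read /dev/sd[a-f]
```

Dividing each reported rate into the single-stream number gives a quick feel for where the shared 4-lane link saturates.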

lspci -vvnn
01:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns
LnkSta: Speed 8GT/s, Width x8

lsiutil -p1 -s

LSI Logic MPT Configuration Utility, Version 1.71, Sep 18, 2013

1 MPT Port found

==============================================================================

ioc0 LSI Logic SAS2308 D1 MPT 200 Firmware 14000700 IOC 0

SAS2308's links are 6.0 G, 6.0 G, 6.0 G, 6.0 G, down, down, down, down

B___T___L Type Vendor Product Rev SASAddress PhyNum
0 0 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1009 9
0 1 0 Disk ATA HUA723020ALA640 AA30 5006048001ce100a 10
0 2 0 Disk ATA HUA723020ALA640 AA30 5006048001ce100b 11
0 3 0 Disk ATA HUA723020ALA640 AA30 5006048001ce100c 12
0 4 0 Disk ATA HUA723020ALA640 AA30 5006048001ce100d 13
0 5 0 Disk ATA HUA723020ALA640 AA30 5006048001ce100e 14
0 6 0 Disk ATA HUA723020ALA640 AA30 5006048001ce100f 15
0 7 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1010 16
0 8 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1011 17
0 9 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1012 18
0 10 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1013 19
0 11 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1014 20
0 12 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1015 21
0 13 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1016 22
0 14 0 Disk ATA HUA723020ALA640 AA30 5006048001ce1017 23
0 15 0 EnclServ EMC ESES Enclosure 0001 5006048001ce103e 24

lsiutil -p1 option 16

Type NumPhys PhyNum Handle PhyNum Handle Port Speed
Adapter 8 0 0001 --> 7 0009 0 6.0
1 0001 --> 6 0009 0 6.0
2 0001 --> 5 0009 0 6.0
3 0001 --> 4 0009 0 6.0

Expander 25 4 0009 --> 3 0001 0 6.0
5 0009 --> 2 0001 0 6.0
6 0009 --> 1 0001 0 6.0
7 0009 --> 0 0001 0 6.0
9 0009 --> 0 000a 0 3.0
10 0009 --> 0 000b 0 3.0
11 0009 --> 0 000c 0 3.0
12 0009 --> 0 000d 0 3.0
13 0009 --> 0 000e 0 3.0
14 0009 --> 0 000f 0 3.0
15 0009 --> 0 0010 0 3.0
16 0009 --> 0 0011 0 3.0
17 0009 --> 0 0012 0 3.0
18 0009 --> 0 0013 0 3.0
19 0009 --> 0 0014 0 3.0
20 0009 --> 0 0015 0 3.0
21 0009 --> 0 0016 0 3.0
22 0009 --> 0 0017 0 3.0
23 0009 --> 0 0018 0 3.0
24 0009 --> 24 0019 0 6.0

Enclosure Handle Slots SASAddress B___T (SEP)
0001 8 500605b00b3551e0
0002 16 5006048001ce103e 0 15


Power Management actions menu, select an option: [1-99 or e/p/w or 0 to quit] 2

CurrentPowerMode: 0x00
PreviousPowerMode: 0x00
PCIeWidth: 0x08
PCIeSpeed: 0x02
ProcessorState: 0x00000000
PowerManagementCapabilities: 0x0000010C
IOCTemperature: 0x0049
IOCTemperatureUnits: 0x02
IOCSpeed: 0x01
BoardTemperature: 0x0000
BoardTemperatureUnits: 0x00

Current Port State
------------------
SAS2308's links are 6.0 G, 6.0 G, 6.0 G, 6.0 G, down, down, down, down

Software Version Information
----------------------------
Current active firmware version is 14000700 (20.00.07)
Firmware image's version is MPTFW-20.00.07.00-IT
LSI Logic
Not Packaged Yet
EFI BIOS image's version is 7.27.01.01

Firmware Settings
-----------------
SAS WWID: 500605b00b3551e0
Multi-pathing: Disabled
SATA Native Command Queuing: Enabled
SATA Write Caching: Enabled
SATA Maximum Queue Depth: 32
SAS Max Queue Depth, Narrow: 0
SAS Max Queue Depth, Wide: 0
Device Missing Report Delay: 0 seconds
Device Missing I/O Delay: 0 seconds
Phy Parameters for Phynum: 0 1 2 3 4 5 6 7
Link Enabled: Yes Yes Yes Yes Yes Yes Yes Yes
Link Min Rate: 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
Link Max Rate: 6.0 6.0 6.0 6.0 6.0 6.0 6.0 6.0
SSP Initiator Enabled: Yes Yes Yes Yes Yes Yes Yes Yes
SSP Target Enabled: No No No No No No No No
Port Configuration: Auto Auto Auto Auto Auto Auto Auto Auto
Interrupt Coalescing: Enabled, timeout is 10 us, depth is 4
 

pld

New Member
Feb 14, 2022
27
0
1
Hi there

1. Could somebody share some kind of health-status checking script? Linux based. The sg_ses --join --filter command gives me tons of information, and I have no idea which parts are significant.

2. Did anyone try to connect ktn-stl3 with NETAPP PM8003 controller?
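Not a full answer to question 1, but a minimal sketch of the kind of filter that is often enough: keep only element status lines that are not "OK". The `status: OK` pattern matches sg3_utils output (as in the sg_ses listings quoted elsewhere in this thread); `ses_health` is my own name for the helper:

```shell
#!/bin/sh
# Minimal SES health filter: print only element status lines that are
# not "OK". Reads `sg_ses --join` output on stdin (sg3_utils format).
ses_health() {
  grep 'status:' | grep -v 'status: OK'
}
# Typical invocation (hypothetical device name):
#   sudo sg_ses --join /dev/sg8 | ses_health
```

Empty output means every reported element (fans, temperature sensors, PSUs, slots) is healthy; anything printed is worth a closer look.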
 

chadtn

New Member
Jan 15, 2025
6
0
1
It's been mentioned several times that only the bottom controller works with SATA disks, but some of my internet searches mentioned multiplexing interposers that sound like they might mimic the secondary path. Is that really a thing, or am I just confused? Are there interposers that would allow the EMC KTN-STL3 controllers to support MPIO with SATA drives? If so, are there any that people would recommend?

I'm wanting to build a Hyper-V cluster and was looking for a way to share a couple dozen SATA JBODs between two hosts without having to add a third node for shared storage. The main use case would be Chia farming, so I can live-migrate the guest OS when patching hypervisors without losing access to the Chia plots. I might also fill another shelf for IP camera recordings in the future. Both scenarios are pretty similar... just need the drives accessible from either side of the cluster so the VMs don't lose access when I have to reboot the opposing side for patching. The storage the VMs themselves live on is completely separate from the data drives I'm trying to share between the hosts.

Thanks!

Chad
 

nexox

Well-Known Member
May 3, 2023
1,987
990
113
Are there interposers that would allow the EMC KTN-STL3 controllers to support MPIO with SATA drives?
If they exist they're rare, and while that would probably solve the physical-layer connection issue, you can't just connect a disk (SAS or otherwise) to two hosts and start using it: standard filesystems will corrupt almost immediately, and only one port of a spinning SAS drive can be active for I/O at a time, among other issues. By the time you solve all of those problems you'd have been better off just getting a network storage server and doing it the simple way.
 

chadtn

New Member
Jan 15, 2025
6
0
1
If they exist they're rare, and while that would probably solve the physical-layer connection issue, you can't just connect a disk (SAS or otherwise) to two hosts and start using it: standard filesystems will corrupt almost immediately, and only one port of a spinning SAS drive can be active for I/O at a time, among other issues. By the time you solve all of those problems you'd have been better off just getting a network storage server and doing it the simple way.
I'm still in the planning stages for the cluster and not very familiar with shared storage. Right now I've got 16 SATA JBODs on one host connected to an LSI 9305-16e HBA. I ran across four or five EMC KTN-STL3 shelves with caddies, controllers, power supplies, and cables for next to nothing, and was wondering if it would be possible to get MPIO working with the SATA drives. My thought process was to only have the drives mounted on one host at a time and automate the dismount/remount as part of the live migration of the guest OS to the other host.

The data on those drives doesn't need to be backed up or duplicated; it just needs to be accessible from either side. The VMs themselves live on striped NVMe drives, and I still need to sort out that piece as well if anyone has suggestions. I've been watching eBay for used dual-port 100Gb Mellanox ConnectX-5 cards. I'll probably directly connect the hosts and do some kind of vSAN replication there. Half the fun for me is figuring out all of the new stuff as I learn. heh..

Thanks!

Chad
 

bonox

Active Member
Feb 23, 2021
130
41
28
It can be done with SAS drives, provided that only one host is running at a time. A failover may well corrupt something in the transition.

As for doing it with SATA, that's just another layer of complexity on top of the same issues. MPIO is mostly for two controllers on the one server, as a failure mechanism for the HBA, not to cover failure of the host as a whole. It's not generally intended (outside of a full SAN setup) to provide that behaviour across multiple hosts.
 
  • Like
Reactions: nexox

SeaneyC

New Member
Jan 27, 2023
11
10
3
Hi all,

Has anyone had any issues with the 3rd set of 5 slots randomly dropping out? The array was fine with 10 disks in Unraid, but now that I'm at 12 I keep losing one or both of the new disks every week or two. I've already tried different interposers (SAS and SATA) on the disks and different slots from 11-15, but it still keeps happening. The most annoying thing is it only happens to one or both disks every few weeks; I drop the disk(s) out of Unraid, rebuild from parity, and it's fine again for a while.

Anyone got any suggestions for best options to try and troubleshoot first?

So far I've got (in order of cost/hassle):
  1. Try a different port on my HBA (9201-16e)
  2. Swap the controllers in the disk shelf over (only using bottom one as half the disks are SATA)
  3. Buy a new 8088-8088 cable
  4. Buy a new HBA
  5. Buy a new shelf
Answers on a postcard please…
 

bonox

Active Member
Feb 23, 2021
130
41
28
I think, from memory, that the shelves have blocks of disks assigned to each of three SAS lanes, with one lane assigned to the downstream port. Dropouts on a specific block of disks are likely to be the connection to the SAS lane involved. This could mean, in order of cost/likelihood:

1. The cable
2. The connectors in the shelf controller/HBA
3. The HBA is faulty on a lane
4. The backplane in the shelf has faults in the traces or an internal connection

I'd personally vote for the cable, then the HBA.
 
  • Like
Reactions: Fiberton

SeaneyC

New Member
Jan 27, 2023
11
10
3
Thanks for the reply - this was my understanding too, but I wasn't sure if anyone had inside knowledge of these where one channel of the expander regularly died. I couldn't find anything, so I suspect this is a me problem. I'm going to move the cable over later, given that's a 2-second job, then order a new cable and wait for it to arrive.
 
  • Like
Reactions: Fiberton

bonox

Active Member
Feb 23, 2021
130
41
28
I'd be astounded if just the segment of an expander corresponding to a set of disks specifically assigned to one SAS lane died with no other issues. It's a single logic controller and circuit board. It's far more likely to be the lane - either at the connections or the cable in the middle, even more so given the intermittent nature.

For completeness though, is the HBA actively cooled?
 
  • Like
Reactions: SeaneyC

SeaneyC

New Member
Jan 27, 2023
11
10
3
So this was a fun one. After swapping disks into some different parts of the drive enclosure to see what else I could try for free, I found that the issue followed the drives, which at least meant I had a good idea it was something to do with the drives or the interposer boards. Fortunately I already had some spare interposers, which didn't make any difference, so either I had 2 bum drives that pretty much always failed at the same time, or something else was at play.

For reference, they are both 16TB Seagate Exos drives, and they exhibit the same issues as the Unraid thread here: https://forums.unraid.net/topic/146...ou-encounter-random-shutdowns-or-read-errors/

Since setting the drives to never power down, I don't seem to have had any further issues. I'm going to leave it for another week or 2 to confirm I haven't just got lucky recently, and then try changing the 2 firmware settings in the thread to see if this resolves the issue while still retaining the spin down.

Pretty random one, and it seems to be related specifically to the firmware on the drives, combined with using an LSI HBA in Unraid with the drive spin-down feature. Thought I'd drop the (hopefully) good news back in this thread in case anyone else is experiencing the same issues :)
 
  • Like
Reactions: Dennisjr13

noj11

New Member
Sep 27, 2025
1
0
1
Hello all,

I am new to the EMC array gang - I love all the great information.

I got a great deal on two EMC AAE or Unity D3123F enclosures.
I think it's basically the same as the KTN-STL3, but 12Gb SAS, and it comes with the quiet PSUs.
They came fully populated with 30 6TB SAS HGST drives :)
I have been using an MD1200 for years but got tired of the noise, even after fan-modding it.

I thought I would pass some information along, as I have had a weird fault which has had me tearing my hair out - in case anyone has the same issue.
I use a PERC H840 card and had cabled them up correctly with redundancy,
BUT for some weird reason, in OMSA one array would always report a critical failure on connection 0.
I tried multiple cabling options and swapping drives around, but it didn't seem to make any difference.
I had a look with sg_ses and I have solved it:
it turns out that if you have a caddy with an interposer in it but no drive, it throws a critical error, which OMSA mistakenly interprets as a SAS cabling error.
I have removed the interposers from the empty caddies and I'm good to go.
Also, only one enclosure reported this error for some reason.
I have 303-286-003C-00 interposers - I am going to try some 303-115-003D to see what happens.
 

trozzadozza

New Member
Jan 12, 2026
1
0
1
I've recently picked up a KTN-STL4. It's got the 6Gb SAS controllers from the STL3. The fans seem to be at full speed and I cannot connect to it. The server it's connected to is running Unraid as well.
 

bonox

Active Member
Feb 23, 2021
130
41
28
This sounds like a world of hurt. The STL4, as well as being older, was made for FC drives, not SAS. I've not seen one, but I would expect a raft of differences in things like the interposers (if indeed it has any at all). The STL3 uses the same FC backplane connectors as the STL4, and the interposer takes care of the connector and protocol difference to the drive; I'm not sure if it's a bidirectional conversion - normally an interposer lets you connect a SAS drive to an FC controller, but EMC's closed ecosystem didn't have to follow a standard, and since the backplane is mostly passive it should just be flow-through from drive to interposer to backplane to controller. The power supplies would also be the original designs, not the newer STL3 revisions that gained fan control. FC as a protocol is not compatible with SAS, and while the controllers may fit mechanically, they might be nothing alike electrically; thinking about this more, though, the predominantly passive backplane should probably negate that issue. That said, the FC boxes were also designed to connect to an arbitrated-loop controller node, which is not as simple as a SAS HBA.

On top of that, the original disks (if that's what you're using) are probably formatted with 520- or 528-byte sectors, which Unraid won't be able to use even if it could see them.

I'd start by looking at the interposers you've got (or not got) and go from there. I'd bet that if it appears 'dead' to the controller, you've not got the correct interposer for the SAS controller and whatever drives you have. And don't forget that unless you're using SAS drives, you can only use the bottom "A" controller to connect to your HBA.
 

entropy47

New Member
Feb 8, 2026
3
0
1
Every Google search I have made about the KTN-STL3 leads me to a different post somewhere within this massive thread - delighted to have found a collection of people who might know more about this disk shelf than EMC ;)

My shelf is functioning well but it seems pretty loud. Part of me says "hey, this is just server equipment" - and it's certainly louder for a few seconds when it first turns on. On the other hand, I have tried unplugging one of the power supplies - a little orange light appears on that power supply - and it doesn't get any louder. I had heard that this would typically force it into Quite Loud mode, so I wondered if I might be stuck in Quite Loud mode all the time.

Happy to run any sg_ses commands that might help - I'm not at all familiar with this util, but buried deep within the ancient scrolls I spotted this. It looks like at least some of the fans are going at max RPM?

Code:
crispin@lonelyplanet:~$ sudo sg_ses --page=es /dev/sg8 | grep -B 3 -A 3 rpm
      Overall descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
        Ident=0, Do not remove=0, Hot swap=0, Fail=0, Requested on=0
        Off=0, Actual speed=5300 rpm, Fan at third lowest speed
    Element type: Temperature sensor, subenclosure id: 2 [ti=18]
      Overall descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
--
      Overall descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
        Ident=0, Do not remove=0, Hot swap=0, Fail=0, Requested on=1
        Off=0, Actual speed=5300 rpm, Fan at highest speed
      Element 0 descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
        Ident=0, Do not remove=0, Hot swap=0, Fail=0, Requested on=0
        Off=0, Actual speed=5300 rpm, Fan at highest speed
      Element 1 descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
        Ident=0, Do not remove=0, Hot swap=0, Fail=0, Requested on=1
        Off=0, Actual speed=5300 rpm, Fan at highest speed
    Element type: Temperature sensor, subenclosure id: 3 [ti=21]
      Overall descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
--
      Overall descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
        Ident=0, Do not remove=0, Hot swap=0, Fail=0, Requested on=0
        Off=0, Actual speed=2690 rpm, Fan at third lowest speed
      Element 0 descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
        Ident=0, Do not remove=0, Hot swap=0, Fail=0, Requested on=0
        Off=0, Actual speed=2690 rpm, Fan at third lowest speed
      Element 1 descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
        Ident=0, Do not remove=0, Hot swap=0, Fail=0, Requested on=0
        Off=0, Actual speed=2690 rpm, Fan at third lowest speed
    Element type: Temperature sensor, subenclosure id: 4 [ti=24]
      Overall descriptor:
        Predicted failure=0, Disabled=0, Swap=0, status: OK
I know just enough about computers to know I could be missing something terribly obvious, so here's a picture of it on the floor of my office that shows a few part numbers, which ports I have and haven't connected, etc. I would try anything to fix this - dodgy serial commands, hardware mods, prayer...
 

bonox

Active Member
Feb 23, 2021
130
41
28
Can only suggest you refer back many pages to the people who have mentioned the different revisions of the PSUs and compare with what you have. If you've got a later rev 2 or 3, that's as good as it gets - bearing in mind that people have subjective opinions of "loud", and unless you're recording levels with proper rigour there's no way to compare internet opinions.

The other point would be that the PSUs don't wind up when you remove one until after the timeout marked on the case. I think it's 1 or 2 minutes, after which they'll ramp up to jet-engine levels. Is that what you did, or just a pull, listen, and replug?
 

entropy47

New Member
Feb 8, 2026
3
0
1
Thanks for the reply. If I'm reading this right, I have rev 8 (?!) power supplies. I'll try leaving it unplugged for more than 2 mins - I might have been *just* shy of that. Just for clarity, I am unplugging the electrical cord but leaving the PSU module seated in the chassis (if that makes a difference).

I can dig up a proper dB meter, but I was hoping the RPM figures might help if other folks could see what speeds theirs idle at. A `sudo sg_ses --page=es /dev/sg8 | grep -B 3 -A 3 rpm` from anyone who feels theirs is idling quietly would be very handy.

Interesting side note - I only have 3 disks in mine (and blank plates/caddies for the other 12 slots). At some point I plan to fill it up - I bought this for future-proofing - but I'm pretty confident I don't need as much cooling as this thing is designed for :)
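For easier comparison across replies, that grep can be boiled down to a frequency count of the reported speeds. A sketch (`fan_rpms` is my own helper name; the `Actual speed=... rpm` pattern matches the sg3_utils output quoted above):

```shell
#!/bin/sh
# Tally how many SES fan elements report each speed. Reads sg_ses
# output on stdin and counts the "Actual speed=NNNN rpm" fields.
fan_rpms() {
  grep -o 'Actual speed=[0-9]* rpm' | sort | uniq -c
}
# Typical invocation (hypothetical device name):
#   sudo sg_ses --page=es /dev/sg8 | fan_rpms
```

Posting just the two-column tally (count, rpm) would make idle speeds from different shelves easy to line up.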
 

entropy47

New Member
Feb 8, 2026
3
0
1
I've powered it up now by plugging in a single PSU cord (normally I try to jam both in simultaneously). After two minutes it's the same volume as it was with both plugged in. I plugged the other one back in and waited two minutes - still no change. I've heard people describe the "failsafe mode" with one PSU as screaming like a jet engine, and while it's currently much louder than it needs to be, you couldn't call this jet-engine loud.

I interpreted the "2 mins" sticker on the back as "wait 2 minutes after removing power before you handle this stuff, because it gets hot", but your explanation makes more sense - I find things can stay hot for a lot longer than two minutes!

Maybe I need to take one of these bad boys apart and try my own hardware mod - I can't see any writeups around. Do you know if all the fans are in the power supplies, with them just forcing air over the disk shelf that way? Or are there more hidden elsewhere in the innards of the chassis?
 

bonox

Active Member
Feb 23, 2021
130
41
28
Fans are only in the PSUs. They're radial blowers and only exhaust air out the back, creating a more or less constant low pressure through all the drive caddies - which is why you can't leave the empty bays open, since you'd get no useful cooling for the populated bays. You could, however, slow down the fans and fully blank the unused slots if you've only got 3 drives, but keep in mind the need to cool the controllers, since they contain some hot chips that are probably spread around.
 