EMC KTN-STL3 15 bay chassis


SINN78

Active Member
Apr 3, 2016
111
25
28
45
I'm up to 6 of these enclosures, with 5 rail kits. They have been hit or miss, averaging about US$20. 2 in Canada, 3 in the US. Just keep an eBay search going for them; it's never the same vendor.
Do you have a link to the rail kits that have worked? Thanks for the help guys, appreciate it.
 

gregsachs

Active Member
Aug 14, 2018
591
205
43
Do you have a link to the rail kits that have worked? Thanks for the help guys, appreciate it.
Someone else suggested the APC 0M-756H kit as being reasonably priced and very sturdy; I'm using one with a Xyratex XB-1235 myself. <$30 on eBay.
 

MishaZabrodin

New Member
Mar 18, 2020
12
0
1
Hello,

I just got an original EMC 5800 system with 175TB of total storage: 11x KTN-STL3 shelves, each with 15TB of mixed SSD and SAS storage.
It all came with a lot of cables and all the original caddies with HDDs.
This is a link:
PUBLIC - Google Drive

I'll be happy to answer any questions about my system.

Also, can someone help me with how to connect a KTN-STL3 shelf to a PC or server?

I tried to connect the SAS and SSD drives directly to an IBM 3850 X5 server backplane; the RAID card sees the disks but can't create a virtual drive.
Are these special disks, or are they specially formatted?
 

gregsachs

Active Member
Aug 14, 2018
591
205
43
Do you want to connect the whole system, including the management system, or just a shelf?
If you want the whole system, I'd imagine it can make space available over iSCSI, possibly over SMB as well.
If you just want a shelf, you are on the right track, but the disks are likely 520-byte format and need reformatting to be used directly.
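A quick way to confirm that before going further: once the shelf is cabled to any SAS HBA, the sg3_utils tools (the same ones used later in this thread) will report each drive's logical block length. A minimal check, assuming sg3_utils is installed and the first drive shows up as pd1 (device names will differ on your system):

sg_scan
sg_readcap pd1

If sg_readcap reports a logical block length of 520 bytes, that drive needs an sg_format pass down to 512 bytes before Windows or a plain HBA will use it.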
 

MishaZabrodin

New Member
Mar 18, 2020
12
0
1
Do you want to connect the whole system, including the management system, or just a shelf?
If you want the whole system, I'd imagine it can make space available over iSCSI, possibly over SMB as well.
If you just want a shelf, you are on the right track, but the disks are likely 520-byte format and need reformatting to be used directly.

Only the rack/shelf; the caddy is marked as 520 BPS. I need to physically connect the shelf to a PC, that's the first issue.
 

gregsachs

Active Member
Aug 14, 2018
591
205
43
Only the rack/shelf; the caddy is marked as 520 BPS. I need to physically connect the shelf to a PC, that's the first issue.
Looks like normal SFF-8088 connectors.
RAID or HBA will drive your choice. It looks like it will take full-height cards, so something like a 9201-16e would work, or something like a 9286CV-8e for a RAID controller.
That plus an SFF-8088 cable from Amazon or eBay.
 

MishaZabrodin

New Member
Mar 18, 2020
12
0
1
Looks like normal SFF-8088 connectors.
RAID or HBA will drive your choice. It looks like it will take full-height cards, so something like a 9201-16e would work, or something like a 9286CV-8e for a RAID controller.
That plus an SFF-8088 cable from Amazon or eBay.
Thank you, that makes sense (the connector shape looks compatible).
I presume I can use the existing cable that is used to connect the shelves together.
 

MishaZabrodin

New Member
Mar 18, 2020
12
0
1
Update on KTN-STL3 connection to IBM 3650 M4:

1. Purchased an LSI SAS9200-8E-HP with 2 SFF-8088 external ports.
2. Changed the BIOS from 07.05.04.00 to 7.39.02.00 using sas2flash -b mptsas.rom.
3. Changed the firmware from 05.00.13.00 to 20.00.07.00 using sas2flash -f 9200-8e.bin.
4. Windows 10 Pro 64-bit recognized the card as LSI Adapter, SAS2 2008 Falcon.
5. Installed 15x 600GB 15K SAS drives, HGST model HUS156060VLS600; these are original EMC drives with a 520-byte sector size.
6. Connected an original EMC cable to the OO port on the KTN-STL3 and port 1 on the 9200-8E-HP.
7. In Windows Device Manager, all 15 drives appeared as HITACHI HUS15606 CLAR600 SCSI Disk Device.
8. When trying to format with Disk Management, the disks appeared as "not ready" (or something like that).
9. sg_scan showed all drives as pd1, pd2 ... pd15.
10. sg_format --format --size=512 pd1 formatted the first disk, then pd2 and so on.
11. Formatting takes around 1 hour per drive; to reduce that time I formatted 8 drives at the same time, which took 1.5 hours for all 8. Just open 8 CMD windows and run sg_format in each, changing the pd number (a scripted version is sketched at the end of this post).
12. After the format was done, I used Windows Disk Management to create a striped volume containing all 15 drives, for a total of 8.4TB.
13. Windows was able to use that volume as an 8.4TB E: drive.
14. CrystalDiskMark 7.0 results: 2162MB/s read and 1582MB/s write, image attached.
15. Tried the port with the two square symbols instead of OO; same result.
16. Tried connecting with 2 cables instead of 1; I could see double the number of drives in Device Manager, but the size and transfer speed were the same.
17. Couldn't boot with the SAS 9200-8E-HP in desktop HP and Asus motherboards; the PC just hung.

18. Still have a problem formatting the Micron P410m 200GB SSD because of the 520-byte sectors (need help)!

19. Next I will try to connect to CentOS 6 running on an IBM 3850 X5 server, using the same card.
20. Next I will try a second shelf attached to the first, adding another 15 drives.


Summary: I have an 8TB shelf with 2.1GB/s read speed!! Is that good? Can I improve it? What would another solution be?
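For anyone who wants to avoid opening 8 CMD windows by hand, here is a rough PowerShell sketch of step 11. It is only an illustration: the pd device names are examples (use whatever sg_scan reports on your machine), and it assumes sg_format from sg3_utils is on the PATH.

# Start one sg_format per device so several drives are low-level formatted in parallel.
# The pdN names below are placeholders; substitute the ones sg_scan reports.
$devices = 'pd1','pd2','pd3','pd4','pd5','pd6','pd7','pd8'
foreach ($dev in $devices) {
    # --format --size=512 rewrites the drive with 512-byte sectors, same as the manual step above
    Start-Process -NoNewWindow -FilePath 'sg_format' -ArgumentList '--format','--size=512',$dev
}

Each sg_format instance runs independently, so this does the same thing as the eight CMD windows, just launched in one go.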
 

Attachments

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
Good evening gents, been a while and I stumbled on this thread. A while back I scored one of these chassis for a good price and thought, why not. My current media server is running in 5x SM 16-bay chassis, with the server in one and the other four as DAS. It's been OK for a couple of years, but I have started having some backplane issues and thought these EMC units could start replacing them. The other factor has been the use of Windows Storage Spaces and, to be honest, M$ can go root their boots; I'm sick of bullsh!t updates and no improvements to a subsystem that really does suck for parity, yes, 16x 3TB drives in double parity each.

Anyway, here are the details, some of which have already been covered. The drives are formatted at 520 Bps or 528 Bps, both of which are no good for Windblows. You will be able to detect them in Windows and on RAID cards, but that's it; you can't do anything else with them.

I found a post (linked below) about adding the required files to a version of Windows that has PowerShell (Server or Windows 10) and then following the directions.

See this Link: EMC/NetApp Branded 520b block size SAS Drives ? : homelab

· Windows Server or Windows 10; you need PowerShell.
· EMC chassis full of as many drives as you want; connect more in a daisy-chain if needed.
· SAS HBA – connect only one of the SFF-8088 cables, as some HBAs have a poopy-pants moment when dual-port drives present twice.
· Check in DISKPART that you can see the drives (a quick PowerShell sector-size check is sketched just after this list).
· Run the rest as per the instructions in that bloke's post, one instance per drive. It does take some time (several hours) as it is a low-level format; run a fresh instance for each drive so multiple drives can be done at the same time.
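As a quick sanity check before and after the low-level format, PowerShell can report each drive's sector size directly. This is just a convenience suggestion on my part, not something from that post, and it assumes the drives are visible to the Windows storage stack at all:

# List physical disks with their sector sizes; EMC drives should show 520 (or 528)
# as the logical sector size until they have been reformatted down to 512.
Get-PhysicalDisk | Select-Object FriendlyName, LogicalSectorSize, PhysicalSectorSize, Size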

Take note about HBAs and dual-port drives. If you want to run SATA drives, they will only be detected on one of the interfaces. If you run the supplied SAS drives, they are dual-port, and if both backplanes are connected to an HBA, the drive will present itself to the system twice, causing issues. RAID cards do work better.

As for noise, if you are running servers, you will already be used to some noise. No, they are not silent, but they are very well behaved and would be fine in a cabinet with a door or in a room, just not your bedroom.

I still haven't figured out the comm ports for programming or control; that's on my to-do list.
 

BeTeP

Well-Known Member
Mar 23, 2019
661
441
63
Only about 12" deep, very shallow.
The chassis is 14" deep, and then you will need at least 1" of clearance in the front (if you have the door and want to be able to close it) and 2-3" of clearance in the back - the SFF-8088 cables are not very flexible, and even right-angle ones (rare and expensive) take almost 2".
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
The chassis is 14" deep, and then you will need at least 1" of clearance in the front (if you have the door and want to be able to close it) and 2-3" of clearance in the back - the SFF-8088 cables are not very flexible, and even right-angle ones (rare and expensive) take almost 2".
I wasn't pulling my units out to measure them, but for the sake of it, I have grabbed the tape measure out.
14.25" deep plus 1" of clearance for the PSU and I/O board pull-rings on the rear. You will need about 4" to clear cables and allow airflow.
 

devoye4001

New Member
May 20, 2020
2
1
3
Hi guys. I’m trying to get this device to work with a Windows Server PC equipped with an LSI MegaRAID 9201 controller. I have a cable that fits only the connector marked with diamonds, not circles. When I connect the device to the server controller, the disks do not appear in the disk manager. Am I right in thinking that everything will work as soon as I start using a cable suitable for the connector marked with circles? I have to buy it in that case.
By the way, can you tell me more about 520 bytes per sector?
 

devoye4001

New Member
May 20, 2020
2
1
3
Today I bought a cable marked with a circle (2x SFF-8088, key 4/6). After connecting the unit with this cable, the installed disks appeared in Disk Management!
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
Hi guys. I’m trying to get this device to work with a Windows Server PC equipped with an LSI MegaRAID 9201 controller. I have a cable that fits only the connector marked with diamonds, not circles. When I connect the device to the server controller, the disks do not appear in the disk manager. Am I right in thinking that everything will work as soon as I start using a cable suitable for the connector marked with circles? I have to buy it in that case.
By the way, can you tell me more about 520 bytes per sector?
Today I bought a cable marked with a circle (2x SFF-8088, key 4/6). After connecting the unit with this cable, the installed disks appeared in Disk Management!
The cables in question are SFF-8088; physically there is no difference between the double-O and the diamond ports. The circles are the uplink to the host; the diamonds are the downlink to the next chassis when daisy-chaining. Take note of which card you are connecting to as well, as not all drives are dual-channel.
 

Golfonauta

New Member
May 29, 2020
1
0
1
Hello, sorry but I'm really curious. I have 2 of these at home from an old VNX2, and I want to use them for my home NAS. After using them for some time with no issues on an external SAS controller (LSI 9207-8e), I have noticed that even when my NAS (Unraid) tells the drives to power off, they still consume a stupid amount of electricity (close to 170W for a full bay) instead of something close to 0 as they should (if it were like 50W... but 170W is too much).

I have an array of 15x 3TB drives, running Unraid with dual parity.

I'm wondering whether you have found any way to lower the consumption, or whether you know the internal build of the shelves well enough to search for a DIY solution.

Thanks in advance.