Tintri T5060 chassis (2-node, 24-bay SFF): what is it really?


MPServers

New Member
Feb 4, 2024
We retired a Tintri T5060 chassis and rather than scrap it, I thought it'd be interesting to see what it's running under the hood.

My best guess is that it's based on a Xyratex chassis, or more likely a Seagate Exos 2U24 model, since it probably came out after Seagate bought Xyratex.

It has 24 SFF drive bays up front; each caddy holds a SATA SSD (Samsung SM863) with a SAS interposer integrated into the caddy, presumably to provide dual-path access.

Around the back it has 2 hot-swappable server nodes. Each one has a part # 1013583-03, which Seagate's site seems to know something about (I also get that model # when I use Seagate's serial # lookup), but it's a dead end there... no support page or anything.

Each server blade has dual E5-2640 v2 CPUs and a total of 116 GB of RAM (I know... weird). One DIMM internally is a bit shorter, presumably to make way for a cable that goes to a battery mounted on the lid of the node and has to be routed right over the top of that DIMM, so it may have a lower capacity. LOL

I managed to pound some keys during boot, and I think it was F2 that got me into a setup-type screen where I can select the BIOS setup utility, a "device manager", etc. That screen says "XP-SM-1" with a BIOS version of "SummitPoint.v04.03.0020".

Going into the BIOS setup, it's an "InsydeH2O Setup Utility". The "platform board type" in there is listed as "Seagate Camaro", and again I couldn't find anything about it.

Well, the reason I'm trying to find more info... I did manage to get Windows booted on this. It has a pair of M.2 SATA (not NVMe) drives in there... I think 320 GB and 120 GB, from different vendors. One of them (the boot M.2) is actually installed in a caddy that's accessible from the rear while the node is installed, which is kind of cool. The other sits in an internal M.2 socket.

The annoying thing is that the WHOLE TIME I was doing all that, I had just one of the two power supplies plugged in, because there is NO fan throttling. It is easily, by miles, the loudest thing I've ever encountered, and I've been in some loud datacenters... a jet engine blasting in my ear the whole time I was fiddling with it. I taped a piece of cardboard and even some vent ducting to it to redirect the noise a little, but you could still hear this thing outside my house, it was so bad.

So obviously I'm not going to use this for anything at all, but I began to wonder what kind of software or drivers might be able to manage that fan speed. The Tintri software is Linux-based, but you don't get a shell with it, so I couldn't check for drivers without breaking into it, which I might do, but I really have no interest in the Tintri ecosystem. Plus, I recall that when I worked on this in our datacenter it was pretty loud too, so I can't say for sure whether it did any fan throttling at all.
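If anyone else ends up poking at one of these, the first thing I'd try is simply asking the BMC what it thinks the fans are doing. This is just a sketch of generic ipmitool usage (assuming whatever OS you boot has ipmitool installed and the kernel IPMI modules available); I have no idea yet whether this chassis actually routes the PSU fans through the BMC or exposes any control over them:

# load the in-band IPMI interface (standard kernel modules)
sudo modprobe ipmi_si ipmi_devintf

# list all sensors and filter for fan / temperature readings
sudo ipmitool sensor | grep -iE 'fan|temp'

# dump just the fan entries from the sensor data repository
sudo ipmitool sdr type Fan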

Does any of this ring a bell with anyone? Encountered something like this before? I can try to post some pics here if it jogs any memory.

In the meantime, I discovered that at least one of the 24 drives I pulled from this had the Samsung "bug" where the endurance setting throttles reads to ridiculously low speeds, and that never triggered any faults in Tintri. I shudder to think of the slowness we had in VMware because of that, which we could never nail down...
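If you're checking your own pulls for that read-throttling behaviour, a quick sequential-read test makes the affected drives obvious. Just a sketch, assuming the SSD shows up as /dev/sdX on whatever Linux box you stick it in (substitute the real device, and make sure you pick the right one):

# rough buffered-read benchmark; a healthy SATA SSD should land around 450-550 MB/s,
# while a throttled one will be dramatically slower
sudo hdparm -t /dev/sdX

# a longer sequential read straight off the device, just to confirm
sudo dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct status=progress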

I had more luck with some Rubrik chassis that we also retired. Those turned out to be 2U 12-bay (LFF) boxes with *4* nodes installed in the back. Older E5-2600 v2 processors with just one CPU installed (but socketed for another if desired). Those are rebranded Supermicro units (the nodes are X10DRT-H), and I was able to update the BIOS on them to get rid of the Rubrik boot logo, etc. Pretty nice little thing... each node has access to 3 of the drive bays up front, so they're fun little boxes, and the fans actually do ramp up/down with temperature (I assume). :) They might make a fun little Proxmox cluster for home labbing.

When I plugged in both power supplies, though, it kept tripping my 20A GFCI (but not the breaker). I realized each supply is rated at 1200W on 120V, so... maybe during startup, when it's running full bore, it was going over? But it ran fine with just one supply, and the chassis says the other is really for redundancy and shouldn't be necessary. My ultimate home lab will involve setting up a 30A 208V circuit anyway, so we'll see.
 
Reactions: wifiholic

MPServers

New Member
Feb 4, 2024
I'm still puzzling my way through this box. I got Windows installed on both of the server nodes and that's all good. There's a PLX PEX 8732 PCIe switch in there and I found a Broadcom driver that at least got that to show up correctly in Device Manager.

Although Windows was seeing the internal M.2 drives just fine, I put a drive into one of the 24 front bays and expected one node or the other to see it, but no luck. Then I remembered I might need the MPIO feature installed, so I added that, and still no luck.

I figured I'd try Linux on one of them to see how things look from there, and it's definitely better. I can see the test drive I put into bay 0, plus Linux gives way more useful info on the devices it finds, so I can see this for sure:
0b:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)
Subsystem: Xyratex SAS3008 PCI-Express Fusion-MPT SAS-3 [122e:8004]
Kernel driver in use: mpt3sas
Kernel modules: mpt3sas
--------------------

And the PLX info:
0c:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)
Kernel driver in use: pcieport
0d:01.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)
Kernel driver in use: pcieport
0d:08.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)
Kernel driver in use: pcieport
0d:09.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)
Kernel driver in use: pcieport
10:00.0 Bridge [0680]: PLX Technology, Inc. PEX PCI Express Switch NT0 Port Virtual Interface [10b5:87b0] (rev ca)
Subsystem: PLX Technology, Inc. PEX 8732 32-lane, 8-Port PCI Express Gen 3 (8.0 GT/s) Switch [10b5:87b0]

-----------------------

The info on the mainboard of the server node itself:
*-core
description: Motherboard
product: XP-SM-1
vendor: Seagate
physical id: 0
version: Type2 - Board Version 2
serial: Type2 - Board Serial Number
slot: XFF Slot 1
*-firmware
description: BIOS
vendor: Seagate
physical id: 0
version: SummitPoint.v04.03.0020
date: 21/12/2015
size: 128KiB
capacity: 4MiB
capabilities: pci upgrade shadowing cdboot bootselect edd int9keyboard int14serial int10video acpi usb biosbootspecification uefi

------
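For reference, the listings above are the sort of thing you get from lspci and lshw; something like this should reproduce them on either node (a sketch, assuming an Ubuntu-style install with the usual tools present):

# PCI devices with numeric IDs plus the kernel driver/modules in use (the SAS and PLX listings)
sudo lspci -nnk

# full hardware tree, including the motherboard and BIOS details shown above
sudo lshw | less

# or just the firmware/board tables straight from SMBIOS
sudo dmidecode -t bios -t baseboard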

I can probably live with some kind of Linux solution to mess around with this, but I was really hoping I'd be able to get Windows clustering working on here. Without MPIO working, or even being able to see the disks in the front bays, I'm a little stuck.

Some other things I found interesting: the noisy fans live in the power supplies. I unplugged those fans (the actual power brick has its own), and the system boots but lights an error LED; eventually it goes into some kind of failure state where the power shuts off and comes back on, so that's no good. But I did notice that, given time, the fans do start to spin slower on their own. Maybe it's because when I was testing previously I only had one server node installed, so it was in a failsafe mode where the fans just run at full speed?

Another odd thing: each power supply has a lithium battery pack inside it... purpose unknown. Maybe to keep indicators on the front panel lit if power is lost and it still needs to show some kind of status?

So, with the extra info from Linux (Ubuntu 22.04, FYI) seeing the drive, I know it works that way, and maybe I can take the vendor/device info and work backwards to see what Windows is missing... Windows has the "Avago SAS3 3008 Fury" driver, so I wonder if it has anything to do with the PLX driver? Is that PCIe switch responsible for switching access to the different SAS channels in the front bays? I don't really know much about how that works or what role it plays, if any, in drive access.

It's been ages since I set up a Windows cluster where I needed MPIO. Fiddling with "mpclaim" brought back memories. :) But maybe I just missed a step and need to go back to basics.
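For my own notes (and in case it turns out to be the step I missed), this is roughly the MPIO dance I remember, from an elevated prompt. It's a sketch of the stock Windows tooling, not something I've confirmed fixes anything on this box:

rem add the MPIO feature, then reboot
dism /online /enable-feature /featurename:MultiPathIO

rem list the vendor/product IDs of storage that MPIO considers eligible
mpclaim -e

rem claim all eligible devices for MPIO, suppressing the reboot
mpclaim -n -i -a ""

rem show which disks MPIO has actually claimed
mpclaim -s -d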
 

MPServers

New Member
Feb 4, 2024
I've still been exploring the whole idea of putting Windows on these. So far, no joy. Under Linux, it can "see" the drives, but any attempt to write to them fails. Under Windows, it doesn't see the drives at all even though the LSI 3008 driver is present and working, and I'm assuming the PLX PCIe switch (an 8732) is acting as the SAS expander that lets it see all 24 drives. From what I can tell, the PLX is configured as 3x 8-channel downstream and 1x 8-channel upstream, which would make sense for the LSI 3008 (with its 8 SAS channels) to be able to reach all 24 drives.

Unfortunately, despite having the PLX drivers in Windows, it's just not acting correctly as a SAS expander. I don't know what Linux does differently, but it works to the point of being able to see the drives, just with something missing. And that may be the multipath functionality, where the server node doesn't "own" those drives. I'm unfamiliar with multipathing in Linux, so I may need to learn that.
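Since I need to learn the Linux side anyway, here's the rough shape of dm-multipath on Ubuntu as I understand it so far. Consider it a sketch rather than something I know applies to this box's topology:

# install and start the multipath daemon
sudo apt install multipath-tools
sudo systemctl enable --now multipathd

# minimal config: let multipathd discover paths on its own and use friendly names
printf 'defaults {\n    find_multipaths yes\n    user_friendly_names yes\n}\n' | sudo tee /etc/multipath.conf
sudo systemctl restart multipathd

# show each multipath device and the underlying paths it groups together
sudo multipath -ll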

In the meantime, I mounted my backup of the Tintri OS onto a virtual Linux box so I could explore it a little. On the 320GB M.2 SATA drive that it boots from, there's a Linux RAID 1 partition of 180GB, and that is mirrored onto the 180GB internal M.2 SATA drive. Curious.

In that mirrored partition are a handful of other logical volumes, including a pair of Tintri-specific ones that appear to be the previous version and the current version, I guess in case you need to roll back to the previous revision after an update.
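For anyone curious how I poked at the backup, it was basically the standard loop-device/md/LVM routine on an Ubuntu VM. A sketch; the image filename is whatever you called your backup (tintri-boot.img below is made up), and everything is attached read-only:

# attach the disk image read-only with its partition table scanned
sudo losetup -rfP --show tintri-boot.img      # prints e.g. /dev/loop0

# look for md RAID members on those partitions and assemble the mirror read-only
sudo mdadm --examine /dev/loop0p*
sudo mdadm --assemble --scan --readonly

# scan for the LVM volume group / logical volumes living inside the md device
sudo vgscan
sudo lvscan
sudo vgchange -ay

# mount one of the logical volumes read-only and browse it
sudo mount -o ro /dev/mapper/<vg>-<lv> /mnt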

Again, my lack of more in-depth Linux knowledge has slowed me down: although I can see files related to the PLX and LSI adapters, I can't really tell how it all works together to make the magic happen. I'm hoping I can at least suss out how it does any multipathing (if at all... it could be entirely bespoke for all I know, in some weird Tintri cluster way). I know from watching the actual Tintri OS boot when this was operational that whichever controller/server boots first becomes the master, and you can't even log in to their simple front-end on the inactive node... it tells you that you need to log in to the other one. So I'm assuming it's just a race: one or the other starts up first, declares itself the winner, takes ownership of the drives, and becomes the cluster master.

I know that hacking on an old Tintri T5000-series box is probably of interest to a very limited audience, but I'm posting my comments and thoughts along the way in case it helps anyone else down the road (and also because it may elicit advice from someone who has gone down this road before and could steer me onto the right path). :)

FYI, I did kind of confirm that the fans will run at full jet-turbine speed if either controller node is removed, so that's why it was so stinkin' loud the first time I tried it. When both are installed, it's quieter... typical server noise, so it's loud, but not LOUD.

I also realized that one of the server nodes had a mostly dead CR2032 battery... no wonder it kept losing the BIOS settings and wouldn't keep the UEFI boot entry I'd configured on it. Swapped it out with a fresh one and so far, so good. The old one measured about 0.15 V, so yeah... toast. I should check the other one just in case, even though I haven't noticed it losing settings. This T5060 went operational around 2016, so a CR2032 failing after 8 years sounds normal. It probably gave up the ghost a couple of years ago, but the default settings were fine (the Tintri boots via BIOS, not UEFI, so even if the settings defaulted it would have booted fine and not complained).
 

MPServers

New Member
Feb 4, 2024
I'm still not really any closer to getting any other OS to do anything useful with the front 24 drive bays... I tried some "period-accurate" versions of RHEL and CentOS (6.10) that seem to match the kernel of the Tintri OS (which I suspect is RHEL 6.10... the kernel seems to be 2.6.32-10). I installed a stock RHEL 6.10 and copied over the modules from the Tintri image, such as "xybridge.ko", which I now think acts as a PCI channel between the two nodes (in addition to being part of the expander path, by way of the PLX 8732?), but it fails to load when I do a modprobe. I think it has to do with missing symbols from the "agigaram.ko" module used for the NVDIMM support. I can't remember which Linux version I tried, but I do remember that on some older release, perhaps CentOS 7, the NVDIMM showed up "out of the box" as a block device...
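In case it helps anyone following along (or anyone who can tell me what I'm doing wrong), this is roughly how I've been interrogating those .ko files. A sketch of the generic module tooling; the module names are real ones from the Tintri image, but the load order is only my guess that xybridge wants symbols exported by agigaram:

# what each module says about itself: vermagic, dependencies, parameters
modinfo ./agigaram.ko
modinfo -F depends ./xybridge.ko

# try loading the suspected dependency first, then the bridge module
sudo insmod ./agigaram.ko
sudo insmod ./xybridge.ko

# the kernel log names the exact symbols it couldn't resolve
sudo dmesg | tail -n 20

# to use modprobe instead, the modules have to live under /lib/modules first
sudo mkdir -p /lib/modules/$(uname -r)/extra
sudo cp agigaram.ko xybridge.ko /lib/modules/$(uname -r)/extra/
sudo depmod -a
sudo modprobe xybridge   # note: --force-vermagic only skips version checks; it can't fix missing symbols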

Anyway, this Tintri T50x0 series uses identical hardware to IBM's "tapestor" ProtecTIER models, specifically the TS7650G chassis, and the server modules match up with IBM's 3958-DD6 option. Unfortunately, everything related to this on IBM's site is behind a registration, so even when logged in, if you try to get to where they host software for it, you aren't getting it unless you've registered your ProtecTIER.

It's definitely a Xyratex design that Seagate puts out... the motherboard is listed as a "Seagate Camaro", and it's running customized MegaRAC BMC firmware (with a little Tintri logo in the upper right corner of the banner).

I did update the embedded LSI 3008 controller on each one. They had a MUCH older version 12 firmware, which took the version 16 firmware just fine. It was already in IT mode, so there was no need to crossflash from IR to IT or anything fancy.
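If anyone wants to do the same, the usual route is Broadcom's sas3flash utility; here's a sketch (the firmware image name below is a placeholder, not the actual file for this embedded 3008, and flashing the wrong image is a great way to brick a controller, so double-check first):

# show every SAS3 controller the utility can see, with current firmware/BIOS versions
sudo ./sas3flash -listall

# details for controller 0, confirming it's IT firmware before touching anything
sudo ./sas3flash -c 0 -list

# flash the newer IT firmware onto controller 0 (placeholder image name)
sudo ./sas3flash -c 0 -o -f 3008IT_P16.bin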

The SAS expander appears to be a Seagate XP-SM24SXP-SM-1 model, based on how it shows up on the couple of Linux versions that actually see it. It's just strange that those OSes can see the expander and the couple of drives I put in there (one drive in each of the 3 expander sections... so bays 1, 9 and 17); they just can't *write* to those drives... weird.
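That model string comes out of the enclosure/SES info, by the way; if anyone wants to compare their chassis, this is roughly where I've been looking (a sketch, assuming lsscsi and sg3-utils are installed; /dev/sg<N> is whichever sg node lsscsi reports for the enclosure):

# SCSI devices including the enclosure entry, with their sg nodes
lsscsi -g

# the expander and enclosure objects the mpt3sas driver registered
ls /sys/class/sas_expander/
ls /sys/class/enclosure/

# ask the enclosure's SES processor which diagnostic pages it supports
sudo sg_ses /dev/sg<N>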

I've had zero luck chasing down any kind of source for those "xybridge" or "agigaram" modules, etc. I was hoping that if I got a close match to the kernel they were apparently compiled for (2.6.32-10), I could shoehorn those .ko files in and get the drives working, but the missing-symbols thing is throwing me. I'm not sure what I'd have to do besides fudging the module symbol files to force them in?

Looking through some of the scripts in the Tintri OS, like a "common functions" file (cmn_funcs), is entertaining. For example, it randomly gives the small/medium/large-sized T600 models the codenames papa/mama/baby bear, with a funny note:
# PCL: I don't know what to call the various models, so
# they are henceforth named after Goldilocks and the Three Bears
# since the intent (I guess) is to have a large, midrange, and
# small box

Also, for what it's worth, it seems the built-in IPMI/BIOS password it expects is "MuffinMan"... found that in a few different spots, like that common functions file and the firmware check/update scripts. :) There are some other funny comments in there from whoever wrote those. Reminds me of comments I've put in my own scripts... what must people think when they read those later on?

So essentially, I have no problem installing whatever OS I want... Windows Server 2022 installs fine, as do the various Linux flavors/versions like CentOS, RHEL, Ubuntu, etc. Windows won't see those front drive bays at all, even after installing what I'm assuming is just a "dummy" driver for the PLX 8732 (it probably has just enough info to get rid of the Device Manager "bang", but doesn't do anything). The Linux flavors may or may not (usually do) see the drives, but any attempt to write fails. The NVDIMM module (a single 4 GB module with a supercap attached to the lid of the server node) only showed up natively in some Linux version... I can't recall which now, but it may have been CentOS 7 or RHEL 9? I was throwing a bunch of things at it and kind of lost track of that finding.

If anyone has any suggestions on getting .ko files to load properly by copying them over to another system with a "mostly" matching kernel, that'd be great. The best match I found was RHEL 6.10, which I think was 2.6.32-20 instead of the 2.6.32-10 the Tintri OS was running... that should be "close enough" to work, shouldn't it? And I feel like I've chased down every alternative vendor that sold this or a similar chassis. The Seagate Exos E or AP 2U24 is close, but the server modules are slightly different, plus I couldn't find any downloadable software/drivers for them anyway. Stinkers. These super expensive (when new) boxes were SO dang proprietary and walled off, but they're still kind of fun to tinker with, so I'm just hoping somewhere out there someone has something... source for the drivers. :)
 

MPServers

New Member
Feb 4, 2024
After more fussing, I've come up with the following "partial" solution to being able to do something with it.

TL;DR = TrueNAS SCALE, but only on the lower server node, and only with a legacy BIOS boot, not UEFI.

It all came down to discovering that some things only work under legacy BIOS, such as the NVDIMM support in Linux. I went back to figure out which Linux variants I'd seen the "pmem0" device show up in, and realized it appears in all of them (RHEL, CentOS, Ubuntu) as long as I boot in BIOS mode. If I do a UEFI boot, it just isn't there. I tried Windows with a BIOS boot as well, but Windows apparently has no support for that Agigaram module at all. If I enable the "battery" feature for the NVDIMM in the BIOS settings (it does have a working supercap, after all), Windows sees *4* unknown batteries appear that it doesn't know what to do with. Oh well... so Windows is out, due to the lack of any good drivers for these things.
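For anyone checking whether their boot mode exposes the NVDIMM, this is what I've been looking for. A sketch using the standard pmem tooling (ndctl is in the normal distro repos); I'm only claiming the device shows up this way, not that the supercap-backed persistence actually works:

# the raw persistent-memory block device; for me it only appears on a legacy BIOS boot
ls -l /dev/pmem*
lsblk | grep pmem

# ndctl gives a bit more detail about the namespace backing it
sudo apt install ndctl
sudo ndctl list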

The Xyratex stuff in the Tintri OS (which I think is CentOS 5.11, but with kernel 2.6.32-10... why so old, I don't know) is 3 different things:
- The "xybridge" module
- An "xyvnic" module, which I think is used for communication between the server nodes (heartbeats, maybe sync traffic?)
- An "xymir" module, which is a non-transparent bus module

Those are all components of that PEX 8732 as far as I know, and their primary purpose is, I think, just that inter-node stuff, so kind of unrelated to the storage system.

The LSI 3008 on each server node is hooked to the front drive bays through some type of Seagate expander in the chassis, which works, but here's the tricky thing: I could never get Linux configured so that one server *or* the other "owns" the drives. I did some things with multipathing, but since that's more for cases where you have dual paths from one host to the storage, it's kind of irrelevant here, I think. I just don't know how Linux clustering would work in a case where you have two nodes and one shared DAS between them. Maybe the multipathing is still important, and I did have both nodes configured with multipath and added my 3 test drives' WWIDs, but beyond that I just gave up for now.

What I did learn is that the bottom server node (which the IBM TS7650G documentation seems to list as node 1, with the top being node 0) will "own" those drives unless something else tells it otherwise. That kind of checks out with what I saw when these were actually running the Tintri OS before I decommissioned it: the bottom node would come up as the master controller when it powered on.

So what that means is I can install Linux on that bottom node, and it'll see the drives and I can read/write. Which is ironic, because in all my early testing I left Windows on that one and tried out the different Linux variants on the top node, which could read, but not write, those drives, so I didn't catch that little fact until just the other day.

I have TrueNAS *SCALE* running on it right now (the top node is currently powered off). I tried TrueNAS CORE, but it's FreeBSD-based and wasn't seeing all of the drives for some reason. I've heard others report the same thing (LTT, for example, with a repurposed NetApp), which gave me the clue to try SCALE, and that's working well.

I'm not sure what would happen if I turn on that top node and let it try to access the drives while the bottom node is doing something... best case, it ignores the write commands like it has all this time; worst case, the drives freak out from the write attempts and I get weird things happening on that TrueNAS install. But this is testing for fun, so I'm probably going to try it while copying a bunch of data to the test SMB share I set up.

So all is not lost: I do have a "kind of" solution where I can use the 24 front drive bays for something. Just not the dual-server, shared-SAS DAS I was hoping for, like the old HP MSA P2000 G2 and G3 units I managed long ago. Those had Windows support. Just saying... :)

I'm probably done messing with this for the time being unless I have some other insights. My next tinkering will be with some of our old NetApp servers/shelves that we turned off a couple of years ago and that are collecting dust. I feel like there's more chatter about reusing those (at least the shelves, with external SAS controllers in more modern servers) than this oddware Tintri/Xyratex/IBM chassis, so we'll see.
 
Reactions: wifiholic

MPServers

New Member
Feb 4, 2024
FYI, a little more research showed there's something called a "Reduxio HX550" that uses this same chassis/canisters. There's not a lot of info out there about it, but I found this write-up from 2018 with pictures of what it looks like. Same thing as the Tintri T50x0 and that IBM TS7650G:
Reduxio HX550 Review

It seems Reduxio went out of business, so there's really nothing I can find from them now. There are a few things in the Wayback Machine, but the support pages were all behind a login, so that's no help. Here are a couple of images of that Reduxio, better than any online images I could find of the Tintri. The colors of the sections on the back of the server canisters are a little different, but the layout is identical.

StorageReview-Reduxio-HX550_No-Bezel.jpg
StorageReview-Reduxio-HX550-Rear.jpg
 

GreyJedi73

New Member
Mar 16, 2024
Let me begin by saying that you might be the miracle I need right now…

I am trying to get a Tintri T5080 (same family as your T5060) resurrected so that we can use it as storage for a virtual infrastructure that I’m putting together for one of our new teams.

Because we were hacked a couple of years ago, the 24 disks that came with our unit are considered a crime scene, so I had to replace those with 12 new disks. And that seems to be the problem.

The Tintri unit sees the 12 new drives but has marked them as "spares". This has me thinking that a record of the previous RAID array must be stored somewhere, and it's using that as a starting point against which these new disks look like spares. The logs even report that the MD-RAID is degraded and waiting to be rebuilt.

Furthermore, it reports that the controllers are offline. That cannot be true as I have been able to log into the web interface.

I need a way to reset the controllers to factory defaults but I have no service manual to guide me. Indeed it seems to me that Tintri has not made those manuals available at all.

Because I’m not in the datacenter right now, I have had to rely on pictures of the motherboard from eBay. I can see jumpers that might be the key but that’s as far as I go.

By the way, how did you mount the Tintri OS into a virtual machine?

I have attached a screenshot of the hardware tab which I’ve marked up. I have also attached pictures of the vmstore console which show the status of the controllers. Any suggestions you might have would be most welcome.

Thanks in advance for your help.
 

Attachments

MPServers

New Member
Feb 4, 2024
I could never get my backups of the internal drives to boot when I restored the images, and I blame Macrium Reflect (free edition). Either I did something wrong, or it really just doesn't work with Linux filesystems that well.

The best I could do was restore them to virtual drives and mount them under Ubuntu so I could explore the contents. As a result, after I formatted the originals to try out different operating systems, there was no going back to the Tintri OS. There's a thread on Reddit where someone mentioned possibly being able to provide a way to reinstall the OS from scratch, but I haven't heard back after PM'ing him. Oh well.

In your case, I'm not sure what the issue could be. I was never involved with setting up our Tintri when it was operational; I just maintained it while we were still running VMware against it, so I can't really say what it takes to reset it. As long as you're able to log in to it, though, it seems like you should be able to manage it. I think the Tintri T5080 only differs from the T5060 in the size of the drives it came with (3.84 TB instead of 1.92 TB) and otherwise has the same specs on the "canister" server nodes (same CPU, and I think the same memory, etc.). The SAS controller is an LSI SAS3008, and to the best of my knowledge they use that PLX 8732 PCIe switch for communication between nodes (sync traffic, heartbeats, and possibly other things I don't know about).

I'm assuming you can log in to the local console and web interface (I see your screenshots; I'm just being over-explainy for anyone else later), using a keyboard and monitor plugged directly in (to the bottom node, most likely... it seemed to want to be the primary controller after startup). After making sure the network is set up correctly, you should be able to reach the web interface, and that might get you further along with setting up the vmstore itself.

That version of the OS is a little old... the last one for the T5000 series is 4.6.3.5, and even if your support contract expired, at least in my case I'm still able to log in and get that last version, and you can also see the documentation. You can also get their "Tintri Global Center" (TGC) software, which is just an appliance-type management system for it. It's been a while and I never used it much, so I don't remember exactly how to do things in there. :)

Depending on how long this was sitting before you fired it up again, you might check the network cabling between everything to see if that explains the lack of redundancy you saw. I don't know whether that would prevent it from starting things up.

As far as I know, there's nothing too special about the drives. I don't know if you have the original drive trays (or got some spares from somewhere). The drives themselves are SATA, but there's a SATA-to-SAS interposer board, and without that you're probably not going to get it working, since I think there's some multipathing it does (unclear on that, though). The interposer lets that work with regular SATA drives. You could, if you wanted, remove the interposer and mount a SAS drive directly, seated a bit further back. I tried that on a couple of drive bays when testing TrueNAS and it works okay, just FYI.

You're probably right, though, that the Tintri is expecting to see all 24 drives and is upset that only 13 are present. 13 seems to be the minimum it needs: these were sold with either 13 drives or all 24 (and you could get an expansion kit with 11 more; when you pop those in, an "Expand Capacity" button shows up).

You can also try looking at the logs and diagnostics. Create an autosupport bundle; you should be able to download it and look through it on your own, and it will have logs in it. I've only created an autosupport bundle a couple of times, back when we were still under a support contract, so I'm not super familiar with it, but I recall it was pretty easy to do.

In the end, although the Tintri OS is kind of nice and good at what it does, I wanted something a little more flexible, so I don't care too much that I could never get it going again. It's fun that I could install TrueNAS (or Proxmox, or even Unraid if you want) on the bottom node, and it's unfortunate I could never figure out how to get any of those to do failover clustering, but I guess if that were easy, Tintri wouldn't have had a business model. I wasn't using this for any real business needs anyway, just to play around with and see what I could get it to do.

So... all that to say, sorry I don't have more help for you. :)
 

oldHDDs

New Member
Mar 16, 2024
Has anyone had any luck re-using the SSD drives from a decommissioned Tintri T50x0? I keep hearing that they are BIOS-locked.

Thanks in advance.
 

GreyJedi73

New Member
Mar 16, 2024
@oldHDDs: I wouldn’t say that I have been able to reuse them as such. But when I have plugged one of those into an external SSD caddy, it was detected and only needed to be initialised.

Now the particular drive on which I tried this was not one that we had used ourselves, but it was a refurbished drive. I’m not sure you can still get these new.
 

MPServers

New Member
Feb 4, 2024
I've had good luck. They're just regular 2.5" SATA SSDs with no special firmware; I just plugged them in and they worked. When I first tried one, I used an old HP ProLiant (a DL360 G6). That ProLiant has --slow-- SATA speeds and you could really tell. But when I hooked the drive up to the ProLiant using the included interposer (with a SAS extension cable, obviously, since the caddy from the Tintri doesn't fit), the ProLiant's P400/P410 runs SAS faster, and it recorded higher transfer rates. Just an interesting little thing I noticed.

I parted out about half of the 1.92 TB drives from my Tintri to other systems in my home lab: various Dell PowerEdge boxes, but I've also tried them in desktops, some Supermicro, etc., and they just work.

Of the 24 I had, most showed an SSD wear level (check the SMART attributes) of at least 92%, and half were at 98-100% (the two at 100% were probably the spares; I think they had newer manufacture dates). But two of them were at 22-26%, which was interesting. They were manufactured the same year as most of the others (2014, I think) and had the same number of power-on hours, so it was just weird to see a couple with so much wear. I'm assuming it means they were using more of the spare area to remap bad spots. Those two are now living a quiet life in some unimportant desktop systems.
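If you want to check wear on your own pulls, smartctl makes it quick. A sketch, assuming the drive shows up as /dev/sdX; on these Samsung SM863s the wear figure should be attribute 177 Wear_Leveling_Count (it counts down from 100), though I'd sanity-check that against your own output:

# full SMART attribute table
sudo smartctl -A /dev/sdX

# just the wear and power-on-hours lines
sudo smartctl -A /dev/sdX | grep -Ei 'wear|power_on'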

The other drives I'm keeping in the Tintri for the moments when I'm still fussing with getting other operating systems on there, but really they're just handy spares for other computers as needed. :)
 

GreyJedi73

New Member
Mar 16, 2024
@MPServers
You have already helped quite a bit. I just have one more question: do you have any idea who the motherboard manufacturer is for the Tintri T50x0 controllers?

Our unit is pretty much intact. We still have the original drive trays and the SAS interposer cards. In fact, the presence of those cards in bays that had no disks led to the BIOS reporting that it could not unlock all the disks. And when the Tintri web interface was accessed, it would show those bays with no disks with a red X on them, almost as if they were failed disks.

It was only after watching a YouTube video in which some exec talks about expanding drive capacity that I noticed the empty bays they showed as part of the demonstration had no cards in them. I then removed the cards from those 11 bays and got a more sensible display of drives. That's the image I sent you earlier.

Incidentally, you are not likely to be able to use just any SATA drive with this. Earlier in this project we tried a consumer-grade Samsung SATA SSD, specifically the 870 Evo. The BIOS saw it, but the web interface marked it as incompatible. We could have saved quite a bit of money if we could have used consumer-grade disks.
 

GreyJedi73

New Member
Mar 16, 2024
@MPServers I have just opened up one of the controllers. Perhaps a jumper setting would reset this thing to factory defaults; the question is which jumper it is.

I have attached pictures in case you might have any ideas.
 

Attachments

MPServers

New Member
Feb 4, 2024
Yeah, I've tried to track down the *exact* make/model of the Tintri chassis. It's definitely a Xyratex/Seagate chassis, but as for the exact model, I couldn't find anything more specific than that. There are all kinds of clues in the hardware (like "Seagate XP-SM24SXP-SM-1" or the board being "Seagate Camaro"), and the drive caddies are identical to what Xyratex chassis used before Seagate bought them. It's identical to that IBM TS7650G chassis and the Reduxio HX550, but I couldn't find any software/drivers for those either as a way to compare.

In my case, since I didn't care about using the Tintri OS, I just hooked up a keyboard/mouse and a monitor using the Tintri mini-HDMI to VGA adapter. With that you should be able to get into the BIOS setup pretty easily (hit ESC repeatedly while booting and it should get you in). Once you're in there, you can change the BIOS settings as much as you want. You can also boot from a USB drive to install your own OS. I had no problems installing whatever I wanted on there, and it supports UEFI as well as legacy BIOS.

The only problem I ran into was that only the lower node could access the disks (and Windows can't see them at all, I think because, as I've discovered, Windows just isn't good at seeing drives in an expansion chassis behind a SAS3008 controller, which is what this uses... Linux has no problem with it, though, so TrueNAS, Unraid, Proxmox, and plain Ubuntu, for example, can all see the drives).

The top node can see the drives, but any attempt to write to them gives an error because they're "owned" by the bottom node. The Tintri OS has some kind of driver that allows failover to take place, and the PLX 8732 has to be part of that, I'm guessing, acting as an inter-node channel to sync up state and manage access to the backplane.

If you want to reset the BIOS to defaults, I think I did that by removing the CMOS battery and shorting across the battery holder contacts for a few seconds to drain any residual charge, just to simulate a long-dead battery. When it boots, you'll know it worked if you go into the BIOS options and the date/time has reset to the default.

When you're in the BIOS you can set the BMC IP address. The BMC is a fairly generic MegaRAC; I wish I could find an update for it. I actually had to spin up an old Windows 7 VM with IE and an old Java install on it just so I could use the remote console more reliably. I did have the remote console working with Java 7 and the Pale Moon browser (since the BMC only does TLS 1.0 or 1.1 or whatever, plus some old self-signed certificate quirks), but even that stopped working at some point, mostly because the JNLP refused to launch no matter how permissive I made Java about which certs are allowed. :)

Default user/pass for the MegaRAC... I could be wrong, but I think it was ADMIN / ADMIN (all uppercase for both). If it asks for some other BIOS password or BMC login, try the "MuffinMan" I referenced in an earlier post, although I never had to. There was no BIOS setup password, and ADMIN/ADMIN got me into the BMC.
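One more thought: if the Java console won't cooperate, a lot of the basics work over plain IPMI instead. A sketch using ipmitool from any Linux box on the same network (swap in the BMC IP you set; the credentials are the ADMIN/ADMIN above, assuming they were never changed). Serial-over-LAN may or may not be wired up on this board, so treat that last one as a maybe:

# sensor readings (fans, temps, PSU status) straight from the BMC
ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P ADMIN sdr elist

# power control without the web UI
ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P ADMIN chassis power status
ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P ADMIN chassis power cycle

# serial-over-LAN console, if the BMC has it hooked up
ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P ADMIN sol activate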