Sanity Check: Advice on Next Server Steps

Notice: Page may contain affiliate links for which we may earn a small commission through services like Amazon Affiliates or Skimlinks.

ere109

New Member
Jan 19, 2021
27
21
3
Denver
Like many homelabbers, I started my server journey with a three-generation-old, single processor system (with a Xeon E5 v1). I've run FreeNAS/TrueNAS exclusively and been happy. But I download and store a lot, enjoy trying new operating systems, and have added and reconfigured disks as I ran low on storage. After a lot of research, and a lot of dumb luck, the third chassis I bought was a CSE-836, which I decked out with an additional six SSD drives. That worked for three years, but my largest pool was very close to 80% full, and I had no more room to add disks, so...

I've recently completed a mad month of researching, dreaming and impulse buying, and now have two solid 836 chassis, and six motherboards, including an X11 Scaleable with SAS3. Understand: I don't work in IT, so the learning curve and financial outlay are significant drawbacks. At the same time, I love to learn. I've always been drawn to reading about and playing with tech. It's about understanding a system. I have learned so much, but there's so much left to understand and I can't dedicate all of my free time to one hobby. I thought I'd open the learning/teaching up to the community.

With two chassis, I have realistically already FAR exceeded my need, and my budget, but my compulsion to learn and to make full use of my available resources continues to dominate. Tech compulsion is my first problem. The second: how best do I utilize my server(s)? I'd love suggestions.

My Roles:
Storage
SMB File Server
Zoneminder w/ 5 cameras
Plex Media Server
Torrents
Virtual Machines

My Equipment:
CSE-836-A (4 x 8087 direct attach)
CSE-836-SAS2-EL1 (1 x 8087 expander)
SuperMicro X9SRH-7TF (E5-2670v1, 128GB RAM)
SuperMicro X9DRH-7F (2x E5-2620v2, 128GB RAM)
SuperMicro X11SPH-nCTF (24-Core Scaleable Platinum, 128GB RAM)
SuperMicro X11SSH-TF (E3-1275v6, 64GB RAM)
SuperMicro X11SSH-F (E3-1245v5, 16GB RAM)
SuperMicro X11SSM-F (empty)
Various SFP+, SAS2, HP expander, etc

The Plan:
Install 24-core X11 in EL1 backplane chassis, run ESXI and give SAS3 controller to Truenas: run one port to EL1 backplane (16-bay), run second port to HP SAS expander in "A" backplane chassis (16 more bays). But that means I'll have a second running chassis with nothing better to do, and some motherboards burning a hole in my pocket.

Questions:
Do I have a reason to run a second motherboard, other than as a JBOD?
Is ESXI the "best" hypervisor option for my purposes?
Someone told me about running a hypervisor INSIDE ESXI. Why?
Should I even ask about high availability?
M.2 or SATAdom boot?
How does Oculink fit into this?
What kind of expansion is an EL1 SAS Expander capable of?
How do I truly take advantage of 10gb ethernet?
With three 10gbase-T boards, would SFP+ add anything?
Why do many HP SAS Expanders come without a bracket, and how do I find a full-height bracket?
Is there a way to monitize unused processor potential (mining or...)?
Can I donate unused processor potential (folding?)?
I probably need to get behind a VLAN, but it's overwhelming.
How best do I unload excess equipment before my wife sees next month's credit card bill?

Hop in; discuss or answer anything that suits you. And thanks.
 
  • Like
Reactions: itronin

i386

Well-Known Member
Mar 18, 2016
4,247
1,547
113
34
Germany
Esxi is great, but could restrict you in what hardware can be used.
Proxmox is an alternative that supports a lot more hardware, which could introduce undocumented problems. If the system is not critical try proxmox :D

> M.2 or SATAdom boot?
do you have m.2 ssds and sata doms lying around? If yes use the m.2, if no: I would go with the satadom for the os and a m.2 for caching if budget is not a problem

>How does Oculink fit into this?
There is a rear drive bay kit for 836B chassis that can add two u.2/u.3 ssds. That kit is currently not listed on supermicros website, but some shops list it. Im not sure if there are backplanes for the 836 that support u.2/u.3 ssds...

>What kind of expansion is an EL1 SAS Expander capable of?
I'm not sure what you mean with that. Sas2 is okay for hdds, for ssds I wouldn't use them

>How do I truly take advantage of 10gb ethernet?
By using it and achieving speed faster than 1GBit/s (125+ MByte/s). Doesn't have to be 1GByte/s all the time :D

> Why do many HP SAS Expanders come without a bracket
There are different versions, some where mounted between the mainboard and the backplane hence no bracket.
 
  • Like
Reactions: ere109

mattventura

Active Member
Nov 9, 2022
448
217
43
run one port to EL1 backplane (16-bay), run second port to HP SAS expander in "A" backplane chassis
Minor change, but it would likely be better to run two links to the backplane and then plug the other chassis into the third port on the backplane. You'd still get an x4 link to the other chassis, but you'd have an x8 link to the EL1 backplane.

Is your main pool all SSDs? Or is it six SSDs in addition to the spinning drives? If the latter, it may work a little better to just get the rear NVMe kit, run two NVMe drives which would be faster than the six SAS/SATA SSDs, and free up the 3.5" bays for hard drives.
 
  • Like
Reactions: ere109

Markess

Well-Known Member
May 19, 2018
1,162
780
113
Northern California
I'm still pretty amateur on this stuff in a lot of ways, and I do my homelab on a shoestring, so my perspectives are going to be different than some of the pros. But sounds like you're at the beginning end too, so might be of help

Do I have a reason to run a second motherboard, other than as a JBOD?
If you have room, part of the fun of homelabbing is trying stuff. Sometimes, its nice to have a "physical" barrier between your file/Plex/etc. server that you need to work all the time, and a second (or third...or tenth) machine that you can experiment with...without fear of killing the wife's show on TV right at the good part. Spousal/Significnat Other Approval Factor needs to remain high at all times if you are going to be a successful homelabber.

Is ESXI the "best" hypervisor option for my purposes?
ESXI is great, but some of its strongest features are in the paid version (which I assume you will want to avoid given your concern about your spending weakness ;)) . For two machines, probably not an issue. But if you're like me you learn a lot of stuff from tutorials. Many of the tutorials out there are based on the paid version and the layout and features and solutions taken to accomplish things may be different than free. That isn't to say that there's not a lot of tutorials based on the free version, you just need to make sure you know what version the tutorial is for.

Proxmox is free, but as @i386 said above, there can be undocumented problems because Proxmox is "open" and hasn't been tested with every possible hardware combination. That said, I think if you're running Supermicro motherboards with "known brand" expansion cards that lots of people use, there's a strong chance that your problems will be either minimal, or already encountered and documented by somebody else.

Regardless of what hypervisor you select, I suggest that you take a quick look at the various You Tubers that have tutorials. Find one that you can tolerate watching (some are really poorly done) and that has a complete series of setup from start to finish for what you're trying to learn. That really helped me.

For example, I like the tutorial videos that Tom Lawrence does (mostly on the LearnLinux TV and Lawrence Systems channels on You Tube), as well as NetworkChuck for networking. I'm not affiliated with either, but I find their teaching approach fits my learning approach and they have "series" on different topics that let me follow a single person from start to finish. Lots of other free options though. Explore a bit.

Someone told me about running a hypervisor INSIDE ESXI. Why?
One reason may be that If you have only one machine in your lab, and you want to try out a different hypervisor, you're going to have to virtualize it. Another may be if you're trying to integrate other software that may not work with ESXi, but works with something else.

What kind of expansion is an EL1 SAS Expander capable of?
If you want to go the JBOD route, you can use a Supermicro AOM-SAS3-8I8E, or a different brand equivalent, to connect the JBOD box to your server's expander backplane. There are multiple versions of these kind of cards available, with both the older (8086/8087) and newer connectors. You can also just run a long cable from the EL1 backplane, through a hole in the chassis (remove a PCIE cover and sqeeze the cable through, and into the other chassis. But that's kind of janky and hard to work with when you need to break a box open. If you do decide to try this sort of thing, check prices carefully on places like eBay, because they tend to be all over the map.

You can even sell a few of those motherboards (or return them if inside the window) and use a JBOD card like this one https://www.servethehome.com/supermicro-cse-ptjbod-cb1-jbod-power-board-diy-jbod-chassis-made-easy/ . Multiple versions of the JBOD card available too, available, including one with IPMI (although that usually costs as much as a motherboard!).

I probably need to get behind a VLAN, but it's overwhelming.
Yeah, there's a learning curve. Lots of tutorials out there though.

How best do I unload excess equipment before my wife sees next month's credit card bill?
Don't know where you are...and don't know if you're serious or not :rolleyes:. But, if you don't want to mess with eBay, consider local online marketplaces. If you're on Facebook for example, there's local marketplace sales there. List your stuff and arrange to meet with buyers somewehre neutral, like a grocery store parking lot. There's Craigslist too in a lot of areas, but because its totally anonymous, there's a lot of sketchy folks and scammers on there.

One other thought...if noise isn't a significant factor and you want to have more machines to try more stuff in isolation from your file/plex/everything needs to just work server, you can sell/return one 836 and put some of those motherboards in 1U or 2U chassis (which cost a lot less). Sometimes, you just don't need a ton of disks for an experimentation only box, and a 1U chassis will suffice. If you go that route, the E3 based motherboards are a good choice, as they will generate less heat overall.

Cheers.
 

mattventura

Active Member
Nov 9, 2022
448
217
43
Oh, one other thing I forgot to mention: One of the big downsides of ESXi is poor local storage capabilities. No ZFS support, so you're basically limited to either hardware raid, or passing through your storage devices to a VM and using that VM to export a share for the rest of the VMs. So if you go with the ESXi route, plan accordingly.
 

ere109

New Member
Jan 19, 2021
27
21
3
Denver
Thanks everyone. I've got a pretty good idea how I want to physically assemble the new machine, then I think I'll take a few months and play around - leave my "current" X9 server doing the jobs it has always done, while I explore the second chassis, compare Proxmox and ESXI, test VMs, etc. Long after I feel like I've got a good grip on a new setup, I'll consider consolidating to one server with expanded drives.
 
  • Like
Reactions: itronin

Pete.S.

Member
Feb 6, 2019
56
24
8
Thanks everyone. I've got a pretty good idea how I want to physically assemble the new machine, then I think I'll take a few months and play around - leave my "current" X9 server doing the jobs it has always done, while I explore the second chassis, compare Proxmox and ESXI, test VMs, etc. Long after I feel like I've got a good grip on a new setup, I'll consider consolidating to one server with expanded drives.
You have to think like a company. The things you actually run and use on a daily basis are your production workloads. The rest are lab/test systems used for education, experimentation, benchmarking etc.

On your test systems you can go wild. Install esxi, proxmox, xcp-ng, hyper-v whatever. This is where having multiple systems makes sense. Test whatever hardware you have, in whatever way you want.

You don't want to experiment with your production systems however, because you need them working reliably.

On your production system running 24/7/365 you want to have a hardware configuration that is in parity with your actual needs. You're paying for electricity so if you go overkill you'll probably notice. Power, heat and noise also goes hand in hand.

When you want to upgrade your production systems you do it by migrating the workloads to another system.

So having several system is not overkill, it's a necessity. But it's how you use them that matters. I think your plan sounds good but if you want to experiment with hardware you're not going to be able to consolidate to one server.

Personally I think you need at least one more chassis so you have somewhere to mount some of your other motherboards. Something generic like SC825 with LP slots maybe.
 
Last edited:

Pete.S.

Member
Feb 6, 2019
56
24
8
Also I couldn't really graps your plan in your first post.

The SAS expander in the backplane (EL1) is primarily to avoid having to have a HBA/RAID controller with 16/24/36 SAS connectors (whatever you chassis have). There's a price to pay though and that is performance bottle neck.

Direct connect backplanes (A) are faster when you have more drives but then you need a HBA/RAID with enough ports to connect all the drives. Or you can use a HBA/RAID with fewer ports and add a SAS expander mounted inside the chassis. Basically DIY version of what Supermicro does with the EL1 backplane.

A Supermicro JBOD is a chassis without a motherboard and some extra hardware basically making it a box that has SAS connectors with power supplies and drives. You can turn a regular chassis into a JBOD if you add some stuff. But a 836 chassis only has 16 drives or so - it's not a good fit.

You can daisy chain SAS expanders but again creating a new performace bottle neck. Since internal SAS connectors and external are not the same, JBODs are commonly connected on RAID/HBA with external ports and any internal drives connected to internal ports on the RAID/HBA.

If you wanted to have a lot of drives on the same system, you are much better off using a chassis with more drive bays instead of going to JBODs. The 4U 846 for example that has 24x3.5" bays and 4U 847 has 36x3.5" bays. If you want 2.5" drive bays for SSDs the you have the 2U 217 with 24 bays and the 417 with 72x2.5" bays. We have all of these chassis at work and a few others too.
 
Last edited:

ramicio

Member
Nov 30, 2022
69
14
8
There is a rear drive bay kit for 836B chassis that can add two u.2/u.3 ssds. That kit is currently not listed on supermicros website, but some shops list it. Im not sure if there are backplanes for the 836 that support u.2/u.3 ssds...
I believe that part is a total sham and doesn't exist. I just talked to SuperMicro about this. They deny everything and said they make no rear NVMe kit for ANY chassis. Did you see this on page 46 of some online PowerPoint that was partly in Russian writing?

MCP-220-82617-0N seems to exist at a few stores. Are they legit stores? Tigerdirect is one of them, but out of stock there. This part isn't listed as even fitting the 836 series. For the 836 series case they list MCP-220-82618-0N, which when searching for, shows up as a part called a metal cage for an actual 2.5" SAS rear drive kit.

This kind of irks me, because this is exactly what I need. I'd imagine it's because this was 2017 and who was going to use only 2 slots for U.2 drives and in the back of a case? They make the other ones for SATA/SAS still probably because people actually buy them and use them for OS drives. Either that, or it was only sold in Russia, and they're boycotting selling to them, and don't even want to mention any products they sold them.
 

unwind-protect

Active Member
Mar 7, 2016
418
156
43
Boston
16-port SAS controllers just went cheap (such $50) through Ebay. I am not fond of expanders for RAID. If you use individual disks without combining them it doesn't matter of course.
 

Pete.S.

Member
Feb 6, 2019
56
24
8
I believe that part is a total sham and doesn't exist. I just talked to SuperMicro about this. They deny everything and said they make no rear NVMe kit for ANY chassis. Did you see this on page 46 of some online PowerPoint that was partly in Russian writing?

MCP-220-82617-0N seems to exist at a few stores. Are they legit stores? Tigerdirect is one of them, but out of stock there. This part isn't listed as even fitting the 836 series. For the 836 series case they list MCP-220-82618-0N, which when searching for, shows up as a part called a metal cage for an actual 2.5" SAS rear drive kit.

This kind of irks me, because this is exactly what I need. I'd imagine it's because this was 2017 and who was going to use only 2 slots for U.2 drives and in the back of a case? They make the other ones for SATA/SAS still probably because people actually buy them and use them for OS drives. Either that, or it was only sold in Russia, and they're boycotting selling to them, and don't even want to mention any products they sold them.
We can order them at our Supermicro supplier at work and they get their stuff directly from Supermicro's EMEA HQ in the Neatherlands.
  • MCP-220-82616-0N 2x 2.5" Hot-swap 12G rear HDD kit w/ fail LED for 216B/826B/847B
  • MCP-220-82619-0N 2x 2.5" hot-swap tool-less NVMe U.2 rear kit for 216B/826B/847B w/o cables
  • MCP-220-82617-0N 2x 2.5" hot-swap NVMe U.2 rear kit for 216B/826B/847B w/o cables
So I'd say the part exists for sure, but I don't know if you can buy it as an separate component. That might be the reason why Supermicro doesn't list it on their site. Or it has been discontinued if you actually tried to order it.

Don't know if any of these fit 836 chassis though. I always check before ordering because sometimes Supermicro changes something on their chassis and you need revision X of the chassis for it to work. When I look at the SATA/SAS version of the rear kit on Supermicro's site it doesn't list 836 as compatible - only 216B/826B/417B/846X/847B .
 
Last edited:
  • Like
Reactions: ramicio

ere109

New Member
Jan 19, 2021
27
21
3
Denver
Thanks for your perspective, Pete. It makes absolute sense to keep my "production" server doing the important jobs, and use the second server for experimenting. Thanks for the links on the NVME drives. I've seen Google image search pictures of these, but great to have model numbers now. Is there an external option for Oculink - connect two chassis together that way?
My budget was gone well before I ordered the final motherboard, so I'll keep asking questions and working up a list, and hope to complete it later.
I'm using spin drives, so my math indicates I should be able to get close to max speed with 16 drives on one expander.
I pulled a well-set X9 from my second chassis today to put the Scalable motherboard inside. Your comment on electricity has been on my mind. I should take some pictures.
 

i386

Well-Known Member
Mar 18, 2016
4,247
1,547
113
34
Germany
I believe that part is a total sham and doesn't exist. I just talked to SuperMicro about this. They deny everything and said they make no rear NVMe kit for ANY chassis. Did you see this on page 46 of some online PowerPoint that was partly in Russian writing?
Nope, it was directly listed on some sm distributors websites. I had an ebay search for the 836 version (MCP-220-83609-0N]
and it was listed on two occasions but without pictures.
And sm support has sometimes problems to find a sku just by description, but when you give them a part number they could usually provide more support/information.
 

i386

Well-Known Member
Mar 18, 2016
4,247
1,547
113
34
Germany
Don't know if any of these fit 836 chassis though.
the 216/826 rear kits have the same backplane as the 836 rear kits. For a while I was thinking about getting the normal 836 sas rear kit and swap the backplane with the nvme one from the 216/826 rear kit :D
 
  • Like
Reactions: ere109

Pete.S.

Member
Feb 6, 2019
56
24
8
the 216/826 rear kits have the same backplane as the 836 rear kits. For a while I was thinking about getting the normal 836 sas rear kit and swap the backplane with the nvme one from the 216/826 rear kit :D
Good to know! The only 3U servers I mess around with now are microclouds. Otherwise it's either 2U or 4U servers.
 
Last edited:

ramicio

Member
Nov 30, 2022
69
14
8
Nope, it was directly listed on some sm distributors websites. I had an ebay search for the 836 version (MCP-220-83609-0N]
and it was listed on two occasions but without pictures.
And sm support has sometimes problems to find a sku just by description, but when you give them a part number they could usually provide more support/information.
I provided them with part numbers as well. They replied to me twice that they don't make any rear NVMe rear cage kits. So now I am trying to figure out what else to do. There is only one card that can take two of them that is an actual legitimate manufactured product sold by a legitimate company, and for some reason it takes an x16 slot. I believe it probably has some sort of RAID going on. Don't want that. Anything else I found are some dubious Chinese items, from China, with nothing but a couple surface mount capacitors onboard. I am wanting to get a couple Optane drives for ZFS special devices for metadata. Was looking to get them by next week. The 118GB M.2 ones are cheap enough that I can do 4 of them in a striped mirror for 236 GB (I have a SuperMicro x8 card that takes four M.2). Don't know if that would be big enough. I have no idea if I can set up the array now and then later add special devices to the array and the metadata move over automatically.

Thank you all for the info.
 

Pete.S.

Member
Feb 6, 2019
56
24
8
I provided them with part numbers as well. They replied to me twice that they don't make any rear NVMe rear cage kits. So now I am trying to figure out what else to do. There is only one card that can take two of them that is an actual legitimate manufactured product sold by a legitimate company, and for some reason it takes an x16 slot. I believe it probably has some sort of RAID going on. Don't want that. Anything else I found are some dubious Chinese items, from China, with nothing but a couple surface mount capacitors onboard. I am wanting to get a couple Optane drives for ZFS special devices for metadata. Was looking to get them by next week. The 118GB M.2 ones are cheap enough that I can do 4 of them in a striped mirror for 236 GB (I have a SuperMicro x8 card that takes four M.2). Don't know if that would be big enough. I have no idea if I can set up the array now and then later add special devices to the array and the metadata move over automatically.

Thank you all for the info.
What is the problem you're facing?

NVMe drives just needs 4 lanes of PCIe. You don't need a controller card, you just need an electrical adapter to connect a cable to a PCIe x4 slot on the motherboard. PCIe 4.0 and 5.0 complicate things because of their higher transmission speed so you might need a redriver/retimer card instead of a dumb adapter - I haven't tried so I'm not sure.

To connect more than one drive into one PCIe slot you need bifurcation on the motherboard or you need an intelligent PCIe card with a PCIe multiplexer chip. The second options here is where it starts to get complicated because you run into compatibility issues. Not all cards work on all motherboards.

PS. I believe all Supermicro NVMe cards are either
  • PCIe redriver (amplifies the PCIe signals)
  • PCIe retimer (cleans up the PCIe signals)
  • PCIe multiplexors, aka switch (PCIe signal splitter)
Only the M.2 cards with two slots are simple adapters.
 
Last edited:

ramicio

Member
Nov 30, 2022
69
14
8
What is the problem you're facing?

NVMe drives just needs 4 lanes of PCIe. You don't need a controller card, you just need an electrical adapter to connect a cable to a PCIe x4 slot on the motherboard. PCIe 4.0 and 5.0 complicate things because of their higher transmission speed so you might need a redriver/retimer card instead of a dumb adapter - I haven't tried so I'm not sure.

To connect more than one drive into one PCIe slot you need bifurcation on the motherboard or you need an intelligent PCIe card with a PCIe multiplexer chip. The second options here is where it starts to get complicated because you run into compatibility issues. Not all cards work on all motherboards.
I need somewhere to mount them. I have an 836BA-R920B chassis, and the bays are full of HDDs. Regardless, the backplane is for SAS/SATA. The card I have from SuperMicro holds 4 M.2 SSDs and has a PLX to make it work in an x8 slot (all I have free). My x16 slots are taken up by a GPU and another quad M.2 holder (bifurcation). The M.2 notion I speak of would be for four 118 GB drives to give me 236 GB. I have no idea if that's enough space. I have no way of knowing before I copy data to an array I haven't created yet.

Then there is the U.2 notion, where I could get two 280 GB drives and mirror them. Or wait months to save and get 375 GB drives, which I'm certain would be enough. But I have no idea if I can add special devices to a zpool later and have it transfer the metadata from the HDDs to the Optane drives. If that's even possible, I have no idea how to even see how much space is needed for metadata. That whole topic is off-topic. The U.2 notion is assuming I can find a way to mount them. It would be nice to have them in the rear of the case in hot-swap bays. But I'd need a PCIe card of some sort to go in between. They make PCIe cards that mount them directly. Hard to find x8 cards that hold two drives. As I said before, one exists that is a legitimate product, but it takes an x16 slot and I think does some sort of RAID with them. I don't believe it even presents to the system as an NVMe drive. The only other option are Chinese specials with absolutely no components on board other than 2 U.2 connectors, and absolutely zero customer reviews. Then there are cards with cables (also Chinese specials) to mount them elsewhere in the case. Not an option...not having them flop around, not drilling holes. Also not running extra power to them.
 

Pete.S.

Member
Feb 6, 2019
56
24
8
I need somewhere to mount them. I have an 836BA-R920B chassis, and the bays are full of HDDs. Regardless, the backplane is for SAS/SATA. The card I have from SuperMicro holds 4 M.2 SSDs and has a PLX to make it work in an x8 slot (all I have free). My x16 slots are taken up by a GPU and another quad M.2 holder (bifurcation). The M.2 notion I speak of would be for four 118 GB drives to give me 236 GB. I have no idea if that's enough space. I have no way of knowing before I copy data to an array I haven't created yet.

Then there is the U.2 notion, where I could get two 280 GB drives and mirror them. Or wait months to save and get 375 GB drives, which I'm certain would be enough. But I have no idea if I can add special devices to a zpool later and have it transfer the metadata from the HDDs to the Optane drives. If that's even possible, I have no idea how to even see how much space is needed for metadata. That whole topic is off-topic. The U.2 notion is assuming I can find a way to mount them. It would be nice to have them in the rear of the case in hot-swap bays. But I'd need a PCIe card of some sort to go in between. They make PCIe cards that mount them directly. Hard to find x8 cards that hold two drives. As I said before, one exists that is a legitimate product, but it takes an x16 slot and I think does some sort of RAID with them. I don't believe it even presents to the system as an NVMe drive. The only other option are Chinese specials with absolutely no components on board other than 2 U.2 connectors, and absolutely zero customer reviews. Then there are cards with cables (also Chinese specials) to mount them elsewhere in the case. Not an option...not having them flop around, not drilling holes. Also not running extra power to them.
Alright, I see your problem.

Personally I would be cautious about running the optanes on a multiplexer card. The PCIe bus interface will be the bottleneck with a multiplexer. Also M.2 with optanes sounds like an oxymoron because one is for high performance, heavy workloads while the other is not.

It doesn't sound like you have many PCIe slots. Do you have fewer slots than what your 836 chassis support?

Regardless, I would see if I could order the dual U2 drive option. MCP-220-82617-0N or MCP-220-82619-0N.
Then have two U.2 drives and connect them to a card without a multiplexer, like AOC-SLG3-2E4T-O (oculink).

You could use just use any kind of U.2 drives until you have figured out how big the drives needs to be.
 
Last edited:

ramicio

Member
Nov 30, 2022
69
14
8
Alright, I see your problem.

Personally I would be cautious about running the optanes on a multiplexer card. The PCIe bus interface will be the bottleneck with a multiplexer. Also M.2 with optanes sounds like an oxymoron because one is for high performance, heavy workloads while the other is not.

It doesn't sound like you have many PCIe slots. Do you have fewer slots than what your 836 chassis support?

Regardless, I would see if I could order the dual U2 drive option. MCP-220-82617-0N or MCP-220-82619-0N.
Then have two U.2 drives and connect them to a card without a multiplexer, like AOC-SLG3-2E4T-O (oculink).

You could use just use any kind of U.2 drives until you have figured out how big the drives needs to be.
What's there to be cautious about? I don't understand the sentence about M.2 and Optane being an oxymoron. I have 5 PCIe slots. 1 is for a 40g NIC (x8), 1 is for a GPU (x16), 1 is for an HBA (x8), and 1 holds 4 SSDs (x16).

I could use literally any drive to test this, but I'm thinking to move over 84+ TB of data just to see how large my metadata drive needs to be is a waste. And I couldn't just use any U.2 drives. I have no solution to mount them anywhere.