Recent content by rich0

  1. Best Approach to Cheap Distributed NVMe?

    Yeah - something like that was what I was looking at in terms of more "off the shelf" solutions, but they're still pretty pricey. I do look forward to when that hardware starts dropping in price, as it eventually will. I'm also at the point where I want to scale horizontally more than...
  2. Best Approach to Cheap Distributed NVMe?

    So, that's another concern I have about trying to modify a server - none of this stuff seems to be standard. I had never heard of an "826" until I read your post, and it appears to be the model number of a chassis, which apparently could have different motherboards in it. If I were to buy...
  3. Best Approach to Cheap Distributed NVMe?

    That's just the cost of a backplane though? I'm guessing a used functional 826 would cost a fair bit more on top of that. However, that backplane would definitely handle a large number of drives, which is a good sign. At larger scale that would make sense, but I couldn't put more than a few...
  4. Best Approach to Cheap Distributed NVMe?

    Mostly cephfs, so performance isn't super-critical. Some block store, but it is relatively light. Can you make suggestions on 2U servers with 4x NVMe? It is actually kinda hard to find stuff. Nobody has filters on U.2 ports, or free PCIe slots (granted, an actual server won't...
  5. Best Approach to Cheap Distributed NVMe?

    I'm running Ceph and trying to move towards NVMe for future expansion. I'm not super-concerned with performance, and at least initially they'll probably be mixed with enterprise SATA SSDs - it just seems like the trend is towards NVMe and the cost of a U.2 SSD isn't really any higher than any...
  6. Server for 2-4 NVMe for Ceph

    Yeah, I just picked up half a dozen and now all my SSD storage has power loss protection, and I'm expanding my use of it. I'm not sure how soon, if ever, I'll be 100% SSD due to the cost, but they're much more reasonable. Only issue I see with SATA is that they're a bit legacy and I suspect...
  7. Server for 2-4 NVMe for Ceph

    I'm actually running Reef right now. However, almost all my storage is on 5400RPM USB3 HDDs, which don't seem to perform any better in this release. I definitely would prefer NVMe, but I do have to consider whether SATA SSD gets me to a point where I'm actually running a significant amount...
  8. Server for 2-4 NVMe for Ceph

    To be fair though, this is still a distributed filesystem, so the blazing IOPS of direct NVMe access is probably not going to happen either way. I think the question is how they compare in that context, and I honestly don't know the answer there. What is the difference between the 863a and...
  9. Server for 2-4 NVMe for Ceph

    That's a fair point I hadn't considered. Sure, they don't perform as well, but a system with 4 SATA SSDs would easily saturate an SFP+ or two. I was focused on the fact that NVMes aren't much more expensive per TB, but the interfaces are the expensive part. Pretty much any old SFF desktop...
  10. Server for 2-4 NVMe for Ceph

    Sure, this is for homelab Rook. Random stuff around the house. Right now I'm using 5400RPM HDDs, which work but obviously don't perform great especially for recovery. Long-term I'm thinking about migrating to NVMe. I'm not sure I'd ever want to 100% migrate to NVMe due to the high cost, but I...
  11. Server for 2-4 NVMe for Ceph

    Hmm, looks like it can take 4x U.2 drives? If I could get one with enough RAM/etc for a few hundred that might make sense. Would obviously be a bit large but I guess I can stack them. Looks like it idles at over 80W which isn't ideal, but 4 SFF desktops would pull that much most likely and...
  12. Server for 2-4 NVMe for Ceph

    I care mostly about cost, which would include energy consumption. I don't really need hot swap - if I need to add/replace a drive I can just shut it down - it will be running Ceph/k8s after all. I'm fine with U.2 whether natively or via adapters, though obviously they need to fit in the case...
  13. Server for 2-4 NVMe for Ceph

    What is the best option these days for an OSD server for Ceph to host 2-4 NVMes for minimal cost? This would be running Rook. Things I would want: 2-4 NVMes (M.2 or U.2 I guess, though it seems hard to find large enterprise M.2 format SSDs), 32-64GB RAM, at least 1 SFP+ port, though SFP28 would...
  14. WTB: TMM USB3 16GB GbE Low Power for Ceph OSDs

    Yeah, unfortunately the RAM is a pretty hard requirement. I'm actually running MooseFS for storage right now on Pi4s and that works nicely, but Ceph OSDs can have issues if they have to rebuild without sufficient memory. It is a nasty case because it can work just fine but if you have something...
  15. WTB: TMM USB3 16GB GbE Low Power for Ceph OSDs

    I'm not sure if this is the best place to look for hardware recommendations, but I was drawn to the TMM concept being discussed here and I am thinking about buying several units for use as Ceph OSDs. I don't have a ton of storage so I just need a couple of lower-power nodes that can handle a...
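The hard RAM requirement mentioned in the last two posts maps to a specific Ceph knob: `osd_memory_target` defaults to 4 GiB per OSD daemon, and recovery/backfill can temporarily push actual usage past it - which is exactly why a low-RAM node that runs fine day-to-day can fall over during a rebuild. A sketch of the relevant ceph.conf fragment (the value shown is just the upstream default, for illustration):

```ini
[osd]
# Target for OSD memory use, in bytes (default 4 GiB). Recovery and
# backfill can exceed it, so size node RAM with headroom above this.
osd_memory_target = 4294967296
```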
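The bandwidth point made above - that a system with 4 SATA SSDs would easily saturate an SFP+ link or two - checks out with back-of-envelope arithmetic. A minimal sketch, assuming a typical ~550 MB/s sequential rate per SATA III SSD (an illustrative figure, not a measurement):

```python
# Rough aggregate throughput of 4 SATA SSDs vs. SFP+ line rate.
SATA_SSD_MBPS = 550        # assumed typical SATA III sequential rate, MB/s
DRIVES = 4
SFP_PLUS_GBPS = 10         # SFP+ line rate, Gbit/s

aggregate_mbps = SATA_SSD_MBPS * DRIVES       # 2200 MB/s total
sfp_plus_mbps = SFP_PLUS_GBPS * 1000 / 8      # 1250 MB/s at line rate
links_needed = aggregate_mbps / sfp_plus_mbps

print(f"{aggregate_mbps} MB/s aggregate -> ~{links_needed:.2f} SFP+ links")
```

So even before protocol overhead, a single 10G link, not the SATA interface, is the bottleneck for a 4-drive node.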
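On the "no hot swap needed - just shut it down" point: Ceph supports exactly this workflow via the `noout` cluster flag, which keeps a planned outage from triggering rebalancing. A hedged sketch, assuming a systemd-managed OSD and admin keyring access (the OSD ID is a placeholder):

```shell
# Planned maintenance on an OSD node without triggering data migration.
ceph osd set noout            # don't mark down OSDs "out" (no rebalance)
systemctl stop ceph-osd@3     # stop the OSD(s) on this host (ID 3 is an example)
# ...power down, add/replace the drive, boot back up...
systemctl start ceph-osd@3
ceph osd unset noout          # restore normal behavior once OSDs are back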