Search results

  1. Unifi US-24-250W vs USW-24-POE - Worth the wait?

    I am currently in the market for a 16- to 24-port PoE switch, and since I have Unifi in the home, I'll use that. I'm in Europe, and I can get the US-24-250W right away, or wait a month until the newer USW-24-POE (Gen 2) arrives on the market. Do any of you in the US have experience with both...
  2. Supermicro X11SC*-F iGPU support

    I am in the market for upgrading one of my servers, and given the apparent performance increase from the Xeon E3 v6 to the Xeon E, I am looking into one of the Supermicro X11SC*-F motherboards with a Xeon E-21??G on it. The question is: do any of the -F boards (other than the X11SCA-F, which is a...
  3. Supermicro drive tray generations

    I am in the market for a Supermicro chassis for 6-8 mixed-use 2.5"/3.5" drives. I see that 825s are available cheaply, but I'd need 3.5"-2.5" conversion trays, such as the MCP-220-00043-0N. Now, on Supermicro's site there is a whole list of drive trays, but that list shows "generations" for the...
  4. Helliconia home rack gets a visual overhaul

    Not exactly hardware pr0n, but let's just say that things look a lot less messy than before. Before/after pics here. The main difference between the earlier mess and the approximate order now is that I ordered and installed rack blind panels (to force more air through the equipment), and the panel...
  5. Helium leakage in helium-filled HDDs?

    Most high-capacity HDDs are helium-filled nowadays. Now, I studied engineering physics, and although I did not specialize in anything vacuum/helium related, the one thing that stayed with me is that helium is darn near impossible to keep inside almost anything (mostly since it's a single-atom... (see the back-of-the-envelope estimate after this list)
  6. 16-bay hotswap with less noise than military airfield

    The title says it all: I currently have a Norco RPC-3116 16-bay SAS2 hotswap case, and while it works, I am getting more and more annoyed by the build quality of that case. Or rather, the lack thereof: I had a Ceph OSD drop out on me the other day because of shoddy connections between the drive...
  7. A decent iSCSI-target VM with UNMAP/DISCARD support

    My homelab by now is a three-node Proxmox VE / Ceph system with 10 GbE networking (Intel X520-DA1). I have since used bootutil to have my workstation boot off iSCSI as well. Of course, Proxmox VE does not provide iSCSI targets out of the box, but as I wanted the data to reside on Ceph RBD anyway... (see the configuration sketch after this list)
  8. Just a simple case move - Phagor moves to a bigger home

    Build’s Name: Phagor
    Operating System / Storage Platform: Debian Stretch
    CPU: Xeon X5660 (6-core 2.8 GHz w/ hyperthreading)
    Motherboard: Asus Rampage Gene II
    Chassis: Rosewill RSV-L4500
    Drives: A single 200 GB Intel S3600
    RAM: 6 x 4 GB DDR3-1600
    Add-in Cards: Intel X520-DA1 (10 GbE SFP+)...
  9. mATX E5-16x0 v1/v2 build?

    I've been thinking about upgrading my old Xeon X5660-based workstation for a while now. The main goal is higher per-thread performance at a reasonable price. However, Xeon is a must, since I want ECC RAM. Given that I want high per-thread performance, it's probably going to be an E5-16x0. Looking at...
  10. Proxmox VE docs to upgrade Ceph to Jewel

    For those brave enough to try, the creators of Proxmox VE have posted documentation on how to upgrade Ceph to Jewel on VE 4.4: "HowTo: Upgrade Ceph Hammer to Jewel". That enables some interesting things, like multiple filesystems per cluster and the new storage engine.
  11. Low-power Ceph/ProxMox node

    Build’s Name: Fessup *)
    Operating System / Storage Platform: ProxMox 4.2 (Linux 4.4.6, Debian Jessie derived)
    CPU: Core i3-6100T
    Motherboard: SuperMicro X11SSL-F
    Chassis: SuperMicro SuperChassis 510T-203B (2 x 2.5" SATA3 hot-swap)
    Drives: 2 x Intel DC3610 (200 GB)
    RAM: 2 x 16 GB DDR4 ECC UDIMM...
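
About the helium question in result 5: a back-of-the-envelope comparison using Graham's law of effusion illustrates why a light, monatomic gas is so hard to contain. This only models escape through microscopic openings, not permeation through seals and lid joints (which is what drive makers actually fight), so treat it as an illustration rather than a leak-rate prediction.

    % Graham's law: effusion rate scales inversely with the square root of molar mass
    \frac{r_{\mathrm{He}}}{r_{\mathrm{N_2}}}
        = \sqrt{\frac{M_{\mathrm{N_2}}}{M_{\mathrm{He}}}}
        = \sqrt{\frac{28\ \mathrm{g/mol}}{4\ \mathrm{g/mol}}}
        \approx 2.6

So through the same tiny opening, helium escapes roughly 2.6 times faster than nitrogen; add its small kinetic diameter (about 0.26 nm), which also lets it diffuse through materials that stop larger molecules, and a hermetically welded enclosure becomes a necessity rather than a nicety.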
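
For the iSCSI-target question in result 7, one plausible way to get UNMAP/DISCARD all the way down to the Ceph pool is to map the RBD image with the kernel client and export it through the LIO target with the thin-provisioning attributes turned on. The sketch below drives targetcli from Python; the pool, image, IQN and naming choices are placeholders I made up, and the whole thing is an assumption about such a setup, not what the thread actually settled on.

    # Hedged sketch (not from the thread): export a kernel-mapped Ceph RBD image as an
    # iSCSI LUN with UNMAP/DISCARD support via the LIO target stack and targetcli.
    # Pool, image and IQN names below are placeholders.
    import subprocess

    POOL, IMAGE = "rbd", "iscsi-ws"                          # hypothetical pool/image
    TARGET_IQN = "iqn.2017-01.lab.example:ceph-rbd"          # hypothetical target IQN
    INITIATOR_IQN = "iqn.2017-01.lab.example:workstation"    # hypothetical initiator IQN

    def run(cmd):
        """Echo and run a command, aborting on any non-zero exit status."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Map the (pre-existing) RBD image with the kernel client; it shows up
    #    under /dev/rbd/<pool>/<image>.
    run(["rbd", "map", f"{POOL}/{IMAGE}"])
    dev = f"/dev/rbd/{POOL}/{IMAGE}"

    # 2. Put an LIO block backstore on top of the mapped device.
    run(["targetcli", "/backstores/block", "create", f"name={IMAGE}", f"dev={dev}"])

    # 3. Advertise thin provisioning, so initiator UNMAP / WRITE SAME(UNMAP) requests
    #    become block-layer discards that krbd hands on to Ceph.
    run(["targetcli", f"/backstores/block/{IMAGE}", "set", "attribute",
         "emulate_tpu=1", "emulate_tpws=1"])

    # 4. Create the iSCSI target, attach the LUN, and allow the workstation initiator.
    #    (Recent targetcli versions add a default 0.0.0.0:3260 portal automatically;
    #    on older ones, create it under tpg1/portals.)
    run(["targetcli", "/iscsi", "create", TARGET_IQN])
    run(["targetcli", f"/iscsi/{TARGET_IQN}/tpg1/luns", "create",
         f"/backstores/block/{IMAGE}"])
    run(["targetcli", f"/iscsi/{TARGET_IQN}/tpg1/acls", "create", INITIATOR_IQN])

    # 5. Persist the configuration across reboots.
    run(["targetcli", "saveconfig"])

On the initiator side, space only flows back to the Ceph pool when the filesystem on the LUN actually issues discards, e.g. by mounting with the discard option or running fstrim on a schedule.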