Recent content by phroenips

  1. FS: SC846TQ, with rails, dual platinum power supplies + H8DME-2

    Looks like I'm late to the party on this one, but for a laugh, I'll post up the quotes I got (I was able to get a USPS quote with the size of box I have for it): USPS is USD 470, and UPS is USD 670.
  2. FS: SC846TQ, with rails, dual platinum power supplies + H8DME-2

    Probably :P If you're interested, PM me your address or postal code, and I can get some estimates
  3. FS: SC846TQ, with rails, dual platinum power supplies + H8DME-2

    Not sure if there would be any interest in this; it seems like most everyone wants the SAS2 backplane, not the TQ, but I'll offer it up anyway. SC846 with the TQ backplane (24 individual SATA connections): rails ("native" fit for square holes; I also have adapters for use in a round-hole rack)...
  4. Supermicro 3.5 hard drive screws and labels $1.55

    In case it helps, I just put five packages on a kitchen scale: a total of 5.25 ounces, or about 150 g. These are for the ones in the OP.
  5. AMD 2000 series - populating second socket

    Thanks for the input. Yeah, I'll contact the seller. If it was going to work, I was hoping to avoid that potential hassle.
  6. AMD 2000 series - populating second socket

    I bought a Supermicro 24-bay server from Tamsolutions a while ago, with the H8DME-2 motherboard. It came populated with a single Opteron 2374 HE. I want to increase the RAM, and so ordered a second 2374 HE from eBay in order to be able to populate and address all available memory slots. Or so I...
  7. Raid 50 vs Raid 6

    With 2TB drives, a 14+2 RAID 6 is really big and pushes the boundaries of what I'd be comfortable with. The risk of a triple disk failure is more likely. 2x 7+1 RAID 5 arrays are probably even worse. Personally, the biggest 2TB-drive RAID 5 array I'd run would be around 4+1. I'd recommend playing...
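
    As a rough illustration of the "wider array, bigger window" point above, here is a back-of-the-envelope sketch of the chance that another drive dies while a degraded array rebuilds. The annualized failure rate and rebuild time are assumptions chosen purely for illustration, and whole-drive failure is the only risk modeled (unrecoverable read errors would add to it).

    ```python
    # Purely illustrative: P(at least one more drive fails during the rebuild
    # window). The AFR and rebuild time below are assumptions, not measurements.

    def p_additional_failure(remaining_drives, afr=0.03, rebuild_hours=24):
        """P(>= 1 of the remaining drives fails while the array rebuilds)."""
        p_single = 1 - (1 - afr) ** (rebuild_hours / (365 * 24))
        return 1 - (1 - p_single) ** remaining_drives

    print(p_additional_failure(15))  # 14+2 RAID 6 after losing one drive, ~0.13%
    print(p_additional_failure(4))   # 4+1 RAID 5 after losing one drive, ~0.03%
    ```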
  8. ZFS array on an underspecced system

    Personally, I also run Linux (CentOS 6.x) since I know it a WHOLE lot better than any of the BSD variants. I prefer to stay as close to native as I can, so I decided to go with software RAID using mdadm+LVM as opposed to ZFS. It doesn't have the pool scrubbing to protect against bit rot, but...
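
    Since the post above gives up ZFS's per-block checksumming and scrub, here is a minimal file-level sketch of the same idea layered on top of mdadm+LVM: hash everything once, re-hash later, and report files whose contents silently changed. The manifest name and helper functions are hypothetical, not part of any existing tool.

    ```python
    # Minimal file-level stand-in for ZFS-style scrubbing.
    import hashlib
    import json
    import pathlib

    def sha256(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def build_manifest(root, manifest="checksums.json"):
        # Record a digest for every file under `root`.
        digests = {str(p): sha256(p)
                   for p in pathlib.Path(root).rglob("*") if p.is_file()}
        pathlib.Path(manifest).write_text(json.dumps(digests, indent=2))

    def scrub(manifest="checksums.json"):
        # Return the paths whose current digest no longer matches the manifest.
        digests = json.loads(pathlib.Path(manifest).read_text())
        return [path for path, digest in digests.items() if sha256(path) != digest]
    ```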
  9. 10-Gigabit Ethernet (10GbE) Networking - NICs, Switches, etc.

    Minor correction: FCoE is compatible with standard 10Gbit Ethernet. That's still using the Fibre Channel Protocol (FCP), though, and you need a Fibre Channel name server, etc. Like you, though, I'm not aware of any protocol to do IP over FC.
  10. 3x120GB/raid5, or 2x240GB/raid1?

    Bingo, the write penalty is not inconsequential. For RAID 5, you have a penalty of four, meaning for every one write operation to the array, you have four operations to the disks (read the data, read the parity, write the new data, write the new parity). If your RAID controller doesn't have the...
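
    To put numbers on the penalty described above, here is the usual back-of-the-envelope calculation. The per-disk IOPS figure and the 70/30 read/write mix are made-up example inputs, not benchmarks of the drives in the thread title.

    ```python
    # Usable array IOPS once the RAID write penalty is counted: every logical
    # read costs one backend op, every logical write costs `penalty` backend ops.
    WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid5": 4, "raid6": 6}

    def effective_iops(disks, iops_per_disk, read_fraction, level):
        raw = disks * iops_per_disk
        penalty = WRITE_PENALTY[level]
        return raw / (read_fraction + (1 - read_fraction) * penalty)

    # Example inputs only: 5000 IOPS per disk, 70% reads / 30% writes
    print(effective_iops(3, 5000, 0.70, "raid5"))  # ~7895 usable IOPS
    print(effective_iops(2, 5000, 0.70, "raid1"))  # ~7692 usable IOPS
    ```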
  11. Advice on SAN switch

    Disclaimer: I work in the storage industry, but I'm not very familiar with the specific licenses required. Just like with Ethernet switches, getting a higher port count in a single chassis is better. With 2x 32-port switches, you'll be using valuable ports for ISLs, and depending on workload and...
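
    The port-count trade-off above is easy to quantify: every inter-switch link (ISL) burns one port on each switch it connects. The ISL count in this sketch is an assumption; real fabrics size ISLs to the expected cross-switch traffic.

    ```python
    # Usable ports once ISLs between every pair of switches are subtracted.
    def usable_ports(switches, ports_per_switch, isls_per_pair):
        pairs = switches * (switches - 1) // 2
        return switches * ports_per_switch - 2 * isls_per_pair * pairs

    print(usable_ports(1, 64, 0))  # one 64-port chassis -> 64 usable ports
    print(usable_ports(2, 32, 4))  # 2x 32-port, 4 ISLs  -> 56 usable ports
    ```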
  12. PCI-E lanes and versions question

    Figured as much, thanks!
  13. PCI-E lanes and versions question

    So, I know the different versions of PCI-E (1.0, 1.0a, 2.0, 3.0) are all interoperable (in theory, anyway), and that a lot of server/workstation motherboards have physical x8 slots but only x4 lanes available to them. What happens if you combine those scenarios? For instance, my motherboard has...
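
    A rough way to reason about the question above: a PCI-E link trains to the lowest common generation and the narrower of the two widths, and per-lane throughput is the line rate times the encoding efficiency. The helper below is just a sketch using the published per-lane numbers and ignores protocol overhead.

    ```python
    # Published per-lane line rates (GT/s) and encoding efficiency per generation.
    GEN = {
        "1.0": (2.5, 8 / 10),     # 8b/10b encoding
        "2.0": (5.0, 8 / 10),
        "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    }

    def link_gbps(card_gen, card_lanes, slot_gen, slot_lanes):
        """Per-direction bandwidth after the link negotiates gen and width."""
        gen = min(card_gen, slot_gen, key=lambda g: GEN[g][0])
        lanes = min(card_lanes, slot_lanes)
        rate, eff = GEN[gen]
        return rate * eff * lanes

    # e.g. a PCI-E 2.0 x8 card in a physical x8 slot wired with only x4 lanes:
    print(link_gbps("2.0", 8, "2.0", 4))  # 16.0 Gbit/s ~= 2 GB/s per direction
    ```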