Search results

  1.

    Same HDD, bad sectors but only in specific server

    Thanks for the replies. It turned out to be the SAS3 HBA. I swapped it out and those errors seem to be fixed. I'm running some more tests to be sure but it looks good so far. Thanks again.
  2.

    Same HDD, bad sectors but only in specific server

    I have 2 (new to me) 2U Supermicro servers and 12 Ultrastar HE10 10TB SATA drives. Before I put any data on used drives, I always test them with HD Sentinel using the destructive Write + Read surface test. On server 1, all 6 HDDs completed the write tests with good speeds and no issues, but on the read...
  3.

    Importance of hardware being Windows Server Catalog SDDC certified?

    Hi, I'm building a 2-server hyper-converged system with Server 2022, and all I have left to purchase is the fast-tier SSDs. I was going to buy Intel P4610s for that, but they are not listed as SDDC certified for Server 2022. I tried this previously using Server 2016, but Windows didn't like...
  4.

    SM X11DPU - CPU compatibility?

    I hope this isn't too stupid of a post, but I'm a bit lost and could use a nudge in the right direction. I just bought a pair of servers that use an X11DPU motherboard, and I'm confused about which CPUs can be used in it. For some stupid reason there isn't a published CPU compatibility list or...
  5.

    Beware of EMC switches sold as Mellanox SX6XXX on eBay

    I followed the guide by dodgy route to convert my EMC switch to an SX6018, and all seems to be good except that the web interface is terribly slow. The CPU usage sits between 40% and 100% with nothing going on. I saw the comments below, and when I tried the "mlxi2c update_bootstrap166" command I...
  6.

    S2D parity read speed vs single server parity

    Has anybody had any experience with a 7-node S2D cluster using parity? I have not been able to find any comparative benchmarks of the read performance vs what a single server with Storage Spaces and drives in parity would deliver. I have 45x 10TB drives in a dual parity setup with journal drives on WS2019...
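
    For reference, a dual-parity virtual disk of the shape described is typically created along these lines; a minimal sketch using the Storage cmdlets, where the pool name, volume name, and size are hypothetical placeholders:

    ```powershell
    # Minimal sketch of a dual-parity volume like the one described.
    # "Pool01", "ParityVol", and the 300TB size are hypothetical.
    New-Volume -StoragePoolFriendlyName "Pool01" `
        -FriendlyName "ParityVol" `
        -FileSystem ReFS `
        -ResiliencySettingName Parity `
        -PhysicalDiskRedundancy 2 `
        -Size 300TB
    ```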
  7.

    Tiered Storage Space writing to slow tier (Server2019)

    Sadly no. I should have tried a Multi-Resilient Volume with the HDDs making up both tiers. I have done it before and it worked OK.
  8.

    Recommended Storage Spaces Cache Disk (nvme m2)

    One thing to note: that's only for Storage Spaces Direct, which is available only on the Datacenter edition of Windows Server and is meant for clusters.
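
    A minimal sketch of what that looks like in practice, assuming a failover cluster already exists ("Cluster01" is a hypothetical name):

    ```powershell
    # S2D is a cluster-wide, Datacenter-only feature. When enabled, the
    # fastest media (e.g. NVMe) is claimed for the cache automatically.
    # "Cluster01" is a hypothetical placeholder.
    Enable-ClusterStorageSpacesDirect -CimSession "Cluster01"
    ```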
  9.

    Tiered Storage Space writing to slow tier (Server2019)

    As far as I can figure, Storage Spaces will not use the fast tier (SSD) if the fast tier is below a particular percentage (~3.2%) of the total size of the virtual disk. This is a PITA, because a 1TB fast tier in front of a large dual-parity capacity tier still speeds up large writes nicely. The...
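
    If that ~3.2% observation holds, a quick arithmetic pre-check is easy; a sketch, with the threshold taken from the post above (it is not a documented constant) and hypothetical tier sizes:

    ```powershell
    # Pre-check the fast-tier share of the planned volume. The 3.2%
    # threshold is the poster's observation, not a documented constant;
    # sizes are hypothetical.
    $fastTier     = 1TB
    $capacityTier = 40TB
    $fastPercent  = 100 * $fastTier / ($fastTier + $capacityTier)

    if ($fastPercent -lt 3.2) {
        Write-Warning ("Fast tier is {0:N2}% of the volume; writes may bypass it." -f $fastPercent)
    }
    ```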
  10.

    Tiered Storage Space writing to slow tier (Server2019)

    I figured out my problem. The performance tier (SSD) is smaller than the free-space threshold that triggers writes to land directly on the HDDs. I'm reasonably sure it's configured as a % of the total volume. When I created the volume with a capacity tier of 600GB instead of 300TB the...
  11.

    Tiered Storage Space writing to slow tier (Server2019)

    I set up a tiered storage space with 45x 10TB drives and 3x 1.6TB NVMe SSDs. Writes were slow, and looking into it a bit, it seems they all land on the slow tier instead of the fast tier. I have tried with both ReFS and NTFS, and the results are the same. Any suggestions?
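
    For reference, a two-tier space of that shape is usually built along these lines; a minimal sketch where the pool name, tier names, and sizes are hypothetical (and, per the replies above, the SSD-tier size relative to the whole volume is what decides whether writes actually land on it):

    ```powershell
    # Sketch of a two-tier setup like the one described; "Pool01", the
    # tier names, and the sizes are hypothetical placeholders.
    New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" `
        -MediaType SSD -ResiliencySettingName Mirror
    New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" `
        -MediaType HDD -ResiliencySettingName Parity

    New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "Data" `
        -FileSystem ReFS `
        -StorageTierFriendlyNames "SSDTier","HDDTier" `
        -StorageTierSizes 4TB, 300TB
    ```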
  12.

    DDA Passthrough for plex

    A bit late to the party, but for the same type of setup I used a GTX 1660 so the transcode quality wouldn't suffer; its NVENC is Turing-based, which addressed the quality degradation of previous generations. It took a bit of messing around, but I have it working with the 2-stream cap...
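
    The DDA wiring itself is a handful of Hyper-V cmdlets; a minimal sketch, with the VM name and PCI location path as hypothetical placeholders (the real path comes from Device Manager or Get-PnpDeviceProperty on the host), and the MMIO sizes following Microsoft's GPU passthrough guidance:

    ```powershell
    # Minimal DDA sketch for handing a GPU to a Hyper-V VM (VM must be
    # off). "PlexVM" and the location path are hypothetical placeholders.
    $vm  = "PlexVM"
    $loc = "PCIROOT(0)#PCI(0300)#PCI(0000)"   # hypothetical location path

    Set-VM -VMName $vm -AutomaticStopAction TurnOff
    Set-VM -VMName $vm -GuestControlledCacheTypes $true `
           -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB

    # Detach the device from the host, then attach it to the VM.
    Dismount-VMHostAssignableDevice -Force -LocationPath $loc
    Add-VMAssignableDevice -LocationPath $loc -VMName $vm
    ```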
  13.

    Storage Spaces (Server2016) with lots of disks

    I destroyed the storage pool, rebooted, and started again, but the results seem to be the same. I had changed the cluster size from 512 to 4096 on the SSDs before I put them into a pool, but I don't think I rebooted afterward, so I thought maybe a reboot and rebuild would fix it. Nope. I noticed...
  14.

    Storage Spaces (Server2016) with lots of disks

    I could really use some help. I try hard not to bother other people when there's a chance I'll sort an issue out myself by keeping at it, which is why I was a lurker here for so long before posting. I have hit a wall with this storage space, and the performance is much lower...
  15.

    Storage Spaces (Server2016) with lots of disks

    Thanks, I took a quick peek at your suggestions. Interesting. What's the performance like? For whatever reason I find myself biased towards either using a native Windows solution or something like FreeNAS.
  16.

    Storage Spaces (Server2016) with lots of disks

    At different times I tried using WBC (write-back cache) and tiering, with varying results. With both, the speed increase is noticeable, but when I was testing I didn't settle on which was best for my purposes, and I figured that with the updates to ReFS and Server 2019 there would likely be new best practices. I also...
  17.

    Storage Spaces (Server2016) with lots of disks

    Do you use hot spares? I stopped using them once the feature to "Automatically rebuild storage spaces from storage pool free space" became available. Now I make sure my virtual disk leaves at least a few drives' worth of unused space. However, I don't think I have seen it automatically rebuild...
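
    A sketch of that no-hot-spare approach in practice, with a hypothetical pool name: keep a few drives' worth of slack in the pool, retire missing disks, and nudge the repair if it doesn't start on its own:

    ```powershell
    # Check pool slack, retire missing disks, and trigger a repair.
    # "Pool01" is a hypothetical placeholder.
    $pool = Get-StoragePool -FriendlyName "Pool01"
    "{0:N1} TB free in pool" -f (($pool.Size - $pool.AllocatedSize) / 1TB)

    # Retire missing/failed disks so repairs rebuild into free space.
    Set-StoragePool -FriendlyName "Pool01" -RetireMissingPhysicalDisks Always

    # If the automatic rebuild doesn't kick off, nudge it:
    Get-StoragePool -FriendlyName "Pool01" | Get-VirtualDisk | Repair-VirtualDisk
    ```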
  18.

    Storage Spaces (Server2016) with lots of disks

    Hi, so a few years later... Storage Spaces has been great, if a bit painful at times from stupidity on my part. I have about a zillion 3TB HGST drives across a few different JBODs and too many servers, and I wanted to consolidate and also drop my power bill, so I bought a 45x 10TB JBOD. I have had...
  19.

    Memorial Day Sales - P3605, 3TB, 4TB, S9300, etc.

    Of the 3 I checked, 2 of my drives have 97% health and one has 100% health. I'd buy from him again. Thanks for sharing the deal.
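
    On the Windows side, a rough analogue of that health check (a sketch; it is not the same health metric HD Sentinel computes) is to pull the per-disk reliability counters:

    ```powershell
    # Per-disk SMART-ish counters: wear, temperature, error totals.
    Get-PhysicalDisk |
        Get-StorageReliabilityCounter |
        Select-Object DeviceId, Wear, Temperature, ReadErrorsTotal, WriteErrorsTotal
    ```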