Search results

  1. S2D parity read speed vs single server parity

    Has anybody had any experience with a 7-node S2D cluster using parity? I have not been able to find any comparative benchmarks of its read performance vs. a single server running Storage Spaces with drives in parity. I have 45x 10TB drives in a dual-parity setup with journal drives on WS2019...
  2. Tiered Storage Space writing to slow tier (Server2019)

    Sadly, no. I should have tried a Multi-Resilient Volume with the HDDs making up both tiers. I have done it before and it worked OK.
  3. Recommended Storage Spaces Cache Disk (nvme m2)

    One thing to note is that that's only for Storage Spaces Direct, which is available only in the Datacenter edition of Windows Server and is meant for clusters.
  4. Tiered Storage Space writing to slow tier (Server2019)

    As far as I can figure, Storage Spaces will not use the fast tier (SSD) if the fast tier is below a particular percentage (~3.2%) of the total size of the virtual disk. This is a PITA because a 1TB fast tier in front of a large dual-parity capacity tier still speeds up large writes nicely. The...
  5. Tiered Storage Space writing to slow tier (Server2019)

    I figured out my problem. The performance tier (SSD) is smaller than the free-space threshold that causes writes to land directly on the HDDs. I'm reasonably sure it's configured as a percentage of the total volume. When I created the volume with a capacity tier of 600GB instead of 300TB, the...
  6. Tiered Storage Space writing to slow tier (Server2019)

    I set up a tiered storage space with 45x 10TB drives and 3x 1.6TB NVMe SSDs. Writes were slow, and looking into it a bit, it seems they all land on the slow tier instead of the fast tier. I have tried both ReFS and NTFS and the results are the same. Any suggestions? (A tier-creation sketch appears after this list.)
  7. DDA Passthrough for plex

    A bit late to the party, but for the same type of setup I used a GTX 1660 so the quality of the transcodes wouldn't suffer, as its NVENC encoder is Turing-based, which addresses the quality degradation of previous generations. It took a bit of messing around, but I have it working with the 2-stream cap... (A DDA sketch appears after this list.)
  8. Storage Spaces (Server2016) with lots of disks

    I destroyed the storage pool, rebooted, and started again, but the results seem to be the same. I had changed the cluster size from 512 to 4096 on the SSDs before I put them into a pool, but I don't think I rebooted afterwards, so I thought maybe a reboot and rebuild would fix it. Nope. I noticed... (A sector-size check appears after this list.)
  9. Storage Spaces (Server2016) with lots of disks

    I could really use some help. I try hard not to bother other people when there is a chance that, if I keep working on an issue, I'll sort it out myself, which is why I was a lurker here for so long before posting. I have hit a wall with this storage space, and the performance is much lower...
  10. Storage Spaces (Server2016) with lots of disks

    Thanks, I took a quick peek at your suggestions. Interesting. What's the performance like? For whatever reason I find myself biased towards either a native Windows solution or something like FreeNAS.
  11. Storage Spaces (Server2016) with lots of disks

    At different times I tried using WBC and tiering, with varying results. With both, the speed increase is noticeable, but when I was testing I didn't settle on which was best for my purposes, and I thought that with the updates to ReFS and Server 2019 there would likely be new best practices. I also...
  12. Storage Spaces (Server2016) with lots of disks

    Do you use hot spares? I stopped using them once the feature to "Automatically rebuild storage spaces from storage pool free space" was available. Now I make sure my virtual disk leaves at least a few drives' worth of unused space in the pool. However, I don't think I have seen it automatically rebuild... (A rebuild-policy sketch appears after this list.)
  13. Storage Spaces (Server2016) with lots of disks

    Hi, so a few years later... Storage Spaces has been great, if a bit painful at times from stupidity on my part. I have about a zillion 3TB HGST drives spread over a few different JBODs and too many servers, and I wanted to consolidate and also drop my power bill, so I bought a 45x 10TB JBOD. I have had...
  14. Memorial Day Sales - P3605, 3TB, 4TB, S9300, etc.

    Of the 3 I checked, 2 of my drives have 97% health and one has 100% health. I'd buy from him again. Thanks for sharing the deal.
  15. (solved) NVMe U.2 to PCI-E Adapter card?

    I bought 8x of these cables ("U.2 to M.2 SSD cable - Replacement U.2 to M.2 Cable for PCIe* NVMe supporting Intel® Solid State Drives - SSD Spare Parts") and some cheap M.2 to PCIe cards.
  16. (solved) NVMe U.2 to PCI-E Adapter card?

    I bought 10 Intel SSD DC P3600 400GB (2.5") and, in my brilliance, I bought matching adapters like the StarTech one listed above. Then, after a few days, I asked myself: are those half-height adapters? I have some paperweights coming in the mail now. I couldn't seem to find any low-profile...
  17. Memorial Day Sales - P3605, 3TB, 4TB, S9300, etc.

    I received my 1.6TB drives yesterday and they were in good physical condition. I'll fire them up this weekend to check their health, but I don't have any concerns at this point. In another unfortunate turn of events, I made an offer of $45/ea CAD for 10x Intel DC P3600 400GB that was accepted...
  18. Memorial Day Sales - P3605, 3TB, 4TB, S9300, etc.

    I'm in for 4x of the 1.6TB SSD Intel DC P3600. They've shipped and made it into Canada so I should have them in hand shortly. :)
  19. Brocade ICX Series (cheap & powerful 10gbE/40gbE switching)

    Thanks for all the fantastic information, @fohdeesha! It saved me from buying a pair of MikroTiks. My kids' computers are in the basement, and when I seriously started down this rabbit hole my server rack was a rigged-up 2-post with 2x4s for rear supports and a couple of comparatively quiet...
  20. For FFMPEG (x265) are 4 x 4650 v2 > 2 x 2680 v2?

    Twice as much on eBay? Do you mind sharing your search? Sometimes I am blind. I have had the exact same thought!
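
For the tiered Storage Spaces thread (results 4-6), here is a minimal PowerShell sketch of how a single-server tiered space of this shape is typically created. The pool, tier, and volume names are placeholders, the tier sizes echo the figures in the posts, and the ~3.2% fast-tier threshold is the poster's own estimate rather than a documented number.

```powershell
# Sketch only: "Pool01", "NVMeTier", "HDDTier", and "Data" are placeholder names.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# One tier per media type: mirrored NVMe for the fast tier, parity HDDs for capacity.
New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "NVMeTier" `
    -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" `
    -MediaType HDD -ResiliencySettingName Parity

# Per the posts: a ~1TB fast tier against a ~300TB capacity tier is well under 1%
# of the virtual disk -- below the poster's observed ~3.2% threshold, so writes
# landed straight on the HDDs; a 600GB capacity tier kept the ratio above it.
$fast = Get-StorageTier -FriendlyName "NVMeTier"
$slow = Get-StorageTier -FriendlyName "HDDTier"
New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "Data" -FileSystem ReFS `
    -StorageTiers $fast, $slow -StorageTierSizes 1TB, 300TB
```

For the WBC-versus-tiering comparison in result 11, New-Volume also accepts a -WriteCacheSize parameter if a plain write-back cache is preferred over a dedicated fast tier.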
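For the DDA passthrough post (result 7), a rough sketch of the usual Hyper-V Discrete Device Assignment sequence. The VM name and GPU lookup are placeholders, the MMIO sizes are commonly cited starting values that need tuning per GPU, and consumer GeForce cards typically need the extra "messing around" the poster mentions.

```powershell
# Sketch only: "Plex-VM" and the GPU lookup are placeholders; the VM must be off
# (and is usually set to an Automatic Stop Action of TurnOff) before assignment.
$vmName = "Plex-VM"
$gpu = Get-PnpDevice | Where-Object { $_.FriendlyName -like "*GTX 1660*" }
$locationPath = ($gpu | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Take the GPU away from the host.
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# MMIO space sizes follow the commonly cited starting values; adjust for the GPU.
Get-VM -Name $vmName | Set-VM -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB

# Hand the device to the VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName
```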
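For the 512-to-4096 change in result 8, the excerpt doesn't say whether "cluster size" means the drives' logical sector size or the file-system allocation unit; a quick way to inspect the former and set the latter (on a hypothetical volume D:) is sketched below.

```powershell
# Report how the drives present their sector sizes to Storage Spaces.
Get-PhysicalDisk |
    Format-Table FriendlyName, MediaType, LogicalSectorSize, PhysicalSectorSize

# The file-system cluster (allocation unit) size is chosen at format time,
# e.g. 4096 bytes on a hypothetical volume D:.
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 4096
```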
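For the hot-spare discussion in result 12, a sketch of the settings that usually matter when relying on pool free space instead of hot spares. The pool and virtual disk names are placeholders; the repair still needs enough unallocated capacity in the pool, which is what leaving a few drives' worth of free space provides.

```powershell
# "Pool01" and "Data" are placeholder names.
# Retire missing/failed disks so repairs rebuild into pool free space
# rather than waiting on a hot spare.
Set-StoragePool -FriendlyName "Pool01" -RetireMissingPhysicalDisks Always

# Confirm there is enough unallocated capacity left to absorb a drive failure.
Get-StoragePool -FriendlyName "Pool01" |
    Select-Object FriendlyName, Size, AllocatedSize

# If a rebuild doesn't start on its own, kick it off and watch the regeneration job.
Repair-VirtualDisk -FriendlyName "Data"
Get-StorageJob
```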