Search results

  1. Docker Compose organization

    Red Hat seems to disagree... https://www.redhat.com/sysadmin/podman-compose-docker-compose
  2. Strange MDADM RAID 6 behaviour

    The issue you described isn't specific to ZFS; in fact, as I mentioned above, ZFS will notify you on a scrub or read, which is better than most other filesystems. The takeaway isn't that ZFS needs ECC, but rather that if you like your data, use ZFS with ECC and make backups! Glock24's question...
  3. Strange MDADM RAID 6 behaviour

    Actually, ZFS would be better than most filesystems and software RAID implementations, as the files would be flagged bad on read, whereas any other filesystem would silently corrupt and then never let you know. MD would let you know if you did a resync or patrol read, but it doesn't know how to fix... (see the scrub sketch after these results)
  4. Broadcom 9308 IT vs IR

    Nope. There was a lot of FUD about IR vs IT mode. Unless you configure an array in IR mode, it just passes the disks through. One distinction is that people flash their controllers not only to IT mode but also remove the BIOS. It makes it boot quicker and won't hang up and wait if there is a...
  5. Teradici in 2022

    HP's software-based solution ZCentral seems a lot better than regular RDP and even VMware Blast. Highly configurable from the client to prefer latency or beauty. I'm currently supposed to be setting up a trial and can comment back once we're using the boxes in anger vs the demos I've...
  6. Strange MDADM RAID 6 behaviour

    If you don't want to try a live CD, then try booting to a lower runlevel such as 3. See What Are “Runlevels” on Linux? for info on how to do it. That way X won't be loaded, so you won't have issues with Thunar, but TBH I doubt that's the issue. It may be some form of indexing, but again, probably not. (See the runlevel sketch after these results.)
  7. Strange MDADM RAID 6 behaviour

    How about booting with a live CD and then mounting the drive to see if the issue still occurs? Just because your MD array is affected doesn't mean it's the cause... I think it's likely that another service is doing something that causes the drives to be busy. (See the read-only assemble sketch after these results.) EDIT: If the issue still occurs...
  8. Multi-bay SAS setup

    I've done something similar to this: X3650 M3 with an LSI HBA with 2x SFF-8087 cables out of the adjacent slot to a Chenbro RM4116? with its own P/S and a SAS expander to 16x disks. Fun times. The only issues were when I needed to get inside the Chenbro, which was directly underneath. Note - The 2x...
  9. mdadm raid5 recovery

    Hey, I forgot about this thread. I'd suggest you run VMs from a different volume. Maybe a largish SSD? The ZIL/SLOG isn't a write cache; it's for SYNC writes rather than ASYNC ones. It's worth reading about this so you understand its purpose. You may be better off with the ZFS special VDEV... (see the SLOG/special vdev sketch after these results)
  10. mdadm raid5 recovery

    I'm running an 8-disk RAIDZ2 setup. I have 16 bays in my chassis, so I'm ready to add another if/when I run out of space. TBH, if you're not interested in max IOPS, I would just go one big RAIDZ2. I've never run triple parity. If I were that concerned, I would simply have a hot spare. I can't speak to...
  11. mdadm raid5 recovery

    How are you copying stuff? rsync? If so, then I think you can use those same checksums and just compare. (See the rsync checksum sketch after these results.)
  12. mdadm raid5 recovery

    If you're getting your stuff copied to your work disks, then don't "fix" your array; start fresh with partitions, as I and another poster suggested (see the partitioning sketch after these results). Also, maybe rethink whether the extra layer LVM provides is worth it. It's another failure point, and if you don't need to grow "disks", then why bother...
  13. mdadm raid5 recovery

    We've all been there with disks everywhere. You may want a large desk fan blowing some air on them, and turn them the right way up, as the little hole on top of the disks needs to breathe. I would try assembling in each order permutation first. The second link I posted has a script that may help...
  14. mdadm raid5 recovery

    Hey, I half-read the posts above, but I think a sane way to proceed, once you have a cloned set, would be to copy the MBR from a working disk to the couple that don't work. Hopefully one just works from there. If not, try it from another one of the known-good disks. You may need to manually... (see the MBR copy sketch after these results)
  15. ProxMox - Clearing disks for use

    parted's disk scan seems to prompt the kernel to refresh the partition table cache. EDIT - Specifically, partprobe. hdparm also works. Try googling "reread partition table linux" if you have issues again in the future. (See the partition table sketch after these results.)
  16. Keep IPMI BMC Functions while Use NVDIA GPU for specific Functions

    The advice below is predicated on the 2070 being seen in Device Manager when the onboard graphics is used. __________________ Why don't you set the NVIDIA up as primary to start with, get it all configured how you need, and then switch back to having the onboard as the primary? That being...
  17. Clear Linux used exclusively for FFMPEG on Kaby/Coffee etc.

    Just on the Quick Sync thing, later versions of Intel's hardware blocks are truly amazing. The Kaby Lake one handles basically everything and is fast too. I use QSVEnc via StaxRip on Windows, and I can deliver 4K transcodes at realtime to 1.5x speed that are "visually" lossless and smaller than...
  18. PMS 4.0...PMS 5.0...PMS 6.0...No PMS 7.0! Plex/Storage server upgrade [PICS]

    Hey, it may be worth checking out Hardware Accelerated Decode (Nvidia) for Linux, as some intrepid users have a mostly working approach to de/encode. Specifically, this post, Hardware Accelerated Decode (Nvidia) for Linux, as it has failover for MPEG-4 ASP stuff.
  19. mdadm RAID10 geometry

    Shame about --examine. The man pages use the term replica rather than mirror, as not only can you have n-way mirrors, but technically the replicas can be sort of anywhere, i.e. they can be scattered around in a pseudo-random fashion in a far offset RAID 10. See the Wikipedia page for examples...
  20. mdadm RAID10 geometry

    Apparently it was changed to the by-replica order sometime in 2013. I saw that someone suggested that mdadm --examine /dev/sdX would be a good idea, and TBH it probably would tell you which were pairs in your instance (see the --examine sketch after these results). Near is meant to be the worst for random I/O. See ""Far" layout is designed...
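
Result 3 mentions that ZFS flags bad files on read or scrub. A minimal sketch of checking that by hand, assuming a hypothetical pool named "tank":

    # Read and verify every block's checksum in the pool
    zpool scrub tank
    # Show scrub progress plus any devices or files with checksum errors
    zpool status -v tank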
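
Result 6 suggests booting to runlevel 3 so X never starts. A minimal sketch; on systemd distros the equivalent target is multi-user.target:

    # Drop the running system out of the graphical session
    systemctl isolate multi-user.target
    # Or make text-only boot the default until you change it back
    systemctl set-default multi-user.target
    # On older SysV-init systems, "telinit 3" does the same job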
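
Result 7 suggests testing from a live CD. A minimal sketch for bringing the array up without writing to it; the md device name and mount point are hypothetical:

    # Assemble whatever arrays the member superblocks describe, read-only
    mdadm --assemble --scan --readonly
    # Mount read-only and see whether the original symptom reappears
    mkdir -p /mnt/md0
    mount -o ro /dev/md0 /mnt/md0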
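
Result 9 distinguishes a SLOG (sync writes only, not a write cache) from the ZFS special vdev (metadata and optionally small blocks). A minimal sketch of adding either one, with a hypothetical pool name and device names:

    # Add a separate log device; only synchronous writes land here
    zpool add tank log /dev/nvme0n1
    # Or add a special vdev for metadata; mirror it, because losing it loses the pool
    zpool add tank special mirror /dev/nvme1n1 /dev/nvme2n1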
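
Result 11 suggests reusing rsync's checksums to compare the copies. A minimal sketch, with hypothetical source and destination paths:

    # -n is a dry run (nothing is changed), -c compares by checksum instead of
    # size/mtime; any file rsync lists differs between the two trees
    rsync -avnc /mnt/array/ /mnt/workdisks/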
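
Result 12 recommends rebuilding the array on partitions rather than raw disks. A minimal sketch, with hypothetical device names, RAID level and member count; repeat the partitioning step for each member disk:

    # One full-size partition per disk
    parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
    # Build the new array from the partitions, not the whole disks
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1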
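
Result 14 suggests copying the MBR from a working disk onto the members that won't assemble. A minimal sketch, on the cloned set only; "sdGOOD" and "sdBAD" are hypothetical placeholders, and the second command is destructive, so double-check which device is which:

    # Keep a copy of the target's current first sector first
    dd if=/dev/sdBAD of=/root/sdBAD-sector0.bak bs=512 count=1
    # Copy the MBR (boot code + DOS partition table) from a known-good member
    dd if=/dev/sdGOOD of=/dev/sdBAD bs=512 count=1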
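
Result 15 mentions partprobe and hdparm for refreshing the kernel's view of a partition table. A minimal sketch, with a hypothetical device name:

    # Ask the kernel to re-read the partition table without a reboot
    partprobe /dev/sdb
    # Two alternatives that do the same thing
    blockdev --rereadpt /dev/sdb
    hdparm -z /dev/sdb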
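
Result 20 mentions using mdadm --examine to work out which RAID10 members are replicas of each other. A minimal sketch, with hypothetical device names:

    # Per-member view: slot/role of each device plus the near/far/offset layout
    mdadm --examine /dev/sd[bcde]1 | grep -E '^/dev/|Device Role|Layout'
    # Array-level view: layout and member order in one place
    mdadm --detail /dev/md0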