Thanks for sharing.
It could very well be that the results would be reversed if you had tested under Linux with ext4 and fio. That doesn't invalidate your results just because they were taken on Windows with NTFS and different tooling (on the contrary), but it does make them incomparable to the...
Thanks for the explanation and the alternative suggestions.
After I got the above to work and tested it, I decided not to go forward with the passthrough option. After some more searching I came across XCP-NG, which turns out to be a really awesome alternative to ESX that supports ZFS for a...
Thanks for sharing. My adventure last night ended with me not using LVM RAID, and I didn't get around to testing this specific feature; I decided against LVM RAID before I got to that point. Besides, the system it's for runs kernel 4.19 (XCP-ng, Xen hypervisor).
I remember I had something similar a long time ago, with hung tasks etc. I don't remember all the details exactly though, it was 8 years ago, so ... I think it ended up being a kernel issue with Debian. I moved to Arch for that server.
You could try booting from a live usb stick with another...
Hey no worries! I should have been more clear in what I was looking for in the first place.
Anyway, it sorted itself. Now I need to figure out how to get ncurses 5 when everything I run uses 6.1 ... :confused:
I don't have experience with it, but I'm very interested, especially in what performance penalty it brings.
I am in the process of setting up a RAID10 with mdadm. I might take the time to test with LVM as well and run some quick fio benchmarks against it, with and without checksum data.
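For anyone curious, the test described above can be sketched roughly as below. Device names and fio parameters are my own illustrative choices, not anything from the original posts; the commands are echoed rather than executed so nothing touches real disks.

```shell
# Sketch: create a 4-disk RAID10 with mdadm, then run a quick fio job against it.
# /dev/sd[b-e] and /dev/md0 are hypothetical device names -- adjust for your system.
# Commands are echoed for safety; drop the echo to run them for real (as root).
MD_CMD="mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde"
FIO_CMD="fio --name=randrw --filename=/dev/md0 --direct=1 --rw=randrw --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting"
echo "$MD_CMD"
echo "$FIO_CMD"
```

Running the same fio job against an equivalent LVM RAID10 logical volume would give a like-for-like comparison.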
lvm can be...
Yeah, that's not the one I was looking for. If it were, I wouldn't be posting under Hardware/Hard Drives.
HUGO is WD/HGST's tool to reformat hard drives; it allows certain drives to be changed to 512 or 4Kn block sizes.
I've heard quite often that it doesn't matter, or even that 4Kn is faster, or that 512e comes with "a performance hit". However, I have yet to see any direct comparisons with tools and parameters I trust (such as fio), so I decided to run my own. I will try to keep this concise :).
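As a sketch of the kind of comparison meant here: the same fio job run once with a drive formatted 512e and once at 4Kn. The device name is a placeholder and the parameters are my illustrative picks, not the exact ones used in the tests; the command is echoed rather than executed.

```shell
# Sketch: one fio random-read job to run against the same drive in both
# sector formats. /dev/sdX is a placeholder -- point it at the drive under test.
FIO_JOB="fio --name=randread --filename=/dev/sdX --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting"
echo "$FIO_JOB"
```

With `--bs=4k` and `--direct=1`, any read-modify-write or emulation overhead from 512e versus 4Kn should show up directly in the IOPS and latency numbers.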
Can anyone help me with a download for HUGO for Linux (RHEL/CentOS)?
I have the Windows version, but that would require me to install some trial of Windows first, or remove a pile of disks from my server and attach them one at a time to my Windows machine.
Thanks in advance!
Damn that was 2 days worth of searching my ass off. Thanks!
For those reading and wondering: there are two mpt3sas drivers available, one in the BaseOS repo and one from ELrepo. I picked the one from ELrepo: same version, but one minor point release newer. So that would be `dnf install kmod-mpt3sas`, then...
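The ELrepo route sketched end to end, for anyone following along. The repo-release RPM URL is the standard one published on elrepo.org, but verify it yourself before use; commands are echoed rather than executed.

```shell
# Sketch: enable ELRepo on RHEL/CentOS 8, then install the out-of-tree
# mpt3sas kmod package. Echoed for safety; drop the echo to run (as root).
REPO_CMD="dnf install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm"
DRV_CMD="dnf install kmod-mpt3sas"
echo "$REPO_CMD"
echo "$DRV_CMD"
```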
Hey all, I am trying to get my old trusty M1015 to work in Red Hat 8. As you may be aware, Red Hat in their infinite wisdom removed support for LSI SAS2008/2108/2116 and other controllers; drivers have since been made available through elrepo.org. The controller is in IT mode without a BIOS.
Yeah, I think you're right: internally it should not be limited to physical NIC speeds, but maybe e1000 is (I don't know). But even if it were, I could bond 10 of them together :).
Good point on the auto boot, Nick, I'll keep that in mind! :D
I was able to reset it by removing the napp-it.cfg file. Editing out the password hashes ( adminpw|| ) didn't work. Since I hadn't changed anything else, removing the file and reconfiguring was the easiest reset.
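The reset described above, as a sketch. The config path is an assumption based on a typical napp-it install, so check where napp-it.cfg lives on your box; the command is echoed rather than executed, and moving the file aside is safer than deleting it outright.

```shell
# Sketch: move napp-it.cfg aside so napp-it regenerates it with default
# (empty) WebUI passwords. Path is an assumption -- verify on your install.
CFG="/var/web-gui/_log/napp-it.cfg"
RESET_CMD="mv $CFG $CFG.bak"
echo "$RESET_CMD"
```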
Still, it's weird that I couldn't log in with a 32-character randomly generated string with specials.
I changed the napp-it WebUI password for admin and operator, but I can't log in anymore. How can I reset that password through the console?
The weird thing is I use Bitwarden to generate and store the password. Typos are impossible, so maybe it's a character limit or a special character that gets in the way?
Very interesting read, thanks for all the effort you have put into that! Must have been quite a few hours there ;).
Do you think it would be possible to import my current ZFS pools? I have all volumes with native 2.0.x encryption. I can easily import those keys to your AIO appliance, if I may...
Currently I am running a home server with an Asus Z11PA-U12/10G-2S. It has 8 disks connected to the onboard SATA controller.
I also have an old IBM M1015 lying around.
The server is running Red Hat 8.4 with ZFS and RHV virtualization. ZFS is configured with 4 pairs of mirrors. I don't boot from...
PS. I don't think 3Gb SATA links pose any issues in stability or performance. Those disks won't saturate 150MB/s, so there wouldn't be any benefit to having them on 6Gb links. I wouldn't get an HBA card just for that reason.
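The "4 pairs of mirrors" layout mentioned above can be sketched as a single zpool command. The pool name and the short sd* device names are hypothetical (in practice you'd use stable /dev/disk/by-id paths); the command is echoed rather than executed.

```shell
# Sketch: a ZFS pool striped across 4 mirror pairs (RAID10-like layout).
# "tank" and the sd* names are placeholders -- use /dev/disk/by-id in practice.
POOL_CMD="zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh"
echo "$POOL_CMD"
```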