Hi,
I'm looking for a cheap card that will put 2 U.2 NVMe drives into RAID 1. The 9460-8i or -16i is pretty pricey on eBay, but I know there are several OEMs that rebrand it. Any recommendations for one that can be easily flashed?
I handle IT for a small office that has a couple of old Dell servers with Xeon E3-1225 quad-core processors that are about 10 years old. I installed them myself and they both still work perfectly, but at ~10 years it's time to replace them. Most of what they do is in the cloud now, but for...
Sure, but ZFS will not currently allow me to combine different types of storage together into one share, correct? I already have (16) 960GB SSDs that I want to use along with a bunch of 10TB spinners. If the special vdev contributed to capacity, then it would be perfect, but I can't find...
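For context, here's roughly what the special vdev layout being discussed looks like; a minimal sketch assuming OpenZFS 0.8+ and placeholder device names, and note the special vdev only holds metadata and small blocks, not bulk data, which is the limitation in question:

    # Hypothetical pool: HDDs for bulk data, a mirrored SSD pair as the special vdev
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        special mirror ada0 ada1

    # Blocks at or below this size are steered to the special vdev,
    # so small files land on SSD while large files stay on the HDDs.
    zfs set special_small_blocks=64K tank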
Either I'm not communicating properly or you aren't reading what I'm saying. I don't want sync writes. I never said I did. What I want (the #1 thing listed above) is to pool different types of storage together, i.e., some SSD, some HDD, and some NVMe together into a single network share. I don't...
Thanks. I know what ZIL/SLOGs are and aren't, and the difference between sync and non-sync writes. I've been using ZFS forever and will continue to do so where it makes sense. What I'm trying to solve for now is bulk storage of large files. I have a lot of HDDs, SSDs, and now several NVMe...
My mistake. They are DC S3700s. I may give Windows Server 2019 a try. I've been testing it for a couple of days and I can saturate a 10Gb link all day long. Being able to pool different types of storage together with multiple tiers would be a big plus. I know that parity RAID is garbage but I'm...
I'm more interested in write cache for ingesting media and general drive pooling than in more read cache. I'm not even using L2ARC devices; I just give ZFS gobs of RAM (64GB) and it does a pretty good job with read caching. I just wish there were a way with ZFS so that anytime I saved a big file it...
I'm rebuilding my home/lab server and wondering if anything has changed with tiered storage. I haven't paid much attention over the last 4 or 5 years. I have an assortment of 8TB and 10TB HDDs, a bunch of 960GB SATA SSDs, and some NVMe drives I'd like to put together in a server. When I last looked...
It was a bad drive. One bad SAS drive killed whatever HBA it was connected to. Once I isolated and removed it, everything else came right back up. I'm going to dig a hole in the backyard and bury it.
It's an older HGST 400GB SAS SSD. I have several of them and they are great drives with...
Well… I went ahead and upgraded to an E5-2697v4. 18 cores for $175! I also doubled the memory. The new CPU and memory showed up and it booted up no problem (I also blew out about 2 lbs of dust). But now all my datastores are missing. Could the CPU change have messed up which HBAs are passed...
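In case it helps anyone hitting the same thing: hardware changes can shift PCI addresses, and ESXi ties passthrough configuration to those addresses, so the HBAs may need to be re-marked for passthrough and re-attached to the storage VM. A hedged first step from the ESXi shell:

    # List PCI devices to find the HBAs' (possibly new) bus addresses
    esxcli hardware pci list

From there, re-enable passthrough on those addresses in the host client, reboot, and re-add the devices to the VM.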
Thanks. I still have 4 empty DIMM slots. I think the main advantages of upgrading would be more PCIe lanes and maybe power efficiency. If I stick with the same mobo, get an E5-2697v4 (which is less than $200 on eBay), and add a GPU, I'll be one PCIe slot short for an HBA.
1 PCIe 3.0 x8 (in x16 slot) -...
I'm considering upgrading my home/lab server, but I haven't been paying much attention to the used enterprise gear market lately, so I'm not sure what is cost (and power) effective anymore.
I run ESXi 6.5 with 6-8 VMs (Windows Server, TrueNAS, Ubuntu Server, etc.) + whatever I'm messing with...
If your drives are truly idle most of the time, consider spinning them down. This can be done using ataidle startup scripts in FreeNAS. You need to make sure none of the system files or logs are stored on the volumes you intend to spin down, and you may want to change the intervals for scrubs and...
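As a rough illustration (assuming the ataidle utility that ships with FreeNAS; device names are placeholders):

    # Post-init command: set a 30-minute standby timer on ada1
    ataidle -S 30 /dev/ada1

    # Or put the drive into standby immediately, e.g. to test
    ataidle -s /dev/ada1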
I assume this was not with parity? I'd love to get decent performance out of Windows parity volumes (or really any platform that lets you add single drives to existing parity volumes) so I could drop FreeNAS.