I tested a kit of 4x32GB LRDIMMs (in 1x, 2x and 4x configurations), which caused the system to not boot, so I think we're out of luck here.
But on a more positive note: none of my tested configs was on the QVL, and two kits of 2133 MHz RDIMMs and one kit of 2400 MHz RDIMMs worked just fine.
I have posts...
There are also 10GbE NICs on M.2 boards (Amazon). I found one that's ~120€; maybe there are even cheaper models. These all seem to use the same AQC-107 chipset. A quick Google search also suggests it's probably supported out of the box by ESXi, but I didn't have the time to check yet.
I took some time this morning and tested all three kits I had on hand (which finally forced me to do the ESXi 8 upgrade on my other host that I've been putting off for months). These three booted in 1x, 2x and 4x configurations:
HMA84GR7MFR4N-TF - 2133 MHz, 32GB DR
M393A4K40BB0-CPB - 2133 MHz...
Interesting, seems like all the qualified RAM is 2666 MHz. I have 2133 and 2400 MHz sticks in my other server and was initially planning on swapping these for 2666 and using the slower ones on these boards. Well, seems a maintenance window is needed to get one of each out for further testing...
Hm, it seems that these boards don't support quad rank or LRDIMMs (or both). I was hoping I could repurpose the 128GB kit (4x32GB quad-rank 2133 MHz LRDIMMs) I've had sitting on my shelf for a couple of months, but the IPMI still shows a 16GB 2400 MT/s DIMM in slot B0, no matter whether I install one or four...
Interesting, I'm wondering what type of workload would actually benefit from 10x PCIe 3.0 x1 slots - I mean, in the end all that data has to go through a single PCIe x8 interface...
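To put some rough numbers on that bottleneck, here's a back-of-the-envelope sketch (the per-lane figure is the theoretical PCIe 3.0 rate, not a measurement):

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# so usable bandwidth is roughly 8 * (128/130) / 8 bits-per-byte
# ~= 0.985 GB/s per lane (theoretical, ignoring protocol overhead).
per_lane_gbs = 8 * (128 / 130) / 8

aggregate_x1 = 10 * per_lane_gbs  # ten x1 slots, all fully loaded
uplink_x8 = 8 * per_lane_gbs      # a single x8 uplink to the CPU

print(f"10x x1 aggregate: {aggregate_x1:.2f} GB/s")
print(f"x8 uplink:        {uplink_x8:.2f} GB/s")
print(f"oversubscription: {aggregate_x1 / uplink_x8:.2f}x")
```

So even with every slot saturated, the x8 uplink is only oversubscribed by 1.25x - the bigger limitation is the x1 ceiling per slot, not the shared uplink.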
Well... I've had those boards on my watchlist for two weeks now and finally gave in and ordered 3 for 100€ a pop. Let's see what these...
I'm pretty sure it's from one of these boxes:
Gigabyte G431-MM0
Must've been some sort of mining rig or similar. These popped up a few weeks ago; I was initially tempted, but I don't see a realistic use case tbh.
I just found a minor inconvenience in all of this. My server is an HPE CL2200 Gen10, so basically a Gigabyte R281-N40. The spec sheets and the eBay listing both suggested that the 4 orange drive bays in the front are in fact U.2 NVMe - which isn't technically wrong, but sadly not the whole story...
I'm pretty convinced now to go the Enterprise route. I'd like to avoid doing some hackery stuff to get my storage to work.
Also, it seems my pricing figures from post #1 were a little off: a 1.92TB PM9A3 is about the same price as a 2TB 980 Pro. PSA: don't compare average eBay prices and...
So is this more of an issue with IOPS or an issue with aging drives?
Sorry if all that sounds pretty stupid - I've been trying to wrap my head around this for a while now, and it doesn't really seem to click with me. Storage seems to be a lot more complicated than consumer hardware made me believe...
Oh...
Just read it, very interesting writeup - clears up a few things.
But not all - I'm still unsure if investing twice the money or more is worth it. The 7400 Pro is somewhere in the ballpark of 250€/TB, so about 2.5x the price of a 970 Evo Plus.
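Just to sanity-check that math for a 2TB target, a quick sketch (the 250€/TB is the rough figure from above; the ~100€/TB for the 970 Evo Plus is implied by the 2.5x factor, not a current quote):

```python
# Rough €/TB comparison using the ballpark figures from the post
# (both prices are assumptions, not live quotes).
pro_7400_eur_per_tb = 250
evo_970_eur_per_tb = 100

target_tb = 2  # the capacity I'm shopping for

ratio = pro_7400_eur_per_tb / evo_970_eur_per_tb
delta = (pro_7400_eur_per_tb - evo_970_eur_per_tb) * target_tb

print(f"7400 Pro costs {ratio:.1f}x as much per TB")
print(f"premium for {target_tb}TB: {delta:.0f}€")
```

So the PLP premium at 2TB works out to roughly 300€ - which is the number I keep going back and forth on.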
I know that PLP is kind of a big deal for making...
This seller is dangerous for my wallet. I bought twice from them - now I have 40 600GB SAS drives.
All of the first batch (haven't opened the second one yet) had around 5 years of runtime with ~3 power cycles. Only one of the 8 in my server has failed in the 2 years I've had them.
So I recently acquired a new server for my lab, and now I'm debating storage options. I'm looking for around 2TB; also, nothing mission critical is planned on this machine, hence a single SSD would be enough for now. Since the price difference between SATA and NVMe is negligible and SAS (12G) is...
I'm currently rethinking my homelab situation, since my 12th gen Dells are pretty power hungry, generate quite a bit of heat and aren't quiet either. I've been wondering if a 13th or 14th gen single-socket Dell server would work as well. But I'm a bit confused about performance:
I currently have a...
Interesting point. But isn't faster storage (large 15k SAS arrays/SSD/NVMe) to some degree limited by CPU speed? These are low core count, low-frequency CPUs, so there isn't much room for overhead. So is it mainly aimed at "slower" storage units, while faster ones need faster CPUs?
Or is the...
I've been thinking about this for a while now:
There are these E5-2603 (and their newer v2-v4 successors) low-end CPUs all over eBay for (sometimes) single-digit prices. They aren't fast, nor do they pack any special features other than being cheap (even new they were dirt cheap compared to...
edit: It was a bit late last night and I didn't read the OP well enough. This is not relevant for this thread, but it's probably generally good advice when building PCs. I also won't delete it, so there's no confusion.
Since you've mentioned the LianLi PC-Q25B: there's a mATX version of that case, the "PC-M25B". Though it only offers 5x 3.5" drive mounts, you'd probably be able to mount one or two more drives at the bottom.