Opteron 2419 EE - here's a link to the benchmark I ran a few days ago... I believe you need low-profile cards, from what I've read.
Anyone who's bought one from that same seller that I did - can you confirm which CPUs are in these?
I just ordered one of these for my home vSphere lab.
Has anyone ordered a second power supply for their unit? Mine only comes with one. I'd read online that many of these units were designed to only work with a single power supply. Just curious if anyone had tried a second one.
I'm running ESXi 5.5.
I've had bad luck with Dell SIPs and PS/2-to-USB converters on Dell PowerEdge servers, but these aren't really Dell servers, they're Tyan, so it's worth a shot... Just save your receipt.
As for your dozen servers with 2 Gigs of RAM each on one 1TB drive, I'd seriously consider using multiple spindles in a RAID0 (stripe) array to get better performance. Also, it depends on what the servers are doing - each VM would have normal OS disk activity, application disk activity, and, if memory is too low, paging activity as well. I'd ensure that each VM has enough memory to avoid paging activity as much as possible.
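For a rough feel of why multiple spindles help, here's a back-of-the-envelope sketch - the per-disk throughput figure is an assumption I picked for illustration, not a measurement of these drives:

```python
def raid0_throughput(spindles, per_disk_mbps):
    """Ideal-case sequential throughput of a RAID0 stripe set:
    reads/writes are spread across every disk in the array."""
    return spindles * per_disk_mbps

PER_DISK_MBPS = 120  # assumed figure for one SATA spindle
for n in (1, 2, 4):
    print(f"{n} spindle(s): ~{raid0_throughput(n, PER_DISK_MBPS)} MB/s sequential")
```

Real gains depend on workload - a dozen VMs hammering one array benefit more from spreading random IOPS across disks than from raw MB/s - and remember RAID0 has no redundancy: lose one disk, lose the whole array.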
Hypervisor hosts need very little RAM compared to regular servers; consider bumping each VM to 3 or 4 Gigs - you only need a couple gigs for ESXi or whatever.
"Hypervisor hosts need very little RAM compared to regular servers; consider bumping each VM to 3 or 4 Gigs - you only need a couple gigs for ESXi or whatever."
I'm curious, being new to virtualization: if a particular software install recommends 6GB of RAM and you wanted to install it on a guest VM, would you give the VM the 6GB of RAM, or go with 4GB given the lowered RAM needs of a VM?
So do you suggest a 1 virtual processor per physical core ratio?

I used to think a one-to-one virtual-to-physical CPU ratio was best, but that had me running (for example) seven single-core Windows Server instances on a dual quad-core host (leaving one core for the hypervisor). Further investigation tells me that I would likely see better performance if I assigned two CPU cores per VM, over-subscribing the CPUs by 100% - each VM running with two cores is a much happier place for most VMs. This is theoretical, based on reading I've done; I've very little experience with this so far.
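The arithmetic behind that 100% figure, as a quick sketch (the host shape is taken from the example in the post above):

```python
physical_cores = 8      # dual quad-core host
hypervisor_reserve = 1  # one core left for the hypervisor
vms = 7
vcpus_per_vm = 2

available = physical_cores - hypervisor_reserve    # 7 cores for guests
total_vcpus = vms * vcpus_per_vm                   # 14 vCPUs allocated
oversub_pct = (total_vcpus / available - 1) * 100  # 100% over-subscribed
print(f"{total_vcpus} vCPUs on {available} cores: {oversub_pct:.0f}% over-subscribed")
```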
"Has anyone ordered a second power supply for their unit? Mine only comes with one."
I ordered SPARE power supplies - at $60 for two, it's cheap insurance. If the PS took a hit next year, where would I find a replacement PS for this chassis?
I was commenting about VMs in the abstract: if a particular application calls for, say, 6 Gigs of RAM when installed on actual hardware, you should allocate that same amount of RAM to the VM it runs inside.
Some OSs function fine with 2 Gigs of RAM; others will page/swap like crazy with less than 4 Gigs of RAM.
Simply running software inside a VM doesn't alter the requirements of the software with regard to system resources.
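Put as arithmetic, host RAM sizing is just the sum of per-VM allocations plus a little for the hypervisor. The VM mix below is a made-up example, and the 2-Gig ESXi figure is the rough "couple gigs" number from earlier in the thread:

```python
vm_allocations_gb = [6, 4, 4, 2]  # hypothetical guests: the 6-Gig app VM plus three smaller ones
ESXI_OVERHEAD_GB = 2              # rough allowance for ESXi itself

host_ram_needed = sum(vm_allocations_gb) + ESXI_OVERHEAD_GB
print(f"Host needs at least ~{host_ram_needed} GB to keep every guest out of paging")
```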
Thanks for such a detailed reply. This is indeed very helpful and gives me a good foundation to start from.
The interesting question for me is hyper-threading on Intel CPUs - with hyper-threading enabled, the hypervisor sees 2x as many cores, but half of those cores are poor performers, with lower throughput than a 'full' core... Is it better to enable hyper-threading and allocate more CPUs per VM, or is it better to disable hyper-threading and assign correspondingly fewer CPUs per VM? My gut tells me the former (enabling hyper-threading and assigning more CPUs per VM improves performance), but I'm not so sure...
My best advice is to not be afraid to over-subscribe your CPUs; two virtual CPUs per actual CPU is probably a good rule of thumb, but a 3:1 ratio is probably fine in a pinch, depending on workload.
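A quick sanity check of those ratios - my own sketch, and the 8-core host is an assumption, not a property of this particular box:

```python
def max_vcpus(physical_cores, ratio):
    """vCPUs a host can carry at a given over-subscription ratio -
    a rule of thumb from the thread, not a hard hypervisor limit."""
    return physical_cores * ratio

cores = 8  # assumed dual quad-core host, hyper-threading not counted
print(max_vcpus(cores, 2))  # comfortable 2:1 -> 16 vCPUs
print(max_vcpus(cores, 3))  # 3:1 in a pinch  -> 24 vCPUs
```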
"Thanks Ken. This is helpful. I take it that when you deploy a guest VM that was pre-configured when it was made available, the resources needed to support the VM are already known during the deployment process."
I assume you mean, for example, importing a pre-built VM, say from Microsoft or Turnkeylinux.com? In those cases the VM configuration should be 'baked in' to the VHD/virtual disk, set to what the vendor felt was appropriate.