You read my post too closely. You are referring to pipelining not being an issue, and that is totally not the point. If your thin images are live, live on the same store, and are written to in whatever order, you get fragmentation on the order of whatever block size you handle. You may call...
I saw this coming and tried to avoid it.
Thin provisioning VM images is a bad idea, certainly on barely-keeping-up SATA storage. If you actually use the underlying filesystem you will find that in scenarios where multiple images are being written to, which is all cases except those where you want...
Depending on how big and sequential the compilations are, you could see if a ramdisk is a solution here; it could speed up compilation a lot, while Gluster will slow you down in that area.
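A minimal sketch of the ramdisk idea, assuming a hypothetical build tree at ~/project and a 4 GiB tmpfs (the mount point and size are made up, adjust to your project):

```shell
# Mount a tmpfs ramdisk and build in it; /mnt/build and size=4g are assumptions.
sudo mkdir -p /mnt/build
sudo mount -t tmpfs -o size=4g tmpfs /mnt/build
cp -a ~/project/. /mnt/build/
cd /mnt/build && make -j"$(nproc)"
# tmpfs contents vanish on reboot, so copy any artifacts you need back out.
```

Keep in mind the object files never hit Gluster this way, only whatever you copy back at the end.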
I'm actually trying to patch my cpu, tired of all those bit-leaks.
Can't compare TDPs, let alone a boxed CPU with something Intel will deny even exists.
Buy the hardware based on computational requirements. (Power) efficiency and density are an art that will save you money, not the 100 dollar difference last year's rackmounts bring you.
$200 versus what you normally pay for the same software, yes. However, with the forced relicensing every year and the non-production-use restriction, I could think of better bargains.
Are all of you aspiring/currently VMware techs? Apart from vsphere and the distributed storage the software is more suited for...
You will get 5 Gbit from the PCIe slot, which is not enough. This is where Vmwarez bites you, as you have no ability to do software RAID with the Intel ICH. Not a biggy, as you can probably hack together a script to ship off your images nightly; Vmwarez' only configuration option is On/Off.
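The nightly ship-off could be as dumb as a cron-driven rsync; the datastore path and backup host below are assumptions, not anything Vmwarez gives you:

```shell
#!/bin/sh
# Hypothetical nightly image copy to a backup box; adjust paths and host.
rsync -a --delete /vmfs/volumes/datastore1/ backup@backup-host:/backups/esxi/
```

Drop it in cron, something like `0 3 * * * /root/ship-images.sh` (script name made up), and you have your On/Off plus an actual backup.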
If his NAS were external he could mount it over NFS on his Synology VM. From his post you could read it as an external NAS, or an internal H310. There is indeed a read-only driver for Linux, but a 2 TB VMDK setup doesn't make sense in the first place and prohibits a migration to physical storage if...
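If the NAS really is external, the NFS mount from inside the VM is a one-liner; the hostname and export path here are made up:

```shell
# Hypothetical NAS hostname and export; substitute your own.
sudo mkdir -p /mnt/nas
sudo mount -t nfs nas.example:/volume1/data /mnt/nas
# To make it stick, add a line like this to /etc/fstab:
# nas.example:/volume1/data  /mnt/nas  nfs  defaults,_netdev  0 0
```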
Think of it this way: if one day your Vmware box dies, recovering from the Vmwarez FS and container is a drag and largely unsupported. A bare volume, however, can be mounted from any computer.
It is unlikely that the Xenserver management OS will let you use these new features. If you are willing to go as far as patching the hypervisor yourself you might as well start from scratch by installing it yourself on a distro of your choice.
Md, with or without LVM, will do for the time being and can be shrunk when the time comes to migrate to Btrfs. The flexibility in that area is pretty amazing. Most of the hate you see for Btrfs is from people who used it some years ago, when unstable meant about to explode. Opensuse is probably the most...
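The shrink later would look roughly like this for LVM on md; ext4 and the device names are assumptions, and the order matters (filesystem first, then the LV):

```shell
# Hypothetical volume group vg0, LV 'data', shrinking to 100G.
sudo umount /mnt/data
sudo e2fsck -f /dev/vg0/data
sudo resize2fs /dev/vg0/data 100G
sudo lvreduce -L 100G /dev/vg0/data
sudo mount /dev/vg0/data /mnt/data
```

That freed-up space is then yours to hand to a fresh Btrfs volume.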
Without knowing your loads I will be an ass and propose you get one server matching your 'powerful' description. The last few generations of single-socket servers can idle close to 20 W and are quite powerful.
It's not like the difference between SATA2 and SATA3 for random workloads is that noticeable. I would go as far as saying it is not worth 30 bucks, and if it is, take two smaller SSDs and stripe them and you have your 600 MB/s target.
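The stripe is two commands with md; the device names are assumptions, and everything on both disks gets wiped:

```shell
# Hypothetical devices /dev/sdb and /dev/sdc; RAID0 gives no redundancy.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
```

Two SATA2 SSDs at ~300 MB/s sequential each land the stripe right around that 600 MB/s mark.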
You can create a 'proxy VM' with a bunch of 1 Gbit NICs and a bonding device, back to back with the SAN. This could then be proxied to the host with a vmxnet adapter at speeds over 1 Gbit. Far from ideal, but since Vmwarez is a locked POS (the software, that is).
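The bond inside the proxy VM could look like this on a Debian-style guest; interface names and addresses are assumptions, and balance-rr is chosen because it is the one mode that lets a single stream exceed one link:

```
auto bond0
iface bond0 inet static
    address 10.0.0.2/24
    bond-slaves eth1 eth2 eth3 eth4
    bond-mode balance-rr
    bond-miimon 100
```

Expect some packet reordering with balance-rr; for NFS/iSCSI to a single SAN it is usually an acceptable trade.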
Sounds interesting, although I'm not sure at what point you would rather 'centralize' storage instead of running it locally on every node. I use DRBD between two nodes to enable HA storage, and you could run a shared FS on top of that too. If 2 or 3 of your nodes out of ten provide storage (vms)...
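For reference, a minimal DRBD resource for a two-node setup like mine might look like this; the node names, backing device and addresses are all assumptions:

```
resource r0 {
    device    /dev/drbd0;
    disk      /dev/vg0/storage;
    meta-disk internal;
    on node1 { address 10.0.0.1:7789; }
    on node2 { address 10.0.0.2:7789; }
}
```

A cluster FS (OCFS2, GFS2) on top of dual-primary DRBD is what gets you the shared-FS part.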
None. The Hyper-V role will transform your current setup into one where your Windows 'host' essentially becomes a guest itself, with the Hyper-V hypervisor becoming the host. It is a bare-metal hypervisor with a pretty heavy privileged OS on top of it.
Overclocking comes in at different levels. At home I would want the highest attainable frequency and be okay with any damage it may do.
I think some Supermicro servers reviewed here at STH at one point came with a factory overclock or an overclock setting? I guess these could give you...