Performance differences between All-in-One and bare-metal install


Jay69

New Member
Feb 6, 2012
Has anybody tested the performance difference between a bare-metal install and an All-in-One?

Thanks.
 

gea

Well-Known Member
Dec 31, 2010
DE
I have no exact numbers, but with hardware passthrough of the controllers and disks you can expect the speed degradation compared to similar bare-metal hardware (especially with the same amount of RAM) to be negligible as long as no other VM is running.

If several VMs are running, it depends on the CPU needs of the storage VM, especially when using features such as encryption. But mostly RAM determines performance, so assign as much as possible to storage. Be aware that this RAM is permanently reserved because of the hardware passthrough (it is not available for dynamic memory sharing between VMs).
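
For illustration, the reservation ends up looking roughly like this in the storage VM's .vmx file, here assuming an 8 GB guest with one passed-through controller (the exact key names can differ between ESXi versions, so treat this as a sketch, not a reference):

Code:
memSize = "8192"
sched.mem.min = "8192"
pciPassthru0.present = "TRUE"

The point is that sched.mem.min equals memSize: with a passthrough device, ESXi forces a full memory reservation, so the storage VM's RAM is never shared with other guests.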
 

Jay69

New Member
Feb 6, 2012
So RAM is the main factor. The main reason for running it under ESXi rather than bare metal is so that I could also run other appliances on the same box. Again, my constraint seems to be RAM, so it's back to the drawing board.

 

gea

Well-Known Member
Dec 31, 2010
DE
A SAN (storage VM) with 4-8 GB+ RAM is quite OK (mirrored vdevs, SSDs if performance is critical).
A VMware server with 8-10 GB+ is quite OK.

So All-in-Ones for lab use with 12-16 GB+ work quite well.
My production machines have 32 GB RAM (ESXi 5) or 48 GB RAM (ESXi 4).
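
As a worked example (my own illustrative split, not a fixed rule), a 16 GB lab host might be divided roughly like this:

Code:
storage VM (ZFS/napp-it):   8 GB   (fully reserved, see above)
ESXi hypervisor overhead:  ~2 GB
remaining guest VMs:       ~6 GB
---------------------------------
total host RAM:            16 GB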
 

PigLover

Moderator
Jan 26, 2011
I benchmarked a system both ways: exact same hardware, exact same pools, same combination of SE11 + napp-it, comparing bare metal against an ESXi 4.0 All-in-One. There was no material difference in performance using CIFS/SMB to Windows clients. There was a slight difference when using an advanced NFS client on the Windows side (OpenText/Hummingbird), but not enough to care about (less than 10%). This held even over a 10GbE physical link: no material difference in read or write performance between bare metal and the All-in-One under ESXi.
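
I won't claim my tooling is the only way to measure this; if you want comparable sequential numbers yourself, something as simple as the following from a Unix-like client against the mounted share will do (the mount point is just an example; remount or clear caches between the write and the read so you measure the network path rather than the client cache):

Code:
# sequential write of an 8 GB test file to the mounted share, flushed to disk
dd if=/dev/zero of=/mnt/tank/bench.tmp bs=1M count=8192 conv=fdatasync
# sequential read of the same file back
dd if=/mnt/tank/bench.tmp of=/dev/null bs=1M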

If you've read other threads of mine here you'd see that I ended up leaving it running bare metal, but the only reason was a problem with restart stability of SE11 under ESXi 4.0. That problem is fixed under ESXi 5.0 and should no longer be a reason to avoid the All-in-One. I'm leaving it as-is because I'm in "production" for my home network and have no motivation to change, but based on what I know today, if I were building it again I would choose the All-in-One approach. It would have saved me the cost of deploying a separate server for ESXi-based VMs.
 

Jay69

New Member
Feb 6, 2012
Thanks, that's helpful to know. I'll rebuild my system to run on ESXi 5 and see how it goes. I'd also like to know the hardware configurations that those of you running in "production" are using, and the IOPS you are hitting, if possible. Thanks.
