Proxmox VE and "simple" storage


MrCalvin

IT consultant, Denmark
Aug 22, 2016
87
15
8
51
Denmark
www.wit.dk
I've not been able to get good, stable storage performance with ESX using simple storage like a non-cached RAID controller with SATA drives in RAID-1. Try to copy a file bigger than 5GB to your guest OS and you'll see your transfer speed drop below 20MB/s, caused by severe disk latency issues.
Hyper-V handles this much better.
As far as I can tell this behaviour has something to do with the host OS, in this case ESX, having no storage cache technology, whereas Windows has one.
But what about Proxmox VE?
Can I install a guest VM on a SATA disk capable of 135MB/s and expect continuous, stable storage transfers, without running into the problem I see with ESX?
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
ESXi is a type-1 hypervisor without any special storage or cache technology of its own.
You can either use a hardware RAID controller with cache options, or combine ESXi with a dedicated or virtualized storage appliance.

If you use a ZFS storage appliance, as I have done for many years with my napp-it All-in-One, you can combine the advanced storage and cache options of ZFS with a full-featured NAS/SAN appliance.

On Proxmox you can use ZFS directly but without the comfort of a storage appliance.
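For anyone wanting to try ZFS directly on Proxmox as described above, a minimal sketch might look like this. The pool name `tank`, the disk paths, and the storage ID are placeholders, not anything from this thread:

```shell
# Sketch: create a ZFS mirror (the RAID-1 equivalent) on the Proxmox host
# and register it as VM storage. Disk paths and names are placeholders.
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
zfs set compression=lz4 tank              # cheap win on most workloads
pvesm add zfspool tank-vm --pool tank --content images,rootdir
```

After this, `tank-vm` shows up as a storage target in the Proxmox GUI for VM disks and container root filesystems, with ZFS's ARC caching handled by the host.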
 

sno.cn

Active Member
Sep 23, 2016
211
75
28
MrCalvin said:
I've not been able to get good, stable storage performance with ESX using simple storage like a non-cached RAID controller with SATA drives in RAID-1. Try to copy a file bigger than 5GB to your guest OS and you'll see your transfer speed drop below 20MB/s, caused by severe disk latency issues.
Hyper-V handles this much better.
As far as I can tell this behaviour has something to do with the host OS, in this case ESX, having no storage cache technology, whereas Windows has one.
But what about Proxmox VE?
Can I install a guest VM on a SATA disk capable of 135MB/s and expect continuous, stable storage transfers, without running into the problem I see with ESX?
I'm running Proxmox and ZFS, and getting great disk performance all around. I'm using an Ubuntu LXC container for file sharing to Windows clients, and the only time I see disk slowdowns is when my ZFS cache fills up.

For virtual disks and Windows VMs, writeback cache works best for me, and make sure to set the discard flag if you're thin provisioning, so space is released when you delete files inside the VM.

But I haven't done a lot of testing with non-ZFS storage on Proxmox. I don't recall any issues from my limited experience, though.
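The writeback-cache and discard settings mentioned above map to disk options on the VM config. A sketch, assuming VM ID 100 and a ZFS-backed volume (both placeholders):

```shell
# Sketch: enable writeback cache and discard on an existing virtio-scsi
# disk. VM ID and volume name are placeholders for your own setup.
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback,discard=on
qm config 100 | grep scsi0      # confirm the options took effect
```

The same options can be set per-disk in the GUI under Hardware → Hard Disk; `discard=on` only helps if the guest OS actually issues TRIM (e.g. Windows on a SCSI disk with the virtio drivers).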

gea said:
ESXi is a type-1 hypervisor without any special storage or cache technology of its own.
You can either use a hardware RAID controller with cache options, or combine ESXi with a dedicated or virtualized storage appliance.

If you use a ZFS storage appliance, as I have done for many years with my napp-it All-in-One, you can combine the advanced storage and cache options of ZFS with a full-featured NAS/SAN appliance.

On Proxmox you can use ZFS directly but without the comfort of a storage appliance.
I actually prefer this, where Proxmox itself is the "storage appliance." Of course this doesn't usually account for backups, but I'm using separate Proxmox hosts for that too :D
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
@sno.cn anything special in the Ubuntu LXC for sharing to Windows? I did it all on the Proxmox host OS, but I'm curious about the benefits/negatives of doing it that way. Ideally I want a storage appliance in a container or VM to manage it all, but for now this is working as I learn and experiment more with my setups.
 

sno.cn

Active Member
Sep 23, 2016
211
75
28
@T_Minus I was doing it that way for a long time, but I decided to run Proxmox as close to a clean install as possible, to make deployments simpler, separate security concerns, and minimize the chance of a software issue taking out an entire hypervisor.

One thing that pushed me in that direction is my company's move to containerization for as many things as possible, so if something goes down, Kube can quickly stand up a new instance. I started feeling like everything should work that way. So now, the only things most of my Proxmox hosts do are run ZFS and host VMs/containers. Because passing drives through to a VM and running ZFS from there is ghetto as fuuuu. That was actually one of the big selling points for Proxmox for me, but that's another discussion.
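The pattern described above (ZFS on the host, file sharing from an LXC container) can be sketched roughly like this. Container ID 101, the dataset name, and the mount paths are all placeholders, not details from this thread:

```shell
# Sketch: keep ZFS on the Proxmox host and bind-mount a dataset into an
# LXC container that runs Samba. IDs and paths are placeholders.
zfs create tank/share
pct set 101 -mp0 /tank/share,mp=/srv/share
# then inside the container: install samba and export /srv/share in smb.conf
```

This keeps the storage layer on the host while the container only handles the SMB protocol, so rebuilding the fileserver container doesn't touch the data.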
 

vl1969

Active Member
Feb 5, 2014
634
76
28
@T_Minus > you can actually look at the TurnKey Fileserver container.
It is available directly in the Proxmox templates and runs Debian, just like Proxmox itself.
Looks very nice, and has a web UI for managing it all.


@sno.cn >> care to share how you are setting up the sharing space?
I have been researching a similar setup for a while.
I've been busy and could not try the install on my real server, but I have tested a number of configs in VMs.

My biggest issue is that I have only 1 server, so it MUST be both a VM host and a file server.
My second issue is that I am big on ZFS. I use it for the OS drive (a ZFS RAID-1 setup),
but my data is on a bunch of mismatched disks (a number of 1TB, 2TB and 3TB drives), which ZFS is not fond of, as they are different sizes and odd numbers. I was actually thinking of building out a BTRFS raid10 pool that I can bind-mount to the fileserver container and share.
I'm not a fan of passing the disks through to a VM, as it adds complexity.
Most of my data is large media files, so BTRFS works just fine. In fact I had it running for 3 years as an OMV file server/NAS before a disk crashed.
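The mismatched-disk BTRFS pool idea above might be sketched like this; the device paths and label are placeholders. One caveat worth noting: with very different disk sizes, BTRFS's raid1 data profile typically uses the mismatched capacity better than raid10, which tends to be limited by the smallest devices:

```shell
# Sketch: a BTRFS pool across mismatched disks. Device paths are
# placeholders -- double-check before running, mkfs destroys data.
mkfs.btrfs -L media -d raid10 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkdir -p /mnt/media
mount LABEL=media /mnt/media
btrfs filesystem usage /mnt/media    # shows allocation per profile
```

The mount point could then be bind-mounted into the fileserver container the same way a ZFS dataset would be.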

Any thoughts?