I have the latest Debian x64 on a 32 GB server with swap turned off.
I want processes to use as much RAM as possible and leave the rest for the system cache.
Everything was fine until I noticed that the OOM killer hits my MySQL daemon from time to time.
It looks like this:
mysqld keeps growing and growing until total used memory comes close to 50%, and then the OOM killer strikes.
I also noticed that I have never actually used more than 50% of my 32 GB.
It looks like this is because /proc/sys/vm/overcommit_ratio defaults to 50 and /proc/sys/vm/overcommit_memory defaults to 0, meaning the kernel makes its own heuristic decisions about whether to overcommit memory.
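For reference, under strict accounting (overcommit_memory = 2) the kernel computes the commit limit roughly as CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100, and with no swap and the default ratio of 50 that lands exactly at half of RAM. A quick sketch of the arithmetic, assuming my 32 GB box:

```shell
# CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100
# assuming 32 GiB of RAM, no swap, and the default ratio of 50
mem_kib=$((32 * 1024 * 1024))   # MemTotal in KiB
swap_kib=0                      # SwapTotal in KiB (swap is off)
ratio=50                        # vm.overcommit_ratio
limit_kib=$(( swap_kib + mem_kib * ratio / 100 ))
echo "$(( limit_kib / 1024 / 1024 )) GiB"   # prints "16 GiB"
```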
I've tried setting overcommit_ratio to 99, but it only takes effect together with overcommit_memory = 2.
That means overcommit is disabled entirely, which is bad, because I have lots of processes that declare to the kernel that they need a huge amount of RAM (but actually use very little of it), and I want the kernel to say: sure, no problem, I'll give you 100 GB of RAM when you need it :]
So memory overcommit should stay enabled, as it is by default.
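For reference, these are the knobs involved and what I tried (run as root; the sysctl names are the standard vm.* ones):

```shell
# current settings
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic (the default)
cat /proc/sys/vm/overcommit_ratio    # 50 (the default)

# what i tried: the ratio only takes effect in strict mode
sysctl -w vm.overcommit_ratio=99
sysctl -w vm.overcommit_memory=2     # strict accounting: overcommit disabled

# back to the heuristic default
sysctl -w vm.overcommit_memory=0
```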
What I don't understand is why mysqld can't keep growing until it takes about 31 GB of memory, and only then, after it crosses the line, get killed.
Why does this happen at the roughly 16 GB mark (50% of total)?
I feel that I'd need to add 32 GB of swap, so the system grand total becomes 64 GB and that 50% would mean 32 GB (the RAM I actually have).
But I don't want to use swap at all, so how can I move that 50% point to, say, 99%?
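In case someone wants to reproduce the "move the line to 99%" idea, this is the only combination I've found so far, persisted the usual sysctl way (the trade-off being that strict mode disables overcommit, as described above):

```shell
# /etc/sysctl.conf (or a file under /etc/sysctl.d/) - a sketch
vm.overcommit_memory = 2   # strict accounting: the commit limit is enforced
vm.overcommit_ratio = 99   # limit = swap + 99% of RAM
# the opposite extreme would be vm.overcommit_memory = 1 (always overcommit)
```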
Has anyone had this experience?
When you run "free -g", does your "-/+ buffers/cache" used value show more than half of the RAM you have?
I've never seen anything above 15, so my system is actually using only 16 GB and never more.
Yes, I know the other 16 GB is also in use by the cache, but I'd like to sacrifice the system cache to processes like mysqld.
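For comparison, this is how I read the "-/+ buffers/cache" number; the figures below are illustrative, matching roughly what free -g shows on my box:

```shell
# sample "Mem:" row from `free -g` (illustrative numbers, in GiB):
#       total  used  free  shared  buffers  cached
set --     32    31     1       0        2      14
total=$1; used=$2; buffers=$5; cached=$6
# "-/+ buffers/cache" used = used - buffers - cached
echo "$(( used - buffers - cached )) GiB"   # prints "15 GiB"
```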
Any ideas?