my.cnf changing on its own
January 28, 2020 7:56 AM Subscribe
I'm trying to change my max_allowed_packet but it keeps changing back and I have no idea why. And changing it might not even be the answer to the problem I'm trying to solve!
I really have two problems here. The first is that my changes to max_allowed_packet keep reverting, and I have no idea what is doing it.
As will probably become clear, I'm far from an expert, but the hosting company's answer to every problem is that we need to buy more RAM - even when the problem was actually a failing disk drive. I had to fight so hard to get them to accept that backing up 120GB shouldn't take 18+ hours just because we don't have SSDs.
The second is that pretty much every day mysql gets restarted because the server goes OOM. I looked into it, and our memory usage is fine until the nightly backup with mysqldump; then mysql goes from using about 10% of memory to 50-60%. Restarting mysql clears it and usage goes back down to normal, but that caused issues with a cron job we have running and wasn't really an ideal solution anyway.
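A minimal way to put numbers on that jump, assuming the server process is named mysqld (the log path here is arbitrary): run something like the loop below in screen or tmux across the backup window, and the timestamps will show exactly when the resident memory climbs.

# log mysqld's resident memory once a minute
while true; do
    echo "$(date '+%F %T')  $(ps -o rss= -C mysqld | awk '{printf "%d MB", $1/1024}')"
    sleep 60
done >> /var/tmp/mysqld-mem.log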
So I googled the problem and found several bug tickets reporting the same thing, but the response was always the same: it's working as intended, and mysql will totally give that memory back if the system needs it. When the reporter points out that it isn't doing so, and that the server is going OOM and shutting down mysql, they just say "can't replicate".
So I thought I'd try to limit the amount of memory mysql can claim. I ran mysqltuner.pl and it warned of dangerously high memory usage: mysql was configured to use a maximum of 40GB... out of the 6GB we have on the server. So I reduced the max connections and the max allowed packet size so that the configured maximum wouldn't exceed the memory we actually have. All good, I thought. Except I came back to check a few hours later and the max allowed packet size had reverted to 256M in my.cnf - some process had changed the file and restarted mysql.
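For what it's worth, a quick way to compare what the running server actually has in effect against what the file on disk says (assuming the usual /etc/my.cnf location on a CentOS/cPanel box). If the two agree at 256M, the edit really was reverted and re-applied with a restart; if they disagree, the file was rewritten but mysql hasn't picked it up yet.

# values currently in effect in the running server
mysql -NB -e "SHOW VARIABLES WHERE Variable_name IN ('max_allowed_packet','max_connections')"
# values currently in the config file on disk
grep -E 'max_allowed_packet|max_connections' /etc/my.cnf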
I think the server is running CentOS 7 with WHM/cPanel, but I couldn't find anything about cPanel changing my.cnf either.
Honestly, we hadn't even noticed this problem until I started poking around because of the drive failure - I reported the issue with the backups taking so long and was just fobbed off with being told we needed to buy more RAM. So I'd like to solve this so that if we have problems in the future they'll actually look into them instead of blaming our lack of RAM.
Response by poster: The file changes definitely took. Then within a couple of hours, the file was changed - only the max_allowed_packet; the other changes I made to the file remained - and then mysql was restarted. This happened twice (I stopped trying to change it after that). I may have to keep changing it and see how long it takes to change back.
posted by missmagenta at 9:18 AM on January 28, 2020
I ran mysqltuner.pl and it warned of dangerously high memory usage: mysql was configured to use a maximum of 40GB... out of the 6GB we have on the server. So I reduced the max connections and the max allowed packet size so that the configured maximum wouldn't exceed the memory we actually have
Maybe I'm misunderstanding something (and I don't work with mysql so much these days) - but you had a 40GB memory footprint and you reduced it to below 6GB just by tuning max connections and max allowed packet size?
I would expect most of your memory to be used by buffer pools - how's your innodb_buffer_pool_size?
posted by each day we work at 9:33 AM on January 28, 2020
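One way to check the setting asked about above, assuming the mysql client can connect as root from the shell (the raw value is in bytes, so this converts it to GB):

mysql -NB -e "SELECT ROUND(@@innodb_buffer_pool_size / 1024 / 1024 / 1024, 2) AS buffer_pool_gb"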
When the my.cnf is getting overwritten, what's the modification time? Is it some constant value in the past (e.g. 23:20 Nov 23 2018), or is it newer than when you made your change? If it's older, then the file is likely getting copied in by something (puppet, ansible, rsync). If it's newer, then it's getting updated in place, but hey, now you know when it's getting updated, and that's a lot more information than you'll have if it's being copied over.
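A quick way to see both timestamps at once, assuming the file lives at /etc/my.cnf:

stat -c 'modified: %y   status changed: %z' /etc/my.cnf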
If the time is old, rig up something simple to run overnight, like:

#!/bin/bash
WATCHFILE=/etc/mysql/my.cnf
MYLOG=/var/tmp/md5watch
for i in {1..1440} ; do
    /bin/echo -ne "`date`\t" >> $MYLOG
    md5sum $WATCHFILE >> $MYLOG
    sleep 60 || exit 1
done

The above script will generate a hash once a minute for 24 hours. Look at the log file to see when there are any changes to my.cnf.
Once you have the time of the change (or changes!), in addition to crontabs as pwnguin mentions, take a look at the security logs for ssh logins or something that might sudo, and see if there's any correlation with the file changes.
However, anytime I've needed to lower the memory footprint, I was usually looking at the global myisam sort buffers or key buffers, or the innodb buffer pool settings. I've had to change max connections sometimes, but have never bothered to touch max_allowed_packet beyond the few times I needed to raise it to get some backups to run a bit more efficiently in a bit of a special snowflake setting. Which is to say: yes, you don't want your my.cnf getting overwritten, but you might need to look at different settings for the actual problem of running out of memory.
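For context, the "configured maximum" that tuning scripts report is roughly the global buffers plus max_connections times the per-connection buffers, which is why lowering max_connections moves the number so much. A sketch of that arithmetic (only an approximation of what mysqltuner does, and the variable list isn't exhaustive):

mysql -NB -e "SELECT ROUND(
    ( @@innodb_buffer_pool_size + @@key_buffer_size
    + @@max_connections * ( @@sort_buffer_size + @@read_buffer_size
                          + @@read_rnd_buffer_size + @@join_buffer_size
                          + @@thread_stack ) )
    / 1024 / 1024 / 1024, 2) AS worst_case_gb"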
And yes, mysql will give up memory in some situations, but if you're hitting OOM conditions that is abnormal and you need some tuning.
posted by nobeagle at 12:01 PM on January 28, 2020
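If correlating timestamps doesn't turn up the culprit, the audit subsystem can name the offending process directly. A minimal sketch, assuming auditd is installed and running (it is by default on CentOS 7), the file lives at /etc/my.cnf, and using an arbitrary key name:

# log any write or attribute change to the file, tagged with a key
auditctl -w /etc/my.cnf -p wa -k mycnf-watch
# after the file has reverted again, see which executable and user touched it
ausearch -k mycnf-watch -i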
Two possibilities come to mind. Either your file is never actually being written to (unlikely to happen without an error shown to you in the text editor), or there's an automated process like puppet / chef / ansible ( / cpanel?) managing my.cnf. You can make the file immutable (chattr +i) to prevent those processes from changing it.
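A minimal sketch of the immutable-flag approach (root required; be aware that whatever process is managing the file may start logging errors once it can no longer write to it):

chattr +i /etc/my.cnf   # set the immutable flag; even root can't modify the file until it's cleared
lsattr /etc/my.cnf      # confirm the 'i' flag is present
chattr -i /etc/my.cnf   # clear it again when you genuinely want to edit the file yourself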
As for finding the rogue process, start by looking at as many crontabs as possible: /etc/crontab, all the files in /etc/cron.d/ and the /etc/cron.* directories, and then all the user crontabs. Maybe also whatever cpanel does in this regard. If nothing crops up, then you want to review logins to the system in case an automated external entity is responsible for this chicanery.
posted by pwnguin at 8:49 AM on January 28, 2020
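To enumerate everything mentioned above in one pass, a rough sketch (the per-user crontab loop needs root, and CentOS writes ssh/sudo events to /var/log/secure):

# system-wide cron entries
cat /etc/crontab /etc/cron.d/* 2>/dev/null
ls /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly
# every user's personal crontab
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null | sed "s/^/$u: /"
done
# recent logins and ssh/sudo activity
last | head -n 20
grep -E 'sshd|sudo' /var/log/secure | tail -n 50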