I spent way too long failing to find the right search phrase, so maybe somebody here knows something to get me started.
My server system is writing a *lot* of output to a file via the Toothpick package's FileLogger class. I mean, like a GB/hr. I can turn it down but then I don't see the stuff that explains why it went boom. Evidently naively writing to a file and watching it with `tail -f` isn't the best idea here but all I'm spotting on the google is lots of 'use logrotate' stuff that doesn't seem to be applicable at all.
Is there some technique I can use that lets me write out stuff and watch it without it filling up my little SSD? Some way to specify a file that has a maximum size and that dumps the older stuff as more is added? Some program I can pipe data to that keeps a limited set of messages?
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Useful random insult:- Calls people to ask them their phone number.
Hi tim,
In what way is log rotation not applicable to your problem? It could compress or delete older log data regularly...
About an external program, rsyslog comes to mind (has multiple sources and sinks for log data and can also do log rotation). There may be smaller solutions that I just don't know.
https://www.rsyslog.com/doc/v8-stable/tutorials/log_rotation_fix_size.html
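For reference, that tutorial boils down to an rsyslog output channel with a size limit, roughly like this (the path, size, and script name are the tutorial's placeholder examples, not anything specific to Toothpick):

```
# rsyslog.conf -- rotate when the file passes ~50 MB
$outchannel log_rotation,/var/log/log_rotation.log,52428800,/home/me/log_rotation_script
*.* :omfile:$log_rotation
```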
Kind regards, Jakob
On Sat, Dec 10, 2022 at 02:03, tim Rowledge tim@rowledge.org wrote:
On 2022-12-10, at 5:28 AM, Jakob Reschke jakres+squeak@gmail.com wrote:
Hi tim,
In what way is log rotation not applicable to your problem? It could compress or delete older log data regularly...
I may have misunderstood what it does; wouldn't be the first time. I get the impression it is intended for daily cleanup of system logs that don't grow as fast as my trace log does. I'm generating a gigabyte or more per hour, and I don't (necessarily) want to keep it; I just want a window of maybe 10-50 MB of the most recent output that doesn't grow beyond some set limit.
About an external program, rsyslog comes to mind (has multiple sources and sinks for log data and can also do log rotation). There may be smaller solutions that I just don't know.
https://www.rsyslog.com/doc/v8-stable/tutorials/log_rotation_fix_size.html
Hmm, that reads like another slow-growing log solution.
What would be really helpful would be something to write to that behaves like a bucket of a certain size that gets filled from the bottom, with the excess overflowing from the top. Having a way to filter incoming text would be nice too; sometimes you don't want to see *all* the 'helpful' output. What is frustrating is that I feel sure somebody is going to tell me of some completely obvious solution that is incredibly easy to find once you know what it is...
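That bucket could be sketched as a small filter program to pipe into; this is Python rather than Smalltalk, purely as an illustration, and the file name and limits below are made up. It keeps a bounded window of the most recent lines and periodically rewrites a small file you can inspect:

```python
import sys
from collections import deque

MAX_LINES = 10_000       # capacity of the "bucket"
OUT_PATH = "recent.log"  # hypothetical window file to look at
FLUSH_EVERY = 100        # rewrite the window every 100 input lines

def bucket(lines, max_lines):
    """Yield a bounded window of the most recent lines; as new lines
    fill the bucket from the bottom, the oldest overflow from the top."""
    window = deque(maxlen=max_lines)  # deque discards old items itself
    for line in lines:
        window.append(line)
        yield window

if __name__ == "__main__":
    # usage sketch:  myserver 2>&1 | python3 bucket.py
    for count, window in enumerate(bucket(sys.stdin, MAX_LINES), start=1):
        if count % FLUSH_EVERY == 0:
            with open(OUT_PATH, "w") as f:
                f.writelines(window)
```

Note that `tail -f` does not follow a file that is rewritten from scratch, so plain `tail recent.log` is the way to peek at the window.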
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Cloister: a pretentious clam
Tim,
Rotate between log.1, log.2 … log.n. Once log.x reaches a certain size, progress to the next in the sequence, wrapping back to log.1 after log.n. If the next log already exists when you progress, delete it.
That way you have a maximum disc footprint, you keep the most recent n GB of logs, and it's super simple. You can code this up in your sleep. forceNewFile: (sp?) is your friend…
_,,,^..^,,,_ (phone)
On Dec 10, 2022, at 2:54 PM, tim Rowledge tim@rowledge.org wrote:
Hi,
Logrotate is designed to do just what you are asking.
On my Pi install (Raspberry Pi OS, 64-bit, from September) logrotate is already installed. It will most likely be installed on almost any Linux-type system unless it is super small.
If you look at the directory /var/log you can see what it is doing, and the config files that make that happen live in /etc/logrotate.conf and /etc/logrotate.d/.
Now, the default is to run once a day, and you need to run it more often. Also, rather than daily or weekly rotation, you would like it to rotate at 10 MB, 20 MB, or whatever.
What I would personally recommend is that you not run this as root but as the user who is writing your log file. Just let the current logrotate keep running as it is, and run yours separately.
Write a config file, say, TimRowledgesExcellentAdventure.conf.
It looks like this (more or less; go read the man page):
/data/toothpick/bloodytoothpicklog.log {
    missingok
    rotate 10
    size 10M
    nocompress
    nomail
}
There are lots of other options but I think this does what you want.
- missingok - don't complain if there are no log files.
- rotate 10 - keep the last 10 rotated files.
- size 10M - rotate, regardless of the schedule, when the file is larger than 10 MB.
- nocompress - do not compress the rotated files.
- nomail - do not email anything.
Then create a crontab entry for that user which runs this as often as you want (hourly, or even every minute), with the command line
logrotate -s /data/toothpick/logrotate.state /data/toothpick/TimRowledgesExcellentAdventure.conf
NB: This will only work if Toothpick closes and re-opens the file on each write, or at least every so often. If it keeps the file open then plain rotation won't do what you want.
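For the keeps-the-file-open case, logrotate's copytruncate directive is the usual workaround: it copies the log aside and then truncates the original in place, so the writer can keep its file handle (at the cost of possibly losing a few lines written during the copy). The config above would then become:

```
/data/toothpick/bloodytoothpicklog.log {
    missingok
    rotate 10
    size 10M
    nocompress
    nomail
    copytruncate
}
```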
cheers
bruce
On 2022-12-10T02:03:36.000+01:00, tim Rowledge tim@rowledge.org wrote:
One guy I worked with used to just send literally everything to Elasticsearch.
Keith
On 10 Dec 2022, at 1:03, tim Rowledge wrote: