Disk space on box3 is getting dangerously low. At the moment the filesystem is 94% full, with 3.8GB of free space. Lately usage seems to increase by 1% every 2 or 3 days.
Primary offenders seem to be
/var/lib/jenkins/ 33GB
/home/ssdotcom/ 18GB
I hope one or both of you can find something to delete.
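For reference, something along these lines (a sketch; GNU du and sort assumed) will rank the biggest directories and confirm where the space is going:

# Summarize the two known offenders, then rank everything one level below /
du -sh /var/lib/jenkins /home/ssdotcom
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -15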
Ken
That is about right for ssdotcom. I'm doubtful it's responsible for the daily expansion.
Most of the variation in disk usage is related to our Jenkins jobs. This is to be expected, but it does mean that we will need to tend to the garden and make sure that weeds do not take over. I have two suggestions:
1) Every Jenkins job has a description that is set up when we configure the job. The description should (of course) explain the purpose of the job, but it should also have some sort of tag line to identify the person who is responsible for maintaining that job. For example, the description for the InterpreterVM job includes this:
"This Jenkins project is maintained by Dave Lewis (lewis@mail.msen.com)"
2) All of the jobs consume a fair amount of disk space, and it is pretty easy to let this get out of control. I think this can usually be managed in the Jenkins project configurations, so we need to keep an eye on the high-usage jobs and fix up their settings accordingly.
Here is the current disk utilization for our Jenkins jobs:
jenkins@box3-squeak:~/workspace$ du -s *
204660  CogVM
260320  ExternalPackage-AndreasSystemProfiler
262948  ExternalPackage-Control
286952  ExternalPackage-FFI
392016  ExternalPackage-FileSystem
456600  ExternalPackage-Fuel
427560  ExternalPackage-Magma
256144  ExternalPackage-Nebraska
256184  ExternalPackage-Nutcracker
263092  ExternalPackage-OSProcess
228668  ExternalPackage-Phexample
256732  ExternalPackage-Quaternion
255692  ExternalPackage-RoelTyper
417300  ExternalPackages
414608  ExternalPackages-Metacello
255580  ExternalPackage-SqueakCheck
412592  ExternalPackages-Squeak4.3
338264  ExternalPackages-Squeak4.4
256288  ExternalPackage-Universes
255752  ExternalPackage-WebClient
256268  ExternalPackage-XML-Parser
444996  ExternalPackage-Xtreams
387160  ExternalPackage-Xtreams-FileSystem
256144  ExternalPackage-Zippers
384448  InterpreterVM
205396  LatestReleasedVM
5370500 ReleaseSqueakTrunk
127408  Squeak 64-bit image
766392  SqueakTrunk
291772  SqueakTrunkOnBleedingEdgeCog
412972  SqueakTrunkOnInterpreter
419892  SqueakTrunkPerformance
At the moment, the ReleaseSqueakTrunk job is using a lot of space, and this is mostly due to saved images in its ./target directory. Most likely we can purge out some of the older images to free up some space.
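If we go that route, something along these lines could prune them (a sketch; the Squeak-*.zip pattern and the 14-day retention are assumptions):

# Delete saved release images older than two weeks from the job's target directory
find /var/lib/jenkins/workspace/ReleaseSqueakTrunk/target \
    -maxdepth 1 -name 'Squeak-*.zip' -mtime +14 -print -delete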
Dave
With the exception of ReleaseSqueakTrunk, the CI jobs _should_ be fairly careful with disk space. One way they'd leak disk space is through target/package-cache, which will tend to accumulate MCZs over time. The SqueakTrunk* and ExternalPackage* jobs are set up to work from a blank slate, so they should always be in a position to have their workspaces wiped.
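A sketch of clearing those caches across every workspace, assuming the standard /var/lib/jenkins/workspace layout:

# Remove accumulated Monticello package caches; the jobs re-fetch MCZs on demand
find /var/lib/jenkins/workspace -type d -name package-cache \
    -prune -exec rm -rf {} +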
frank
The issue is that we are running out of disk space and need to take action.
In the project configuration for ReleaseSqueakTrunk, the following option is *not* selected:
Discard all but the last successful/stable artifact to save disk space
Should this be changed?
The /var/lib/jenkins/workspace/ReleaseSqueakTrunk/target directory is using the majority of the disk space for this job, and it looks to me like most of this consists of transient files that could be deleted (or compressed).
I don't want to touch anything here without your approval.
Thanks, Dave
> In the project configuration for ReleaseSqueakTrunk, the following option is *not* selected:
> Discard all but the last successful/stable artifact to save disk space
> Should this be changed?
I don't know. The job's set up to consider _all_ Squeak-*-*.zip files as part of the artifact, which is itself a problem. I suspect that if this switch is flagged, you won't be able to go to, say, ReleaseSqueakTrunk 22, and get the artifact. I _think_.
> The /var/lib/jenkins/workspace/ReleaseSqueakTrunk/target directory is using the majority of the disk space for this job, and it looks to me like most of this consists of transient files that could be deleted (or compressed).
What are the transient files? The only things possibly worth keeping are the Squeak-*-*.zip files. The rest should be interim build products (TrunkImage.* and so on), XML files for test/performance coverage, or Squeak cruft like update logs, package-cache and so on. All of those can go.
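In other words, something like this sketch would do it (everything in target/ goes except the release zips):

# Keep only the versioned release zips; delete the rest of target/
cd /var/lib/jenkins/workspace/ReleaseSqueakTrunk/target \
  && find . -mindepth 1 -maxdepth 1 ! -name 'Squeak-*-*.zip' -exec rm -rf {} +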
> I don't want to touch anything here without your approval.
You have my permission to wipe any of "my" workspaces at any time: if they break because of a wipe, I didn't do my job properly! :)
frank
OK, if we actually get dangerously close to running out of disk space, one of us can delete /var/lib/jenkins/workspace/ReleaseSqueakTrunk/target to relieve the problem.
But we need to deal with this in a sustainable way that does not require manual intervention. In the case of ReleaseSqueakTrunk, possibly this can be done by adding one more build step to the job that cleans up files when the job is complete.
Right now you have these two build steps:
bundle install
DEBUG=1 bundle exec rake release
I don't know anything about ruby or rake but I am guessing that you might be able to add one more build step that might look something like this:
bundle exec rake cleanup
Or maybe it can be just a shell command that cleans up unneeded files in the workspace/ReleaseSqueakTrunk/target directory. I see that there is a Cog VM executable in that directory, so I'm assuming that you do not want it completely wiped out. But perhaps the rest of the files can be purged as part of the job steps.
I'll be happy to help with a shell command if you like, but I'm not familiar enough with rake and ruby to know if that is the right thing to do in this case. If there is some way that you can do this in the ruby scripts, that would probably be a better approach.
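For example, the extra build step might be no more than a small shell script along these lines (hypothetical; the file names come from your earlier list, and $WORKSPACE is the variable Jenkins sets for each job):

cd "$WORKSPACE/target" || exit 0   # nothing to clean if target/ is absent
rm -rf package-cache               # accumulated MCZ downloads
rm -f TrunkImage.* *.xml           # interim image/changes files and test reports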
Thanks :-) Dave
> I don't know anything about ruby or rake but I am guessing that you might be able to add one more build step that might look something like this:
> bundle exec rake cleanup
Close: "bundle exec rake clean release" will trash the target/ directory. I don't do that by default because that means the task will compile an Interpreter VM. Or, if you prefer, adding "clean" trades CPU/load for disk space. (We need to compile an Interpreter VM because build.squeak.org's OS can't run a new Interpreter for lack of a recent glibc.)
> Or maybe it can be just a shell command that cleans up unneeded files in the workspace/ReleaseSqueakTrunk/target directory. I see that there is a Cog VM executable in that directory, so I'm assuming that you do not want it completely wiped out. But perhaps the rest of the files can be purged as part of the job steps.
Any Cog VMs should be in target/cog.rNNNN/ directories, which are pulled in on demand by running rake. (A Rakefile is exactly like a Makefile, in a nicer language. In particular, lib/squeak-ci/build.rb has assert_interpreter_vm() and assert_cog_vm() methods for building/downloading VMs).
frank
> Close: "bundle exec rake clean release" will trash the target/ directory. I don't do that by default because that means the task will compile an Interpreter VM. Or, if you prefer, adding "clean" trades CPU/load for disk space. (We need to compile an Interpreter VM because build.squeak.org's OS can't run a new Interpreter for lack of a recent glibc.)
I used checkinstall to make a Debian package for the interpreter VM, and installed it on the system as package "squeakvm". Sorry if I did not mention this earlier!
$ dpkg --list squeakvm
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name       Version      Description
+++-==========-============-===============================
ii  squeakvm   20131020-1   Standard Squeak interpreter VM
That means that you can run /usr/local/bin/squeak for an interpreter VM, and you do not need to compile it for the individual Jenkins jobs.
If you can change the ReleaseSqueakTrunk job to use /usr/local/bin/squeak, and then update the build step to "bundle exec rake clean release", you should see the disk usage for this job go back down to a normal level.
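A quick sanity check from a shell, if you want to confirm the packaged VM is the one the jobs will pick up:

which squeak                         # should resolve to /usr/local/bin/squeak
dpkg --status squeakvm | head -n 3   # confirm the package is installed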
Dave
> If you can change the ReleaseSqueakTrunk job to use /usr/local/bin/squeak, and then update the build step to "bundle exec rake clean release", you should see the disk usage for this job go back down to a normal level.
That's a more complicated change than I can make today. I wiped the workspace for the moment, but seeing as we have multiple slaves, I'm not sure what "wipe current workspace" actually means. At any rate, the latest ReleaseSqueakTrunk job's running on one of Tony Garnock-Jones' machines.
This job's target/ directory ought to have used around 150MB, at an eyeball estimate.
frank
As of today, 11/25/2013:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       60G   55G  2.0G  97% /
There is a stuck test/image_test.rb process, and it appears the ReleaseSqueakTrunk and ExternalSqueakPackages jobs are stuck as well.
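A sketch for spotting them, assuming they run as the jenkins user (anything with an elapsed time measured in days is a candidate for killing):

ps -u jenkins -o pid,etime,cmd   # list elapsed time and command line per process
# kill <pid>                     # try SIGTERM first
# kill -9 <pid>                  # only if the process refuses to die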
Ken
This is probably going to be the last time I harp on this, at least within the small box-admins community. I'm going to be out of town and likely offline for the next 2 days and not able to address server issues.
As of today (12/6/2013, 16:30UTC):
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       60G   55G  1.1G  99% /
That we have not yet filled the filesystem is (to my knowledge) largely due to a slowing in the growth of build.squeak.org and squeaksource.com, as well as the fact that I managed to free up somewhat more than 0.5GB of space by removing a large package or two and deleting the package archive. Anything more in this vein is likely to be more invasive.
I really need the community interested in build.squeak.org and squeaksource.com to make some serious choices. Something has to give or both of these services will soon cease to function.
While I'm at it, I would like to remind the build.squeak.org group that the Jenkins version on build.squeak.org was frozen at 1.517 due to a breaking change that occurred in 1.518. That was in early June this year. Many releases have occurred since, and the current easily installable release is 1.541.
Ken
Thanks Ken,
I will look at it as soon as I get home, about 8 hours from now.
Frank,
There's not much I can do on the squeaksource.com side other than move the repository to another box (which is not easy to do). Short term, we need to tidy up the build.squeak.org jobs where we can.
Dave
Thanks Dave,
I agree that it is probably easier to find some space to clear up under build.squeak.org but I think the community as a whole has to give some serious thought to the future of squeaksource.com.
It seems to me that at least a couple of GB could be cleared up by archiving some old dead projects (I have a couple I should do that with, but I doubt they amount to much) and migrating away more active ones. Ideally the owners of those projects would stand up and volunteer, thereby avoiding a sequester-style across-the-board trim. Frankly, I think squeaksource.com must be closed to all writes and serve a purely archival function at best, given the resources we currently have.
If someone else doesn't do so I may call for volunteers to archive and move existing projects myself next week on squeak-dev. I'm hoping we can at least come up with some space savings in the short term that will give us some weeks to think and discuss.
Regarding moving squeaksource.com to another server: we do have another server where there is more space but that is technically designated for other purposes. The sad fact is that there is not a lot more space available on box3 and box4 combined than we have on the old box2 server.
Server  Total HD Size
Box2    146GB   (aka squeak.org, lists.squeakfoundation.org, etc.)
Box3     60GB
Box4    103GB
The new additions build.squeak.org and squeaksource.com currently add up to about 54GB. I think you will agree that the math here does not work out.
I guess my point is that moving squeaksource.com to any other existing server we have is only a temporary solution.
Ken
Would some kind soul add the following line in /home/frank/.ssh/authorized_keys?
My old key's lost in a hard drive accident, so this key is for my new laptop:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAgEAlK3Le/fY6l5nZsrdEqfgnoF1NG8lDBUjX7l2UAYRK1aM3B54Rx6j/X747WZm0AOXmQiHWAsKHrpXOmCnlmW8/+Nuy6Nk1x7WD5C1eFJnINVVvN+AfxQ6spUUpJSPMt3cQNO1BSDgjm06nSlpesSAwSKuJhVvgTqHSHWA42JfKdApLQLZZkYpkmwvz9WCyGNsI5Yb6no4JafSscts3iefxT2vWfJePisQV0NOeWSwEc65YF0j4kSvsPA6+xvX4u5fA0VAMMB7hFVUihdMfu5ELQOdxCxc1Xa8aAjskRWN6RFbykBBMlfO29RUvfFq1YXvNW2e1IsOqpJvpMWadW1xJTEOcrVNFo/6xnxL8BaoxCIEoR9JuzUcCpSmjfT/4EwrZDoVyp4GnoJJJ6CtaetfoRDx7BIcIRon0al5knHb8KgpsNlTBOROwupjLG6Nm6JaHASJpBt4xpFZ+5VWqpiv3qQ/zjSYW8fTje3xSR6cgaY9mT+LwUOeDm0VSQLtVzo1yubJg3OJhr42FVifdamXc3EVQ5NDLT8ywkj2IXq46eLQEH/A1y4Sv+UjI/YuS68aol2dLMNlnaUdzDoU9wK/weAhR7h/SpqQrQ1IYXi1vT5kl4qT7GP8OHfVa/Rn+UXy2vL8ISGoq/dOWb2pTv7TKc/bJUHbMg77aAzRuI/YDYc= frank-atuan
(This is not a security violation: this is the _public_ part of the keypair.)
I'll see what I can do about space.
frank
Done. I left the old key in, which you probably should remove. FYI, your username is frankshearar, not simply frank.
Ken
On Fri, Dec 06, 2013 at 11:40:54AM -0600, Ken Causey wrote:
> I agree that it is probably easier to find some space to clear up under build.squeak.org but I think the community as a whole has to give some serious thought to the future of squeaksource.com.
I fully agree that the community should give some consideration to how squeaksource.com should be managed moving forward. But please do not portray this as a disk space problem. If that is the problem, then I'll pay for the disk space myself, just tell me where to send the check.
The disk utilization problem is due to unnecessary accumulation of build artifacts from Jenkins jobs. It looks to me like most of this is accumulating by accident rather than by intent, and this can probably be easily fixed with some changes to the job configurations, with no loss of useful data from the jobs themselves. Clearly this needs to be addressed anyway, because if you doubled our available disk space we would be having the same discussion 12 months from now. So we need to fix it.
I'll try to get with Frank over the weekend and see if we can clean up some easy stuff (Frank, I am "dtlewis290@gmail.com" on gmail, so I'll try to connect with you there).
Meanwhile I deleted a few unnecessary backup files under ~ssdotcom, which gives us another 1% free disk space to keep things going for another day or so ;-)
Dave
I mailed Ken separately, but the time zone happy conjunction must have just passed. I'd have solved the problem (wipe out the ReleaseSqueakTrunk/target/ directory (5.9GB)) except that I can't sudo because I don't know the password for the account (because I don't think I ever actually set it)! If some kind soul can change it and let me know the password, I'll (a) change it to something only I know and (b) wipe out the directory causing the problem.
As a separate step, we can think about how to both produce versioned artifacts (i.e., zipfiles with versions in their file names) and not eat all the disk space.
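One low-tech sketch (the retention count of three is arbitrary): a final build step that keeps only the newest versioned zips and deletes the rest:

# Keep the three newest versioned artifacts, delete anything older
cd "$WORKSPACE/target" \
  && ls -t Squeak-*-*.zip 2>/dev/null | tail -n +4 | xargs -r rm -f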
frank
Hi Frank,
Cool, I think we just came to the identical conclusions :) I was just about to send the following email to you before I read this, so here is what I was going to say:
I think we can pretty easily free up a bunch of disk space. It turns out that our SqueakTrunk job is currently using 44% of the entire disk space on box3, and a lot of that can probably be freed without harming anything for the SqueakTrunk job itself.
Here are some things I think we can do, but I want to run it by you before actually changing anything:
1) The build artifacts are TrunkImage.changes, TrunkImage.image, TrunkImage.manifest and TrunkImage.version. The image and changes files take most of the space, so we could add a build step to compress them like this:
$ zip TrunkImage.zip TrunkImage.changes TrunkImage.image TrunkImage.manifest TrunkImage.version
  adding: TrunkImage.changes (deflated 77%)
  adding: TrunkImage.image (deflated 54%)
  adding: TrunkImage.manifest (deflated 54%)
  adding: TrunkImage.version (stored 0%)
Then we can specify TrunkImage.zip as the build artifact. This will save a lot of disk space in the future.
2) The Jenkins job is saving all of the build artifacts since we began running the job. I'm not sure if that's what you want it to do, but if the old artifacts are not needed, then we can change the job configuration. In the "Archive the Artifacts" section of the job configuration, there is a setting for "Discard all but the last successful/stable artifact to save disk space". That might be more aggressive than we want, but there must be some setting that would let us trim the archives down to what we really need.
3) If we run out of disk space and need to take emergency action, we can just compress the older build artifacts from the unix command line (see the sketch after this list). It's probably not good to do this outside of Jenkins tools, but at least we would not lose the actual data, and it would free up a lot of disk space right away.
4) If we don't need all of the historical artifacts, and if we can't figure out how to trim them down through Jenkins job configurations, then I can delete the older ones from the unix command line.
5) Not directly related to disk space, but we should probably also enable the "Abort the build if it's stuck" option under "Build Environment". We can set it to time out after 30 minutes or so, and I think that might cure our problem with stuck ruby and squeakvm processes.
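For point 3 above, the emergency compression might look something like this (a sketch; jobs/*/builds/*/archive is the standard Jenkins layout, but worth verifying on our 1.517 before running anything):

# Gzip archived images/changes from old builds in place;
# they stay downloadable from Jenkins, just as .gz files
find /var/lib/jenkins/jobs/SqueakTrunk/builds -path '*/archive/*' \
    \( -name '*.image' -o -name '*.changes' \) -exec gzip -v {} \;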
Dave
Hi guys -- First, Ken, thanks again for setting a reference example of how we need to be thinking about our systems from an admin-support perspective. Taking a proactive tack is not just excellent service; there are many intangible benefits like image and community constitution.
I totally agree with Dave. Pruning SqueakSource may be something we want to consider in the future, but not now, and never simply because we're low on disk space. First, because pruning empty projects won't recover anything significant; second, because we're in an "archival preservation" mode right now with SqueakSource -- not a mode for making significant changes to it.
How many times have we said, "disk space is cheap"? We cannot go back on that now! :)
Frank wrote:
> As a separate step, we can think about how to both produce versioned artifacts (i.e., zipfiles with versions in their file names) and not eat all the disk space.
I thought about it 4 years ago and made an object model to take care of it. The problem with zip files is how wasteful they are. When someone changes one single method of the Morphic package, the other 1K definitions (however many there are) are duplicated in a new zip (mcz) file. By contrast, the object model refers to the same canonicalized MCDefinition instances across Versions, adding only one new MCDefinition to the bulk of the model in that example.
The result is that the redundant Magma-backed copy of source.squeak.org consumes less than 1/4th the space of the original file-based version. A Magma-backed copy of squeaksource.com would take about 12GB of space.
For now, though, Frank has the ability and responsibility to trim up the jenkins stuff. Thanks Frank.
PS -- For interest, I just kicked off a bulk-load of the entire squeaksource.com repository into Magma to see how much space it will take...
On Sat, Dec 7, 2013 at 9:37 AM, David T. Lewis lewis@mail.msen.com wrote:
On Sat, Dec 07, 2013 at 10:17:06AM +0000, Frank Shearar wrote:
I mailed Ken separately, but the time zone happy conjunction must have just passed. I'd have solved the problem (wipe out the ReleaseSqueakTrunk/target/ directory (5.9GB)) except that I can't sudo because I don't know the password for the account (because I don't think I ever actually set it)! If some kind soul can change it and let me know the password, I'll (a) change it to something only I know and (b) wipe out the directory causing the problem.
As a separate step, we can think about how to both produce versioned artifacts (i.e., zipfiles with versions in their file names) and not eat all the disk space.
frank
Hi Frank,
Cool, I think we just came to the identical conclusions :) I was just about to send the following email to you before I read this, so here is what I was going to say:
I think we can pretty easily free up a bunch of disk space. It turns out that our SqueakTrunk job is currently using 44% of the entire disk space on box3, and a lot of that can probably be freed without harming anything for the SqueakTrunk job itself.
Here are some things I think we can do, but I want to run it by you before actually changing anything:
- The build artifacts are TrunkImage.changes, TrunkImage.image,
TrunkImage.manifest and TrunkImage.version. The image and changes files take most of the space, so we could add a build step to compress them like this:
$ zip TrunkImage.zip TrunkImage.changes TrunkImage.image TrunkImage.manifest TrunkImage.version adding: TrunkImage.changes (deflated 77%) adding: TrunkImage.image (deflated 54%) adding: TrunkImage.manifest (deflated 54%) adding: TrunkImage.version (stored 0%)
Then we can specify TrunkImage.zip as the build artifact. This will save a lot of disk space in the future.
- The Jenkins job is saving all of the build artifacts since we began
running the job. I'm not sure if that's what you want it to do, but if the old artifacts are not needed, then we can change the job configuration. In the "Archive the Artifacts" section of the job configuration, there is a setting for "Discard all but the last successful/stable artifact to save disk space". That might more aggressive that we want, but there must be some setting that would let us trim the archives down to what we really need.
- If we run out of disk space and need to take emergency action, we can
just compress the older build artifacts (from the unix command line). It's probably not good to do this outside of Jenkins tools, but at least we would not lose the actual data, and it would free up a lot of disk space right away.
- If we don't need all of the historical artifacts, and if we can't figure
out how to trim them down through Jenkins job configurations, then I can delete the older ones from the unix command line.
- Not directly related to disk space, but we should probably also enable the
"Abort the build if it's stuck" option under "Build Environment". We can set it to time out after 30 minutes or so, and I think that might cure our problem with stuck ruby and squeakvm processes.
Dave
On 7 December 2013 01:23, David T. Lewis lewis@mail.msen.com wrote:
On Fri, Dec 06, 2013 at 11:40:54AM -0600, Ken Causey wrote:
On 12/06/2013 11:19 AM, David T. Lewis wrote:
Thanks Ken,
I will look at it as soon as I get home, about 8 hours from now.
Frank,
There's not much I can do on the squeaksource.com side other than move the repository to another box (which is not easy to do). Short term we need to tidy up the build.squeak.org jobs where we can.
Dave
Thanks Dave,
I agree that it is probably easier to find some space to clear up under build.squeak.org, but I think the community as a whole has to give some serious thought to the future of squeaksource.com.
I fully agree that the community should give some consideration to how squeaksource.com should be managed moving forward. But please do not portray this as a disk space problem. If that is the problem, then I'll pay for the disk space myself; just tell me where to send the check.
The disk utilization problem is due to unnecessary accumulation of build artifacts from Jenkins jobs. It looks to me like most of this is accumulating by accident rather than by intent, and this can probably be easily fixed with some changes to the job configurations, with no loss of useful data from the jobs themselves. Clearly this needs to be addressed anyway, because if you doubled our available disk space we would be having the same discussion 12 months from now. So we need to fix it.
I'll try to get with Frank over the weekend and see if we can clean up some easy stuff (Frank, I am "dtlewis290@gmail.com" on gmail, so I'll try to connect with you there).
Meanwhile I deleted a few unnecessary backup files under ~ssdotcom, which gives us another 1% free disk space to keep things going for another day or so ;-)
Dave
On Sat, Dec 07, 2013 at 04:58:51PM -0600, Chris Muller wrote:
I thought about it 4 years ago and made an object model to take care of it. The problem with zip files is how wasteful they are. When someone changes one single method of the Morphic package, the other 1K definitions (however many there are) are duplicated in the new zip (mcz) file. By contrast, the object model refers to the same canonicalized MCDefinition instances across Versions, adding only one new MCDefinition to the bulk of the model in that example.
The result is that the redundant Magma-backed copy of source.squeak.org consumes less than a quarter of the space of the original file-based version. A Magma-backed copy of squeaksource.com would take about 12GB of space.
PS -- For interest, I just kicked off a bulk-load of the entire squeaksource.com repository into Magma to see how much space it will take...
I changed the subject line, and am moving the discussion over to squeak-dev because I think it may be of more general interest.
I'm quite interested to know how that turns out. Entirely aside from disk space concerns, the approach you are describing makes a lot of sense to me, and the squeaksource.com archive provides a fairly large data set to try it out.
Bob Arning has been doing some really interesting things with a Seaside browser for exploring the change set records of earlier Squeak development. Meanwhile you (Chris) are doing equally interesting work to enable a Monticello browser to browse through the historical record of source.squeak.org, backed by a Magma-enabled image currently running on box4.squeak.org:8888.
I have a vague, hand-waving notion that these two should be related, and that if I wanted to figure out how some method in e.g. ObjectMemory came into being, it would be really convenient if I could explore its change history to see various things that Eliot or Tim or I might have done in recent years in the Monticello repositories, and continue back in time through the change set update stream to see how and why Dan Ingalls might have originally implemented it in Squeak 1.x.
Is there some way in which the change set based update stream from earlier Squeak could also be captured in the Magma back end, similar to what you are doing with the Monticello packages?
Dave
Magma itself supports persisting change-sets via the following API:
============== "file-out a ChangeSet" mySession commit: [ mySession codeBase fileOutChangeSet: (ChangeSorter changeSetNamed: 'myChangeSet') ]
"Load a ChangeSet" mySession codeBase fileInChangeSetNamed: 'myChangeSet'
"browse a change-set before filing it in" mySession codeBase browseChangeSetNamed: 'myChangeSet'
"Answer a collectioon of all changeSet names in the codeBase." mySession codeBase changeSetNames
"Install all the changeSets in the codeBase immediately." mySession codeBase installChangeSets ===============
But I didn't do anything to integrate methods stored in change-sets this way into the new MC History function.
Seems like it would be a neat thing to do, though, if someone had the time...
On Sat, Dec 7, 2013 at 5:54 PM, David T. Lewis lewis@mail.msen.com wrote:
On Sat, Dec 07, 2013 at 04:58:51PM -0600, Chris Muller wrote:
I thought about it 4 years ago and made an object model to take care of it. The problem with zip files is how wasteful they are. When someone changes one single method of the Morphic package, the other 1K definitions (however many there are) are duplicated in the new zip (mcz) file. By contrast, the object model refers to the same canonicalized MCDefinition instances across Versions, adding only one new MCDefinition to the bulk of the model in that example.
The result is that the redundant Magma-backed copy of source.squeak.org consumes less than a quarter of the space of the original file-based version. A Magma-backed copy of squeaksource.com would take about 12GB of space.
PS -- For interest, I just kicked off a bulk-load of the entire squeaksource.com repository into Magma to see how much space it will take...
I changed the subject line, and am moving the discussion over to squeak-dev because I think it may be of more general interest.
I'm quite interested to know how that turns out. Entirely aside from disk space concerns, the approach you are describing makes a lot of sense to me, and the squeaksource.com archive provides a fairly large data set to try it out.
Bob Arning has been doing some really interesting things with a Seaside browser for exploring the change set records of earlier Squeak development. Meanwhile you (Chris) are doing equally interesting work to enable a Monticello browser to browse through the historical record of source.squeak.org, backed by a Magma-enabled image currently running on box4.squeak.org:8888.
I have a vague, hand-waving notion that these two should be related, and that if I wanted to figure out how some method in e.g. ObjectMemory came into being, it would be really convenient if I could explore its change history to see various things that Eliot or Tim or I might have done in recent years in the Monticello repositories, and continue back in time through the change set update stream to see how and why Dan Ingalls might have originally implemented it in Squeak 1.x.
Is there some way in which the change set based update stream from earlier Squeak could also be captured in the Magma back end, similar to what you are doing with the Monticello packages?
Dave
On 7 December 2013 22:58, Chris Muller asqueaker@gmail.com wrote:
Hi guys -- First, Ken, thanks again for setting a reference example of how we need to be thinking about our systems from an admin-support perspective. Taking a proactive tack is not just excellent service; there are many intangible benefits, like image and community constitution.
I totally agree with Dave. Pruning SqueakSource may be something we want to consider in the future, but not now, and never simply because we're low on disk space. First, because pruning empty projects won't recover anything significant; second, because we're in an "archival preservation" mode right now with SqueakSource -- not a mode to be making significant changes to it.
How many times have we said, "disk space is cheap"? We cannot go back on that now! :)
Frank wrote:
As a separate step, we can think about how to both produce versioned artifacts (i.e., zipfiles with versions in their file names) and not eat all the disk space.
I thought about it 4 years ago and made an object model to take care of it. The problem with zip files is how wasteful they are. When someone changes one single method of the Morphic package, the other 1K definitions (however many there are) are duplicated in the new zip (mcz) file. By contrast, the object model refers to the same canonicalized MCDefinition instances across Versions, adding only one new MCDefinition to the bulk of the model in that example.
That's very cool, but it's not the zips we're talking about. We're talking about the artifacts produced by Jenkins, which are Squeak images. I wrote the release process to produce a series of artifacts with different names. Because nothing gets replaced/overwritten, disk usage is unbounded.
The result is that the redundant Magma-backed copy of source.squeak.org consumes less than a quarter of the space of the original file-based version. A Magma-backed copy of squeaksource.com would take about 12GB of space.
For now, though, Frank has the ability and responsibility to trim up the jenkins stuff. Thanks Frank.
I nuked the ReleaseSqueakTrunk target directory, freeing up 5.9 GB of space. (We should always consider the target/ directory of every job as evanescent. Feel free to nuke these at any time. Every job should be written such that it can reconstitute its working environment in target/.)
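In that spirit, a blanket cleanup might look like this (workspace layout assumed from the du output earlier in the thread; check the sizes before deleting anything):

  # See what the evanescent target/ directories cost, then reclaim them;
  # any correctly written job rebuilds its own target/ on the next run.
  $ du -sh /var/lib/jenkins/workspace/*/target 2>/dev/null
  $ rm -rf /var/lib/jenkins/workspace/*/target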
PS -- For interest, I just kicked off a bulk-load of the entire squeaksource.com repository into Magma to see how much space it will take...
On Sat, Dec 7, 2013 at 9:37 AM, David T. Lewis lewis@mail.msen.com wrote:
On Sat, Dec 07, 2013 at 10:17:06AM +0000, Frank Shearar wrote:
I mailed Ken separately, but the happy time-zone conjunction must have just passed. I'd have solved the problem (by wiping out the 5.9GB ReleaseSqueakTrunk/target/ directory) except that I can't sudo because I don't know the password for the account (because I don't think I ever actually set it)! If some kind soul can change it and let me know the password, I'll (a) change it to something only I know and (b) wipe out the directory causing the problem.
As a separate step, we can think about how to both produce versioned artifacts (i.e., zipfiles with versions in their file names) and not eat all the disk space.
frank
Hi Frank,
Cool, I think we just came to identical conclusions :) I was just about to send the following email to you before I read this, so here is what I was going to say:
I think we can pretty easily free up a bunch of disk space. It turns out that our SqueakTrunk job is currently using 44% of the entire disk space on box3, and a lot of that can probably be freed without harming anything for the SqueakTrunk job itself.
Here are some things I think we can do, but I want to run it by you before actually changing anything:
- The build artifacts are TrunkImage.changes, TrunkImage.image,
TrunkImage.manifest and TrunkImage.version. The image and changes files take most of the space, so we could add a build step to compress them like this:
$ zip TrunkImage.zip TrunkImage.changes TrunkImage.image TrunkImage.manifest TrunkImage.version
  adding: TrunkImage.changes (deflated 77%)
  adding: TrunkImage.image (deflated 54%)
  adding: TrunkImage.manifest (deflated 54%)
  adding: TrunkImage.version (stored 0%)
Then we can specify TrunkImage.zip as the build artifact. This will save a lot of disk space in the future.
I've implemented this for ReleaseSqueakTrunk, which will produce an artifact named Squeak4.5.zip. This will contain a properly named image & changes file of the form Squeak4.5-NNNN.
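The actual build step isn't reproduced in this thread; a hypothetical reconstruction, assuming the artifacts land in target/ as before and the version file supplies the update number:

  # Copy the image/changes to properly versioned names, then zip them
  # into the stably named artifact Jenkins archives.
  N=$(cat target/TrunkImage.version)
  cp target/TrunkImage.image   "target/Squeak4.5-$N.image"
  cp target/TrunkImage.changes "target/Squeak4.5-$N.changes"
  (cd target && zip -9 Squeak4.5.zip "Squeak4.5-$N.image" "Squeak4.5-$N.changes")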
- The Jenkins job has been saving all of the build artifacts since we began
running the job. I'm not sure if that's what you want it to do, but if the old artifacts are not needed, then we can change the job configuration. In the "Archive the Artifacts" section of the job configuration, there is a setting for "Discard all but the last successful/stable artifact to save disk space". That might be more aggressive than we want, but there must be some setting that would let us trim the archives down to what we really need.
- If we run out of disk space and need to take emergency action, we can
just compress the older build artifacts (from the unix command line). It's probably not ideal to do this outside of Jenkins's own tools, but at least we would not lose the actual data, and it would free up a lot of disk space right away.
- If we don't need all of the historical artifacts, and if we can't figure
out how to trim them down through Jenkins job configurations, then I can delete the older ones from the unix command line.
- Not directly related to disk space, but we should probably also enable the
"Abort the build if it's stuck" option under "Build Environment". We can set it to time out after 30 minutes or so, and I think that might cure our problem with stuck ruby and squeakvm processes.
Dave
On 7 December 2013 01:23, David T. Lewis lewis@mail.msen.com wrote:
On Fri, Dec 06, 2013 at 11:40:54AM -0600, Ken Causey wrote:
On 12/06/2013 11:19 AM, David T. Lewis wrote:
Thanks Ken,
I will look at it as soon as I get home, about 8 hours from now.
Frank,
There's not much I can do on the squeaksource.com side other than move the repository to another box (which is not easy to do). Short term we need to tidy up the build.squeak.org jobs where we can.
Dave
Thanks Dave,
I agree that it is probably easier to find some space to clear up under build.squeak.org, but I think the community as a whole has to give some serious thought to the future of squeaksource.com.
I fully agree that the community should give some consideration to how squeaksource.com should be managed moving forward. But please do not portray this as a disk space problem. If that is the problem, then I'll pay for the disk space myself; just tell me where to send the check.
The disk utilization problem is due to unnecessary accumulation of build artifacts from Jenkins jobs. It looks to me like most of this is accumulating by accident rather than by intent, and this can probably be easily fixed with some changes to the job configurations, with no loss of useful data from the jobs themselves. Clearly this needs to be addressed anyway, because if you doubled our available disk space we would be having the same discussion 12 months from now. So we need to fix it.
I'll try to get with Frank over the weekend and see if we can clean up some easy stuff (Frank, I am "dtlewis290@gmail.com" on gmail, so I'll try to connect with you there).
Meanwhile I deleted a few unnecessary backup files under ~ssdotcom, which gives us another 1% free disk space to keep things going for another day or so ;-)
Dave
I want to respond to some comments made in this thread.
First I want to admit that my posting on Friday was a bit shrill. I was getting frustrated that this was my third or fourth post on this increasingly problematic issue and little if any action had been taken. Further, there seemed a real possibility that over the next few days, possibly over the weekend when I would have little time to provide assistance, the file system for the server which hosts both build.squeak.org and squeaksource.com would fill up. I have seen greater than 1% per 24 hour increases on that server in the past.
Thanks to Frank, the immediate issue has been addressed, and hopefully we now have a couple of weeks of breathing room to consider how best to avoid the issue in the future.
There has been some discussion regarding my admittedly somewhat extreme comments regarding squeaksource.com. One thing that has been mentioned is the idea that 'disk space is cheap'. I think that is easy to say and true in general, but I'm not sure it is true in this specific case. I will admit to possibly over-estimating the 'cost', but... Keep in mind that we have no direct control over the configuration of either box3.squeak.org or box4.squeak.org. These were contributed to us by Gandi.net at the request of the Software Freedom Conservancy. Neither I nor anyone else in our community has any access to modify the server configuration and do things like add disk space. At best we have to go through the Software Freedom Conservancy for this. They don't have a lot of time to spare for such issues themselves; further, I don't think we should assume that Gandi.net is going to be willing to donate more resources. I'm not sure it is even easy to throw money at the issue given the fact that we are using donated resources. But then, I may just be unreasonably pessimistic about this.
Someone kindly thanked me and gave the impression that I was the only one who 'cared' enough to monitor the servers for such issues. Thanks, but don't assign me too much altruism or think that I'm so interested. The minor amount of daily server checking I do is largely habit for me and is an easy way for me to trigger a few endorphins and feel like I have in some way contributed for the day.
To be honest my interest in Squeak and the community has been waning for some time and is quite low at this point. Don't assume I'm going to continue to do what little I do indefinitely. Someone else must step up to take responsibility for the Squeak servers.
Ken
Hi Ken, Thank you for all the times you helped out. I do not have much spare time at the moment, so I cannot be of much help maintaining stuff.
Seems like interest in Squeak has peaked and is now on a downward slope. But just wait another 20 years. It will come back!
Best regards, Karl
On Mon, Dec 9, 2013 at 9:02 PM, Ken Causey ken@kencausey.com wrote:
I want to respond to some comments made in this thread.
First I want to admit that my posting on Friday was a bit shrill. I was getting frustrated that this was my third or fourth post on this increasingly problematic issue and little if any action had been taken. Further, there seemed a real possibility that over the next few days, possibly over the weekend when I would have little time to provide assistance, the file system for the server which hosts both build.squeak.org and squeaksource.com would fill up. I have seen greater than 1% per 24 hour increases on that server in the past.
Thanks to Frank, the immediate issue has been addressed, and hopefully we now have a couple of weeks of breathing room to consider how best to avoid the issue in the future.
There has been some discussion regarding my admittedly somewhat extreme comments regarding squeaksource.com. One thing that has been mentioned is the idea that 'disk space is cheap'. I think that is easy to say and true in general, but I'm not sure it is true in this specific case. I will admit to possibly over-estimating the 'cost', but... Keep in mind that we have no direct control over the configuration of either box3.squeak.org or box4.squeak.org. These were contributed to us by Gandi.net at the request of the Software Freedom Conservancy. Neither I nor anyone else in our community has any access to modify the server configuration and do things like add disk space. At best we have to go through the Software Freedom Conservancy for this. They don't have a lot of time to spare for such issues themselves; further, I don't think we should assume that Gandi.net is going to be willing to donate more resources. I'm not sure it is even easy to throw money at the issue given the fact that we are using donated resources. But then, I may just be unreasonably pessimistic about this.
Someone kindly thanked me and gave the impression that I was the only one who 'cared' enough to monitor the servers for such issues. Thanks, but don't assign me too much altruism or think that I'm so interested. The minor amount of daily server checking I do is largely habit for me and is an easy way for me to trigger a few endorphins and feel like I have in some way contributed for the day.
To be honest my interest in Squeak and the community has been waning for some time and is quite low at this point. Don't assume I'm going to continue to do what little I do indefinitely. Someone else must step up to take responsibility for the Squeak servers.
Ken
It'd be worth asking the SFC if they can increase the size of the disks on box3 and box4.
Levente
On Fri, 6 Dec 2013, Ken Causey wrote:
On 11/25/2013 04:02 PM, Ken Causey wrote:
On 11/16/2013 05:10 PM, Ken Causey wrote:
The disk space on box3 is beginning to get dangerously low on filesystem space. At the moment it is 94% full with 3.8GB of free space. Lately it seems to increase 1% every 2 or 3 days.
Primary offenders seem to be
/var/lib/jenkins/ 33GB
/home/ssdotcom/ 18GB
I hope one or both of you can find something to delete.
Ken
As of today, 11/25/2013:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       60G   55G  2.0G  97% /
There is a stuck test/image_test.rb process, and it appears the ReleaseSqueakTrunk and ExternalSqueakPackages jobs are stuck as well.
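For reference, one way to spot such processes (the name pattern is a guess at what's relevant):

  # List ruby and squeakvm processes with elapsed running time; the
  # bracketed first letter keeps grep from matching itself.
  $ ps -eo pid,etime,user,args | grep -E '[s]queakvm|[r]uby'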
Ken
This is probably going to be the last time I harp on this, at least within the small box-admins community. I'm going to be out of town and likely offline for the next 2 days and not able to address server issues.
As of today (12/6/2013, 16:30 UTC):

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       60G   55G  1.1G  99% /
That we have not yet filled the file system is (to my knowledge) largely due to a slowing in the growth of build.squeak.org and squeaksource.com, as well as the fact that I managed to free up somewhat more than 0.5GB of space by removing a large package or two and deleting the package archive. Anything more in this vein is likely to be more invasive.
I really need the community interested in build.squeak.org and squeaksource.com to make some serious choices. Something has to give or both of these services will soon cease to function.
While I'm at it, I would like to remind the build.squeak.org group that the Jenkins version on build.squeak.org was frozen at 1.517 due to a breaking change that occurred in 1.518. That was in early June this year. Many releases have occurred since, and the current easily installable release is 1.541.
Ken