[Vm-dev] Re: [Pharo-project] Memory problems on Windows

Jimmie Houchin jlhouchin at gmail.com
Sat Sep 4 20:32:06 UTC 2010


On 9/4/2010 2:28 PM, Andreas Raab wrote:
>
> There is indeed a hard limit of 512MB in the Windows VM currently in
> effect, which is the result of debugging some extremely strange effects
> when loading DLLs. What we found was that on some systems, under some
> circumstances (related to memory load etc.), DLLs would not be loaded
> if we reserved more than 512MB of memory, leading the application
> either to not start at all or to misbehave in odd ways later on.
>
> Cheers,
> - Andreas

Thanks Andreas for the explanation. When I re-read the 100 Million 
Objects thread, I saw that you had mentioned that limit, but I thought 
you had since removed it. That may be why the 3.7.1 VM with the 3.8 
image went up to 800+ MB of RAM, but then it started acting strangely.

Unfortunately I have to interface with Windows COM libraries for 
business reasons, so currently Windows is required. Later I hope to 
switch to the equivalent Java libraries, which would free me from 
Windows and allow me to use Linux or OS X. Presently I use Python to 
interface with the COM libraries and communicate with Squeak/Pharo via HTTP.

Since I know I have a hard limit, I'll quit trying to get it to work. :)

Thanks again.

Jimmie Houchin



> On 9/4/2010 2:14 AM, Mariano Martinez Peck wrote:
>> You can also ask in VM mailing list. I cc'ed them.
>>
>> On Sat, Sep 4, 2010 at 4:52 AM, Jimmie Houchin <jlhouchin at gmail.com>
>> wrote:
>>
>> Hello,
>>
>> Sorry for the delay in replying. For some reason, the first time I
>> looked at your message in my newsreader (Thunderbird/GMane), the
>> message was in French and addressed to someone else. Apparently the
>> software did something strange.
>>
>> To answer your question: yes, I tried this in both the standard VM
>> and the standard Pharo 1.1 image.
>>
>> I start the VM/image with -memory: 1000,
>> open a new Workspace, then copy in the memory settings below and do-it.
>> I then evaluate: a := Array new: 100000000.
>>
>> That seemingly succeeds, but only takes me up to a little below 500 MB
>> of RAM. If I attempt to create a second array of 100,000,000
>> elements, I get the Low Space error.
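>>
>> As a rough sanity check (assuming a 32-bit VM with 4-byte object
>> pointers), one 100,000,000-slot Array by itself needs on the order of
>> 400 MB, which fits the just-under-500 MB figure:
>>
>> 100000000 * 4 / (1024 * 1024.0). "bytes for the slots, in MB: roughly 381"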
>>
>> With the standard VM and image, it just sits there consuming CPU
>> while doing the array initialization. I am on a quad-core
>> server with 6 GB of RAM, and it has currently consumed over 36 CPU
>> minutes on one of the cores. At this point I quit the image. I could
>> do things in the UI, but it was not very responsive.
>>
>> In the Cog VM/image, the array creation returns almost immediately
>> and the image is ready for further instructions.
>>
>> As it stands, I have to go back to the Squeak 3.7.1 VM and either the
>> 3.8 or 3.10 image for this to succeed. Outside of that I can't
>> get past 500 MB of RAM. This server generally sits at only about 30%
>> of its RAM in use.
>>
>> For enterprise/business endeavors, I think Pharo really needs to be
>> able to use all the memory the OS will allow it. I know that being
>> a 32-bit app does create limits on some OSes, but neither of my
>> Vista machines imposes a limit that I can't live with at the
>> moment.
>>
>> Thanks for your reply. I am neither a VM nor a Smalltalk expert, so I
>> don't know how to proceed from here, other than reducing my
>> application's memory needs by putting more into the database and only
>> keeping in memory the data that is absolutely necessary for the
>> analysis I am attempting.
>>
>> Jimmie
>>
>>
>> On 9/1/2010 4:48 AM, Stéphane Ducasse wrote:
>>
>> do you have the same problem with the normal VM?
>>
>> On Sep 1, 2010, at 5:25 AM, Jimmie Houchin wrote:
>>
>> Hello,
>>
>> I am developing an application which processes and generates
>> a large amount of data. In a recent attempt I encountered a
>> Space is Low error.
>>
>> This is occurring in a Pharo 1.1 image using the latest
>> Pharo and Cog VMs. I am starting the VM with the -memory:
>> 1000 parameter.
>> I applied the code below, taken from the 100 Million
>> Objects thread on the Squeak list. But the problem occurs at
>> about 500 MB of RAM on a computer with 3 (or 6) GB of RAM,
>> with only 65% of physical RAM in use. The OS is Vista.
>>
>> Any help in using more memory, as much as necessary for the
>> app, would be greatly appreciated.
>>
>> initializeMemorySettings
>> "Initialize the memory and GC settings to be more in line
>> with QF requirements"
>>
>> "The following settings affect the rate of incremental GCs
>> and tenuring"
>>
>> "Limit incremental GC activity to run every 40k allocations"
>> SmalltalkImage current vmParameterAt: 5 put: 40000.
>> "allocations between GCs (default: 4000)"
>> "Limit tenuring threshold to only tenure w/ >10k survivors"
>> SmalltalkImage current vmParameterAt: 6 put: 10000.
>> "tenuring threshold (default: 2000)"
>>
>> "These settings affect overall memory usage"
>>
>> "Only give memory back to the OS when we have more than
>> 16MB free"
>> SmalltalkImage current vmParameterAt: 24 put: 16*1024*1024.
>> "default: 8MB"
>> "Try to keep 8MB headroom at all times"
>> SmalltalkImage current vmParameterAt: 25 put: 8*1024*1024.
>> "default: 4MB"
>>
>> "These settings describe what to do when we're close to
>> running out of free space"
>>
>> "Tell the VM that we'd rather grow than spin in tight GC
>> loops"
>> SmalltalkImage current gcBiasToGrow: true. "default: false"
>> "Tell the VM to do a fullGC for good measure if the above
>> growth exceeded 16MB"
>> SmalltalkImage current gcBiasToGrowLimit: 16*1024*1024.
>> "default: 0"
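>>
>> To confirm that the settings took effect, the same parameters can be
>> read back with vmParameterAt: (without put:) after the do-it, e.g.:
>>
>> SmalltalkImage current vmParameterAt: 6. "tenuring threshold; should answer 10000"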
>>
>> Thanks,
>>
>> Jimmie Houchin
>>
>>
>> _______________________________________________
>> Pharo-project mailing list
>> Pharo-project at lists.gforge.inria.fr
>> <mailto:Pharo-project at lists.gforge.inria.fr>
>> http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project
>>
>>
>>
>>
>



