Dynamic system memory use

Scott A Crosby crosby at qwes.math.cmu.edu
Sat Feb 2 00:55:07 UTC 2002


On Sat, 2 Feb 2002, Andreas Raab wrote:

> Daniel,
>
> > 	There is no way to guarantee all the memory you
> > allocate from Malloc is actual ram, and not disk space,
> > if this is what you are asking....
>
> Not quite. The keyword of this discussion is "lazy" not virtual. I don't
> exactly care where the memory is (that's exactly what the manager is
> for). But I _do_ care that if I call malloc() and get a non-null pointer
> back then I should in fact be able to use that memory block. According


This is called overcommit. Basically: do you overcommit your virtual
memory resources (RAM + swap)? If you don't, then starting one or two
programs that each malloc 512 MB but will only ever use a fraction of it
will fail.

So it's a trade-off: overcommit gives you flexibility (you only need
virtual memory resources sufficient for what you're actually going to
use) at the cost of robustness (the system has already promised memory
it may not have, i.e. it is lying).

Both views are widely held, sometimes by different developers of the
same kernel. For some kernels, it's a runtime option.

In all cases, it's endlessly argued:
   http://www.uwsg.iu.edu/hypermail/linux/kernel/0003.3/0365.html


Generally, you can *never* have an 'out of memory' situation in a system
without overcommit. With overcommit, the result can be variable:
sometimes a hard system crash, or the system starts killing random
processes until it hits a magic one.

Generally, depending on overcommit can lead to portability problems. You
shouldn't, say, malloc a gigabyte and assume that the system will
overcommit it and charge you only for what you actually use.

Several of the other interactive programming systems that I use work by
mmapping a large image and only touching the subset of it that they
need. [*]

> to the explanations I've heard here that is _not_ the case. In other
> words the "lazy" malloc()ator may give me a pointer from malloc() but at
> the point where I actually start writing to that memory it may say
> "Oops. Sorry. I promised you that memory but I lied. I don't actually
> have any. You loose. Core dumped."

Think system freeze, random process dying, or however your system deals
with exceptional out-of-memory situations.


---
[*] One such program is CMUCL; here's what it prints out when it can't
allocate 2 GB:

crosby at dragonlight:~$ cmucl
Error in allocating memory

CMUCL asks the kernel to make a lot of memory potentially available.
Truely a lot of memory, actually it asks for all memory a process
can allocate.

Note that due to the high demands placed on the kernel,
I am only sure CMUCL works with 2.2.X or higher kernels.

Now you have two choices:
 - Accept this and lift the kernel and other limits by doing:
 as root:
 echo 1 > /proc/sys/vm/overcommit_memory
 as the user:
 ulimit -d unlimited
 ulimit -v unlimited
 ulimit -m unlimited

  - Try to use the lazy-allocation routines. They are pretty experimental
 and might interact badly with some kernels. To do this start lisp with
 the "-lazy" flag, like:
 lisp -lazy

---

(In truth, this is hitting a runaway-process catcher that doesn't like
programs that use >256mb of VM, but the error message is still accurate.)





More information about the Squeak-dev mailing list