On Wed, May 13, 2009 at 7:08 PM, Igor Stasenko siguctua@gmail.com wrote:
2009/5/14 Eliot Miranda eliot.miranda@gmail.com:
Hi Igor, On Wed, May 13, 2009 at 6:18 PM, Igor Stasenko siguctua@gmail.com
wrote:
2009/5/14 Eliot Miranda eliot.miranda@gmail.com:
On Wed, May 13, 2009 at 5:05 PM, Jecel Assumpcao Jr <
jecel@merlintec.com> wrote:
Eliot,
[JIT code uses call which pushes PC first]
Ok, so this can't be helped.
So while one could, I don't see that it's worthwhile. Even if one did keep the arguments and temporaries together, one would still have the stack contents separate from the arguments and temporaries, and temporary access bytecodes can still access those, so arguably one would still have to check the index against the temp count.
Really? I wouldn't expect the compiler to ever generate such bytecodes, and so wasn't too worried if the VM did the wrong thing in this situation.
There's a tension between implementing what the current compiler produces and implementing what the instruction set defines. For example, should one assume arguments are never written to? I lean on the side of implementing the instruction set.
In the JIT the flag is a bit in the method reference's LSBs and is
set for free
on frame build.
That sounds like a neat trick. Are the stack formats for the interpreted stack vm and the jit a little different?
Yes. In the JIT an interpreted frame needs an extra field to hold the
saved bytecode instruction pointer when an interpreted frame calls a machine code frame because the return address is the "return to interpreter trampoline" pc. There is no flag word in a machine code frame. So machine code frames save one word w.r.t. the stack vm and interpreted frames gain a word. But most frames are machine code ones so most of the time one is saving space.
Thanks for the explanations. I haven't figured out how to do this in hardware in a reasonable way and so might have to go with a
different design.
I guess that in hardware you can create an instruction that will load
a descriptor register as part of the return sequence in parallel with restoring the frame pointer and method so one would never indirect through the frame pointer to fetch the flags word; instead it would be part of the register state. But that's an extremely uneducated guess :)
-- Jecel
The one thing you mentioned is the separation of the data stack & code stack (using a separate space to push args & temps, and another space to push VM/JIT-specific state, such as context pointers & return addresses).
I think the "official" names used in Forth implementations are "control
stack" and "operand stack".
In this way, there is no big issue with this separation, because to access the state in both variants you still have to track two pointers - namely the stack pointer and the frame pointer.
I think you end up with three pointers. There's a frame pointer into the
control stack, a stack pointer to the receiver/0th temp for that frame on the operand stack and a stack pointer proper to the top of stack on the operand stack.
maybe you're right, but I think this can be avoided, because at each call site you know exactly the number of arguments and the order in which they will be pushed on the stack, so to make a send
self x:1 y: 2
you can do:

  [sp - 4]  <- self
  [sp - 8]  <- 1
  [sp - 12] <- 2

on a 32-bit machine
Hang on; I think you're changing the context. Jecel and I (I think) have been discussing how to avoid checking the number of arguments on each temp var access on a machine where one has push/Store/StorePopTempVarN bytecodes. You're discussing a more general instruction set now. Yes, if you use a conventional instruction set where you've computed the offsets of all variables relative to the stack pointer at all points in the computation, then you can do this; this is what gcc does when one says -fomit-frame-pointer. But that instruction set is not a 1 token = 1 instruction set like a bytecode set and so is difficult to compile for, difficult to decompile, etc.
in case you have nested sends:
self x: (12 + 5) y: 2
" at the moment of preparing to send 12+5" [sp - 4] <- self [sp - 8] <- 12 [sp - 12] <- 5 " at the moment of preparing to send #x:y:" [sp - 4] <- self [sp - 8] <- 17 [sp - 12] <- 5
and even more complex, if you have a temp initialization in the middle:
| temp | self x: 2 y: (temp := 5)
here you have to reserve the space for 'temp' first (push nil) before storing 'self' at [sp - 4].
Are there other cases when you need to know the 'real' top of the operand stack for a given context, other than by following the code flow? I mean, for the debugger: it will see the initialized temps in the caller's context and the arguments in the callee's context - so there is no problem with losing introspection abilities.
An operand stack page needs to be large enough to hold the maximum
amount of stack space used by as many frames as can fit on the control stack and this is an issue because one ends up wasting a lot of space, something the single stack organization avoids.
I can't quite follow this statement. Do you mean that you have to maintain 2 stack spaces instead of 1 (and consequently double the overhead of reserving extra space to prevent the occasional overflow)? And check 2 stacks before proceeding with a call instead of just one.
Yes, but you don't have to check 2 stacks instead of one if the operand stack is large enough. i.e. if a control stack page holds N frames and the maximum stack space per frame is M then if an operand stack page is M * N it can't overflow before the control stack page does. But if the average stack space per frame is much less than the maximum (which it is) then one ends up wasting a lot of space. This is not a big issue if memory is cheap. I expect in Jecel's process technology memory isn't that expensive and would also expect that since an operand stack page would be heavily used it is good use for the memory.
But on the other hand, control stack frames always have a constant size - so it's very easy to predict how deep the call chain can get before you need to allocate another page for the control stack. This means that you can check the control stack only 1/4, 1/8 or 1/16 as often as the call depth changes, to minimize this overhead.
And in hardware the check would be done in parallel with other instructions so it would effectively be free. One only pays for moving the overflow state from one page to the next.
But such a separation could be very helpful for handling the uninitialized temps.
Right. I took this approach in my BrouHaHa VMs, with the simplification that there was only one stack page. After becoming familiar with HPS's organization I prefer it. It fits better with conventional hardware. But I think Jecel could use the split stack approach profitably on custom hardware, where having 3 registers (frame, stack base and top of stack) is no problem, and where memory is still cheap. It's fine to throw transistors at the operand stack, as that memory will be made frequent use of, even if one is wasting 25% of it most of the time.
Jecel can also design the machine to avoid taking interrupts on the
operand stack and provide a separate interrupt stack.
To call a method you usually push the args (sp decreasing, if the stack is organized bottom-to-top), then push the return address and save sp & other required context state (fp decreasing). And then you are ready to activate the new method.

Now there are 2 variants: at the time you enter the new method, you can simply reserve the stack space for its temps, OR leave the sp unchanged but organize the code in such a way that each time a temp gets initialized, you decrease sp. Then, at any point of execution, if the debugger needs to determine whether some of the method's temps are uninitialized, it can simply check that the context's saved sp is <= the temp's pointer.

This of course makes things more complex, because you should reorganize the temps in order of their initialization, i.e. for

  | a b | b := 5. a := 6.

the pointer to a will be < the pointer to b. Also, you should not alter the sp when "pushing" arguments (otherwise the debugger will assume that you have already initialized the temps).
But it allows you to conserve stack space between method calls:
someMethod
    | a b |
    "1" b := 5. "2" self foo. "3" a := 6.

at 1) you decrement sp by pushing the initialized temp value of b
at 2) you save the sp, then increase the sp by the number of arguments for the method (foo) and activate the new method; on return you restore sp
at 3) you push a new initialized value, which leads to another decrement of sp
And, as I said, if you suspend the process in the middle of the "self foo" call and inspect the context of someMethod, the debugger can easily see that its saved sp is too small to hold all the temps; and if the user wants to see the current uninitialized value of 'a', then the answer is nil, because the offset of 'a' is greater than the currently saved sp.
There's another cost, of course: if a method has 10 temps, then the generated code needs 10 separate pushes, one per initialization, in contrast to a single instruction for reserving the space at the method's activation, where you simply decrease the sp by a known constant.
But I think there is not much difference: in both variants you have to write to some memory location, and there is just a little difference in how you calculate the address of the initial store, i.e. doing push reg or mov [fp + offset] <- reg.
So the major pros are:
- conserving stack space
- easy to determine uninitialized temps for the debugger
and the cons are:
- it could complicate code generation
-- Best regards, Igor Stasenko AKA sig.
cheers Eliot