[Vm-dev] VM Maker: VMMaker.oscog-eem.503.mcz

commits at source.squeak.org commits at source.squeak.org
Fri Nov 8 21:37:03 UTC 2013


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.503.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.503
Author: eem
Time: 8 November 2013, 1:33:17.353 pm
UUID: 42e2a34e-13cc-4324-8eaa-7a6df5a1fd01
Ancestors: VMMaker.oscog-eem.502

Fix bug in initFreeChunkWithBytes:at: which caused object
enumeration to break for objects with 254 slots.

Fix bug in freeUnmarkedObjectsAndSortAndCoalesceFreeSpace
that failed to set a freed unmarked object free (setFree:).

Rename bestFitCompact to firstFitCompact (that's what it's doing)
and use the same refill-objects approach as exactFitCompact.
Relocate firstFreeChunk after each compaction pass.

Don't bother breaking out of the loop to return the result in
allocateOldSpaceChunkOfBytes:suchThat: et al, and remember
to return the found chunk in allocateOldSpaceChunkOfBytes:suchThat:.

Abandon exactFitCompact as the compaction policy for global GC; it
doesn't work well in real image use (it worked well in the image bootstrap).

Revise marking to correctly initialize stack page tracing and to
mark the hiddenRootsObj and freeListsObj.
Remember to mark metaclasses in markAndTraceClassOf:.

Add symbolic constants for stack page tracing.
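(The constants, as initialized further down, are StackPageTraceInvalid = -1,
StackPageUnreached = 0, StackPageReachedButUntraced = 1 and
StackPageTraced = 2.  The intended progression during a global GC is
presumably Invalid outside of GC, Unreached after initStackPageGC,
ReachedButUntraced once a page is found reachable, and Traced once
markAndTraceStackPage: has traced the page's frames.)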

Implement comparison for CArray (for testing freeLists).

=============== Diff against VMMaker.oscog-eem.502 ===============

Item was added:
+ ----- Method: CArray>>= (in category 'comparing') -----
+ = anObject
+ 	^self species = anObject species
+ 	  and: [(1 to: self class instSize) allSatisfy:
+ 			[:i| (self instVarAt: i) = (anObject instVarAt: i)]]!

Item was added:
+ ----- Method: CArray>>hash (in category 'comparing') -----
+ hash
+ 	^interpreter hash bitXor: arrayBaseAddress + ptrOffset + unitSize!
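
Presumably the immediate client of this comparison is the assert added to
SpurMemoryManager>>freeListsObj below: in the simulator firstIndexableField:
answers a CArray, so comparing it against the stored freeLists needs value
equality (same interpreter, base address, offset and unit size) rather than
identity.  A minimal sketch, with manager standing for the simulated memory
manager (the variable name is illustrative only):

	| a b |
	a := manager firstIndexableField: manager freeListsObj.
	b := manager firstIndexableField: manager freeListsObj.
	self assert: a = b.
	self assert: a hash = b hash.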

Item was changed:
  ----- Method: CoInterpreter>>markAndTraceStackPage: (in category 'object memory support') -----
  markAndTraceStackPage: thePage
  	| theSP theFP frameRcvrOffset callerFP oop |
  	<var: #thePage type: #'StackPage *'>
  	<var: #theSP type: #'char *'>
  	<var: #theFP type: #'char *'>
  	<var: #frameRcvrOffset type: #'char *'>
  	<var: #callerFP type: #'char *'>
  	<inline: false>
+ 
  	self assert: (stackPages isFree: thePage) not.
+ 	self assert: thePage trace ~= StackPageTraced.
+ 	thePage trace: StackPageTraced.
+ 
  	theSP := thePage headSP.
  	theFP := thePage  headFP.
  	"Skip the instruction pointer on top of stack of inactive pages."
  	thePage = stackPage ifFalse:
  		[theSP := theSP + BytesPerWord].
  	[frameRcvrOffset := self frameReceiverOffset: theFP.
  	 [theSP <= frameRcvrOffset] whileTrue:
  		[oop := stackPages longAt: theSP.
  		 (objectMemory isImmediate: oop) ifFalse:
  			[objectMemory markAndTrace: oop].
  		 theSP := theSP + BytesPerWord].
  	(self frameHasContext: theFP) ifTrue:
  		[self assert: (objectMemory isContext: (self frameContext: theFP)).
  		 objectMemory markAndTrace: (self frameContext: theFP)].
  	(self isMachineCodeFrame: theFP)
  		ifTrue: [self markAndTraceMachineCodeMethod: (self mframeCogMethod: theFP)]
  		ifFalse: [objectMemory markAndTrace: (self iframeMethod: theFP)].
  	(callerFP := self frameCallerFP: theFP) ~= 0] whileTrue:
  		[theSP := theFP + FoxCallerSavedIP + BytesPerWord.
  		 theFP := callerFP].
  	theSP := theFP + FoxCallerSavedIP + BytesPerWord. "caller ip is ceBaseReturnPC"
  	[theSP <= thePage baseAddress] whileTrue:
  		[oop := stackPages longAt: theSP.
  		 (objectMemory isImmediate: oop) ifFalse:
  			[objectMemory markAndTrace: oop].
  		 theSP := theSP + BytesPerWord]!

Item was changed:
  ----- Method: CoInterpreterStackPages>>initializeStack:numSlots:pageSize: (in category 'initialization') -----
  initializeStack: theStackPages numSlots: stackSlots pageSize: slotsPerPage
  	"Initialize the stack pages.  In the C VM theStackPages will be alloca'ed memory to hold the
  	 stack pages on the C stack.  In the simulator they are housed in the memory between the
  	 cogMethodZone and the heap."
  
  	<var: #theStackPages type: #'char *'>
  	<returnTypeC: #void>
  	| numPages page structStackPageSize pageStructBase count |
  	<var: #page type: #'StackPage *'>
  	<var: #pageStructBase type: #'char *'>
  	self cCode: []
  		inSmalltalk:
  			[self assert: objectMemory startOfMemory - coInterpreter cogCodeSize - coInterpreter methodCacheSize - coInterpreter primTraceLogSize - coInterpreter rumpCStackSize
  					= (stackSlots * BytesPerWord roundUpTo: objectMemory allocationUnit)].
  	structStackPageSize := coInterpreter sizeof: InterpreterStackPage.
  	bytesPerPage := slotsPerPage * BytesPerWord.
  	numPages := coInterpreter numStkPages.
  
  	"Because stack pages grow down baseAddress is at the top of a stack page and so to avoid
  	 subtracting BytesPerWord from baseAddress and lastAddress in the init loop below we simply
  	 push the stackPage array up one word to avoid the overlap.  This word is extraStackBytes."
  	pageStructBase := theStackPages + (numPages * bytesPerPage) + BytesPerWord.
  	pages := self cCode: [self cCoerceSimple: pageStructBase to: #'StackPage *']
  				  inSmalltalk:
  					[pageMap := Dictionary new.
  					 ((0 to: numPages - 1) collect:
  						[:i|
  						 InterpreterStackPage surrogateClass new
  							address: pageStructBase + (i * structStackPageSize)
  							simulator: coInterpreter
  							zoneBase: coInterpreter stackZoneBase
  							zoneLimit: objectMemory startOfMemory])
  						do: [:pageSurrogate|
  							pageMap at: pageSurrogate address put: pageSurrogate];
  						yourself].
  	"make sure there's enough headroom"
  	self assert: coInterpreter stackPageByteSize - coInterpreter stackLimitBytes - coInterpreter stackLimitOffset
  				>= coInterpreter stackPageHeadroom.
  	0 to: numPages - 1 do:
  		[:index|
  		 page := self stackPageAt: index.
  		 page
  			lastAddress: theStackPages + (index * bytesPerPage);
  			baseAddress: page lastAddress + bytesPerPage;
  			stackLimit: page baseAddress - coInterpreter stackLimitBytes;
  			realStackLimit: page stackLimit;
  			baseFP: 0;
  			nextPage: (self stackPageAt: (index = (numPages - 1) ifTrue: [0] ifFalse: [index + 1]));
  			prevPage: (self stackPageAt: (index = 0 ifTrue: [numPages - 1] ifFalse: [index - 1]))].
  
  	"Now compute stackBasePlus1 so that the pageIndexFor: call maps all addresses from
  	 aPage baseAddress to aBase limitAddress + 1 to the same index (stacks grow down)"
  	stackBasePlus1 := (self cCoerceSimple: theStackPages to: #'char *') + 1.
  	self cCode: []
  		inSmalltalk:
  			[minStackAddress := theStackPages.
  			 maxStackAddress := theStackPages + (numPages * bytesPerPage) + BytesPerWord - 1].
  
  	"The overflow limit is the amount of stack to retain when moving frames from an overflowing
  	 stack to reduce thrashing.  See stackOverflowOrEvent:mayContextSwitch:"
  	page := self stackPageAt: 0.
  	overflowLimit := page baseAddress - page realStackLimit * 3 // 5.
  	0 to: numPages - 1 do:
  		[:index|
  		 page := self stackPageAt: index.
  		 self assert: (self pageIndexFor: page baseAddress) == index.
  		 self assert: (self pageIndexFor: page baseAddress - (slotsPerPage - 1 * BytesPerWord)) == index.
  		 self assert: (self stackPageFor: page baseAddress) == page.
  		 self assert: (self stackPageFor: page stackLimit) == page.
  		 self cCode: []
  			inSmalltalk:
  				[| memIndex |
  				 memIndex := index * slotsPerPage + 1. "this is memIndex in the block above"
  				 self assert: (self memIndexFor: (self oopForPointer: page baseAddress))
  							== (memIndex + slotsPerPage - 1).
  				 index < (numPages - 1) ifTrue:
  					[self assert: (self stackPageFor: page baseAddress + BytesPerWord) == (self stackPageAt: index + 1)]].
+ 		coInterpreter initializePageTraceToInvalid: page].
- 		self assert: (page trace: -1) ~= 0 "for assert checking of the page tracing flags. -1 == invalid state"].
  
  	mostRecentlyUsedPage := self stackPageAt: 0.
  	page := mostRecentlyUsedPage.
  	count := 0.
  	[| theIndex |
  	 count := count + 1.
  	 theIndex := self pageIndexFor: page baseAddress.
  	 self assert: (self stackPageAt: theIndex) == page.
  	 self assert: (self pageIndexFor: page baseAddress) == theIndex.
  	 self assert: (self pageIndexFor: page stackLimit) == theIndex.
  	 self assert: (self pageIndexFor: page lastAddress + 1) == theIndex.
  	 (page := page nextPage) ~= mostRecentlyUsedPage] whileTrue.
  	self assert: count == numPages.
  	self assert: self pageListIsWellFormed!

Item was changed:
  ----- Method: InterpreterStackPage>>printOn: (in category 'printing') -----
  printOn: aStream
  	<doNotGenerate>
  	super printOn: aStream.
  	aStream nextPut: $@; print: baseAddress; space.
  	self isFree
  		ifTrue: [aStream nextPutAll: 'free']
+ 		ifFalse: [aStream print: baseFP; nextPutAll: '<->'; print: headFP; space; nextPutAll: 'trace '; print: trace]!
- 		ifFalse: [aStream print: baseFP; nextPutAll: '<->'; print: headFP]!

Item was changed:
  ----- Method: InterpreterStackPages>>initializeStack:numSlots:pageSize: (in category 'initialization') -----
  initializeStack: theStackPages numSlots: stackSlots pageSize: slotsPerPage
  	"Initialize the stack pages.  For testing I want stack addresses to be disjoint from
  	 normal memory addresses so stack addresses are negative.  The first address is
  	 -pageSize bytes.  So for example if there are 1024 bytes per page and 3 pages
  	 then the pages are organized as
  
  		byte address: -1024 <-> -2047 | -2048 <-> -3071 | -3072 <-> -4096 |
  							page 3			page 2			page 1
  		mem index:        769 <-> 513  |     512 <->  257  |   256 <->        1 |
  
  	 The byte address is the external address corresponding to a real address in the VM.
  	 mem index is the index in the memory Array holding the stack, an index internal to
  	 the stack pages.  The first stack page allocated will be the last page in the array of pages
  	 at the highest effective address.  Its base address will be -1024 and it will grow down towards -2047."
  
  	"The lFoo's are to get around the foo->variable scheme in the C call to allocStackPages below."
  	<var: #theStackPages type: #'char *'>
  	| page structStackPageSize pageStructBase count |
  	<var: #page type: #'StackPage *'>
  	<var: #pageStructBase type: #'char *'>
  	self cCode: ''
  		inSmalltalk:
  			[self assert: stackMemory size = stackSlots.
  			 self assert: stackMemory == theStackPages].
  	stackMemory := theStackPages. "For initialization in the C code."
  	self cCode: '' inSmalltalk: [pageSizeInSlots := slotsPerPage].
  	structStackPageSize := interpreter sizeof: InterpreterStackPage.
  	bytesPerPage := slotsPerPage * BytesPerWord.
  	numPages := stackSlots // (slotsPerPage + (structStackPageSize / BytesPerWord)).
  
  	"Because stack pages grow down baseAddress is at the top of a stack page and so to avoid
  	 subtracting BytesPerWord from baseAddress and lastAddress in the init loop below we simply
  	 push the stackPage array up one word to avoid the overlap.  This word is extraStackBytes."
  	pageStructBase := theStackPages + (numPages * bytesPerPage) + BytesPerWord.
  	pages := self cCode: '(StackPage *)pageStructBase'
  				  inSmalltalk:
  						[pageStructBase class.
  						 (1 to: numPages) collect: [:i| InterpreterStackPage new]].
  
  	"Simulation only.  Since addresses are negative the offset is positive.  To make all
  	 stack addresses negative we make the offset a page more than it needs to be so the
  	 address of the last slot in memory (the highest address in the stack, or its start) is
  		- pageByteSize
  	 and the address of the first slot (the lowest address, or its end) is
  		- pageByteSize * (numPages + 1)"
  	self cCode: '' inSmalltalk: [indexOffset := (numPages + 1) * slotsPerPage].
  	"make sure there's enough headroom"
  	self assert: interpreter stackPageByteSize - interpreter stackLimitBytes - interpreter stackLimitOffset
  				>= interpreter stackPageHeadroom.
  	0 to: numPages - 1 do:
  		[:index|
  		 page := self stackPageAt: index.
  		 page
  			lastAddress: (self cCode: '(char *)theStackPages + (index * GIV(bytesPerPage))'
  							inSmalltalk: [(index * slotsPerPage - indexOffset) * BytesPerWord]);
  			baseAddress: (page lastAddress + bytesPerPage);
  			stackLimit: page baseAddress - interpreter stackLimitBytes;
  			realStackLimit: page stackLimit;
  			baseFP: 0;
  			nextPage: (self stackPageAt: (index = (numPages - 1) ifTrue: [0] ifFalse: [index + 1]));
  			prevPage: (self stackPageAt: (index = 0 ifTrue: [numPages - 1] ifFalse: [index - 1]))].
  	self cCode: ''
  		inSmalltalk:
  			[| lowestAddress highestAddress |
  			lowestAddress := (pages at: 1) lastAddress + BytesPerWord.
  			highestAddress := (pages at: numPages) baseAddress.
  			"see InterpreterStackPages>>longAt:"
  			self assert: lowestAddress // BytesPerWord + indexOffset = 1.
  			self assert: highestAddress // BytesPerWord + indexOffset = (numPages * slotsPerPage)].
  
  	"The overflow limit is the amount of stack to retain when moving frames from an overflowing
  	 stack to reduce thrashing.  See stackOverflowOrEvent:mayContextSwitch:"
  	page := self stackPageAt: 0.
  	overflowLimit := page baseAddress - page realStackLimit * 3 // 5.
  
  	0 to: numPages - 1 do:
  		[:index|
  		 page := self stackPageAt: index.
  		 self assert: (self pageIndexFor: page baseAddress) == index.
  		 self assert: (self pageIndexFor: page baseAddress - (slotsPerPage - 1 * BytesPerWord)) == index.
  		 self assert: (self stackPageFor: page baseAddress) == page.
  		 self assert: (self stackPageFor: page stackLimit) == page.
  		 self cCode: ''
  			inSmalltalk:
  				[| memIndex |
  				 memIndex := index * slotsPerPage + 1. "this is memIndex in the block above"
  				 self assert: (self memIndexFor: (self oopForPointer: page baseAddress))
  							== (memIndex + slotsPerPage - 1).
  				 index < (numPages - 1) ifTrue:
  					[self assert: (self stackPageFor: page baseAddress + BytesPerWord) == (self stackPageAt: index + 1)]].
+ 		interpreter initializePageTraceToInvalid: page].
- 		self assert: (page trace: -1) ~= 0 "for assert checking of the page tracing flags. -1 == invalid state"].
  
  	mostRecentlyUsedPage := self stackPageAt: 0.
  	page := mostRecentlyUsedPage.
  	count := 0.
  	[| theIndex |
  	 count := count + 1.
  	 theIndex := self pageIndexFor: page baseAddress.
  	 self assert: (self stackPageAt: theIndex) == page.
  	 self assert: (self pageIndexFor: page baseAddress) == theIndex.
  	 self assert: (self pageIndexFor: page stackLimit) == theIndex.
  	 self assert: (self pageIndexFor: page lastAddress + BytesPerWord) == theIndex.
  	 (page := page nextPage) ~= mostRecentlyUsedPage] whileTrue.
  	self assert: count == numPages.
  	self assert: self pageListIsWellFormed!

Item was removed:
- ----- Method: Spur32BitMMLESimulator>>bestFitCompact (in category 'compaction') -----
- bestFitCompact
- 	coInterpreter transcript nextPutAll: 'compacting...'; flush.
- 	^super bestFitCompact!

Item was added:
+ ----- Method: Spur32BitMMLESimulator>>exactFitCompact (in category 'compaction') -----
+ exactFitCompact
+ 	coInterpreter transcript nextPutAll: 'ef compacting...'; flush.
+ 	^super exactFitCompact!

Item was added:
+ ----- Method: Spur32BitMMLESimulator>>firstFitCompact (in category 'compaction') -----
+ firstFitCompact
+ 	coInterpreter transcript nextPutAll: 'ff compacting...'; flush.
+ 	^super firstFitCompact!

Item was changed:
  ----- Method: Spur32BitMMLESimulator>>longAt:put: (in category 'memory access') -----
  longAt: byteAddress put: a32BitValue
  	"Note: Adjusted for Smalltalk's 1-based array indexing."
  	"(byteAddress = 16r1614CB8 and: [a32BitValue = 16rA000035]) ifTrue:
  		[self halt]."
+ 	"(byteAddress between: 16r33FBB8 and: 16r33FBCF) ifTrue:
+ 		[self halt]."
  	byteAddress \\ 4 ~= 0 ifTrue: [self unalignedAccessError].
  	^memory at: byteAddress // 4 + 1 put: a32BitValue!

Item was changed:
  ----- Method: Spur32BitMemoryManager>>initFreeChunkWithBytes:at: (in category 'free space') -----
  initFreeChunkWithBytes: numBytes at: address
  	<var: #numBytes type: #usqLong>
  	| numSlots |
  	"must have room for a header (single or double) plus the next free pointer"
  	self assert: (numBytes \\ self allocationUnit = 0
  				 and: [numBytes >= (self baseHeaderSize + self wordSize)]).
  	self flag: #endianness.
  	"double header"
+ 	numBytes >= ((self numSlotsMask << self shiftForWord) + self baseHeaderSize + self baseHeaderSize) ifTrue:
- 	numBytes >= (self numSlotsMask << self shiftForWord) ifTrue:
  		[numSlots := numBytes - self baseHeaderSize - self baseHeaderSize >> self shiftForWord.
+ 		 self assert: numSlots >= self numSlotsMask.
  		 self longAt: address put: numSlots;
  			longAt: address + 4 put: self numSlotsMask << self numSlotsHalfShift;
  			longAt: address + 8 put: 0; "0's classIndex; 0 = classIndex of free chunks"
  			longAt: address + 12 put: self numSlotsMask << self numSlotsHalfShift.
  		^address + 8].
  	"single header"
  	numSlots := numBytes - self baseHeaderSize >> self shiftForWord.
+ 	self assert: numSlots < self numSlotsMask.
  	self longAt: address put: 0; "0's classIndex; 0 = classIndex of free chunks"
  		longAt: address + 4 put: numSlots << self numSlotsHalfShift.
  	^address!

Item was changed:
  ----- Method: Spur64BitMemoryManager>>initFreeChunkWithBytes:at: (in category 'free space') -----
  initFreeChunkWithBytes: numBytes at: address
  	<var: #numBytes type: #usqLong>
  	| numSlots |
  	"must have room for a header (single or double) plus the next free pointer"
  	self assert: (numBytes \\ self allocationUnit = 0
  				 and: [numBytes >= (self baseHeaderSize + self wordSize)]).
  	"double header"
+ 	numBytes >= ((self numSlotsMask << self shiftForWord) + self baseHeaderSize + self baseHeaderSize) ifTrue:
- 	numBytes >= (self numSlotsMask << self shiftForWord) ifTrue:
  		[numSlots := numBytes - self baseHeaderSize - self baseHeaderSize >> self shiftForWord.
+ 		 self assert: numSlots >= self numSlotsMask.
  		 self longAt: address put: self numSlotsMask << self numSlotsFullShift + numSlots;
  			longAt: address + 8 put: self numSlotsMask << self numSlotsFullShift. "0's classIndex; 0 = classIndex of free chunks"
  		^address + 8].
  	"single header"
  	numSlots := numBytes - self baseHeaderSize >> self shiftForWord.
+ 	self assert: numSlots < self numSlotsMask.
  	self longAt: address put: numSlots << self numSlotsFullShift. "0's classIndex; 0 = classIndex of free chunks"
  	^address!

Item was added:
+ ----- Method: SpurCircularBuffer>>usedSize (in category 'accessing') -----
+ usedSize
+ 	^last < start
+ 		ifTrue: [0]
+ 		ifFalse:
+ 			[last >= first
+ 				ifTrue: [last - first / manager wordSize + 1]
+ 				ifFalse: [limit - start - (first - last) / manager wordSize - 1]]!

Item was changed:
  ----- Method: SpurMemoryManager>>allocateOldSpaceChunkOfBytes:suchThat: (in category 'free space') -----
  allocateOldSpaceChunkOfBytes: chunkBytes suchThat: acceptanceBlock
  	"Answer a chunk of oldSpace from the free lists that satisfies acceptanceBlock,
  	 if available, otherwise answer nil.  Break up a larger chunk if one of the exact
  	 size cannot be found.  N.B.  the chunk is simply a pointer, it has no valid header.
  	 The caller *must* fill in the header correctly."
+ 	| initialIndex node next prev index child childBytes acceptedChunk acceptedNode |
- 	| initialIndex node next prev index child acceptedChunk acceptedNode |
  	<inline: true> "must inline for acceptanceBlock"
  	"for debugging:" "totalFreeOldSpace := self totalFreeListBytes"
  	totalFreeOldSpace := totalFreeOldSpace - chunkBytes. "be optimistic (& don't wait for the write)"
  	initialIndex := chunkBytes / self allocationUnit.
  	(initialIndex < self numFreeLists and: [1 << initialIndex <= freeListsMask]) ifTrue:
  		[(freeListsMask anyMask: 1 << initialIndex) ifTrue:
  			[(node := freeLists at: initialIndex) = 0
  				ifTrue: [freeListsMask := freeListsMask - (1 << initialIndex)]
  				ifFalse:
  					[prev := 0.
  					 [node ~= 0] whileTrue:
  						[self assert: node = (self startOfObject: node).
  						 self assert: (self isValidFreeObject: node).
  						 next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
  						 (acceptanceBlock value: node) ifTrue:
  							[prev = 0
  								ifTrue: [freeLists at: initialIndex put: next]
  								ifFalse: [self storePointer: self freeChunkNextIndex ofFreeChunk: prev withValue: next].
  							 ^node].
  						 prev := node.
  						 node := next]]].
  		 "first search for free chunks of a multiple of chunkBytes in size"
  		 index := initialIndex.
  		 [(index := index + initialIndex) < self numFreeLists
  		  and: [1 << index <= freeListsMask]] whileTrue:
  			[(freeListsMask anyMask: 1 << index) ifTrue:
  				[(node := freeLists at: index) = 0
  					ifTrue: [freeListsMask := freeListsMask - (1 << index)]
  					ifFalse:
  						[prev := 0.
  						 [node ~= 0] whileTrue:
  							[self assert: node = (self startOfObject: node).
  							 self assert: (self isValidFreeObject: node).
  							 next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
  							 (acceptanceBlock value: node) ifTrue:
  								[prev = 0
  									ifTrue: [freeLists at: index put: next]
  									ifFalse: [self storePointer: self freeChunkNextIndex ofFreeChunk: prev withValue: next]. 
  								 self freeChunkWithBytes: index * self allocationUnit - chunkBytes
  									at: (self startOfObject: node) + chunkBytes.
  								 ^node].
  							 prev := node.
  							 node := next]]]].
  		 "now get desperate and use the first that'll fit.
  		  Note that because the minimum free size is 16 bytes (2 * allocationUnit), to
  		  leave room for the forwarding pointer/next free link, we can only break chunks
  		  that are at least 16 bytes larger, hence start at initialIndex + 2."
  		 index := initialIndex + 1.
  		 [(index := index + 1) < self numFreeLists
  		  and: [1 << index <= freeListsMask]] whileTrue:
  			[(freeListsMask anyMask: 1 << index) ifTrue:
  				[(node := freeLists at: index) = 0
  					ifTrue: [freeListsMask := freeListsMask - (1 << index)]
  					ifFalse:
  						[prev := 0.
  						 [node ~= 0] whileTrue:
  							[self assert: node = (self startOfObject: node).
  							 self assert: (self isValidFreeObject: node).
  							 next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
  							 (acceptanceBlock value: node) ifTrue:
  								[prev = 0
  									ifTrue: [freeLists at: index put: next]
  									ifFalse: [self storePointer: self freeChunkNextIndex ofFreeChunk: prev withValue: next]. 
  								 self freeChunkWithBytes: index * self allocationUnit - chunkBytes
  									at: (self startOfObject: node) + chunkBytes.
  								 ^node].
  							 prev := node.
  							 node := next]]]]].
  
  	"Large chunk, or no space on small free lists.  Search the large chunk list.
  	 Large chunk list organized as a tree, each node of which is a list of chunks
  	 of the same size. Beneath the node are smaller and larger blocks.
  	 When the search ends parent should hold the smallest chunk at least as
  	 large as chunkBytes, or 0 if none.  acceptedChunk and acceptedNode save
  	 us from having to back-up when the acceptanceBlock filters-out all nodes
  	 of the right size, but there are nodes of the wrong size it does accept."
  	child := freeLists at: 0.
+ 	node := acceptedChunk := acceptedNode := 0.
- 	acceptedChunk := acceptedNode := 0.
  	[child ~= 0] whileTrue:
+ 		[self assert: (self isValidFreeObject: child).
- 		[| childBytes |
- 		 self assert: (self isValidFreeObject: child).
  		 childBytes := self bytesInObject: child.
  		 childBytes = chunkBytes ifTrue: "size match; try to remove from list at node."
  			[node := child.
  			 [prev := node.
  			  node := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
  			  node ~= 0] whileTrue:
  				[(acceptanceBlock value: node) ifTrue:
  					[self assert: (self isValidFreeObject: node).
  					 self storePointer: self freeChunkNextIndex
  						ofFreeChunk: prev
  						withValue: (self fetchPointer: self freeChunkNextIndex ofFreeChunk: node).
  					 ^self startOfObject: node]].
+ 			 (acceptanceBlock value: child) ifTrue:
+ 				[next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: child.
+ 				 next = 0
+ 					ifTrue: "no list; remove the interior node"
+ 						[self unlinkSolitaryFreeTreeNode: child]
+ 					ifFalse: "list; replace node with it"
+ 						[self inFreeTreeReplace: child with: next].
+ 				 ^self startOfObject: child]].
- 			 (acceptanceBlock value: node) ifTrue:
- 				[node := child.
- 				 child := 0]]. "break out of loop to remove interior node"
  		 child ~= 0 ifTrue:
  			["Note that because the minimum free size is 16 bytes (2 * allocationUnit), to
  			  leave room for the forwarding pointer/next free link, we can only break chunks
  			  that are at least 16 bytes larger, hence reject chunks < 2 * allocationUnit larger."
  			childBytes <= (chunkBytes + self allocationUnit)
  				ifTrue: "node too small; walk down the larger size of the tree"
  					[child := self fetchPointer: self freeChunkLargerIndex ofFreeChunk: child]
  				ifFalse:
+ 					[self flag: 'we can do better here; preferentially choosing the lowest node. That would be a form of best-fit since we are trying to compact down'.
+ 					 acceptedNode = 0 ifTrue:
- 					[acceptedNode = 0 ifTrue:
  						[acceptedChunk := child.
  						 "first search the list."
  						 [acceptedChunk := self fetchPointer: self freeChunkNextIndex
  													ofFreeChunk: acceptedChunk.
  						  (acceptedChunk ~= 0 and: [acceptanceBlock value: acceptedChunk]) ifTrue:
  							[acceptedNode := child].
  						  acceptedChunk ~= 0 and: [acceptedNode = 0]] whileTrue.
  						 "nothing on the list; will the node do?  This prefers
  						  acceptable nodes higher up the tree over acceptable
  						  list elements further down, but we haven't got all day..."
  						 (acceptedNode = 0
  						  and: [acceptanceBlock value: child]) ifTrue:
  							[acceptedNode := child]].
  					 child := self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: child]]].
+ 
  	acceptedNode ~= 0 ifTrue:
  		[acceptedChunk ~= 0 ifTrue:
  			[self assert: (self bytesInObject: acceptedChunk) >= (chunkBytes + self allocationUnit).
  			 [next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: acceptedNode.
  			  next ~= acceptedChunk] whileTrue:
  				[acceptedNode := next].
  			 self storePointer: self freeChunkNextIndex
  				ofFreeChunk: acceptedNode
  				withValue: (self fetchPointer: self freeChunkNextIndex ofFreeChunk: acceptedChunk).
  			self freeChunkWithBytes: (self bytesInObject: acceptedChunk) - chunkBytes
  					at: (self startOfObject: acceptedChunk) + chunkBytes.
  			^self startOfObject: acceptedChunk].
  		next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: acceptedNode.
  		next = 0
  			ifTrue: "no list; remove the interior node"
  				[self unlinkSolitaryFreeTreeNode: acceptedNode]
  			ifFalse: "list; replace node with it"
  				[self inFreeTreeReplace: acceptedNode with: next].
  		 self assert: (self bytesInObject: acceptedNode) >= (chunkBytes + self allocationUnit).
  		 self freeChunkWithBytes: (self bytesInObject: acceptedNode) - chunkBytes
  				at: (self startOfObject: acceptedNode) + chunkBytes.
  		^self startOfObject: acceptedNode].
+ 
  	totalFreeOldSpace := totalFreeOldSpace + chunkBytes. "optimism was unfounded"
  	^nil!

Item was changed:
  ----- Method: SpurMemoryManager>>allocateOldSpaceChunkOfExactlyBytes: (in category 'free space') -----
  allocateOldSpaceChunkOfExactlyBytes: chunkBytes
  	"Answer a chunk of oldSpace from the free lists, if one of this size
  	 is available, otherwise answer nil.  N.B.  the chunk is simply a pointer,
  	 it has no valid header.  The caller *must* fill in the header correctly."
+ 	| index node child |
- 	| index node nodeBytes child |
  	"for debugging:" "totalFreeOldSpace := self totalFreeListBytes"
  
  	index := chunkBytes / self allocationUnit.
  	index < self numFreeLists ifTrue:
  		[(freeListsMask anyMask: 1 << index) ifTrue:
  			[(node := freeLists at: index) ~= 0 ifTrue:
  				[self assert: node = (self startOfObject: node).
  				 self assert: (self isValidFreeObject: node).
  				 totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
  				 ^self unlinkFreeChunk: node atIndex: index].
  			 freeListsMask := freeListsMask - (1 << index)].
  		 ^nil].
  
  	"Large chunk.  Search the large chunk list.
  	 Large chunk list organized as a tree, each node of which is a list of
  	 chunks of the same size. Beneath the node are smaller and larger
  	 blocks.  When the search ends parent should hold the first chunk of
  	 the same size as chunkBytes, or 0 if none."
- 	node := 0.
  	child := freeLists at: 0.
  	[child ~= 0] whileTrue:
  		[| childBytes |
  		 self assert: (self isValidFreeObject: child).
  		 childBytes := self bytesInObject: child.
  		 childBytes = chunkBytes
  			ifTrue: "size match; try to remove from list at node."
  				[node := self fetchPointer: self freeChunkNextIndex
  								ofFreeChunk: child.
  				 node ~= 0 ifTrue:
  					[self assert: (self isValidFreeObject: node).
  					 self storePointer: self freeChunkNextIndex
  						ofFreeChunk: child
  						withValue: (self fetchPointer: self freeChunkNextIndex
  										ofFreeChunk: node).
  					 totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
  					 ^self startOfObject: node].
+ 				 "the node's list is empty; unlink the interior node and answer it."
+ 				 self unlinkSolitaryFreeTreeNode: child.
+ 				 totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
+ 				 ^self startOfObject: child]
- 				 node := child.
- 				 nodeBytes := childBytes.
- 				 child := 0] "break out of loop to remove interior node"
  			ifFalse:
+ 				[child := self fetchPointer: (childBytes < chunkBytes
+ 												ifTrue: [self freeChunkLargerIndex]
+ 												ifFalse: [self freeChunkSmallerIndex])
+ 							ofFreeChunk: child]].
+ 	^nil!
- 				[childBytes < chunkBytes
- 					ifTrue: "walk down the tree"
- 						[child := self fetchPointer: self freeChunkLargerIndex ofFreeChunk: child]
- 					ifFalse:
- 						[nodeBytes := childBytes.
- 						 child := self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: child]]].
- 	"if no chunk, there was no exact fit"
- 	node = 0 ifTrue:
- 		[^nil].
- 
- 	"self printFreeChunk: parent"
- 	self assert: nodeBytes = chunkBytes.
- 	self assert: (self bytesInObject: node) = chunkBytes.
- 
- 	"can't be a list; would have removed and returned it above."
- 	self assert: (self fetchPointer: self freeChunkNextIndex ofFreeChunk: node) = 0.
- 
- 	"no list; remove the interior node"
- 	self unlinkSolitaryFreeTreeNode: node.
- 	totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
- 	^self startOfObject: node!

Item was changed:
  ----- Method: SpurMemoryManager>>allocateOldSpaceChunkOfExactlyBytes:suchThat: (in category 'free space') -----
  allocateOldSpaceChunkOfExactlyBytes: chunkBytes suchThat: acceptanceBlock
  	"Answer a chunk of oldSpace from the free lists that satisfies acceptanceBlock,
  	 if one of this size is available, otherwise answer nil.  N.B.  the chunk is simply a
  	 pointer, it has no valid header.  The caller *must* fill in the header correctly."
  	| index node next prev child childBytes |
  	<inline: true> "must inline for acceptanceBlock"
  	"for debugging:" "totalFreeOldSpace := self totalFreeListBytes"
  
  	index := chunkBytes / self allocationUnit.
  	index < self numFreeLists ifTrue:
  		[(freeListsMask anyMask: 1 << index) ifTrue:
  			[(node := freeLists at: index) = 0
  				ifTrue: [freeListsMask := freeListsMask - (1 << index)]
  				ifFalse:
  					[prev := 0.
  					 [node ~= 0] whileTrue:
  						[self assert: node = (self startOfObject: node).
  						 self assert: (self isValidFreeObject: node).
  						 next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
  						 (acceptanceBlock value: node) ifTrue:
  							[prev = 0
  								ifTrue: [freeLists at: index put: next]
  								ifFalse: [self storePointer: self freeChunkNextIndex ofFreeChunk: prev withValue: next].
  							 totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
  							 ^node].
  						 node := next]]].
  		 ^nil].
  
  	"Large chunk.  Search the large chunk list.
  	 Large chunk list organized as a tree, each node of which is a list of
  	 chunks of the same size. Beneath the node are smaller and larger
  	 blocks.  When the search ends parent should hold the first chunk of
  	 the same size as chunkBytes, or 0 if none."
  	node := 0.
  	child := freeLists at: 0.
  	[child ~= 0] whileTrue:
  		[self assert: (self isValidFreeObject: child).
  		 childBytes := self bytesInObject: child.
  		 childBytes = chunkBytes
  			ifTrue: "size match; try to remove from list at node first."
  				[node := child.
  				 [prev := node.
  				  node := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
  				  node ~= 0] whileTrue:
  					[(acceptanceBlock value: node) ifTrue:
  						[self assert: (self isValidFreeObject: node).
  						 self storePointer: self freeChunkNextIndex
  							ofFreeChunk: prev
  							withValue: (self fetchPointer: self freeChunkNextIndex ofFreeChunk: node).
  						 totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
  						 ^self startOfObject: node]].
+ 				 (acceptanceBlock value: child) ifFalse:
+ 					[^nil]. "node was the right size but unacceptable."
+ 				 next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: child.
+ 				 next = 0
+ 					ifTrue: "no list; remove the interior node"
+ 						[self unlinkSolitaryFreeTreeNode: child]
+ 					ifFalse: "list; replace node with it"
+ 						[self inFreeTreeReplace: child with: next].
+ 				 totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
+ 				 ^self startOfObject: child]
- 				 node := child.
- 				 child := 0] "break out of loop to remove interior node"
  			ifFalse: "no size match; walk down the tree"
  				[child := self fetchPointer: (childBytes < chunkBytes
  												ifTrue: [self freeChunkLargerIndex]
  												ifFalse: [self freeChunkSmallerIndex])
  							ofFreeChunk: child]].
+ 	^nil!
- 	"if no chunk, there was no exact fit"
- 	(node ~= 0 and: [acceptanceBlock value: node]) ifFalse:
- 		[^nil].
- 
- 	"self printFreeChunk: parent"
- 	self assert: (self bytesInObject: node) = chunkBytes.
- 
- 	next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
- 	next = 0
- 		ifTrue: "no list; remove the interior node"
- 			[self unlinkSolitaryFreeTreeNode: node]
- 		ifFalse: "list; replace node with it"
- 			[self inFreeTreeReplace: node with: next].
- 	totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
- 	^self startOfObject: node!

Item was removed:
- ----- Method: SpurMemoryManager>>bestFitCompact (in category 'compaction') -----
- bestFitCompact
- 	"Compact all of memory using best-fit, assuming free space is sorted
- 	 and that the highest objects are recorded in highestObjects."
- 
- 	<inline: false>
- 	| freePriorToExactFit |
- 	self checkFreeSpace.
- 	freePriorToExactFit := totalFreeOldSpace.
- 	self exactFitCompact.
- 	self checkFreeSpace.
- 	highestObjects isEmpty ifTrue:
- 		[^self]. "either no high objects, or no misfits."
- 	statCompactPassCount := statCompactPassCount + 1.
- 	highestObjects reverseDo:
- 		[:o| | b |
- 		 self assert: ((self isForwarded: o) or: [self isPinned: o]) not.
- 		 b := self bytesInObject: o.
- 		 (self allocateOldSpaceChunkOfBytes: b suchThat: [:f| f < o]) ifNotNil:
- 			[:f| self copyAndForward: o withBytes: b toFreeChunk: f]].
- 	self checkFreeSpace.
- 	self flag: 'this should perhaps be a loop, recharging highestObjects as per exactFitCompact, but for now we assume the number of misfits not in highestObjects is very small'.
- 	self allOldSpaceObjectsFrom: firstFreeChunk
- 		do: [:o| | b |
- 			((self isForwarded: o)
- 			 or: [self isPinned: o]) ifFalse:
- 				[b := self bytesInObject: o.
- 				 (self allocateOldSpaceChunkOfBytes: b suchThat: [:f| f < o]) ifNotNil:
- 					[:f| self copyAndForward: o withBytes: b toFreeChunk: f]]].
- 	self checkFreeSpace.
- 	self touch: freePriorToExactFit!

Item was added:
+ ----- Method: SpurMemoryManager>>compact (in category 'compaction') -----
+ compact
+ 	"We'd like to use exact fit followed by best fit.  But in practice exact fit doesn't move
+ 	 many objects (it does in artificial contexts such as the image bootstrap, but not in
+ 	 use of a real system).  Best fit is potentially worthwhile but takes effort to code.
+ 	 For now we just use first-fit."
+ 	^self firstFitCompact!

Item was changed:
  ----- Method: SpurMemoryManager>>copyAndForward:withBytes:toFreeChunk: (in category 'compaction') -----
  copyAndForward: objOop withBytes: bytes toFreeChunk: freeChunk
+ 	"Copy and forward objOop to freeChunk, the inner operation in compaction."
- 	"Copy and forward objOop to freeChunk, the inner operation in
- 	 exact and best fit compact."
  
  	<inline: true>
  	| startOfObj freeObj |
  	startOfObj := self startOfObject: objOop.
  	self mem: freeChunk asVoidPointer cp: startOfObj asVoidPointer y: bytes.
  	freeObj := freeChunk + (objOop - startOfObj).
  	"leave it to followRememberedForwarders to remember..."
  	"(self isRemembered: objOop) ifTrue:
  		[scavenger remember: freeObj]."
  	self forward: objOop to: freeObj!

Item was added:
+ ----- Method: SpurMemoryManager>>countMarkedAndUnmarkdObjects: (in category 'debug support') -----
+ countMarkedAndUnmarkdObjects: printFlags
+ 	"Print the count of marked and unmarked objects.
+ 	 In addition, if 1 is set in printFlags, short-print marked objects,
+ 	 and/or if 2 is set, short-print unmarked objects."
+ 	<api>
+ 	| nm nu |
+ 	nm := nu := 0.
+ 	self allObjectsDo:
+ 		[:o|
+ 		(self isMarked: o)
+ 			ifTrue:
+ 				[nm := nm + 1.
+ 				 (printFlags anyMask: 1) ifTrue:
+ 					[coInterpreter shortPrintOop: o]]
+ 			ifFalse:
+ 				[nu := nu + 1.
+ 				 (printFlags anyMask: 2) ifTrue:
+ 					[coInterpreter shortPrintOop: o]]].
+ 	self print: 'n marked: '; print: nm; cr.
+ 	self print: 'n unmarked: '; print: nu; cr!
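
A usage sketch; objectMemory stands for the SpurMemoryManager instance
(e.g. when poking around in the simulator), and the selector spelling
follows the method as committed.  The printFlags bits are as described in
the method comment:

	objectMemory countMarkedAndUnmarkdObjects: 0.	"counts only"
	objectMemory countMarkedAndUnmarkdObjects: 3.	"counts, short-printing every marked and unmarked object"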

Item was changed:
  ----- Method: SpurMemoryManager>>exactFitCompact (in category 'compaction') -----
  exactFitCompact
  	"Compact all of memory above firstFreeChunk using exact-fit, assuming free
+ 	 space is sorted and that as many of the highest objects as will fit are
+ 	 recorded in highestObjects.  Don't move pinned objects.
- 	 space is sorted and that the highest objects are recorded in highestObjects.
  	 Note that we don't actually move; we merely copy and forward.  Eliminating
  	 forwarders will be done in a final pass.  Leave the objects that don't fit
  	 exactly (the misfits), and hence aren't moved, in highestObjects."
  
  	<inline: false>
+ 	| misfits first nfits nmiss nHighest nMisses savedLimit |
- 	| misfits first |
  	<var: #misfits type: #usqInt>
+ 	self checkFreeSpace.
  	totalFreeOldSpace = 0 ifTrue: [^self].
+ 	highestObjects isEmpty ifTrue:
+ 		[^self].
+ 	nfits := nmiss  := 0.
  	misfits := highestObjects last + self wordSize.
  	[statCompactPassCount := statCompactPassCount + 1.
  	 highestObjects from: misfits - self wordSize reverseDo:
  		[:o| | b |
+ 		 self assert: o > firstFreeChunk.
- 		o < firstFreeChunk ifTrue:
- 			[misfits = (highestObjects last + self wordSize)
- 				ifTrue: [highestObjects resetAsEmpty]
- 				ifFalse: [highestObjects first: misfits].
- 			 ^self].
  		 ((self isForwarded: o) or: [self isPinned: o]) ifFalse:
  			[b := self bytesInObject: o.
  			 (self allocateOldSpaceChunkOfExactlyBytes: b suchThat: [:f| f < o])
  				ifNil:
+ 					[nmiss := nmiss + 1.
+ 					 misfits := misfits - self wordSize.
- 					[misfits := misfits - self wordSize.
  					 misfits < highestObjects start ifTrue:
  						[misfits := highestObjects limit - self wordSize].
  					 self longAt: misfits put: o]
  				ifNotNil:
+ 					[:f|
+ 					 nfits := nfits + 1.
+ 					 self copyAndForward: o withBytes: b toFreeChunk: f]]].
+ 	 self checkFreeSpace.
- 					[:f| self copyAndForward: o withBytes: b toFreeChunk: f]]].
  	 "now highestObjects contains only misfits, if any, from misfits to last.
  	  set first to first failure and refill buffer. next cycle will add more misfits.
  	  give up on exact-fit when half of the highest objects fail to fit."
+ 	first := self longAt: highestObjects first.
+ 	 self assert: first > firstFreeChunk.
+ 	 nHighest := highestObjects usedSize.
+ 	 highestObjects first: misfits.
+ 	 nMisses := highestObjects usedSize.
+ 	 nMisses > (nHighest // 2) ifTrue:
+ 		[coInterpreter print: 'exactFitCompact fits: '; printNum: nfits; print: ' misfits: '; printNum: nmiss; cr.
+ 		 ^self].
+ 	 self findFirstFreeChunkPostCompactionPass.
+ 	 savedLimit := self moveMisfitsToTopOfHighestObjects: misfits.
+ 	 self fillHighestObjectsWithMovableObjectsFrom: firstFreeChunk upTo: first.
+ 	 misfits := self moveMisfitsInHighestObjectsBack: savedLimit.
+ 	 highestObjects usedSize > 0] whileTrue!
- 	 first := self longAt: highestObjects first.
- 	 first > firstFreeChunk ifTrue:
- 		[| highestObjBytes failureBytes savedLimit |
- 		 highestObjBytes := highestObjects limit - highestObjects start.
- 		 failureBytes := highestObjects last >= misfits
- 							ifTrue: [highestObjects last - misfits]
- 							ifFalse: [highestObjBytes - (misfits - highestObjects last)].
- 		 failureBytes >= (highestObjBytes // 2) ifTrue:
- 			[highestObjects first: misfits.
- 			 ^self].
- 		 savedLimit := self moveMisfitsToTopOfHighestObjects: misfits.
- 		 self fillHighestObjectsWithMovableObjectsFrom: firstFreeChunk upTo: first.
- 		 misfits := self moveMisfitsInHighestObjectsBack: savedLimit]] repeat!

Item was added:
+ ----- Method: SpurMemoryManager>>findFirstFreeChunkAfter: (in category 'compaction') -----
+ findFirstFreeChunkAfter: start
+ 	self allOldSpaceObjectsFrom: start do:
+ 		[:o|
+ 		(self isFreeObject: o) ifTrue:
+ 			[^o]].
+ 	^endOfMemory!

Item was added:
+ ----- Method: SpurMemoryManager>>findFirstFreeChunkPostCompactionPass (in category 'compaction') -----
+ findFirstFreeChunkPostCompactionPass
+ 	(self isFreeObject: firstFreeChunk) ifFalse:
+ 		[firstFreeChunk := self findFirstFreeChunkAfter: firstFreeChunk]!

Item was added:
+ ----- Method: SpurMemoryManager>>firstFitCompact (in category 'compaction') -----
+ firstFitCompact
+ 	"Compact all of memory above firstFreeChunk using first-fit, assuming free
+ 	 space is sorted and that as many of the highest objects as will fit are
+ 	 recorded in highestObjects.  Don't move pinned objects.
+ 	 Note that we don't actually move; we merely copy and forward.  Eliminating
+ 	 forwarders will be done in a final pass."
+ 
+ 	<inline: false>
+ 	| first nhits nmisses |
+ 	self checkFreeSpace.
+ 	totalFreeOldSpace = 0 ifTrue: [^self].
+ 	highestObjects isEmpty ifTrue:
+ 		[^self].
+ 	nhits := nmisses  := 0.
+ 	[statCompactPassCount := statCompactPassCount + 1.
+ 	 highestObjects reverseDo:
+ 		[:o| | b |
+ 		 o <= firstFreeChunk ifTrue:
+ 			[coInterpreter print: 'firstFitCompact fits: '; printNum: nhits; print: ' misfits: '; printNum: nmisses; cr.
+ 			 ^self].
+ 		 ((self isForwarded: o) or: [self isPinned: o]) ifFalse:
+ 			[b := self bytesInObject: o.
+ 			 (self allocateOldSpaceChunkOfBytes: b suchThat: [:f| f < o])
+ 				ifNil:
+ 					[nmisses := nmisses + 1]
+ 				ifNotNil:
+ 					[:f|
+ 					 nhits := nhits + 1.
+ 					 self copyAndForward: o withBytes: b toFreeChunk: f]]].
+ 	 self checkFreeSpace.
+ 	 first := self longAt: highestObjects first.
+ 	 self assert: first > firstFreeChunk.
+ 	 self findFirstFreeChunkPostCompactionPass.
+ 	 self fillHighestObjectsWithMovableObjectsFrom: firstFreeChunk upTo: first.
+ 	 highestObjects usedSize > 0] whileTrue.
+ 
+ 	coInterpreter print: 'firstFitCompact fits: '; printNum: nhits; print: ' misfits: '; printNum: nmisses; cr!

Item was changed:
  ----- Method: SpurMemoryManager>>freeListsObj (in category 'free space') -----
  freeListsObj
+ 	self assert: (self firstIndexableField: (self objectAfter: trueObj)) = freeLists.
  	^self objectAfter: trueObj!

Item was added:
+ ----- Method: SpurMemoryManager>>freeSpaceCharacterisation (in category 'debug support') -----
+ freeSpaceCharacterisation
+ 	<doNotGenerate>
+ 	| n s |
+ 	n := 0.
+ 	s := Bag new.
+ 	self allFreeObjectsDo:
+ 		[:f| n := n + 1. s add: (self bytesInObject: f)].
+ 	^{ n. s sortedCounts. s sortedElements }!
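
A sketch of reading the answer; manager stands for the memory manager
instance in the simulator (the method is <doNotGenerate>, so simulation only):

	| characterisation |
	characterisation := manager freeSpaceCharacterisation.
	characterisation first.	"number of free chunks"
	characterisation second.	"sortedCounts: (count -> chunkByteSize) associations, most frequent first"
	characterisation third.	"sortedElements: (chunkByteSize -> count) associations in size order"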

Item was changed:
  ----- Method: SpurMemoryManager>>freeUnmarkedObjectsAndSortAndCoalesceFreeSpace (in category 'gc - global') -----
  freeUnmarkedObjectsAndSortAndCoalesceFreeSpace
  	"Sweep all of old space, freeing unmarked objects, coalescing free chunks, and sorting free space.
  
  	 Small free chunks are sorted in address order on each small list head.  Large free chunks
  	 are sorted on the sortedFreeChunks list.  Record as many of the highest objects as there
  	 is room for in highestObjects, a circular buffer, for the use of exactFitCompact.  Use
  	 unused eden space for highestObjects.  If highestObjects does not wrap, store 0
  	 at highestObjects last.  Record the lowest free object in firstFreeChunk.  Let the
  	 segmentManager mark which segments contain pinned objects via notePinned:."
  
  	| lastLargeFree lastHighest highestObjectsWraps sortedFreeChunks |
  	<var: #lastHighest type: #usqInt>
  	self checkFreeSpace.
  	scavenger forgetUnmarkedRememberedObjects.
  	segmentManager prepareForGlobalSweep."for notePinned:"
  	"for sorting free space throw away the list heads, rebuilding them for small free chunks below."
  	self resetFreeListHeads.
  	highestObjects initializeStart: freeStart limit: scavenger eden limit.
  	lastHighest := highestObjects last "a.k.a. freeStart - wordSize".
  	highestObjectsWraps := 0.
  	self assert: highestObjects limit - highestObjects start // self wordSize >= 1024.
  	firstFreeChunk := sortedFreeChunks := lastLargeFree := 0.
  	"Note that if we were truly striving for performance we could split the scan into
  	 two phases, one up to the first free object and one after, which would remove
  	 the need to test firstFreeChunk when filling highestObjects."
  	self allOldSpaceEntitiesForCoalescingDo:
  		[:o|
  		(self isMarked: o)
  			ifTrue: "forwarders should have been followed in markAndTrace:"
  				[self assert: (self isForwarded: o) not.
  				 self setIsMarkedOf: o to: false.
  				 (self isPinned: o) ifTrue:
  					[segmentManager notePinned: o].
  				 firstFreeChunk ~= 0 ifTrue:
  					[false "conceptually...: "
  						ifTrue: [highestObjects addLast: o]
  						ifFalse: "but we inline so we can use the local lastHighest"
  							[(lastHighest := lastHighest + self wordSize) >= highestObjects limit ifTrue:
  								[highestObjectsWraps := highestObjectsWraps + 1].
  							 self longAt: lastHighest put: o]]]
  			ifFalse: "unmarked; two cases, an unreachable object or a free chunk."
  				[| here next |
  				 self assert: (self isRemembered: o) not. "scavenger should have cleared this above"
  				 here := o.
  				 next := self objectAfter: here limit: endOfMemory.
  				 (self isMarked: next) ifFalse: "coalescing; rare case"
  					[self assert: (self isRemembered: o) not.
  					 [statCoalesces := statCoalesces + 1.
  					  here := self coalesce: here and: next.
  					  next := self objectAfter: here limit: endOfMemory.
  					  next = endOfMemory or: [self isMarked: next]] whileFalse].
+ 				 self setFree: here.
  				 firstFreeChunk = 0 ifTrue:
  					[firstFreeChunk := here].
  				 (self isLargeFreeObject: here)
  					ifTrue:
  						[lastLargeFree = 0
  							ifTrue: [sortedFreeChunks := here]
  							ifFalse:
  								[self setFree: here.
  								 self storePointer: self freeChunkNextAddressIndex ofFreeChunk: lastLargeFree withValue: here].
  						 lastLargeFree := here]
  					ifFalse:
  						[self freeSmallObject: here]]].
  	highestObjects last: lastHighest.
  	highestObjectsWraps ~= 0 ifTrue:
  		[highestObjects first: (lastHighest + self wordSize >= highestObjects limit
  								ifTrue: [highestObjects start]
  								ifFalse: [lastHighest + self wordSize])].
  	lastLargeFree ~= 0 ifTrue:
  		[self storePointer: self freeChunkNextAddressIndex ofFreeChunk: lastLargeFree withValue: 0].
  	totalFreeOldSpace := self reverseSmallListHeads.
  	totalFreeOldSpace := totalFreeOldSpace + (self rebuildFreeTreeFrom: sortedFreeChunks).
  	self checkFreeSpace.
  	self touch: highestObjectsWraps!

Item was changed:
  ----- Method: SpurMemoryManager>>globalGarbageCollect (in category 'gc - global') -----
  globalGarbageCollect
  	self runLeakCheckerForFullGC: true.
  	self assert: self validObjStacks.
  	self markObjects.
  	self nilUnmarkedWeaklingSlots.
  	self freeUnmarkedObjectsAndSortAndCoalesceFreeSpace.
+ 	self runLeakCheckerForFullGC: true.
+ 	self compact.
- 	self bestFitCompact.
  	self eliminateAndFreeForwarders.
  	self assert: self validObjStacks.
  	self assert: self allObjectsUnmarked.
  	self runLeakCheckerForFullGC: true!

Item was changed:
  ----- Method: SpurMemoryManager>>isEnumerableObject: (in category 'object enumeration') -----
  isEnumerableObject: objOop
  	"Answer if objOop should be included in an allObjects...Do: enumeration.
  	 Non-objects should be excluded; these are bridges and free chunks."
+ 	| classIndex |
+ 	<inline: true>
+ 	classIndex := self classIndexOf: objOop.
+ 	self assert: (classIndex >= 0 and: [classIndex < (numClassTablePages * self classTablePageSize)]).
+ 	^classIndex >= self isForwardedObjectClassIndexPun!
- 	^(self classIndexOf: objOop) >= self isForwardedObjectClassIndexPun!

Item was changed:
  ----- Method: SpurMemoryManager>>markAccessibleObjects (in category 'gc - global') -----
  markAccessibleObjects
  	self assert: self validClassTableRootPages.
  	self assert: segmentManager allBridgesMarked.
- 	marking := true.
  	self cCode: [] "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)"
  		inSmalltalk: [MarkStackRecord ifNotNil: [MarkStackRecord resetTo: 1]].
+ 
+ 	marking := true.
+ 	"This must come first to enable stack page reclamation.  It clears
+ 	  the trace flags on stack pages and so must precede any marking.
+ 	  Otherwise it will clear the trace flags of reached pages."
+ 	coInterpreter initStackPageGC.
  	self markAndTraceHiddenRoots.
  	self assert: self validClassTableRootPages.
  	coInterpreter markAndTraceInterpreterOops: true.
- 	self markAndTrace: self freeListsObj.
- 	self markAndTrace: self specialObjectsOop.
  	self assert: self validObjStacks.
  	self markWeaklingsAndMarkAndFireEphemerons.
  	self assert: self validObjStacks.
  	marking := false!

Item was changed:
  ----- Method: SpurMemoryManager>>markAndTraceClassOf: (in category 'gc - global') -----
  markAndTraceClassOf: objOop
  	"Ensure the class of the argument is marked, pushing it on the markStack if not already marked.
+ 	 Also set the relevant bit in the classTableBitmap so that duplicate entries can be eliminated.
+ 	 Note that this is recursive, but the metaclass chain should terminate quickly."
- 	 Also set the relevant bit in the classTableBitmap so that duplicate entries can be eliminated."
  	<inline: false>
  	| classIndex classObj |
  	classIndex := self classIndexOf: objOop.
  	self inClassTableBitmapSet: classIndex.
  	classObj := self classAtIndex: classIndex.
  	(self isMarked: classObj) ifFalse:
  		[self setIsMarkedOf: classObj to: true.
+ 		 self markAndTraceClassOf: classObj.
  		 self push: classObj onObjStack: markStack]!
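
A worked example of why the recursion terminates quickly, using the standard
Smalltalk class structure: marking an instance of Point marks Point, which
marks Point class, which marks Metaclass, which marks Metaclass class; the
class of Metaclass class is Metaclass, which is already marked, so the
recursion stops there.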

Item was changed:
  ----- Method: SpurMemoryManager>>markAndTraceHiddenRoots (in category 'gc - global') -----
  markAndTraceHiddenRoots
  	"The hidden roots hold both the class table pages and the obj stacks,
  	 and hence need special treatment.
  	 The obj stacks must be marked specially; their pages must be marked,
  	 but only the contents of the ephemeronQueue should be marked.
  	 If a class table page is weak we can mark and trace the hiddenRoots,
  	 which will not trace through class table pages because they are weak.
  	 But if class table pages are strong, we must mark the pages and *not*
  	 trace them so that only classes reachable from the true roots will be
  	 marked, and unreachable classes will be left unmarked."
  
  	self markAndTraceObjStack: markStack andContents: false.
  	self markAndTraceObjStack: weaklingStack andContents: false.
  	self markAndTraceObjStack: ephemeronQueue andContents: true.
  
+ 	self setIsMarkedOf: self freeListsObj to: true.
+ 
  	(self isWeakNonImm: classTableFirstPage) ifTrue:
  		[^self markAndTrace: hiddenRootsObj].
  
+ 	self setIsMarkedOf: hiddenRootsObj to: true.
  	self markAndTrace: classTableFirstPage.
  	1 to: numClassTablePages - 1 do:
  		[:i| self setIsMarkedOf: (self fetchPointer: i ofObject: hiddenRootsObj)
  				to: true]!

Item was changed:
  InterpreterPrimitives subclass: #StackInterpreter
  	instanceVariableNames: 'currentBytecode bytecodeSetSelector localFP localIP localSP stackLimit stackPage stackPages method instructionPointer stackPointer framePointer localReturnValue extA extB primitiveFunctionPointer methodCache atCache lkupClassTag lkupClass methodDictLinearSearchLimit highestRunnableProcessPriority nextWakeupUsecs nextPollUsecs inIOProcessEvents interruptKeycode interruptPending savedWindowSize imageHeaderFlags fullScreenFlag deferDisplayUpdates pendingFinalizationSignals extraVMMemory interpreterProxy showSurfaceFn primitiveTable externalPrimitiveTable externalPrimitiveTableFirstFreeIndex overflowedPage extraFramesToMoveOnOverflow globalSessionID jmpBuf jmpDepth suspendedCallbacks suspendedMethods numStackPages desiredNumStackPages desiredEdenBytes classNameIndex thisClassIndex metaclassNumSlots interruptCheckChain suppressHeartbeatFlag breakSelector breakSelectorLength longRunningPrimitiveCheckMethod longRunningPrimitiveCheckSemaphore longRunningPrimitiveStartUsecs longRunningPrimitiveStopUsecs longRunningPrimitiveGCUsecs longRunningPrimitiveCheckSequenceNumber longRunningPrimitiveSignalUndelivered tempOop statForceInterruptCheck statStackOverflow statStackPageDivorce statCheckForEvents statProcessSwitch statIOProcessEvents theUnknownShort the2ndUnknownShort imageFloatsBigEndian maxExtSemTabSizeSet lastMethodCacheProbeWrite statPendingFinalizationSignals gcSemaphoreIndex classByteArrayCompactIndex'
+ 	classVariableNames: 'AtCacheEntries AtCacheFixedFields AtCacheFmt AtCacheMask AtCacheOop AtCacheSize AtCacheTotalSize AtPutBase BytecodeTable CacheProbeMax DirBadPath DirEntryFound DirNoMoreEntries DumpStackOnLowSpace EnclosingMixinIndex EnclosingObjectIndex FailImbalancedPrimitives HeaderFlagBitPosition MaxExternalPrimitiveTableSize MaxJumpBuf MaxPrimitiveIndex MaxQuickPrimitiveIndex MixinIndex PrimitiveExternalCallIndex PrimitiveTable StackPageReachedButUntraced StackPageTraceInvalid StackPageTraced StackPageUnreached'
- 	classVariableNames: 'AtCacheEntries AtCacheFixedFields AtCacheFmt AtCacheMask AtCacheOop AtCacheSize AtCacheTotalSize AtPutBase BytecodeTable CacheProbeMax DirBadPath DirEntryFound DirNoMoreEntries DumpStackOnLowSpace EnclosingMixinIndex EnclosingObjectIndex FailImbalancedPrimitives HeaderFlagBitPosition MaxExternalPrimitiveTableSize MaxJumpBuf MaxPrimitiveIndex MaxQuickPrimitiveIndex MixinIndex PrimitiveExternalCallIndex PrimitiveTable'
  	poolDictionaries: 'VMBasicConstants VMMethodCacheConstants VMObjectIndices VMSpurObjectRepresentationConstants VMSqueakClassIndices VMSqueakV3BytecodeConstants VMSqueakV3ObjectRepresentationConstants VMStackFrameOffsets'
  	category: 'VMMaker-Interpreter'!
  
  !StackInterpreter commentStamp: 'eem 9/11/2013 18:30' prior: 0!
  This class is a complete implementation of the Smalltalk-80 virtual machine, derived originally from the Blue Book specification but quite different in some areas.  This VM supports Closures but *not* old-style BlockContexts.
  
  It has been modernized with 32-bit pointers, better management of Contexts (see next item), and attention to variable use that allows the CCodeGenerator (qv) to keep, eg, the instruction pointer and stack pointer in registers as well as keeping most simple variables in a global array that seems to improve performance for most platforms.
  
  The VM does not use Contexts directly.  Instead Contexts serve as proxies for a more conventional stack format that is invisible to the image.  There is considerable explanation at http://www.mirandabanda.org/cogblog/2009/01/14/under-cover-contexts-and-the-big-frame-up.  The VM maintains a fixed-size stack zone divided into pages, each page being capable of holding several method/block activations.  A send establishes a new frame in the current stack page, a return returns to the previous frame.  This eliminates allocation/deallocation of contexts and the moving of receiver and arguments from caller to callee on each send/return.  Contexts are created lazily when an activation needs a context (creating a block, explicit use of thisContext, access to sender when sender is a frame, or linking of stack pages together).  Contexts are either conventional and heap-resident ("single") or "married" and serve as proxies for their corresponding frame or "widowed", meaning that their spouse frame has been returned from (died).  A married context is specially marked (more details in the code) and refers to its frame.  Likewise a married frame is specially marked and refers to its context.
  
  In addition to SmallInteger arithmetic and Floats, the VM supports logic on 32-bit PositiveLargeIntegers, thus allowing it to simulate itself much more effectively than would otherwise be the case.
  
  NOTE:  Here follows a list of things to be borne in mind when working on this code, or when making changes for the future.
  
  1.  There are a number of things that should be done the next time we plan to release a completely incompatible image format.  These include unifying the instanceSize field of the class format word -- see instantiateClass:indexableSize:, and unifying the bits of the method primitive index (if we decide we need more than 512, after all) -- see primitiveIndexOf:.  Also, contexts should be given a special format code (see next item).
  
  2.  There are several fast checks for contexts (see isContextHeader: and isMethodContextHeader:) which will fail if the compact class indices of BlockContext or MethodContext change.  These checks rely on compact class indices rather than class oops because the oops may change during a compaction, while they are being adjusted.  It's important to be aware of this when writing a new image using the SystemTracer.  A better solution would be to reserve one of the format codes for Contexts only.  An even better solution is to eliminate compact classes altogether (see 6.).
  
  3.  We have made normal files tolerant of sizes and positions up to 32 bits.  This has not been done for async files, since they are still experimental.  The code in size, at: and at:put: should work with sizes and indices up to 31 bits, although I have not tested it (di 12/98); it might or might not work with 32-bit sizes. [Late news: the support has been extended to 64-bit file sizes].
  
  4.  Note that 0 is used in a couple of places as an impossible oop.  This should be changed to a constant that really is impossible (or perhaps there is code somewhere that guarantees 0 can never be an oop -- if so, that should be noted in this comment).  The places include the method cache and the at cache.
  
  5. Moving to a 2 bit immediate tag and having immediate Characters is a good choice for Unicode and the JIT.  We can still have 31-bit SmallIntegers by allowing two tag patterns for SmallInteger.
  
  6.  If Eliot Miranda's 2-word header scheme is acceptable in terms of footprint (we estimate about a 10% increase in image size, with about 35% reclaimed by a better representation of CompiledMethod - loss of MethodProperties) then the in-line cache for the JIT is simplified, class access is faster, and header access is the same in 32-bit and full 64-bit images.  [Late breaking news: the 2-word header scheme is more compact, by over 2%].  See SpurMemoryManager's class comment.!
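
An editorial aside on point 5 above, not part of the class comment or of this changeset: with a 2-bit immediate tag, letting both odd tag patterns (2r01 and 2r11) denote SmallInteger means only the low bit is really consumed by the SmallInteger tag, so 31 bits of value remain and the tag test stays a single bit test.  A minimal sketch, assuming that particular encoding; the selectors below are illustrative names, not VMMaker methods:

	isImmediateIntegerSketch: oop
		"If both 2r01 and 2r11 tag patterns denote SmallInteger, the
		 SmallInteger test is still a single low-bit test, and the value
		 occupies the remaining 31 bits of the oop."
		^(oop bitAnd: 1) = 1

	isImmediateCharacterSketch: oop
		"The remaining non-zero 2-bit pattern, 2r10, is then free for
		 immediate Characters, as point 5 suggests."
		^(oop bitAnd: 2r11) = 2r10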

Item was changed:
  ----- Method: StackInterpreter class>>initializeMiscConstants (in category 'initialization') -----
  initializeMiscConstants
  
  	super initializeMiscConstants.
  	STACKVM := true.
  
  	GCModeFull := 1.
  	GCModeIncr := 2.
  	GCModeScavenge := 3.
  	GCModeBecome := 4.
  
+ 	StackPageTraceInvalid := -1.
+ 	StackPageUnreached := 0.
+ 	StackPageReachedButUntraced := 1.
+ 	StackPageTraced := 2.
+ 
  	DumpStackOnLowSpace := 0.
  	PrimitiveExternalCallIndex := 117. "Primitive index for #primitiveExternalCall"
  	MillisecondClockMask := 16r1FFFFFFF.
  	"Note: The external primitive table should actually be dynamically sized but for the sake of inferior platforms (e.g., Mac :-) who cannot allocate memory in any reasonable way, we keep it static (and cross our fingers...)"
  	MaxExternalPrimitiveTableSize := 4096. "entries"
  
  	MaxJumpBuf := 32. "max. callback depth"
  	FailImbalancedPrimitives := true!
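
The four StackPageTrace* constants added above form a small per-page state machine driven by the tracing methods later in this changeset.  The following helper is an editorial sketch only -- the selector isValidPageTraceTransition:to: is a hypothetical name, not part of VMMaker -- summarising the transitions those methods actually perform:

	isValidPageTraceTransition: oldState to: newState
		"Sketch: initStackPageGC resets every page to StackPageUnreached;
		 setTraceFlagOnContextsFramesPageIfNeeded: raises an unreached page to
		 StackPageReachedButUntraced when a married context is marked;
		 markAndTraceStackPage: sets StackPageTraced, asserting only that the page
		 was not already traced; freeUntracedStackPages and
		 initializePageTraceToInvalid: set StackPageTraceInvalid for assert checking."
		newState = StackPageUnreached ifTrue: [^true].
		newState = StackPageTraceInvalid ifTrue: [^true].
		newState = StackPageReachedButUntraced ifTrue: [^oldState = StackPageUnreached].
		^newState = StackPageTraced and: [oldState ~= StackPageTraced]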

Item was changed:
  ----- Method: StackInterpreter>>freeUntracedStackPages (in category 'object memory support') -----
  freeUntracedStackPages
  	"Free any untraced stack pages."
  	<var: #thePage type: #'StackPage *'>
  	<inline: false>
  
  	0 to: numStackPages - 1 do:
  		[:i| | thePage |
  		thePage := stackPages stackPageAt: i.
  		((stackPages isFree: thePage) not
+ 		 and: [thePage trace = StackPageUnreached]) ifTrue:
- 		 and: [thePage trace = 0]) ifTrue:
  			[self assert: (self noMarkedContextsOnPage: thePage).
  			 stackPages freeStackPage: thePage].
+ 		self assert: (thePage trace: StackPageTraceInvalid) ~= 0] "Invalidate the trace state for assertion checks"!
- 		self assert: (thePage trace: -1) ~= 0] "Invalidate the trace state for assertion checks"!

Item was changed:
  ----- Method: StackInterpreter>>initStackPageGC (in category 'object memory support') -----
  initStackPageGC
  	"GC of pages.  Throwing away all stack pages on full GC is simple but dangerous
  	 because it causes us to allocate lots of contexts immediately before a GC.
  	 Reclaiming pages whose top context is not referenced is poor because it would
  	 take N incrementalGCs to reclaim N unused pages.  Only the page whose top
  	 context is not referred to by the bottom context of any other page would be
  	 reclaimed.  Not until the next GC would the page whose top context is the
  	 previously reclaimed page's base frame's bottom context be reclaimed.
  
  	 Better is to not mark stack pages until their contexts are encountered.  We can
  	 eagerly trace the active page and the page reachable from its bottom context
  	 if any, and so on.  Other pages can be marked when we encounter a married
  	 context."
  	<var: #thePage type: #'StackPage *'>
  	<inline: true>
  	"Need to write back the frame pointers unless all pages are free (as in snapshot)"
  	stackPage ~= 0 ifTrue:
  		[self externalWriteBackHeadFramePointers].
  
  	0 to: numStackPages - 1 do:
  		[:i| | thePage |
  		thePage := stackPages stackPageAt: i.
+ 		thePage trace: StackPageUnreached]!
- 		thePage trace: 0]!

Item was added:
+ ----- Method: StackInterpreter>>initializePageTraceToInvalid: (in category 'stack pages') -----
+ initializePageTraceToInvalid: aPage
+ 	<var: #aPage type: #'StackPage *'>
+ 	aPage trace: StackPageTraceInvalid "for assert checking of the page tracing flags"!

Item was changed:
  ----- Method: StackInterpreter>>markAndTraceStackPage: (in category 'object memory support') -----
  markAndTraceStackPage: thePage
  	| theSP theFP frameRcvrOffset callerFP oop |
  	<var: #thePage type: #'StackPage *'>
  	<var: #theSP type: #'char *'>
  	<var: #theFP type: #'char *'>
  	<var: #frameRcvrOffset type: #'char *'>
  	<var: #callerFP type: #'char *'>
  	<inline: false>
+ 
  	self assert: (stackPages isFree: thePage) not.
+ 	self assert: thePage trace ~= StackPageTraced.
+ 	thePage trace: StackPageTraced.
+ 
  	theSP := thePage headSP.
  	theFP := thePage  headFP.
  	"Skip the instruction pointer on top of stack of inactive pages."
  	thePage = stackPage ifFalse:
  		[theSP := theSP + BytesPerWord].
  	[frameRcvrOffset := self frameReceiverOffset: theFP.
  	 [theSP <= frameRcvrOffset] whileTrue:
  		[oop := stackPages longAt: theSP.
  		 (objectMemory isImmediate: oop) ifFalse:
  			[objectMemory markAndTrace: oop].
  		 theSP := theSP + BytesPerWord].
  	(self frameHasContext: theFP) ifTrue:
  		[self assert: (objectMemory isContext: (self frameContext: theFP)).
  		 objectMemory markAndTrace: (self frameContext: theFP)].
  	objectMemory markAndTrace: (self iframeMethod: theFP).
  	(callerFP := self frameCallerFP: theFP) ~= 0] whileTrue:
  		[theSP := theFP + FoxCallerSavedIP + BytesPerWord.
  		 theFP := callerFP].
  	theSP := theFP + FoxCallerSavedIP. "caller ip is frameCallerContext in a base frame"
  	[theSP <= thePage baseAddress] whileTrue:
  		[oop := stackPages longAt: theSP.
  		 (objectMemory isImmediate: oop) ifFalse:
  			[objectMemory markAndTrace: oop].
  		 theSP := theSP + BytesPerWord]!

Item was changed:
  ----- Method: StackInterpreter>>markAndTraceStackPages: (in category 'object memory support') -----
  markAndTraceStackPages: fullGCFlag
  	"GC of pages.  Throwing away all stack pages on full GC is simple but dangerous
  	 because it causes us to allocate lots of contexts immediately before a GC.
  	 Reclaiming pages whose top context is not referenced is poor because it would
  	 take N incrementalGCs to reclaim N unused pages.  Only the page whose top
  	 context is not referred to by the bottom context of any other page would be
  	 reclaimed.  Not until the next GC would the page whose top context is the
  	 previously reclaimed page's base frame's bottom context be reclaimed.
  
  	 Better is to not mark stack pages until their contexts are encountered.  We can
  	 eagerly trace the active page and the page reachable from its bottom context
  	 if any, and so on.  Other pages can be marked when we encounter a married
  	 context."
  	| thePage context |
  	<var: #thePage type: #'StackPage *'>
  	<inline: false>
+ 	objectMemory hasSpurMemoryManagerAPI ifFalse:
+ 		[self initStackPageGC].
- 	self initStackPageGC.
  
  	"On an incremental GC simply consider all non-free stack pages to be roots."
  	fullGCFlag ifFalse:
  		[0 to: numStackPages - 1 do:
  			[:i|
  			thePage := stackPages stackPageAt: i.
  			(stackPages isFree: thePage) ifFalse:
+ 				[self markAndTraceStackPage: thePage]].
- 				[thePage trace: 2.
- 				 self markAndTraceStackPage: thePage]].
  		^nil].
  
  	"On a full GC only eagerly trace pages referenced from
  	 the base of the active page, i.e. on the active stack."
  	stackPage = 0 ifTrue: [^nil].
  	thePage := stackPage.
+ 	[self markAndTraceStackPage: thePage.
- 	[thePage trace: 2.
- 	 self markAndTraceStackPage: thePage.
  	 context := self frameCallerContext: thePage baseFP.
  	 ((objectMemory isContext: context)
  	  and: [(self isMarriedOrWidowedContext: context)
  	  and: [self isStillMarriedContext: context]]) ifTrue:
  		[thePage := stackPages stackPageFor:  (self frameOfMarriedContext: context).
  		 self assert: (stackPages isFree: thePage) not].
+ 	 thePage trace < StackPageTraced] whileTrue!
- 	 thePage trace < 2] whileTrue!

Item was changed:
  ----- Method: StackInterpreter>>markAndTraceUntracedReachableStackPages (in category 'object memory support') -----
  markAndTraceUntracedReachableStackPages
  	"Trace any untraced pages"
  	| thePage foundToBeTracedPage |
  	<var: #thePage type: #'StackPage *'>
  	<inline: false>
  
  	[foundToBeTracedPage := false.
  	0 to: numStackPages - 1 do:
  		[:i|
  		thePage := stackPages stackPageAt: i.
  		((stackPages isFree: thePage) not
+ 		 and: [thePage trace = StackPageReachedButUntraced]) ifTrue:
- 		 and: [thePage trace = 1]) ifTrue:
  			[foundToBeTracedPage := true.
- 			 thePage trace: 2.
  			 self markAndTraceStackPage: thePage]].
  	foundToBeTracedPage] whileTrue!
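
Taken together with the strategy described in the initStackPageGC and markAndTraceStackPages: comments above, the pieces compose roughly as follows.  This is an editorial sketch only -- the selector fullGCStackPageTracingSketch and the exact call sequence are assumptions, not VMMaker code; in the real VM these steps are interleaved with general heap marking:

	fullGCStackPageTracingSketch
		"Illustrative only.  initStackPageGC (called from markAndTraceStackPages:
		 on non-Spur memory managers) resets every page to StackPageUnreached;
		 the active page chain is traced eagerly; heap marking then flags the pages
		 of married contexts via setTraceFlagOnContextsFramesPageIfNeeded:; flagged
		 pages are traced to a fixed point; finally, pages still unreached are freed."
		self markAndTraceStackPages: true.
		"... mark the other roots and trace the heap here ..."
		self markAndTraceUntracedReachableStackPages.
		self freeUntracedStackPages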

Item was changed:
  ----- Method: StackInterpreter>>printFramesOnStackPageListInUse (in category 'debug printing') -----
  printFramesOnStackPageListInUse
  	<export: true>
  	| page |
  	<inline: false>
  	<var: #page type: #'StackPage *'>
  	page := stackPages mostRecentlyUsedPage.
  	[(stackPages isFree: page) ifFalse:
+ 		[self print: 'page '; printHexnp: page; cr.
+ 		 self printFramesInPage: page.
- 		[self printFramesInPage: page.
  		 self cr].
  	 (page := page prevPage) ~= stackPages mostRecentlyUsedPage] whileTrue!

Item was changed:
  ----- Method: StackInterpreter>>setTraceFlagOnContextsFramesPageIfNeeded: (in category 'object memory support') -----
  setTraceFlagOnContextsFramesPageIfNeeded: aContext
  	| thePage |
  	<inline: false>
  	<var: #thePage type: #'StackPage *'>
  	(self isStillMarriedContext: aContext) ifTrue:
  		[thePage := stackPages stackPageFor: (self frameOfMarriedContext: aContext).
+ 		 self assert: (thePage trace between: StackPageUnreached and: StackPageTraced).
+ 		 thePage trace = StackPageUnreached ifTrue:
+ 			[thePage trace: StackPageReachedButUntraced]]!
- 		 self assert: (thePage trace between: 0 and: 2).
- 		 thePage trace = 0 ifTrue:
- 			[thePage trace: 1]]!

Item was changed:
  ----- Method: StackInterpreter>>shortPrintFramesOnStackPageListInUse (in category 'debug printing') -----
  shortPrintFramesOnStackPageListInUse
  	<export: true>
  	| page |
  	<inline: false>
  	<var: #page type: #'StackPage *'>
  	page := stackPages mostRecentlyUsedPage.
  	[(stackPages isFree: page) ifFalse:
+ 		[self print: 'page '; printHexnp: page; cr.
+ 		 self shortPrintFramesInPage: page.
- 		[self shortPrintFramesInPage: page.
  		 self cr].
  	 (page := page prevPage) ~= stackPages mostRecentlyUsedPage] whileTrue!


