[Vm-dev] VM Maker: VMMaker.oscog-eem.460.mcz

commits at source.squeak.org
Tue Oct 15 23:19:36 UTC 2013


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.460.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.460
Author: eem
Time: 15 October 2013, 4:16:39.921 pm
UUID: 8deb49e7-9138-443d-9650-a99a3daf0963
Ancestors: VMMaker.oscog-eem.459

Add classTableBitmap and numClassTablePages to SpurMemMgr.
classTableBitmap is used to expunge duplicate entries from the class
table, and eventually to GC classes.

Enter used class indices into the bitmap when swizzling and expunge
duplicate entries thereafter.

Make primitiveForceTenure a SqueakV3ObjectMemory option (for now).

Nuke unused isImmediateObject:.

Update SpurMemMgr's class comment with more accurate design
info, and add descriptions of the new inst vars.

Rename isClassInClassTable: to isInClassTable:.

Fix SpurMemMgr>>initialize to initialize extraRootCount rather than extraRoots.

=============== Diff against VMMaker.oscog-eem.459 ===============

Item was changed:
  ----- Method: InterpreterPrimitives>>primitiveForceTenure (in category 'I/O primitives') -----
  primitiveForceTenure
  	"Set force tenure flag to true, this forces a tenure operation on the next incremental GC"
  
  	<export: true>
+ 	<option: #SqueakV3ObjectMemory>
  	objectMemory forceTenureFlag: 1!

Item was removed:
- ----- Method: ObjectMemory>>isImmediateObject: (in category 'interpreter access') -----
- isImmediateObject: objectPointer
- 
- 	^(objectPointer bitAnd: 1) > 0!

Item was changed:
  CogClass subclass: #SpurMemoryManager
(excessive size, no diff calculated)

Item was added:
+ ----- Method: SpurMemoryManager class>>classTableBitmapBytes (in category 'accessing') -----
+ classTableBitmapBytes
+ 	"Max size of the classTableBitmap.  A liottle too large to contemplate allocating statically."
+ 	^1 << (self basicNew classIndexFieldWidth - (BitsPerByte log: 2) asInteger)!

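(Worked example, not part of the commit: the arithmetic above, evaluated in a
workspace, with the 22-bit class index field width taken from the comment in
ensureAdequateClassTableBitmap below.)

	| classIndexFieldWidth log2BitsPerByte |
	classIndexFieldWidth := 22.		"assumed; see ensureAdequateClassTableBitmap below"
	log2BitsPerByte := 3.			"i.e. (BitsPerByte log: 2) asInteger with BitsPerByte = 8"
	1 << (classIndexFieldWidth - log2BitsPerByte)	"=> 524288 bytes, the 512k mentioned below"
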
Item was changed:
  ----- Method: SpurMemoryManager class>>declareCVarsIn: (in category 'translation') -----
  declareCVarsIn: aCCodeGenerator
+ 	self declareCAsOop: #(	memory freeStart scavengeThreshold newSpaceLimit pastSpaceStart
+ 							lowSpaceThreshold freeOldSpaceStart endOfMemory sortedFreeChunks)
- 	self declareCAsOop: #(memory freeStart scavengeThreshold newSpaceLimit lowSpaceThreshold freeOldSpaceStart endOfMemory sortedFreeChunks)
  		in: aCCodeGenerator.
  	self declareCAsUSqLong: (self allInstVarNames select: [:ivn| ivn endsWith: 'Usecs'])
  		in: aCCodeGenerator.
- 	aCCodeGenerator var: #freeLists declareC: #'sqInt *freeLists'.
  	aCCodeGenerator
+ 		var: #freeLists type: #'sqInt *';
+ 		var: #classTableBitmap type: #'unsigned char *'.
+ 	aCCodeGenerator
  		var: #remapBuffer
  		declareC: 'sqInt remapBuffer[RemapBufferSize + 1 /* ', (RemapBufferSize + 1) printString, ' */]'.
  	aCCodeGenerator
  		var: #extraRoots
+ 		declareC: 'sqInt *extraRoots[ExtraRootsSize + 1 /* ', (ExtraRootsSize + 1) printString, ' */]'.!
- 		declareC: 'sqInt* extraRoots[ExtraRootSize + 1 /* ', (ExtraRootsSize + 1) printString, ' */]'.!

Item was changed:
  ----- Method: SpurMemoryManager class>>initialize (in category 'class initialization') -----
  initialize
+ 	BitsPerByte := 8.
+ 
+ 	"SpurMemoryManager initialize"
- 	"CogObjectMemory initialize"
  	CheckObjectOverwrite := true.
  
  	"The remap buffer support is for compatibility; Spur doesn't GC during allocation.
  	 Eventually this should die."
  	RemapBufferSize := 25.
  
  	"Extra roots are for plugin support."
  	ExtraRootsSize := 2048 "max. # of external roots"!

Item was changed:
  ----- Method: SpurMemoryManager>>adjustAllOopsBy: (in category 'snapshot') -----
  adjustAllOopsBy: bytesToShift
  	"Adjust all oop references by the given number of bytes. This is
  	 done just after reading in an image when the new base address
  	 of the object heap is different from the base address in the image,
+ 	 or when loading multiple segments that have been coalesced.  Also
+ 	 set bits in the classTableBitmap corresponding to used classes."
- 	 or when loading multiple segments that have been coallesced."
  
  	| obj |
+ 	self countNumClassPagesPreSwizzle: bytesToShift;
+ 		ensureAdequateClassTableBitmap.
  	(bytesToShift ~= 0
+ 	 or: [segmentManager numSegments > 1])
+ 		ifTrue:
+ 			[self assert: self newSpaceIsEmpty.
+ 			 obj := self objectStartingAt: newSpaceLimit.
+ 			 [self oop: obj isLessThan: freeOldSpaceStart] whileTrue:
+ 				[(self isFreeObject: obj)
+ 					ifTrue: [self swizzleFieldsOfFreeChunk: obj]
+ 					ifFalse:
+ 						[self inClassTableBitmapSet: (self classIndexOf: obj).
+ 						 self swizzleFieldsOfObject: obj].
+ 				 obj := self objectAfter: obj]]
+ 		ifFalse:
+ 			[self assert: self newSpaceIsEmpty.
+ 			 obj := self objectStartingAt: newSpaceLimit.
+ 			 [self oop: obj isLessThan: freeOldSpaceStart] whileTrue:
+ 				[(self isFreeObject: obj) ifFalse:
+ 					[self inClassTableBitmapSet: (self classIndexOf: obj)].
+ 				 obj := self objectAfter: obj]]!
- 	 or: [segmentManager numSegments > 1]) ifTrue:
- 		[self assert: self newSpaceIsEmpty.
- 		 obj := self objectStartingAt: newSpaceLimit.
- 		 [self oop: obj isLessThan: freeOldSpaceStart] whileTrue:
- 			[(self isFreeObject: obj)
- 				ifTrue: [self swizzleFieldsOfFreeChunk: obj]
- 				ifFalse: [self swizzleFieldsOfObject: obj].
- 			 obj := self objectAfter: obj]]!

Item was changed:
  ----- Method: SpurMemoryManager>>allocateOldSpaceChunkOfBytes: (in category 'free space') -----
  allocateOldSpaceChunkOfBytes: chunkBytes
  	"Answer a chunk of oldSpace from the free lists, if available,
  	 otherwise answer nil.  Break up a larger chunk if one of the
  	 exact size does not exist.  N.B.  the chunk is simply a pointer, it
  	 has no valid header.  The caller *must* fill in the header correctly."
  	| initialIndex chunk index nodeBytes parent child smaller larger |
  	"for debugging:" "totalFreeOldSpace := self totalFreeListBytes"
  	totalFreeOldSpace := totalFreeOldSpace - chunkBytes. "be optimistic (& don't wait for the write)"
  	initialIndex := chunkBytes / self allocationUnit.
  	(initialIndex < self numFreeLists and: [1 << initialIndex <= freeListsMask]) ifTrue:
  		[(freeListsMask anyMask: 1 << initialIndex) ifTrue:
  			[(chunk := freeLists at: initialIndex) ~= 0 ifTrue:
  				[self assert: chunk = (self startOfObject: chunk).
  				 self assert: (self isValidFreeObject: chunk).
  				^self unlinkFreeChunk: chunk atIndex: initialIndex].
  			 freeListsMask := freeListsMask - (1 << initialIndex)].
  		 "first search for free chunks of a multiple of chunkBytes in size"
  		 index := initialIndex.
  		 [(index := index + index) < self numFreeLists
  		  and: [1 << index <= freeListsMask]] whileTrue:
  			[((freeListsMask anyMask: 1 << index)
  			 and: [(chunk := freeLists at: index) ~= 0]) ifTrue:
  				[self assert: chunk = (self startOfObject: chunk).
  				 self assert: (self isValidFreeObject: chunk).
  				 self unlinkFreeChunk: chunk atIndex: index.
  				 self assert: (self bytesInObject: chunk) = (index * self allocationUnit).
  				 self freeChunkWithBytes: index * self allocationUnit - chunkBytes
  					at: (self startOfObject: chunk) + chunkBytes.
  				^chunk]].
  		 "now get desperate and use the first that'll fit.
  		  Note that because the minimum free size is 16 bytes (2 * allocationUnit), to
  		  leave room for the forwarding pointer/next free link, we can only break chunks
  		  that are at least 16 bytes larger, hence start at initialIndex + 2."
  		 index := initialIndex + 1.
  		 [(index := index + 1) < self numFreeLists
  		  and: [1 << index <= freeListsMask]] whileTrue:
  			[(freeListsMask anyMask: 1 << index) ifTrue:
  				[(chunk := freeLists at: index) ~= 0 ifTrue:
  					[self assert: chunk = (self startOfObject: chunk).
  					 self assert: (self isValidFreeObject: chunk).
  					 self unlinkFreeChunk: chunk atIndex: index.
  					 self assert: (self bytesInObject: chunk) = (index * self allocationUnit).
  					 self freeChunkWithBytes: index * self allocationUnit - chunkBytes
  						at: (self startOfObject: chunk) + chunkBytes.
  					^chunk].
  				 freeListsMask := freeListsMask - (1 << index)]]].
  
  	"Large chunk, or no space on small free lists.  Search the large chunk list.
  	 Large chunk list organized as a tree, each node of which is a list of chunks
  	 of the same size. Beneath the node are smaller and larger blocks.
  	 When the search ends parent should hold the smallest chunk at least as
  	 large as chunkBytes, or 0 if none."
  	parent := 0.
  	child := freeLists at: 0.
  	[child ~= 0] whileTrue:
  		[| childBytes |
  		 self assert: (self isValidFreeObject: child).
  		 childBytes := self bytesInObject: child.
  		 childBytes = chunkBytes
  			ifTrue: "size match; try to remove from list at node."
  				[chunk := self fetchPointer: self freeChunkNextIndex
  								ofFreeChunk: child.
  				 chunk ~= 0 ifTrue:
  					[self assert: (self isValidFreeObject: chunk).
  					 self storePointer: self freeChunkNextIndex
  						ofFreeChunk: child
  						withValue: (self fetchPointer: self freeChunkNextIndex
  										ofFreeChunk: chunk).
  					 ^self startOfObject: chunk].
  				 child := 0] "break out of loop to remove interior node"
  			ifFalse:
  				["Note that because the minimum free size is 16 bytes (2 * allocationUnit), to
  				  leave room for the forwarding pointer/next free link, we can only break chunks
  				  that are at least 16 bytes larger, hence reject chunks < 2 * allocationUnit larger."
  				childBytes <= (chunkBytes + self allocationUnit)
  					ifTrue: "node too small; walk down the larger size of the tree"
  						[child := self fetchPointer: self freeChunkLargerIndex ofFreeChunk: child]
  					ifFalse:
  						[parent := child. "parent will be smallest node >= chunkBytes + allocationUnit"
  						 nodeBytes := childBytes.
  						 child := self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: child]]].
  	parent = 0 ifTrue:
  		[totalFreeOldSpace := totalFreeOldSpace + chunkBytes. "optimism was unfounded"
- 		 self halt.
  		 ^nil].
  
  	"self printFreeChunk: parent"
  	self assert: (nodeBytes = chunkBytes or: [nodeBytes >= (chunkBytes + (2 * self allocationUnit))]).
  	self assert: (self bytesInObject: parent) = nodeBytes.
  
  	"attempt to remove from list"
  	chunk := self fetchPointer: self freeChunkNextIndex
  					ofFreeChunk: parent.
  	chunk ~= 0 ifTrue:
  		[self assert: (chunkBytes = nodeBytes or: [chunkBytes + self allocationUnit < nodeBytes]).
  		 self storePointer: self freeChunkNextIndex
  			ofFreeChunk: parent
  			withValue: (self fetchPointer: self freeChunkNextIndex
  							ofFreeChunk: chunk).
  		 chunkBytes ~= nodeBytes ifTrue:
  			[self freeChunkWithBytes: nodeBytes - chunkBytes
  					at: (self startOfObject: chunk) + chunkBytes].
  		 ^self startOfObject: chunk].
  
  	"no list; remove an interior node; reorder tree simply.  two cases (which have mirrors, for four total):
  	 case 1. interior node has one child, P = parent, N = node, S = subtree (mirrored for large vs small)
  			___				  ___
  			| P |				  | P |
  		    _/_				_/_
  		    | N |		=>		| S |
  		 _/_
  		 | S |
  
	 case 2: interior node has two children: P = parent, N = node, L = smaller, left subtree, R = larger, right subtree.
  	 add the left subtree to the bottom left of the right subtree (mirrored for large vs small) 
  			___				  ___
  			| P |				  | P |
  		    _/_				_/_
  		    | N |		=>		| R |
  		 _/_  _\_		    _/_
  		 | L | | R |		    | L |"
  
  	chunk := parent.
  	smaller := self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: chunk.
  	larger := self fetchPointer: self freeChunkLargerIndex ofFreeChunk: chunk.
  	parent := self fetchPointer: self freeChunkParentIndex ofFreeChunk: chunk.
  	parent = 0
  		ifTrue: "no parent; stitch the subnodes back into the root"
  			[smaller = 0
  				ifTrue:
  					[self storePointer: self freeChunkParentIndex ofFreeChunk: larger withValue: 0.
  					 freeLists at: 0 put: larger]
  				ifFalse:
  					[self storePointer: self freeChunkParentIndex ofFreeChunk: smaller withValue: 0.
  					 freeLists at: 0 put: smaller.
  					 larger ~= 0 ifTrue:
  						[self addFreeSubTree: larger]]]
  		ifFalse: "parent; stitch back into appropriate side of parent."
  			[smaller = 0
  				ifTrue: [self storePointer: (chunk = (self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: parent)
  											ifTrue: [self freeChunkSmallerIndex]
  											ifFalse: [self freeChunkLargerIndex])
  							ofFreeChunk: parent
  							withValue: larger.
  						self storePointer: self freeChunkParentIndex
  							ofObject: larger
  							withValue: parent]
  				ifFalse:
  					[self storePointer: (chunk = (self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: parent)
  											ifTrue: [self freeChunkSmallerIndex]
  											ifFalse: [self freeChunkLargerIndex])
  						ofFreeChunk: parent
  						withValue: smaller.
  					 self storePointer: self freeChunkParentIndex
  						ofObject: smaller
  						withValue: parent.
  					 larger ~= 0 ifTrue:
  						[self addFreeSubTree: larger]]].
  	"if there's space left over, add the fragment back."
  	chunkBytes ~= nodeBytes ifTrue:
  		[self freeChunkWithBytes: nodeBytes - chunkBytes
  				at: (self startOfObject: chunk) + chunkBytes].
  	^self startOfObject: chunk!

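(Worked example, not part of the commit: the small-chunk search order above,
assuming allocationUnit = 8, which the "minimum free size is 16 bytes
(2 * allocationUnit)" in the comments implies.)

	| allocationUnit chunkBytes initialIndex |
	allocationUnit := 8.				"implied by '2 * allocationUnit' = 16 bytes"
	chunkBytes := 48.
	initialIndex := chunkBytes / allocationUnit.	"=> 6; the exact-fit list is tried first,
	 then the doubled indices 12, 24, ... (whole multiples of chunkBytes), then any
	 list from initialIndex + 2 upward with the excess split off as a new free chunk;
	 failing all that, the size-ordered tree rooted at (freeLists at: 0) is searched"
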
Item was changed:
+ ----- Method: SpurMemoryManager>>classTableRootObj: (in category 'class table') -----
- ----- Method: SpurMemoryManager>>classTableRootObj: (in category 'accessing') -----
  classTableRootObj: anOop
  	classTableRootObj := anOop.
  	classTableFirstPage := self fetchPointer: 0 ofObject: classTableRootObj.
+ 	self assert: (self numSlotsOf: classTableRootObj) = self classTableRootSlots.
- 	self assert: (self numSlotsOf: classTableRootObj) = (1 << (self classIndexFieldWidth - self classTableMajorIndexShift)).
  	self assert: (self numSlotsOf: classTableFirstPage) - 1 = self classTableMinorIndexMask.
+ 	"Set classTableIndex to the start of the last used page (excepting first page).
+ 	 Set numClassTablePages to the number of used pages."
+ 	numClassTablePages := self numSlotsOf: classTableRootObj.
+ 	2 to: numClassTablePages - 1 do:
- 	"set classTableIndex to the start of the last used page"
- 	2 to: (self numSlotsOf: classTableRootObj) - 1 do:
  		[:i|
  		(self fetchPointer: i ofObject: classTableRootObj) = nilObj ifTrue:
+ 			[numClassTablePages := i.
+ 			 classTableIndex := (numClassTablePages - 1 max: 1) << self classTableMajorIndexShift.
- 			[classTableIndex := i << self classTableMajorIndexShift.
  			 ^self]].
  	"no unused pages; set it to the start of the second page."
  	classTableIndex := 1 << self classTableMajorIndexShift!

Item was added:
+ ----- Method: SpurMemoryManager>>classTableRootSlots (in category 'class table') -----
+ classTableRootSlots
+ 	"Answer the number of slots in the root of the class table."
+ 	^1 << (self classIndexFieldWidth - self classTableMajorIndexShift)!

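(For scale, not part of the commit: plugging in assumed Spur constants of a
22-bit class index field and 1024-slot class-table pages, i.e.
classTableMajorIndexShift = 10, this answers 4096 root slots.)

	| classIndexFieldWidth classTableMajorIndexShift |
	classIndexFieldWidth := 22.			"assumed"
	classTableMajorIndexShift := 10.	"assumed: 1024-slot pages"
	1 << (classIndexFieldWidth - classTableMajorIndexShift)	"=> 4096 pages in the root"
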
Item was added:
+ ----- Method: SpurMemoryManager>>countNumClassPagesPreSwizzle: (in category 'class table') -----
+ countNumClassPagesPreSwizzle: bytesToShift
+ 	"Compute the used size of the class table before swizzling.  Needed to
+ 	 initialize the classTableBitmap, which is populated during adjustAllOopsBy:"
+ 	<returnTypeC: #void>
+ 	| firstObj classTableRoot nilObjPreSwizzle |
+ 	firstObj := self objectStartingAt: newSpaceLimit. "a.k.a. nilObj"
+ 	"first five objects are nilObj, falseObj, trueObj, freeListsObj, classTableRootObj"
+ 	classTableRoot := self objectAfter:
+ 							(self objectAfter:
+ 									(self objectAfter:
+ 											(self objectAfter: firstObj
+ 												limit: freeOldSpaceStart)
+ 										limit: freeOldSpaceStart)
+ 								limit: freeOldSpaceStart)
+ 							limit: freeOldSpaceStart.
+ 	nilObjPreSwizzle := newSpaceLimit - bytesToShift.
+ 	numClassTablePages := self numSlotsOf: classTableRoot.
+ 	self assert: numClassTablePages = self classTableRootSlots.
+ 	2 to: numClassTablePages - 1 do:
+ 		[:i|
+ 		(self fetchPointer: i ofObject: classTableRoot) = nilObjPreSwizzle ifTrue:
+ 			[numClassTablePages := i.
+ 			 ^self]]
+ 	!

Item was changed:
  ----- Method: SpurMemoryManager>>doBecome:with:copyHash: (in category 'become implementation') -----
  doBecome: obj1 with: obj2 copyHash: copyHashFlag
+ 	((self isInClassTable: obj1)
+ 	 or: [self isInClassTable: obj2]) ifTrue:
- 	((self isClassInClassTable: obj1)
- 	 or: [self isClassInClassTable: obj1]) ifTrue:
  		[self halt].
  	(self numSlotsOf: obj1) = (self numSlotsOf: obj2)
  		ifTrue:
  			[self inPlaceBecome: obj1 and: obj2 copyHashFlag: copyHashFlag]
  		ifFalse:
  			[self outOfPlaceBecome: obj1 and: obj2 copyHashFlag: copyHashFlag]!

Item was added:
+ ----- Method: SpurMemoryManager>>ensureAdequateClassTableBitmap (in category 'class table') -----
+ ensureAdequateClassTableBitmap
+ 	"The classTableBitmap is used to reclaim unused and/or duplicate entries
+ 	 in the classTable.  As such it is notionally 2^(22 - 3) bytes big, or 512k,
+ 	 a little too large to be comfortable allocating statically (especially on small
+ 	 machines).  So allocate just enough to cover numClassTablePages, rounded up to a power of two."
+ 	<inline: false>
+ 	| requiredSize |
+ 	requiredSize := (1 << numClassTablePages highBit)
+ 					* (self classTablePageSize / BitsPerByte).
+ 	self cCode:
+ 			[classTableBitmap ifNotNil:
+ 				[self free: classTableBitmap].
+ 			 classTableBitmap := self malloc: requiredSize.
+ 			 classTableBitmap ifNil:
+ 				[self error: 'could not allocate classTableBitmap'].
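+ 			 "NB: me:ms:et: is Slang keyword-splitting for memset(classTableBitmap, 0, requiredSize)."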
+ 			 self me: classTableBitmap ms: 0 et: requiredSize]
+ 		inSmalltalk:
+ 			[classTableBitmap := CArrayAccessor on: (ByteArray new: requiredSize)]!

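(Worked example, not part of the commit: the highBit rounding above, with an
assumed classTablePageSize of 1024 slots and five used pages, sizes the bitmap
to cover eight pages' worth of class indices.)

	| numClassTablePages classTablePageSize bitsPerByte |
	numClassTablePages := 5.
	classTablePageSize := 1024.	"assumed"
	bitsPerByte := 8.
	(1 << numClassTablePages highBit) * (classTablePageSize / bitsPerByte)
		"=> 8 * 128 = 1024 bytes; 5 highBit = 3, so 1 << 3 = 8 pages"
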
Item was changed:
  ----- Method: SpurMemoryManager>>enterIntoClassTable: (in category 'class table') -----
  enterIntoClassTable: aBehavior
  	"Enter aBehavior into the class table and answer 0.  Otherwise answer a primitive failure code."
  	<inline: false>
  	| initialMajorIndex majorIndex minorIndex page |
  	majorIndex := classTableIndex >> self classTableMajorIndexShift.
  	initialMajorIndex := majorIndex.
  	"classTableIndex should never index the first page; it's reserved for known classes"
  	self assert: initialMajorIndex > 0.
  	minorIndex := classTableIndex bitAnd: self classTableMinorIndexMask.
  
  	[page := self fetchPointer: majorIndex ofObject: classTableRootObj.
  	 page = nilObj ifTrue:
  		[page := self allocateSlotsInOldSpace: self classTablePageSize
  					format: self arrayFormat
  					classIndex: self arrayClassIndexPun.
  		 page ifNil:
  			[^PrimErrNoMemory].
  		 self fillObj: page numSlots: self classTablePageSize with: nilObj.
  		 self storePointer: majorIndex
  			ofObject: classTableRootObj
  			withValue: page.
+ 		 numClassTablePages := numClassTablePages + 1.
  		 minorIndex := 0].
  	 minorIndex to: self classTablePageSize - 1 do:
  		[:i|
  		(self fetchPointer: i ofObject: page) = nilObj ifTrue:
  			[classTableIndex := majorIndex << self classTableMajorIndexShift + i.
  			 self storePointer: i
  				ofObject: page
  				withValue: aBehavior.
  			 self setHashBitsOf: aBehavior to: classTableIndex.
  			 self assert: (self classAtIndex: (self rawHashBitsOf: aBehavior)) = aBehavior.
  			 "now fault-in method lookup chain."
  			 self scanClassPostBecome: aBehavior
  				effects: BecamePointerObjectFlag+BecameCompiledMethodFlag.
+ 			 self ensureAdequateClassTableBitmap.
  			 ^0]].
  	 majorIndex := (majorIndex + 1 bitAnd: self classIndexMask) max: 1.
  	 majorIndex = initialMajorIndex ifTrue: "wrapped; table full"
  		[^PrimErrLimitExceeded]] repeat!

Item was added:
+ ----- Method: SpurMemoryManager>>expungeDuplicateClasses (in category 'class table') -----
+ expungeDuplicateClasses
+ 	"Bits have been set in the classTableBitmap corresponding to
+ 	 used classes.  Any class in the class table that does not have a
+ 	 bit set has no instances with that class index.  However, becomeForward:
+ 	 can create duplicate entries, and these duplicate entries
+ 		a) won't have a bit set on load (because there are no forwarders on load),
+ 		b) won't match their identityHash.
+ 	 So expunge duplicates by eliminating unmarked entries that don't occur at
+ 	 their identityHash."
+ 	1 to: numClassTablePages - 1 do:
+ 		[:i| | classTablePage |
+ 		"optimize scan by only scanning bitmap in regions that have pages."
+ 		classTablePage := self fetchPointer: i ofObject: classTableRootObj.
+ 		classTablePage ~= nilObj ifTrue:
+ 			[i << self classTableMajorIndexShift
+ 				to: i << self classTableMajorIndexShift + self classTableMinorIndexMask
+ 				by: 8
+ 				do: [:majorBitIndex| | byteIndex byte classIndex classOrNil |
+ 					"optimize scan by scanning a byte of indices (8 indices) at a time"
+ 					byteIndex := majorBitIndex / BitsPerByte.
+ 					byte := classTableBitmap at: byteIndex.
+ 					byte ~= 255 ifTrue:
+ 						[0 to: 7 do:
+ 							[:minorBitIndex|
+ 							(byte noMask: 1 << minorBitIndex) ifTrue:
+ 								[classIndex := majorBitIndex + minorBitIndex.
+ 								 classOrNil := self fetchPointer: (classIndex bitAnd: self classTableMinorIndexMask)
+ 												   ofObject: classTablePage.
+ 								 self assert: (self classAtIndex: classIndex) = classOrNil.
+ 								 "only remove a class if it is at a duplicate entry"
+ 								 (classOrNil ~= nilObj
+ 								  and: [(self rawHashBitsOf: classOrNil) ~= classIndex]) ifTrue:
+ 									[self storePointerUnchecked: (classIndex bitAnd: self classTableMinorIndexMask)
+ 										ofObject: classTablePage
+ 										withValue: nilObj.
+ 									 "but it should still be in the table at its correct index."
+ 									 self assert: ((self classAtIndex: (self rawHashBitsOf: classOrNil)) = classOrNil)]]]]]]]!

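(Illustration, not part of the commit: the byte-at-a-time scan above skips fully
marked bytes (255) and otherwise tests the eight indices individually; e.g. a
bitmap byte of 2r11110111 leaves only minor bit 3 as an expunge candidate.)

	| byte |
	byte := 2r11110111.
	(0 to: 7) select: [:minorBitIndex| byte noMask: 1 << minorBitIndex]	"=> #(3)"
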
Item was added:
+ ----- Method: SpurMemoryManager>>expungeFromClassTable: (in category 'class table') -----
+ expungeFromClassTable: aBehavior
+ 	"Remove aBehavior from the class table."
+ 	<inline: false>
+ 	| classIndex majorIndex minorIndex classTablePage |
+ 	self assert: (self isInClassTable: aBehavior).
+ 	classIndex := self rawHashBitsOf: aBehavior.
+ 	majorIndex := classIndex >> self classTableMajorIndexShift.
+ 	minorIndex := classIndex bitAnd: self classTableMinorIndexMask.
+ 	classTablePage := self fetchPointer: majorIndex ofObject: classTableRootObj.
+ 	self assert: classTablePage ~= classTableFirstPage.
+ 	self assert: (self numSlotsOf: classTablePage) = self classTablePageSize.
+ 	self assert: (self fetchPointer: minorIndex ofObject: classTablePage) = aBehavior.
+ 	self storePointerUnchecked: minorIndex ofObject: classTablePage withValue: nilObj.
+ 	"If the removed class is before the classTableIndex, set the
+ 	 classTableIndex to point to the empty slot so as to reuse it asap."
+ 	classIndex < classTableIndex ifTrue:
+ 		[classTableIndex := classIndex]!

Item was changed:
  ----- Method: SpurMemoryManager>>formatOf: (in category 'object access') -----
  formatOf: objOop
  	"0 = 0 sized objects (UndefinedObject True False et al)
  	 1 = non-indexable objects with inst vars (Point et al)
  	 2 = indexable objects with no inst vars (Array et al)
  	 3 = indexable objects with inst vars (MethodContext AdditionalMethodState et al)
  	 4 = weak indexable objects with inst vars (WeakArray et al)
  	 5 = weak non-indexable objects with inst vars (ephemerons) (Ephemeron)
  	 6 unused, reserved for exotic pointer objects?
  	 7 Forwarded Object, 1st field is pointer, rest of fields are ignored
  	 8 unused, reserved for exotic non-pointer objects?
  	 9 (?) 64-bit indexable
+ 	 10 - 11 32-bit indexable	(11 unused in 32-bits)
+ 	 12 - 15 16-bit indexable	(14 & 15 unused in 32-bits)
+ 	 16 - 23 byte indexable		(20-23 unused in 32-bits)
+ 	 24 - 31 compiled method	(28-31 unused in 32-bits)"
- 	 10 - 11 32-bit indexable
- 	 12 - 15 16-bit indexable
- 	 16 - 23 byte indexable
- 	 24 - 31 compiled method"
  	^(self longAt: objOop) >> self formatShift bitAnd: self formatMask!

Item was added:
+ ----- Method: SpurMemoryManager>>inClassTableBitmapSet: (in category 'class table') -----
+ inClassTableBitmapSet: classIndex
+ 	| bit majorIndex |
+ 	self assert: (classIndex >= 0 and: [classIndex <= self classIndexMask]).
+ 	majorIndex := classIndex // BitsPerByte.
+ 	bit := 1 << (classIndex bitAnd: BitsPerByte - 1).
+ 	classTableBitmap
+ 		at: majorIndex
+ 		put: ((classTableBitmap at: majorIndex) bitOr: bit)!

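(Worked example, not part of the commit: the byte/bit decomposition above.)

	| bitsPerByte classIndex |
	bitsPerByte := 8.
	classIndex := 291.
	classIndex // bitsPerByte.					"=> 36, the byte offset"
	1 << (classIndex bitAnd: bitsPerByte - 1)	"=> 8, i.e. bit 3 within that byte"

A read-back test, were one needed, could look like this (hypothetical, not in this commit):

	isMarkedInClassTableBitmap: classIndex
		^(classTableBitmap at: classIndex // BitsPerByte)
			anyMask: 1 << (classIndex bitAnd: BitsPerByte - 1)
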
Item was changed:
  ----- Method: SpurMemoryManager>>initialize (in category 'initialization') -----
  initialize
  	"We can put all initializations that set something to 0 or to false here.
  	 In C all global variables are initialized to 0, and 0 is false."
  	remapBuffer := Array new: RemapBufferSize.
+ 	remapBufferCount := extraRootCount := 0. "see below"
- 	remapBufferCount := extraRoots := 0. "see below"
  	freeListsMask := totalFreeOldSpace := lowSpaceThreshold := 0.
  	checkForLeaks := 0.
  	needGCFlag := signalLowSpace := scavengeInProgress := false.
  	becomeEffectsFlags := 0.
  	statScavenges := statIncrGCs := statFullGCs := 0.
  	statScavengeGCUsecs := statIncrGCUsecs := statFullGCUsecs := statGCEndUsecs := 0.
  	statSGCDeltaUsecs := statIGCDeltaUsecs := statFGCDeltaUsecs := 0.
  	statGrowMemory := statShrinkMemory := statRootTableCount := statTenures := statSurvivorCount := 0.
  	statRootTableOverflows := statSweepCount := statMarkCount := statSpecialMarkCount := statMkFwdCount := 0.
  	self flag: #temporary.
  	shrinkThreshold := 16r10000000. "something huge for now"
  
  	"we can initialize things that are virtual in C."
  	scavenger := SpurGenerationScavengerSimulator new manager: self; yourself.
  	segmentManager := SpurSegmentManager new manager: self; yourself.
  
  	"We can also initialize here anything that is only for simulation."
  	heapMap := self wordSize = 4 ifTrue: [CogCheck32BitHeapMap new].
  
  	"N.B. We *don't* initialize extraRoots because we don't simulate it."!

Item was changed:
  ----- Method: SpurMemoryManager>>initializeObjectMemory: (in category 'initialization') -----
  initializeObjectMemory: bytesToShift
  	"Initialize object memory variables at startup time. Assume endOfMemory is initially set (by the image-reading code) to the end of the last object in the image. Initialization redefines endOfMemory to be the end of the object allocation area based on the total available memory, but reserving some space for forwarding blocks."
  	"Assume: image reader initializes the following variables:
  		memory
  		memoryLimit
  		specialObjectsOop
  		lastHash
  	"
  	<inline: false>
  	| freeListObj |
  	"Catch mis-initializations leading to bad translations to C"
  	self assert: BaseHeaderSize = self baseHeaderSize.
  
  	segmentManager adjustSegmentSwizzlesBy: bytesToShift.
  	"image may be at a different address; adjust oops for new location"
  	self adjustAllOopsBy: bytesToShift.
  
  	segmentManager numSegments > 0 "false if Spur image bootstrap"
  		ifTrue: [specialObjectsOop := segmentManager swizzleObj: specialObjectsOop]
  		ifFalse: [self assert: bytesToShift = 0].
  
  	"heavily used special objects"
  	nilObj		:= self splObj: NilObject.
  	falseObj	:= self splObj: FalseObject.
  	trueObj		:= self splObj: TrueObject.
  
  	"In Cog we insist that nil, true & false are next to each other (Cogit generates tighter
  	 conditional branch code as a result).  In addition, Spur places the free lists and
  	 class table root page immediately following them."
  	self assert: nilObj = newSpaceLimit.
  	self assert: falseObj = (self objectAfter: nilObj).
  	self assert: trueObj = (self objectAfter: falseObj).
  	freeListObj := self objectAfter: trueObj.
+ 	self reInitializeClassTablePostLoad: (self objectAfter: freeListObj).
- 	self classTableRootObj: (self objectAfter: freeListObj).
  	self initializeFreeSpacePostLoad: freeListObj.
  
  	segmentManager collapseSegmentsPostSwizzle.
  
  	self initializeNewSpaceVariables.
  	self initializeOldSpaceFirstFree: freeOldSpaceStart. "initializes endOfMemory, freeStart"
  
  	"lowSpaceThreshold := 0.
  	signalLowSpace := false.
  	remapBufferCount := 0.
  	tenuringThreshold := 2000.  ""tenure all suriving objects if survivor count is over this threshold""
  	growHeadroom := 4*1024*1024. ""four megabytes of headroom when growing""
  	shrinkThreshold := 8*1024*1024. ""eight megabytes of free space before shrinking""
  
  	""garbage collection statistics""
  	statFullGCs := 0.
  	statFullGCUsecs := 0.
  	statIncrGCs := 0.
  	statIncrGCUsecs := 0.
  	statTenures := 0.
  	statRootTableOverflows := 0.
  	statGrowMemory := 0.
  	statShrinkMemory := 0.
  	forceTenureFlag := 0.
  	gcBiasToGrow := 0.
  	gcBiasToGrowGCLimit := 0.
  	extraRootCount := 0."!

Item was removed:
- ----- Method: SpurMemoryManager>>isClassInClassTable: (in category 'become implementation') -----
- isClassInClassTable: objOop
- 	| hash |
- 	hash := self rawHashBitsOf: objOop.
- 	hash = 0 ifTrue:
- 		[false].
- 	^(self classAtIndex: hash) = objOop!

Item was added:
+ ----- Method: SpurMemoryManager>>isInClassTable: (in category 'become implementation') -----
+ isInClassTable: objOop
+ 	| hash |
+ 	hash := self rawHashBitsOf: objOop.
+ 	hash = 0 ifTrue:
+ 		[^false].
+ 	^(self classAtIndex: hash) = objOop!

Item was changed:
  ----- Method: SpurMemoryManager>>lastPointerOfWhileSwizzling: (in category 'snapshot') -----
  lastPointerOfWhileSwizzling: objOop 
  	"Answer the byte offset of the last pointer field of the given object.
  	 Works with CompiledMethods, as well as ordinary objects.
  	 Does not examine the stack pointer of contexts to be sure to swizzle
+ 	 the nils that fill contexts on snapshot.
+ 	 It is invariant that on image load no object contains a forwarding pointer,
+ 	 and the image contains no forwarders (see class comment)."
- 	 the nils that fill contexts on snapshot."
  	<api>
  	<inline: true>
  	<asmLabel: false>
  	| fmt numLiterals |
  	fmt := self formatOf: objOop.
  	self assert: fmt ~= self forwardedFormat.
  	fmt <= self lastPointerFormat ifTrue:
  		[^(self numSlotsOf: objOop) - 1 * BytesPerOop + self baseHeaderSize  "all pointers"].
  	fmt < self firstCompiledMethodFormat ifTrue: [^0]. "no pointers"
  
  	"CompiledMethod: contains both pointers and bytes"
  	numLiterals := coInterpreter literalCountOf: objOop.
  	^numLiterals + LiteralStart - 1 * BytesPerOop + self baseHeaderSize!

Item was changed:
  ----- Method: SpurMemoryManager>>mapExtraRoots (in category 'garbage collection') -----
  mapExtraRoots
  	self assert: remapBufferCount = 0.
  	1 to: extraRootCount do:
  		[:i | | oop |
  		oop := (extraRoots at: i) at: 0.
+ 		((self isImmediate: oop) or: [self isFreeObject: oop]) ifFalse:
- 		((self isImmediateObject: oop) or: [self isFreeObject: oop]) ifFalse:
  			[(self shouldRemapObj: oop) ifTrue:
  				[(extraRoots at: i) at: 0 put: (self remapObj: oop)]]]!

Item was changed:
  ----- Method: SpurMemoryManager>>postBecomeScanClassTable (in category 'become implementation') -----
  postBecomeScanClassTable
+ 	"Scan the class table post-become (iff a pointer object or compiled method was becommed).
+ 	 Note that one-way become can cause duplications in the class table.
+ 	 When can these be eliminated?  We use the classTableBitmap to mark classTable entries
+ 	 (not the classes themselves, since marking a class doesn't help in knowing if its index is used).
+ 	 On image load, and during incremental scan-mark and full GC, classIndices are marked.
+ 	 We can somehow avoid following classes from the classTable until after this mark phase."
- 	"Scan the class table post-become (iff a pointer object or compiled method was becommed)"
  	(becomeEffectsFlags anyMask: BecamePointerObjectFlag+BecameCompiledMethodFlag) ifFalse: [^self].
  	
  	0 to: (self numSlotsOf: classTableRootObj) - 1 do:
  		[:i| | page |
  		page := self fetchPointer: i ofObject: classTableRootObj.
  		0 to: (self numSlotsOf: page) - 1 do:
  			[:j| | classOrNil |
  			classOrNil := self fetchPointer: j ofObject: page.
  			classOrNil ~= nilObj ifTrue:
  				[(self isForwarded: classOrNil) ifTrue:
  					[classOrNil := self followForwarded: classOrNil.
  					 self storePointer: j ofObject: page withValue: classOrNil].
  				 self scanClassPostBecome: classOrNil effects: becomeEffectsFlags]]]!

Item was added:
+ ----- Method: SpurMemoryManager>>reInitializeClassTablePostLoad: (in category 'class table') -----
+ reInitializeClassTablePostLoad: classTableRoot
+ 	self classTableRootObj: classTableRoot.
+ 	self expungeDuplicateClasses!

Item was added:
+ ----- Method: SpurMemoryManager>>tenuringIncrementalGC (in category 'plugin support') -----
+ tenuringIncrementalGC
+ 	"Do an incremental GC that tenures all surviving young objects to old space."
+ 	<api>
+ 	self flushNewSpace!

Item was changed:
  ----- Method: SpurMemoryManager>>vmEndianness (in category 'memory access') -----
  vmEndianness
+ 	<api>
- 	<cmacro: '() VMBIGENDIAN'>
  	"1 = big, 0 = little"
+ 	^self cCode: [VMBIGENDIAN] inSmalltalk: [self subclassResponsibility]!
- 	^self subclassResponsibility!


