[Vm-dev] VM Maker: VMMaker.oscog-eem.655.mcz

commits at source.squeak.org
Mon Mar 24 23:18:45 UTC 2014


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.655.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.655
Author: eem
Time: 24 March 2014, 4:16:19.738 pm
UUID: d6c00552-e909-49e0-a838-d6c3225dedda
Ancestors: VMMaker.oscog-eem.654

Spur:
Ugh, what a bug hunt.  Move the expungeDuplicateAndUnmarkedClasses:
send from markObjects to globalGarbageCollect, where it belongs,
and hence prevent allObjects and allInstancesOf from breaking the
class table.

Rationalize all but one of the various sends of
runLeakCheckerForFullGC:excludeUnmarkedNewSpaceObjs:classIndicesShouldBeValid:
to runLeakCheckerForFullGC:.
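
The one-argument runLeakCheckerForFullGC: is not itself part of this diff; assuming
it simply forwards to the full selector with the usual default arguments, its shape
would be roughly as follows (both defaults shown are an assumption, not the committed code):

	runLeakCheckerForFullGC: fullGCFlag
		"Convenience send; the default arguments below are assumed for illustration only."
		^self runLeakCheckerForFullGC: fullGCFlag
			excludeUnmarkedNewSpaceObjs: false
			classIndicesShouldBeValid: fullGCFlag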

Fix printInstancesWithClassIndex: for simulation.

Make SpurMemoryManager's class comment describe the class table
as it is, rather than as it might be (i.e. classTable pages /are/ strong).

=============== Diff against VMMaker.oscog-eem.654 ===============

Item was changed:
  CogClass subclass: #SpurMemoryManager
(excessive size, no diff calculated)

Item was changed:
  ----- Method: SpurMemoryManager>>allInstancesOf: (in category 'primitive support') -----
  allInstancesOf: aClass
  	"Attempt to answer an array of all objects, excluding those that may
  	 be garbage collected as a side effect of allocating the result array.
  	 If no memory is available answer the number of instances as a SmallInteger.
  	 Since objects are at least 16 bytes big, and the largest SmallInteger covers
  	 1/4 of the address space, the count can never overflow."
  	| classIndex freeChunk ptr start limit count bytes |
  	classIndex := self rawHashBitsOf: aClass.
  	(classIndex = 0
  	 or: [aClass ~~ (self classOrNilAtIndex: classIndex)]) ifTrue:
  		[freeChunk := self allocateSlots: 0 format: self arrayFormat classIndex: ClassArrayCompactIndex.
  		 ^freeChunk].
+ 	self runLeakCheckerForFullGC: false.
  	self markObjects. "don't want to revive objects unnecessarily."
  	freeChunk := self allocateLargestFreeChunk.
  	ptr := start := freeChunk + self baseHeaderSize.
  	limit := self addressAfter: freeChunk.
  	count := 0.
  	self allHeapEntitiesDo:
  		[:obj| "continue enumerating even if no room so as to unmark all objects."
  		 (self isMarked: obj) ifTrue:
  			[(self isNormalObject: obj)
  				ifTrue:
  					[self setIsMarkedOf: obj to: false.
  					 (self classIndexOf: obj) = classIndex ifTrue:
  					 	[count := count + 1.
  						 ptr < limit ifTrue:
  							[self longAt: ptr put: obj.
  							 ptr := ptr + self bytesPerSlot]]]
  				ifFalse:
  					[(self isSegmentBridge: obj) ifFalse:
  						[self setIsMarkedOf: obj to: false]]]].
  	self assert: self allObjectsUnmarked.
  	self assert: (self isEmptyObjStack: markStack).
  	self emptyObjStack: weaklingStack.
  	(count > (ptr - start / self bytesPerSlot) "not enough room"
  	 or: [limit ~= ptr and: [limit - ptr <= self allocationUnit]]) ifTrue: "can't split a single word"
  		[self freeObject: freeChunk.
  		 ^self integerObjectOf: count].
  	count < self numSlotsMask ifTrue:
  		[| smallObj |
  		 smallObj := self allocateSlots: count format: self arrayFormat classIndex: ClassArrayCompactIndex.
  		 0 to: count - 1 do:
  			[:i|
  			self storePointerUnchecked: i ofObject: smallObj withValue: (self fetchPointer: i ofObject: freeChunk)].
  		 self freeChunkWithBytes: (self bytesInObject: freeChunk) at: (self startOfObject: freeChunk).
  		 self beRootIfOld: smallObj.
  		 self checkFreeSpace.
  		 ^smallObj].
  	bytes := self largeObjectBytesForSlots: count.
  	start := self startOfObject: freeChunk.
  	self freeChunkWithBytes: limit - start - bytes at: start + bytes.
  	totalFreeOldSpace := totalFreeOldSpace - bytes.
  	self setOverflowNumSlotsOf: freeChunk to: count.
  	self set: freeChunk classIndexTo: ClassArrayCompactIndex formatTo: self arrayFormat.
  	self possibleRootStoreInto: freeChunk.
  	self checkFreeSpace.
+ 	self runLeakCheckerForFullGC: false.
  	^freeChunk
  	
  	!
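
The 'count can never overflow' argument in the comment above reduces to simple
arithmetic.  A workspace-style check for the 32-bit case, which the 'largest
SmallInteger covers 1/4 of the address space' figure describes (the numbers are
illustrative, not taken from the commit):

	| maxObjectCount maxSmallInteger |
	"objects are at least 16 bytes, so a 2^32-byte address space holds at most 2^28 of them"
	maxObjectCount := (2 raisedTo: 32) // 16.	"268435456"
	"the largest positive 32-bit SmallInteger is 2^30 - 1, a quarter of the address space"
	maxSmallInteger := (2 raisedTo: 30) - 1.	"1073741823"
	maxObjectCount <= maxSmallInteger	"true, so answering the count as a SmallInteger is always safe"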

Item was changed:
  ----- Method: SpurMemoryManager>>allObjects (in category 'primitive support') -----
  allObjects
  	"Attempt to answer an array of all objects, excluding those that may
  	 be garbage collected as a side effect of allocating the result array.
  	 If no memory is available answer the number of objects as a SmallInteger.
  	 Since objects are at least 16 bytes big, and the largest SmallInteger covers
  	 1/4 of the address space, the count can never overflow."
  	| freeChunk ptr start limit count bytes |
+ 	self runLeakCheckerForFullGC: false.
  	self markObjects. "don't want to revive objects unnecessarily."
  	freeChunk := self allocateLargestFreeChunk.
  	ptr := start := freeChunk + self baseHeaderSize.
  	limit := self addressAfter: freeChunk.
  	count := 0.
  	self allHeapEntitiesDo:
  		[:obj| "continue enumerating even if no room so as to unmark all objects."
  		 (self isMarked: obj) ifTrue:
  			[(self isNormalObject: obj)
  				ifTrue:
  					[self setIsMarkedOf: obj to: false.
  					 count := count + 1.
  					 ptr < limit ifTrue:
  						[self longAt: ptr put: obj.
  						 ptr := ptr + self bytesPerSlot]]
  				ifFalse:
  					[(self isSegmentBridge: obj) ifFalse:
  						[self setIsMarkedOf: obj to: false]]]].
  	self assert: self allObjectsUnmarked.
  	self assert: (self isEmptyObjStack: markStack).
  	self emptyObjStack: weaklingStack.
  	self assert: count >= self numSlotsMask.
  	(count > (ptr - start / self bytesPerSlot) "not enough room"
  	 or: [limit ~= ptr and: [limit - ptr <= self allocationUnit]]) ifTrue: "can't split a single word"
  		[self freeChunkWithBytes: (self bytesInObject: freeChunk) at: (self startOfObject: freeChunk).
  		 self checkFreeSpace.
  		 ^self integerObjectOf: count].
  	bytes := self largeObjectBytesForSlots: count.
  	start := self startOfObject: freeChunk.
  	self freeChunkWithBytes: limit - start - bytes at: start + bytes.
  	totalFreeOldSpace := totalFreeOldSpace - bytes.
  	self setOverflowNumSlotsOf: freeChunk to: count.
  	self set: freeChunk classIndexTo: ClassArrayCompactIndex formatTo: self arrayFormat.
  	self possibleRootStoreInto: freeChunk.
  	self checkFreeSpace.
+ 	self runLeakCheckerForFullGC: false.
  	^freeChunk
  	
  	!

Item was changed:
  ----- Method: SpurMemoryManager>>globalGarbageCollect (in category 'gc - global') -----
  globalGarbageCollect
+ 	self runLeakCheckerForFullGC: true.
- 	self runLeakCheckerForFullGC: true
- 		excludeUnmarkedNewSpaceObjs: false
- 		classIndicesShouldBeValid: false.
  	self assert: self validObjStacks.
  	self assert: (self isEmptyObjStack: markStack).
  	self assert: (self isEmptyObjStack: weaklingStack).
  
  	self markObjects.
+ 	self expungeDuplicateAndUnmarkedClasses: true.
  	self nilUnmarkedWeaklingSlots.
  	self freeUnmarkedObjectsAndSortAndCoalesceFreeSpace.
  
+ 	"Mid-way the leak check must be more lenient.  Unmarked classes will have been
+ 	 expunged from the table, but unmarked instances will not yet have been reclaimed."
  	self runLeakCheckerForFullGC: true
  		excludeUnmarkedNewSpaceObjs: true
+ 		classIndicesShouldBeValid: true.
- 		classIndicesShouldBeValid: false.
  
  	self compact.
  	self eliminateAndFreeForwarders.
  
  	self assert: self validObjStacks.
  	self assert: (self isEmptyObjStack: markStack).
  	self assert: (self isEmptyObjStack: weaklingStack).
  	self assert: self allObjectsUnmarked.
+ 	self runLeakCheckerForFullGC: true!
- 	self runLeakCheckerForFullGC: true
- 		excludeUnmarkedNewSpaceObjs: false
- 		classIndicesShouldBeValid: true!

Item was changed:
  ----- Method: SpurMemoryManager>>markObjects (in category 'gc - global') -----
  markObjects
  	<inline: false>
  	"Mark all accessible objects."
  	"If the incremental collector is running mark bits may be set; stop it and clear them if necessary."
  	self ensureAllMarkBitsAreZero.
  	self ensureAdequateClassTableBitmap.
  	self initializeUnscannedEphemerons.
  	self initializeMarkStack.
  	self initializeWeaklingStack.
+ 	self markAccessibleObjects!
- 	self markAccessibleObjects.
- 	self expungeDuplicateAndUnmarkedClasses: true!

Item was changed:
  ----- Method: SpurMemoryManager>>printInstancesWithClassIndex: (in category 'debug printing') -----
  printInstancesWithClassIndex: classIndex
  	"Scan the heap printing the oops of any and all objects whose classIndex equals the argument."
  	<api>
  	<inline: false>
  	self allHeapEntitiesDo:
  		[:obj|
  		 (self classIndexOf: obj) = classIndex ifTrue:
+ 			[coInterpreter printHex: obj; cr]]!
- 			[self printHex: obj; cr]]!
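
Sending printHex: to coInterpreter rather than to self is what makes this work
under simulation, presumably because printHex: is provided by the interpreter
(and its simulator) rather than by SpurMemoryManager.  A hypothetical call from
a simulator session, with a made-up class index:

	objectMemory printInstancesWithClassIndex: 16r110	"receiver name and index are illustrative only"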

Item was changed:
  ----- Method: SpurMemoryManager>>scavengingGCTenuringIf: (in category 'gc - scavenging') -----
  scavengingGCTenuringIf: tenuringCriterion
  	"Run the scavenger."
  
  	self assert: remapBufferCount = 0.
  	(self asserta: scavenger eden limit - freeStart > coInterpreter interpreterAllocationReserveBytes) ifFalse:
  		[coInterpreter tab;
  			printNum: scavenger eden limit - freeStart; space;
  			printNum: coInterpreter interpreterAllocationReserveBytes; space;
  			printNum: coInterpreter interpreterAllocationReserveBytes - (scavenger eden limit - freeStart); cr].
  	self checkMemoryMap.
  	self checkFreeSpace.
- 	"coInterpreter printCallStackFP: coInterpreter framePointer"
- 
  	self runLeakCheckerForFullGC: false.
+ 
  	coInterpreter
  		preGCAction: GCModeScavenge;
  		"would prefer this to be in mapInterpreterOops, but
  		 compatibility with ObjectMemory dictates it goes here."
  		flushMethodCacheFrom: newSpaceStart to: newSpaceLimit.
  	needGCFlag := false.
  
  	gcStartUsecs := coInterpreter ioUTCMicrosecondsNow.
  
  	self doScavenge: tenuringCriterion.
  
  	statScavenges := statScavenges + 1.
  	statGCEndUsecs := coInterpreter ioUTCMicrosecondsNow.
  	statSGCDeltaUsecs := statGCEndUsecs - gcStartUsecs.
  	statScavengeGCUsecs := statScavengeGCUsecs + statSGCDeltaUsecs.
  	statRootTableCount := scavenger rememberedSetSize.
  
  	coInterpreter postGCAction: GCModeScavenge.
- 	self runLeakCheckerForFullGC: false.
  
+ 	self runLeakCheckerForFullGC: false.
  	self checkFreeSpace!


