[Vm-dev] VM Maker: VMMaker.oscogglue-eem.1035.mcz

commits at source.squeak.org commits at source.squeak.org
Wed May 6 19:53:19 UTC 2015


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscogglue-eem.1035.mcz

==================== Summary ====================

Name: VMMaker.oscogglue-eem.1035
Author: eem
Time: 6 May 2015, 12:51:20.423 pm
UUID: c241c497-2aea-44e9-8391-336ce99167f1
Ancestors: VMMaker.oscog-eem.1034

Fork from VMMaker.oscog-eem.1034 for stable Glue.

Changes from VMMaker.oscog-eem.1288 &
VMMaker.oscog-eem.1291:

Implement remembered table pruning via ref counts.
The algorithm selectively tenures objects to reduce
the remembered table, instead of merely tenuring
everything.

Be selective about remembering tenured objects;
actually scan their contents before remembering
willy-nilly.

Nuke obsolete copyToOldSpace: methods.

Change computeRefCountToShrinkRT to
- compute the ref counts and population in a single
  pass over the RT
- determine the ref count for tenuring based on
  half the population of remembered objects, /not/
  half the size of the RT.
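
For concreteness, here is a minimal standalone C sketch of that decision flow; every name, type and the stub threshold below is a placeholder chosen for illustration, not the VMMaker/Spur code itself:

/* Sketch of the selective-tenuring decision (illustrative names only). */
#include <stddef.h>

enum { TenureByAge = 1, TenureToShrinkRT = 3 };

typedef struct {
    size_t   rememberedSetSize;     /* entries currently in the remembered table  */
    size_t   rememberedSetRedZone;  /* "table is getting full" watermark           */
    int      tenureCriterion;
    unsigned refCountToShrinkRT;    /* tenuring threshold chosen from a histogram  */
} ScavengerState;

/* Stub: the real code reference-counts young objects reachable from the
   remembered table and derives the threshold from a histogram of those
   counts (see the sketch further down in the diff). */
static unsigned computeRefCountToShrinkRT(ScavengerState *s)
{
    (void)s;
    return 4;
}

/* Before an ordinary age-based scavenge, check whether the remembered table
   is in its red zone; if so, switch to selective tenuring for this scavenge.
   Copying then tenures only objects whose ref count reaches the threshold,
   so the table shrinks without tenuring all of new space. */
void strategizeToLimitRememberedTable(ScavengerState *s)
{
    if (s->tenureCriterion == TenureByAge
     && s->rememberedSetSize >= s->rememberedSetRedZone) {
        s->tenureCriterion = TenureToShrinkRT;
        s->refCountToShrinkRT = computeRefCountToShrinkRT(s);
    }
}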

=============== Diff against VMMaker.oscog-eem.1034 ===============

Item was added:
+ ----- Method: Spur32BitMemoryManager>>rtRefCountOf:put: (in category 'header access') -----
+ rtRefCountOf: obj put: refCount
+ 	"Set the rt reference count of obj; this is the three bit field comprised
+ 	 of isGrey,isPinned,isRemembered.  See computeRefCountToShrinkRT."
+ 	| header |
+ 	self assert: (refCount between: 0 and: MaxRTRefCount).
+ 	header := self longAt: obj.
+ 	header := header bitAnd: (7 << self rememberedBitShift) bitInvert32.
+ 	header := header + (refCount << self rememberedBitShift).
+ 	self longAt: obj put: header!
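
For readers more at home in C, the read-modify-write above looks roughly like the sketch below. The bit position is an arbitrary assumption for illustration; Spur defines its own rememberedBitShift and header layout.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Three adjacent header bits (the grey/pinned/remembered bits named in the
   comment) reused as a saturating 3-bit reference count.  The shift chosen
   here is arbitrary. */
#define REMEMBERED_BIT_SHIFT 29
#define MAX_RT_REF_COUNT      7   /* three bits: 0..7 */

static unsigned rtRefCountOf(uint64_t header)
{
    return (unsigned)((header >> REMEMBERED_BIT_SHIFT) & MAX_RT_REF_COUNT);
}

static uint64_t rtRefCountPut(uint64_t header, unsigned refCount)
{
    assert(refCount <= MAX_RT_REF_COUNT);
    header &= ~((uint64_t)7 << REMEMBERED_BIT_SHIFT);      /* clear the field   */
    header |= (uint64_t)refCount << REMEMBERED_BIT_SHIFT;  /* install the count */
    return header;
}

int main(void)
{
    uint64_t header = 0x123456789abcdef0ULL;
    header = rtRefCountPut(header, 5);
    printf("ref count is now %u\n", rtRefCountOf(header));  /* prints 5 */
    return 0;
}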

Item was added:
+ ----- Method: Spur64BitMemoryManager>>rtRefCountOf:put: (in category 'header access') -----
+ rtRefCountOf: obj put: refCount
+ 	"Set the rt reference count of obj; this is the three bit field comprised
+ 	 of isGrey,isPinned,isRemembered.  See computeRefCountToShrinkRT."
+ 	| header |
+ 	self assert: (refCount between: 0 and: MaxRTRefCount).
+ 	header := self longAt: obj.
+ 	header := header bitAnd: (7 << self rememberedBitShift) bitInvert64.
+ 	header := header + (refCount << self rememberedBitShift).
+ 	self longAt: obj put: header!

Item was changed:
  CogClass subclass: #SpurGenerationScavenger
+ 	instanceVariableNames: 'coInterpreter manager eden futureSpace pastSpace futureSurvivorStart rememberedSet rememberedSetSize previousRememberedSetSize rememberedSetRedZone rememberedSetLimit refCountToShrinkRT weakList ephemeronList tenureCriterion tenureThreshold tenuringClassIndex tenuringProportion numRememberedEphemerons statTenures'
- 	instanceVariableNames: 'coInterpreter manager eden futureSpace pastSpace futureSurvivorStart rememberedSet rememberedSetSize previousRememberedSetSize rememberedSetRedZone rememberedSetLimit weakList ephemeronList tenureCriterion tenureThreshold tenuringClassIndex tenuringProportion numRememberedEphemerons statTenures'
  	classVariableNames: ''
  	poolDictionaries: 'SpurMemoryManagementConstants'
  	category: 'VMMaker-SpurMemoryManager'!
  
  !SpurGenerationScavenger commentStamp: 'eem 9/30/2013 11:05' prior: 0!
  SpurGenerationScavenger is an implementation of David Ungar's Generation Scavenging garbage collection algorithm.  See
  	Generation Scavenging, A Non-disruptive, High-Performance Storage Reclamation Algorithm
  	David Ungar
  	Proceedings of the first ACM SIGSOFT/SIGPLAN software engineering symposium on Practical software development environments (SDE 1)
  	Pages 157 - 167 
  	ACM New York, NY, USA ©1984 
  
  Also relevant are
  	An adaptive tenuring policy for generation scavengers
  	David Ungar & Frank Jackson
  	ACM Transactions on Programming Languages and Systems (TOPLAS) TOPLAS Homepage archive
  	Volume 14 Issue 1, Jan. 1992 
  	Pages 1 - 27 
  	ACM New York, NY, USA ©1992
  and
  	Ephemerons: a new finalization mechanism
  	Barry Hayes
  	Proceedings of the 12th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications
  	Pages 176-183 
  	ACM New York, NY, USA ©1997
  
  See the text below the variable definitions for a full explanation of weak and ephemeron processing.
  
  Instance Variables
  	coInterpreter:					<StackInterpreterSimulator|CogVMSimulator>
  	eden:							<SpurNewSpaceSpace>
  	ephemeronList:					<Integer|nil>
  	futureSpace:					<SpurNewSpaceSpace>
  	futureSurvivorStart:				<Integer address>
  	manager:						<SpurMemoryManager|Spur32BitMMLESimulator et al>
  	numRememberedEphemerons:	<Integer>
  	pastSpace:						<SpurNewSpaceSpace>
  	previousRememberedSetSize:	<Integer>
  	rememberedSet:				<CArrayAccessor on: Array>
  	rememberedSetSize:			<Integer>
  	tenuringProportion:				<Float>
  	tenuringThreshold:				<Integer address>
  	weakList:						<Integer|nil>
  
  coInterpreter
  	- the interpreter/vm, in this context, the mutator
  
  manager
  	- the Spur memory manager
  
  eden
  	- the space containing newly created objects
  
  futureSpace
  	- the space to which surviving objects are copied during a scavenge
  
  futureSurvivorStart
  	- the allocation pointer into futureSpace
  
  pastSpace
  	- the space surviving objects live in until the next scavenge
  
  rememberedSet
  	- the root old space objects that refer to objects in new space; a scavenge starts from these roots and the interpreter's stack
  
  rememberedSetSize
  	- the size of the remembered set, also the first unused index in the rememberedSet
  
  previousRememberedSetSize:
  	- the size of the remembered set before scavenging objects in future space.
  
  numRememberedEphemerons
  	- the number of unscavenged ephemerons at the front of the rememberedSet.
  
  ephemeronList
  	- the head of the list of corpses of unscavenged ephemerons reached in the current phase
  
  weakList
  	- the head of the list of corpses of weak arrays reached during the scavenge.
  
  tenuringProportion
  	- the amount of pastSpace below which the system will not tenure unless futureSpace fills up, and above which it will eagerly tenure
  
  tenuringThreshold
  	- the pointer into pastSpace below which objects will be tenured
  
  Weakness and Ephemerality in the Scavenger.
  Weak arrays should not hold onto their referents (except via their strong fields, i.e. their named inst vars).  Ephemerons are objects that implement instance-based finalization; attaching an ephemeron to an object keeps that object alive and causes the ephemeron to "fire" when the object is only reachable from the ephemeron (or other ephemerons & weak arrays).  They are a special kind of Association that detects when its key is about to die, i.e. when an ephemeron's key is not reachable from the roots except from weak arrays and other ephemerons with about-to-die keys.  Note that if an ephemeron's key is not about to die then references from the rest of the ephemeron can indeed prevent ephemeron keys from dying.
  
  The scavenger is concerned with collecting objects in new space; therefore it only deals with weak arrays and ephemerons that are either in the remembered set or in new space.  By deferring scanning these objects until other reachable objects have been scavenged, the scavenger can detect dead or dying references.
  
  Weak Array Processing
  In the case of weak arrays this is simple.  The scavenger refuses to scavenge the referents of weak arrays in scavengeReferentsOf: until the entire scavenge is over.  It then scans the weak arrays in the remembered set and in future space and nils all fields in them that are referring to unforwarded objects in eden and past space, because these objects have not survived the scavenge.  The root weak arrays remaining to be scavenged are in the remembered table.  Surviving weak arrays in future space are collected on a list.  The list is threaded through the corpses of weak arrays in eden and/or past space.  weakList holds the slot offset of the first weak array found in eden and/or past space.  The next offset is stored in the weak array corpse's identityHash and format fields (22 bits & 5 bits of allocationUnits, for a max new space size of 2^28 bytes, 256Mb).  The list is threaded through corpses, but the surviving arrays are pointed to by the corpses' forwarding pointers.
  
  Ephemeron Processing
  The case of ephemerons is a little more complicated because an ephemeron's key should survive.  The scavenger is cyclical.  It scavenges the remembered set, which may copy and forward surviving objects in past and/or eden spaces to future space.  It then scavenges those promoted objects in future space until no more are promoted, which may in turn remember more objects.  The cycles continue until no more objects get promoted to future space and no more objects get remembered.  At this point all surviving objects are in futureSpace.
  
  So if the scavenger does not scan ephemerons in the remembered set or in future space until the scavenger finishes cycling, it can detect ephemerons whose keys are about to die because these will be unforwarded objects in eden and/or past space.  Ephemerons encountered in the remembered set are either processed like ordinary objects if their keys have been promoted to futureSpace, or are moved to the front of the rememberedSet (because, dear reader, it is a sequence) if their keys have not been promoted.  Ephemerons encountered in scavengeReferentsOf: are either scanned like normal objects if their keys have been promoted, or added to the ephemeronList, organized identically to the weakList, if their keys are yet to be promoted.  Since references from other ephemerons with surviving keys to ephemeron keys can and should prevent the ephemerons whose keys they are from firing, the scavenger does not fire ephemerons unless all unscavenged ephemerons have unscavenged keys.  So the unscavenged ephemerons (they will be at the beginning of the remembered set and on the ephemeronList) are scanned and any that have promoted keys are scavenged.  But if no unscavenged ephemerons have surviving keys then all the unscavenged ephemerons are fired and then scavenged.  This in turn may remember more objects and promote more objects to future space, and encounter more unscavenged ephemerons.  So the scavenger continues until no more objects are remembered, no more objects are promoted to future space and no more unscavenged ephemerons exist.!
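
The weak-array pass described above can be pictured with a small C sketch. The object model below is invented purely for illustration; the real scavenger threads its list through corpses' identityHash and format fields rather than using a struct like this.

#include <stdbool.h>
#include <stddef.h>

/* Toy object model for illustration only; Spur's representation differs. */
typedef struct Obj {
    bool        isNewSpace;    /* lives in eden or past space               */
    bool        isForwarded;   /* survived the scavenge; body was copied    */
    struct Obj *forwardee;     /* new location, valid when isForwarded      */
} Obj;

static Obj nilObject;          /* stand-in for the VM's nil                 */

/* Post-scavenge pass over one weak array's slots: survivors are followed
   through their forwarding pointers, dead new-space referents are nilled,
   and old-space referents are left untouched. */
void fixWeakSlots(Obj **slots, size_t numSlots)
{
    for (size_t i = 0; i < numSlots; i++) {
        Obj *ref = slots[i];
        if (ref == NULL || !ref->isNewSpace)
            continue;                         /* old space: leave alone      */
        slots[i] = ref->isForwarded
            ? ref->forwardee                  /* survivor: update reference  */
            : &nilObject;                     /* did not survive: nil it     */
    }
}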

Item was changed:
  ----- Method: SpurGenerationScavenger class>>initialize (in category 'class initialization') -----
  initialize
  	"SpurGenerationScavenger initialize"
  	TenureByAge := 1.
  	TenureByClass := 2.
+ 	TenureToShrinkRT := 3.
+ 	DontTenure := 4.
+ 	MarkOnTenure := 5.
+ 	MaxRTRefCount := 7 "The field comprised of {isGrey,isPinned,isRemembered}"!
- 	DontTenure := 3.
- 	DontTenureButDoUnmark := 4.
- 	MarkOnTenure := 5!

Item was added:
+ ----- Method: SpurGenerationScavenger>>allNewSpaceObjectsHaveZeroRTRefCount (in category 'remembered set') -----
+ allNewSpaceObjectsHaveZeroRTRefCount
+ 	manager allNewSpaceObjectsDo:
+ 		[:obj|
+ 		(manager rtRefCountOf: obj) > 0 ifTrue:
+ 			[^false]].
+ 	^true!

Item was added:
+ ----- Method: SpurGenerationScavenger>>computeRefCountToShrinkRT (in category 'remembered set') -----
+ computeRefCountToShrinkRT
+ 	"Some time in every scavenger's life there may come a time when someone writes code that stresses
+ 	 the remembered table.  One might conclude that if the remembered table is full, then the right thing
+ 	 to do is simply to tenure everything, emptying the remembered table.  But in some circumstances this
+ 	 can be counter-productive, and result in the same situation arising soon after tenuring everything.
+ 	 Instead, we can try and selectively prune the remembered table, tenuring only those objects that
+ 	 are referenced by many objects in the remembered table.  That's what this algorithm does.  It
+ 	 reference counts young objects referenced from the remembered set, and then sets a threshold
+ 	 used to tenure objects oft referenced from the remembered set, thereby allowing the remembered
+ 	 set to shrink, while not tenuring everything.
+ 
+ 	 Once in a network monitoring application in a galaxy not dissimilar from the one this code inhabits,
+ 	 a tree of nodes referring to large integers was in precisely this situation.  The nodes were old, and
+ 	 the integers were in new space.  Some of the nodes referred to shared numbers, some their own
+ 	 unique numbers.  The numbers were updated frequently. Were new space simply tenured when the
+ 	 remembered table was full, the remembered table would soon fill up as new numbers were computed.
+ 	 Only by selectively pruning the remembered table of nodes that shared data was a balance achieved
+ 	 whereby the remembered table population was kept small, and tenuring rates were low."
+ 	<inline: #never>
+ 	| population |
+ 	<var: 'population' declareC: 'long population[MaxRTRefCount + 1]'>
+ 	self cCode: [self me: population ms: 0 et: (self sizeof: #long) * (MaxRTRefCount + 1)]
+ 		inSmalltalk: [population := CArrayAccessor on: (Array new: MaxRTRefCount + 1 withAll: 0)].
+ 	self assert: self allNewSpaceObjectsHaveZeroRTRefCount.
+ 	self referenceCountRememberedReferents: population.
+ 	self setRefCountToShrinkRT: population
+ 
+ 	"For debugging:
+ 	(manager allNewSpaceObjectsDo: [:o| manager rtRefCountOf: o put: 0])"!
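
A standalone C sketch of the two passes this method drives (the me:ms:et: send above is the Slang spelling that translates to a memset call in the generated C): saturating 3-bit counting of young referents with a population histogram maintained on the side, then walking the histogram down from the top until the covered objects make up at least half of the counted referents. All types and helpers here are illustrative, not the translated VMMaker code.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define MAX_RT_REF_COUNT 7          /* the saturating 3-bit counter from above */

/* Toy stand-in for "young objects referenced from the remembered table":
   refCount[k] is the k-th young referent's counter. */
static unsigned char refCount[64];

/* Pass 1: bump a referent's saturating counter, keeping the population
   histogram consistent with exactly one decrement/increment per bump. */
static void countReference(size_t referent, long population[MAX_RT_REF_COUNT + 1])
{
    unsigned c = refCount[referent];
    if (c < MAX_RT_REF_COUNT) {
        if (c > 0)
            population[c] -= 1;                 /* leaving bucket c      */
        refCount[referent] = (unsigned char)(c + 1);
        population[c + 1] += 1;                 /* entering bucket c + 1 */
    }
}

/* Pass 2: walk the histogram downwards from the top and stop once the
   objects at or above the candidate count cover at least half of all
   counted referents; that count becomes the tenuring threshold. */
static int chooseRefCountToShrinkRT(const long population[MAX_RT_REF_COUNT + 1])
{
    long entire = 0, covered = 0;
    int i = MAX_RT_REF_COUNT + 1;
    for (int j = 1; j <= MAX_RT_REF_COUNT; j++)
        entire += population[j];
    while (covered < entire / 2 && --i >= 0)
        covered += population[i];
    return i > 0 ? i : 0;                       /* clamp at 0, as the method does */
}

int main(void)
{
    long population[MAX_RT_REF_COUNT + 1];
    memset(population, 0, sizeof population);
    /* Pretend the remembered table references referent 0 five times and
       referent 1 once. */
    for (int k = 0; k < 5; k++)
        countReference(0, population);
    countReference(1, population);
    printf("tenure objects whose ref count is >= %d\n",
           chooseRefCountToShrinkRT(population));
    return 0;
}

With that toy input the threshold comes out as 5, so only the heavily shared referent would be tenured.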

Item was changed:
  ----- Method: SpurGenerationScavenger>>copyToFutureSpace:bytes: (in category 'scavenger') -----
  copyToFutureSpace: survivor bytes: bytesInObject
  	"Copy survivor to futureSpace.  Assume it will fit (checked by sender).
  	 Answer the new oop of the object (it may have an overflow size field)."
  	<inline: true>
  	| startOfSurvivor newStart |
  	self assert: futureSurvivorStart + bytesInObject <= futureSpace limit.
  	startOfSurvivor := manager startOfObject: survivor.
  	newStart := futureSurvivorStart.
  	futureSurvivorStart := futureSurvivorStart + bytesInObject.
  	manager mem: newStart asVoidPointer cp: startOfSurvivor asVoidPointer y: bytesInObject.
+ 	tenureCriterion = TenureToShrinkRT ifTrue:
+ 		[manager rtRefCountOf: newStart + (survivor - startOfSurvivor) put: 0].
  	^newStart + (survivor - startOfSurvivor)!

Item was removed:
- ----- Method: SpurGenerationScavenger>>copyToOldSpace: (in category 'scavenger') -----
- copyToOldSpace: survivor
- 	"Copy survivor to oldSpace.  Answer the new oop of the object."
- 	<inline: true>
- 	| nTenures numSlots hash newOop |
- 	nTenures := statTenures.
- 	self flag: 'why not just pass header??'.
- 	numSlots := manager numSlotsOf: survivor.
- 	hash := manager rawHashBitsOf: survivor.
- 	newOop := manager
- 					allocateSlotsInOldSpace: numSlots
- 					format: (manager formatOf: survivor)
- 					classIndex: (manager classIndexOf: survivor).
- 	newOop ifNil:
- 		[manager growOldSpaceByAtLeast: 0. "grow by growHeadroom"
- 		 newOop := manager
- 					allocateSlotsInOldSpace: numSlots
- 					format: (manager formatOf: survivor)
- 					classIndex: (manager classIndexOf: survivor).
- 		 newOop ifNil:
- 			[self error: 'out of memory']].
- 	manager
- 		mem: (newOop + manager baseHeaderSize) asVoidPointer
- 		cp: (survivor + manager baseHeaderSize) asVoidPointer
- 		y: numSlots * manager wordSize.
- 	(manager hasPointerFields: survivor) ifTrue:
- 		[self remember: newOop].
- 	hash ~= 0 ifTrue:
- 		[manager setHashBitsOf: newOop to: hash].
- 	statTenures := nTenures + 1.
- 	^newOop!

Item was removed:
- ----- Method: SpurGenerationScavenger>>copyToOldSpace:bytes: (in category 'scavenger') -----
- copyToOldSpace: survivor bytes: bytesInObject
- 	"Copy survivor to oldSpace.  Answer the new oop of the object."
- 	<inline: true>
- 	| nTenures startOfSurvivor newStart newOop |
- 	nTenures := statTenures.
- 	startOfSurvivor := manager startOfObject: survivor.
- 	newStart := manager allocateOldSpaceChunkOfBytes: bytesInObject.
- 	newStart ifNil:
- 		[manager growOldSpaceByAtLeast: 0. "grow by growHeadroom"
- 		 newStart := manager allocateOldSpaceChunkOfBytes: bytesInObject.
- 		 newStart ifNil:
- 			[self error: 'out of memory']].
- 	manager checkFreeSpace.
- 	manager mem: newStart asVoidPointer cp: startOfSurvivor asVoidPointer y: bytesInObject.
- 	newOop := newStart + (survivor - startOfSurvivor).
- 	(manager hasPointerFields: survivor) ifTrue:
- 		[self remember: newOop].
- 	statTenures := nTenures + 1.
- 	^newOop!

Item was changed:
  ----- Method: SpurGenerationScavenger>>copyToOldSpace:bytes:format: (in category 'scavenger') -----
  copyToOldSpace: survivor bytes: bytesInObject format: formatOfSurvivor
  	"Copy survivor to oldSpace.  Answer the new oop of the object."
+ 	<inline: #never> "Should be too infrequent to lower icache density of copyAndForward:"
- 	<inline: true>
  	| nTenures startOfSurvivor newStart newOop |
  	self assert: (formatOfSurvivor = (manager formatOf: survivor)
  				and: [((manager isMarked: survivor) not or: [tenureCriterion = MarkOnTenure])
+ 				and: [tenureCriterion = TenureToShrinkRT
+ 					or: [(manager isPinned: survivor) not
+ 						and: [(manager isRemembered: survivor) not]]]]).
- 				and: [(manager isPinned: survivor) not
- 				and: [(manager isRemembered: survivor) not]]]).
  	nTenures := statTenures.
  	startOfSurvivor := manager startOfObject: survivor.
  	newStart := manager allocateOldSpaceChunkOfBytes: bytesInObject.
  	newStart ifNil:
  		[manager growOldSpaceByAtLeast: 0. "grow by growHeadroom"
  		 newStart := manager allocateOldSpaceChunkOfBytes: bytesInObject.
  		 newStart ifNil:
  			[self error: 'out of memory']].
  	"manager checkFreeSpace."
  	manager mem: newStart asVoidPointer cp: startOfSurvivor asVoidPointer y: bytesInObject.
  	newOop := newStart + (survivor - startOfSurvivor).
+ 	tenureCriterion >= (TenureToShrinkRT min: MarkOnTenure) ifTrue:
+ 		[tenureCriterion = TenureToShrinkRT ifTrue:
+ 			[manager rtRefCountOf: newOop put: 0].
+ 		 tenureCriterion = MarkOnTenure ifTrue:
+ 			[manager setIsMarkedOf: newOop to: true]].
- 	(manager isAnyPointerFormat: formatOfSurvivor) ifTrue:
- 		[self remember: newOop].
- 	tenureCriterion = MarkOnTenure ifTrue:
- 		[manager setIsMarkedOf: newOop to: true].
  	statTenures := nTenures + 1.
+ 	(manager isAnyPointerFormat: formatOfSurvivor) ifTrue:
+ 		["A very quick and dirty scan to find young referents.  If we misidentify bytes
+ 		  in a CompiledMethod as young we don't care; it's unlikely, and a subsequent
+ 		  scan of the rt will filter the object out.  But it's good to filter here because
+ 		  otherwise an attempt to shrink the RT may simply fill it up with new objects,
+ 		  and here the data is likely in the cache."
+ 		 manager baseHeaderSize to: bytesInObject - (survivor - startOfSurvivor) - manager wordSize by: manager wordSize do:
+ 			[:p| | field |
+ 			field := manager longAt: survivor + p.
+ 			(manager isReallyYoung: field) ifTrue:
+ 				[self remember: newOop.
+ 				 ^newOop]]].
  	^newOop!
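
The "quick and dirty scan" in the comment above amounts to the following C sketch; the new-space bounds test and the remember hook are stand-ins for Spur's isReallyYoung: and the scavenger's remember:.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative new-space bounds; Spur's isReallyYoung: test differs in
   detail but answers the same question. */
static uintptr_t newSpaceStart, newSpaceLimit;

static bool isReallyYoung(uintptr_t word)
{
    return word >= newSpaceStart && word < newSpaceLimit;
}

/* Assumed hook: add newOop to the remembered table. */
void remember(uintptr_t newOop);

/* After copying a pointer object of bodyBytes bytes to old space at newOop,
   remember it only if some field still looks like a young pointer.  A false
   positive (e.g. raw bits in a CompiledMethod that resemble a young address)
   is harmless: a later scan of the remembered table filters it out. */
void rememberIfYoungReferents(uintptr_t newOop, const uintptr_t *body,
                              size_t bodyBytes)
{
    for (size_t i = 0; i < bodyBytes / sizeof(uintptr_t); i++) {
        if (isReallyYoung(body[i])) {
            remember(newOop);
            return;                  /* one young field is enough; stop here */
        }
    }
    /* No young fields: the freshly tenured object need not be remembered. */
}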

Item was changed:
+ ----- Method: SpurGenerationScavenger>>growRememberedSet (in category 'remembered set') -----
- ----- Method: SpurGenerationScavenger>>growRememberedSet (in category 'store check') -----
  growRememberedSet
  	| obj numSlots newObj base |
  	<inline: false> "Don't ruin locality in remember:"
  	<var: #base type: #'sqInt *'>
  	obj := manager rememberedSetObj.
  	numSlots := manager numSlotsOf: obj.
  	self assert: numSlots >= 1024.
  	newObj := manager allocatePinnedSlots: numSlots * 2.
  	newObj ifNil:
  		[newObj := manager allocatePinnedSlots: numSlots + 1024.
  		 newObj ifNil:
  			[self error: 'could not grow remembered set']].
  	manager rememberedSetObj: newObj.
  	base := manager firstIndexableField: newObj.
  	0 to: rememberedSetSize - 1 do:
  		[:i| base at: i put: (rememberedSet at: i)].
  	"if growing in the middle of a GC, need to preserve marked status."
  	(manager isMarked: obj) ifTrue:
  		[manager
  			setIsMarkedOf: newObj to: true;
  			setIsMarkedOf: obj to: false].
  	manager freeObject: obj.
  	rememberedSet := base.
+ 	rememberedSetLimit := manager numSlotsOf: newObj.
+ 	self setRememberedSetRedZone!
- 	rememberedSetLimit := numSlots * 2.
- 	rememberedSetRedZone := rememberedSetLimit * 3 + 3 // 4!

Item was changed:
  ----- Method: SpurGenerationScavenger>>initializeRememberedSet (in category 'initialization') -----
  initializeRememberedSet
  	| obj |
  	obj := manager rememberedSetObj.
  	obj = manager nilObject ifTrue:
  		[obj := manager allocatePinnedSlots: 1024.
  		 manager rememberedSetObj: obj].
  	rememberedSet := manager firstIndexableField: obj.
  	rememberedSetSize := 0.
  	rememberedSetLimit := manager numSlotsOf: obj.
+ 	self setRememberedSetRedZone!
- 	rememberedSetRedZone := rememberedSetLimit * 3 + 3 // 4!

Item was added:
+ ----- Method: SpurGenerationScavenger>>referenceCountRememberedReferents: (in category 'remembered set') -----
+ referenceCountRememberedReferents: population
+ 	"Both reference count young objects reachable from the RT,
+ 	 and count the populations of each ref count, in a single pass."
+ 	<var: 'population' declareC: 'long population[MaxRTRefCount + 1]'>
+ 	<inline: true>
+ 	0 to: rememberedSetSize - 1 do:
+ 		[:i| | elephant |
+ 		elephant := rememberedSet at: i.
+ 		0 to: (manager numPointerSlotsOf: elephant) - 1 do:
+ 			[:j| | referent refCount |
+ 			referent := manager fetchPointer: j ofObject: elephant.
+ 			(manager isReallyYoung: referent) ifTrue:
+ 				[refCount := manager rtRefCountOf: referent.
+ 				 refCount < MaxRTRefCount ifTrue:
+ 					[refCount > 0 ifTrue:
+ 						[population at: refCount put: (population at: refCount) - 1].
+ 					 refCount := refCount + 1.
+ 					 manager rtRefCountOf: referent put: refCount.
+ 					 population at: refCount put: (population at: refCount) + 1]]]].!

Item was added:
+ ----- Method: SpurGenerationScavenger>>rememberedSetLimit (in category 'accessing') -----
+ rememberedSetLimit
+ 	<cmacro: '() GIV(rememberedSetLimit)'>
+ 	^rememberedSetLimit!

Item was changed:
  ----- Method: SpurGenerationScavenger>>scavenge: (in category 'scavenger') -----
  scavenge: tenuringCriterion
  	"The main routine, scavenge, scavenges young objects reachable from the roots (the stack zone
  	 and the rememberedTable).  It first scavenges the new objects immediately reachable from old
  	 ones (all in the remembered table), then the stack zone.  Then it scavenges those that are
  	 transitively reachable.  If this results in a promotion, the promotee gets remembered, and it first
  	 scavenges objects adjacent to the promotee, then scavenges the ones reachable from the
  	 promoted.  This loop continues until no more reachable objects are left.  At that point,
  	 pastSurvivorSpace is exchanged with futureSurvivorSpace.  Then any surviving weakArrays and
  	 weakArrays in the remembered set can be processed and their dead elements nilled.
  
  	 By default promotion (tenuring) is based on age and amount of objects scavenged.  But
  	 tenuring can be based on e.g. a particular class.  The argument selects the tenuring criterion.
  
  	 Answer the limit of pastSpace, to allow the memory manager to bounds check survivors."
  	tenureCriterion := tenuringCriterion.
+ 	self strategizeToLimitRememberedTable.
  	self scavengeLoop.
  	self processWeaklings.
  	self computeTenuringThreshold.
  	self exchangeSurvivorSpaces.
  	^self initFutureSpaceStart!
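
As a rough C outline of the phase ordering this comment describes (every function is a stub named after the corresponding step; the real scavengeLoop is more involved):

#include <stdbool.h>

/* Stubs named after the steps in the comment above; bodies elided. */
static void strategizeToLimitRememberedTable(void) {}
static void scavengeRememberedSet(void)            {}
static void scavengeStackZone(void)                {}
static bool scavengeFutureSpace(void)              { return false; }
static bool scavengeNewlyRememberedObjects(void)   { return false; }
static void processWeaklings(void)                 {}
static void computeTenuringThreshold(void)         {}
static void exchangeSurvivorSpaces(void)           {}

/* One scavenge: choose a tenuring strategy, copy the roots, then cycle
   until neither promotion nor remembering produces more work, and finally
   run the weak/ephemeron pass and flip the survivor spaces. */
void scavenge(void)
{
    strategizeToLimitRememberedTable();  /* may switch to TenureToShrinkRT  */
    scavengeRememberedSet();
    scavengeStackZone();
    while (scavengeFutureSpace() || scavengeNewlyRememberedObjects())
        ;                                /* promotions can remember more    */
    processWeaklings();
    computeTenuringThreshold();
    exchangeSurvivorSpaces();
}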

Item was added:
+ ----- Method: SpurGenerationScavenger>>setRefCountToShrinkRT: (in category 'remembered set') -----
+ setRefCountToShrinkRT: population
+ 	"Choose a refCount that should shrink the rt by at least half,
+ 	 i.e. find the largest reference count that at least half of the remembered referents reach."
+ 	<var: 'population' declareC: 'long population[MaxRTRefCount + 1]'>
+ 	<inline: true>
+ 	| entirePopulation i count |
+ 	self assert: (population at: 0) = 0.
+ 	entirePopulation := 0.
+ 	1 to: MaxRTRefCount do:
+ 		[:j| entirePopulation := entirePopulation + (population at: j)].
+ 	count := 0.
+ 	i := MaxRTRefCount + 1.
+ 	[count < (entirePopulation // 2) and: [(i := i - 1) >= 0]] whileTrue:
+ 		[count := count + (population at: i)].
+ 	refCountToShrinkRT := i max: 0!

Item was added:
+ ----- Method: SpurGenerationScavenger>>setRememberedSetRedZone (in category 'remembered set') -----
+ setRememberedSetRedZone
+ 	| fudge |
+ 	<inline: true>
+ 	"fudge is a minimum below which we don't care about growing the RT.
+ 	 It is chosen so that with a default 4Mb new space, the RT is not considered
+ 	 full until it has from 512 to 1024 entries."
+ 	fudge := eden limit - eden start / manager wordSize // 1024.
+ 	rememberedSetRedZone := rememberedSetLimit * 3 // 4 max: fudge!
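
Worked through in C for concreteness: taking eden to be essentially the whole default 4MB new space, the fudge comes out at 1024 entries with 4-byte words and 512 with 8-byte words, and the red zone is the larger of that and three quarters of the current table size (768 for the initial 1024-slot table).

#include <stdio.h>

/* Reproduce the arithmetic in the comment above, taking eden to be
   essentially the whole default 4MB new space. */
int main(void)
{
    long edenBytes = 4L * 1024 * 1024;
    for (long wordSize = 4; wordSize <= 8; wordSize *= 2) {
        long fudge = edenBytes / wordSize / 1024;   /* 1024 or 512 entries */
        long rememberedSetLimit = 1024;             /* initial RT size     */
        long redZone = rememberedSetLimit * 3 / 4;  /* 768                 */
        if (redZone < fudge)
            redZone = fudge;
        printf("wordSize %ld: fudge %ld, red zone %ld\n",
               wordSize, fudge, redZone);
    }
    return 0;
}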

Item was changed:
  ----- Method: SpurGenerationScavenger>>shouldBeTenured: (in category 'scavenger') -----
  shouldBeTenured: survivor
  	"Answer if an object should be tenured.  The default policy tenuring policy
  	 is to use the tenuringThreshold to decide. If the survivors (measured in
  	 bytes) are above some fraction of the survivor space then objects below
  	 the threshold (older objects, since allocation grows upwards and hence
  	 new objects are later than old) are scavenged.  Otherwise, the threshold
  	 is set to 0 and no objects are tenured.  e.g. see
  		An adaptive tenuring policy for generation scavengers,
  		David Ungar & Frank Jackson.
  		ACM TOPLAS, Volume 14 Issue 1, Jan. 1992, pp 1 - 27.
  
  	 The other policies are for special purposes."
  	^tenureCriterion
  		caseOf: {
  			[TenureByAge]	->
  				[survivor < tenureThreshold]. 
  			[TenureByClass] ->
  				[(manager classIndexOf: survivor) = tenuringClassIndex].
+ 			[TenureToShrinkRT]	->
+ 				[(manager rtRefCountOf: survivor) >= refCountToShrinkRT]  }
- 			[DontTenureButDoUnmark]	->
- 				[manager setIsMarkedOf: survivor to: false.
- 				 false]  }
  		otherwise: [false]!
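
The same dispatch written as a C switch, for comparison; the globals and accessors are placeholders mirroring the scavenger state and the constants initialized on the class side above.

/* Placeholders mirroring the scavenger's state and the class-side
   constants (TenureByAge = 1, TenureByClass = 2, ...). */
enum { TenureByAge = 1, TenureByClass, TenureToShrinkRT, DontTenure, MarkOnTenure };

static int      tenureCriterion;
static long     tenureThreshold;       /* pointer into past space (by age)   */
static int      tenuringClassIndex;    /* class to evacuate (by class)       */
static unsigned refCountToShrinkRT;    /* histogram-derived threshold        */

/* Assumed memory-manager accessors. */
extern int      classIndexOf(long oop);
extern unsigned rtRefCountOf(long oop);

int shouldBeTenured(long survivor)
{
    switch (tenureCriterion) {
    case TenureByAge:      return survivor < tenureThreshold;
    case TenureByClass:    return classIndexOf(survivor) == tenuringClassIndex;
    case TenureToShrinkRT: return rtRefCountOf(survivor) >= refCountToShrinkRT;
    default:               return 0;   /* DontTenure, MarkOnTenure, etc.     */
    }
}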

Item was added:
+ ----- Method: SpurGenerationScavenger>>strategizeToLimitRememberedTable (in category 'remembered set') -----
+ strategizeToLimitRememberedTable
+ 	<inline: true>
+ 	(tenureCriterion = TenureByAge
+ 	 and: [rememberedSetSize >= rememberedSetRedZone]) ifTrue:
+ 		[tenureCriterion := TenureToShrinkRT.
+ 		 self computeRefCountToShrinkRT]!

Item was added:
+ ----- Method: SpurGenerationScavengerSimulator>>computeRefCountToShrinkRT (in category 'remembered set') -----
+ computeRefCountToShrinkRT
+ 	self halt.
+ 	^super computeRefCountToShrinkRT!

Item was removed:
- ----- Method: SpurGenerationScavengerSimulator>>remember: (in category 'store check') -----
- remember: objOop
- 	(rememberedSetSize > 0
- 	 and: [(rememberedSet at: rememberedSetSize - 1) = objOop]) ifTrue:
- 		[self halt].
- 	^super remember: objOop!

Item was changed:
  SharedPool subclass: #SpurMemoryManagementConstants
  	instanceVariableNames: ''
+ 	classVariableNames: 'DontTenure MarkOnTenure MaxRTRefCount TenureByAge TenureByClass TenureToShrinkRT'
- 	classVariableNames: 'DontTenure DontTenureButDoUnmark MarkOnTenure TenureByAge TenureByClass'
  	poolDictionaries: ''
  	category: 'VMMaker-SpurMemoryManager'!

Item was changed:
  ----- Method: SpurMemoryManager>>initializeObjectMemory: (in category 'initialization') -----
  initializeObjectMemory: bytesToShift
  	"Initialize object memory variables at startup time. Assume endOfMemory at al are
  	 initialised by the image-reading code via setHeapBase:memoryLimit:endOfMemory:.
  	 endOfMemory is assumed to point to the end of the last object in the image.
  	 Assume: image reader also initializes the following variables:
  		specialObjectsOop
  		lastHash"
  	<inline: false>
  	| freeListObj |
  	"Catch mis-initializations leading to bad translations to C"
  	self assert: self baseHeaderSize = self baseHeaderSize.
  	self assert: (self maxSlotsForAlloc * self wordSize) asInteger > 0.
  	self bootstrapping ifFalse:
  		[self
  			initSegmentBridgeWithBytes: self bridgeSize
  			at: endOfMemory - self bridgeSize].
  	segmentManager adjustSegmentSwizzlesBy: bytesToShift.
  	"image may be at a different address; adjust oops for new location"
  	self adjustAllOopsBy: bytesToShift.
  	specialObjectsOop := segmentManager swizzleObj: specialObjectsOop.
  
  	"heavily used special objects"
  	nilObj		:= self splObj: NilObject.
  	falseObj	:= self splObj: FalseObject.
  	trueObj		:= self splObj: TrueObject.
  
  	"In Cog we insist that nil, true & false are next to each other (Cogit generates tighter
  	 conditional branch code as a result).  In addition, Spur places the free lists and
  	 class table root page immediately following them."
  	self assert: nilObj = oldSpaceStart.
  	self assert: falseObj = (self objectAfter: nilObj).
  	self assert: trueObj = (self objectAfter: falseObj).
  	freeListObj := self objectAfter: trueObj.
  	self setHiddenRootsObj: (self objectAfter: freeListObj).
  	markStack := self swizzleObjStackAt: MarkStackRootIndex.
  	weaklingStack := self swizzleObjStackAt: WeaklingStackRootIndex.
  	ephemeronQueue := self swizzleObjStackAt: EphemeronQueueRootIndex.
  	self assert: self validObjStacks.
  	self assert: (self isEmptyObjStack: markStack).
  	self assert: (self isEmptyObjStack: weaklingStack).
  
  	self initializeFreeSpacePostLoad: freeListObj.
  	segmentManager collapseSegmentsPostSwizzle.
  	self computeFreeSpacePostSwizzle.
  	self initializeOldSpaceFirstFree: freeOldSpaceStart. "initializes endOfMemory, freeStart, free space"
+ 	self initializeNewSpaceVariables.
+ 	scavenger initializeRememberedSet.
- 	"self bootstrapping ifFalse:
- 		["self initializeNewSpaceVariables.
- 		 scavenger initializeRememberedSet"]".
  	segmentManager checkSegments.
  
  	numCompactionPasses := CompactionPassesForGC.
  
  	"These defaults should depend on machine size; e.g. too small on a powerful laptop, too big on a Pi."
  	growHeadroom := 16*1024*1024.		"headroom when growing"
  	shrinkThreshold := 32*1024*1024.		"free space before shrinking"
  	self setHeapSizeAtPreviousGC.
  	heapGrowthToSizeGCRatio := 0.333333. "By default GC after scavenge if heap has grown by a third since the last GC"!

Item was changed:
  ----- Method: SpurMemoryManager>>rootTableCapacity (in category 'scavenger') -----
  rootTableCapacity
+ 	<cmacro: '() GIV(rememberedSetLimit)'>
+ 	^scavenger rememberedSetLimit!
- 	^self numSlotsOf: self rememberedSetObj!

Item was changed:
  ----- Method: SpurMemoryManager>>rootTableCount (in category 'accessing') -----
  rootTableCount
+ 	<cmacro: '() GIV(rememberedSetSize)'>
  	^scavenger rememberedSetSize!

Item was added:
+ ----- Method: SpurMemoryManager>>rtRefCountOf: (in category 'gc - scavenging') -----
+ rtRefCountOf: obj
+ 	"Answer the rt reference count of obj; this is the three bit field comprised
+ 	 of isGrey,isPinned,isRemembered.  See computeRefCountToShrinkRT."
+ 	^(self longAt: obj) >> self rememberedBitShift bitAnd: MaxRTRefCount!

Item was added:
+ ----- Method: SpurMemoryManager>>rtRefCountOf:put: (in category 'gc - scavenging') -----
+ rtRefCountOf: obj put: refCount
+ 	"Set the rt reference count of obj; this is the three bit field comprised
+ 	 of isGrey,isPinned,isRemembered.  See computeRefCountToShrinkRT."
+ 	self subclassResponsibility!

Item was added:
+ ----- Method: StackInterpreterSimulator>>primitiveVMParameter (in category 'system control primitives') -----
+ primitiveVMParameter
+ 	(self stackTop = (objectMemory integerObjectOf: 9)
+ 	 or: [self stackTop = (objectMemory integerObjectOf: 52)]) ifTrue:
+ 		[self halt].
+ 	^super primitiveVMParameter!

Item was changed:
  ----- Method: VMMaker class>>initialize (in category 'initialisation') -----
  initialize
  	"VMMaker initialize"
  	DirNames := Dictionary new.
  	DirNames
  		at: #coreVMDir put: 'vm';
  		at: #platformsDir put: 'platforms';
  		at: #pluginsDir put: 'plugins';
  		at: #sourceDir put: 'src'.
  
  	"Try and decide where the Cog source tree is.  Two configurations are likely.
  	 One is that the VMMaker image is running in the image directory in the
  	 source tree and hence everything will be at '..'.
  	 Another is where the source tree is at the same level as the VMMaker image,
  	 in which case it is likely called oscogvm or Cog."
+ 	#('oscogvm.glue/platforms' '../platforms' 'oscogvm/platforms' 'Cog/platforms')
+ 		with: #('oscogvm.glue' '../' 'oscogvm' 'Cog')
- 	#('../platforms' 'oscogvm/platforms' 'Cog/platforms')
- 		with: #('../' 'oscogvm' 'Cog')
  		do: [:dir :path|
  			(FileDirectory default directoryExists: dir) ifTrue:
  				[DirNames at: #sourceTree put: path.
  				 ^self]]!
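
For example, the probes above correspond to these locations of the platforms directory, relative to wherever the VMMaker image is running (only the platforms directory is what the test actually checks):

	../platforms			"image directory lives inside the source tree"
	oscogvm/platforms		"source tree checked out beside the image"
	Cog/platforms			"ditto, under the name Cog"
	oscogvm.glue/platforms		"the glue checkout added by this version"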


