[Vm-dev] simulation in BitBltSimulation

David T. Lewis lewis at mail.msen.com
Sat Mar 15 20:24:13 UTC 2014


On Fri, Mar 14, 2014 at 06:44:59PM -0700, tim Rowledge wrote:
> 
> 
> On 14-03-2014, at 6:16 PM, Eliot Miranda <eliot.miranda at gmail.com> wrote:
> 
> > Hi Tim,
> > 
> >     I'm simulating bitblt code and have a situation where
> > 
> > (cmShiftTable at: 0) = 4294967280
> > (cmShiftTable at: 0) hex '16rFFFFFFF0'
> > 
> > Clearly this should be -16.
> > 
> > This breaks simulation because
> >         (sourcePixel bitAnd: (cmMaskTable at: 0)) bitShift: (cmShiftTable at: 0)
> > is trying to make a 4Gb LargePositiveInteger ;-).
> > 
> > How can we make it so?
> 
> Hmm, well if you need to make a 4Gb LPI I guess you'd best have plenty of memory?
> 
> Dunno; looks like cmShiftTable is supposed to use int32_t thingies in the C code.
> BitBltSimulation>loadColorMapShiftOrMaskFrom: looks like the culprit to me though - (simulated) firstIndexableField: might not be doing the right thing?
>

firstIndexableField is ok, this looks like a problem elsewhere.
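
To see why the simulation blows up: Smalltalk's bitShift: treats a negative
argument as a right shift, so a shift entry misread as 4294967280 asks for a
left shift of roughly four billion bits. A small sketch in Python, whose
arbitrary-precision integers behave like LargePositiveInteger (the pixel value
below is a hypothetical example, not taken from the thread):

```python
def bit_shift(value, shift):
    """Mimic Smalltalk's bitShift: -- a negative shift moves bits right."""
    return value << shift if shift >= 0 else value >> -shift

pixel_bits = 0x00FF0000            # e.g. the red channel of a 32bpp pixel
print(bit_shift(pixel_bits, -16))  # intended result: 16rFF (255)

# bit_shift(pixel_bits, 4294967280) would try to build a ~4-billion-bit
# integer -- the runaway LargePositiveInteger Eliot describes.
```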

Probably not helpful, but maybe it will point someone to a clue: the shifts array
in a color map is an IntegerArray, so its entries must be signed 32-bit ints.
Somewhere along the line, a perfectly valid shift entry with value -16 must have
gotten converted or cast to unsigned int, so that it was then interpreted as an
integer with value 16rFFFFFFF0. I'm not very familiar with BitBlt, but this looks
to me like an inappropriate conversion from int to unsigned, or possibly something
related to the way an integer value is being extracted from an IntegerArray.
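
That hypothesis is easy to reproduce: the bit pattern of a signed 32-bit -16,
read back through an unsigned type, is exactly the 16rFFFFFFF0 / 4294967280
seen in the simulator. A minimal sketch in Python using the struct module:

```python
import struct

# Pack -16 as a signed little-endian 32-bit int, then unpack the
# identical four bytes as an unsigned 32-bit int.
raw = struct.pack('<i', -16)
misread, = struct.unpack('<I', raw)
print(misread, hex(misread))   # 4294967280 0xfffffff0

# Sign-extending the 32-bit value recovers the intended negative shift.
restored = misread - (1 << 32) if misread >= (1 << 31) else misread
print(restored)                # -16
```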

Dave
 

