WarpBlt from 16bpp to 32bpp

Andreas Raab andreas.raab at gmx.de
Fri Apr 27 04:41:24 UTC 2007


 > so it tries to install a color map from 32-bit regardless of the actual
 > depth of the source.  Why?

For precisely the reason you noticed. The color map ensures that the 
averaged pixel can be mapped from the 32-bit RGBA representation into 
whatever the destination depth is. Internally, when cellSize > 1 we always 
treat the input as 32-bit since we know that WarpBlt will average that 
way. But why this isn't working for 16->32 is a good question...
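
For illustration, here is roughly what happens per destination pixel when 
cellSize is 2: four source samples, already expanded to 32-bit, have their 
channels summed and shifted back down. This is a sketch with made-up 
sample values, not the actual VM code:

	| pixels r g b a |
	pixels _ Array new: 4 withAll: 16rFF00FF00.	"four opaque green samples"
	r _ g _ b _ a _ 0.
	pixels do: [:p |
		a _ a + ((p >> 24) bitAnd: 16rFF).
		r _ r + ((p >> 16) bitAnd: 16rFF).
		g _ g + ((p >> 8) bitAnd: 16rFF).
		b _ b + (p bitAnd: 16rFF)].
	"nPix = 4, so shift right instead of dividing"
	((a >> 2) << 24) + ((r >> 2) << 16) + ((g >> 2) << 8) + (b >> 2)
		"=> 16rFF00FF00, a 32-bit ARGB word regardless of the source depth"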

Cheers,
   - Andreas



Yoshiki Ohshima wrote:
>   Hello,
> 
>   I think this is a known issue, but I can't remember when we last
> discussed it.  So I'm bringing it up in the context in which I
> encountered it.
> 
>   Here is some sample code to illustrate the problem:
> 
> -------------------
> f _ (Form extent: 50@50 depth: 16) fillColor: Color green.
> g _ Form extent: 100@100 depth: 32.
> 
> (WarpBlt current toForm: g)
>   sourceForm: f;
>   cellSize: 2;
>   combinationRule: Form over;
>   sourceQuad: (f boundingBox) innerCorners destRect: g boundingBox;
>   warpBits.
> 
> "g fixAlpha."
> g colorAt: 50@50
> -------------------
> 
>   First you fill a 16-bit form with green, and then WarpBlt it onto a
> 32-bit Form with the "over" rule.  You'd expect to get a green-colored
> form as a result, right?
> 
>   One of the problems is that the alpha channel is set to 0, so the
> result is "Color transparent".  You would imagine fixing that with the
> "g fixAlpha" line above.
> 
>   However, if you uncomment that line and run the example, you get
> "(Color r: 0.94 g: 0.0 b: 0.0)" as the result.  This is wrong.
> 
>   A part of BitBltSimulation>>warpPickSmoothPixels: nPixels
>                               xDeltah: xDeltah yDeltah: yDeltah
>                               xDeltav: xDeltav yDeltav: yDeltav
>                               sourceMap: sourceMap
>                               smoothing: n
>                               dstShiftInc: dstShiftInc
> 
> says:
> 
> 	"normalize rgba sums"
> 	nPix = 4 "Try to avoid divides for most common n"
> 		ifTrue:[r _ r >> 2.	g _ g >> 2.	b _ b >> 2.	a _ a >> 2]
> 		ifFalse:[	r _ r // nPix.	g _ g // nPix.	b _ b // nPix.	a _ a // nPix].
> 	rgb _ (a << 24) + (r << 16) + (g << 8) + b.
> 
> 	"map the pixel"
> 	rgb = 0 ifTrue: [
> 		"only generate zero if pixel is really transparent"
> 		(r + g + b + a) > 0 ifTrue: [rgb _ 1]].
> 	rgb _ self mapPixel: rgb flags: cmFlags.
> 
> With the first assignment, the variable "rgb" holds a 32-bit pixel
> value.  We know the destination is also 32-bit, so we don't even have
> to map the pixel value.  However, the invocation of #mapPixel:flags:
> treats the rgb value as if it were 16-bit, and masks and shifts it
> again.
> 
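>   By the way, this mis-mapping explains the 0.94: the averaged word
> for green (with alpha 0) should be something like 16r0000F800, and
> mapPixel:flags: then extracts 5-bit fields from its low 16 bits as if
> it were a 15-bit RGB pixel.  A sketch, assuming that value:
> 
> 	| rgb |
> 	rgb _ 16r0000F800.
> 	(rgb >> 10) bitAnd: 16r1F.	"red: 30, expanded 30 << 3 = 240, and 240/255 ~ 0.94"
> 	(rgb >> 5) bitAnd: 16r1F.	"green: 0"
> 	rgb bitAnd: 16r1F.	"blue: 0"
> 
> which is exactly the bogus (Color r: 0.94 g: 0.0 b: 0.0) above.
> 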
>   The right thing would be to map the averaged rgb value back to a
> pixel value in the source depth and then map that color.  As a
> practical matter, we can skip mapPixel:flags: when the dest is 32-bit
> and no color map is specified.
> 
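>   In code, that practical shortcut could look something like this
> inside warpPickSmoothPixels: (just a sketch; noUserColorMap is
> hypothetical shorthand for "no color map was explicitly specified",
> and the other names follow the quoted code):
> 
> 	"map the pixel, unless it is already in the destination format"
> 	(destDepth = 32 and: [noUserColorMap])
> 		ifFalse: [rgb _ self mapPixel: rgb flags: cmFlags].
> 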
>   The workaround I use is to pass cellSize: 1 for this (a complete
> sketch follows at the end)...  But there is actually another issue
> here.  WarpBlt>>cellSize: says:
> 
> cellSize: s
> 	cellSize _ s.
> 	cellSize = 1 ifTrue: [^ self].
> 	colorMap _ Color colorMapIfNeededFrom: 32 to: destForm depth.
> 
> so it tries to install a color map from 32-bit regardless of the actual
> depth of the source.  Why?
> 
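>   For completeness, the workaround applied to the example above: with
> cellSize: 1 there is no smoothing, hence no 32-bit averaging path, and
> fixAlpha repairs the alpha channel afterwards:
> 
> 	(WarpBlt current toForm: g)
> 		sourceForm: f;
> 		cellSize: 1;
> 		combinationRule: Form over;
> 		sourceQuad: (f boundingBox) innerCorners destRect: g boundingBox;
> 		warpBits.
> 	g fixAlpha.
> 	g colorAt: 50@50	"a green color again, with alpha restored"
> 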
> -- Yoshiki
> 
> 



