Hey,
so I hit this really stupid problem with text rendering: when text is rendered onto a Bitmap,
my engine first renders it in software (using SDL_ttf), uploads it to VRAM,
and then simply renders it on top of the Bitmap like any other blending operation.
The problems start when drawing text with opacities < 255. What I did until now was
render the text as usual, then draw it as a quad with the specified opacity. However, when
I do this on top of a cleared bitmap (as window contents always tend to be), the text
color is rendered at that opacity, and the resulting pixel's alpha also carries the opacity! This means
that what I see on screen is actually at opacity^2, because the same opacity is reapplied during
the final render of the window contents sprite. (I'm sorry if this is hard to understand, I'm really bad
at explaining stuff..).
Anyway, this prompted me to look into how exactly text drawing works in RMXP. Turns out it's
mostly the same process, except the text is blended using the same algorithm that blt and stretch_blt
employ (so probably the same DirectDraw functionality). The way this algorithm works means the
"double opacity" problem never comes up. Ok, enough blabla, here's what I got:
[Removed wrong algorithm]
(Would be cool if someone played with it a bit more and verified my results! :D)
And here's my giant problem with it: while the above is fairly easy to implement in software
(which I'm pretty sure is what DirectDraw does), it's really damn hard to do in hardware, because
at least on current-gen GPUs the blending stage is wired as fixed-function hardware, and
it doesn't look like that's going to change anytime soon. (On another note,
programmable blending is implemented on some mobile GPUs, but that's irrelevant here.)
This is a bit of a dilemma. I don't really want to keep around a shadow texture for each and
every Bitmap (to read the destination pixels from), so I'm thinking about how this could
be realized with other hacks. I know you guys use Direct3D, but we both use the same hardware
in the end, so I'm pretty sure we're in the same boat here. The biggest problem is that I have
to somehow supply a "third" (uniform) alpha value (ab), but I can't use the source pixel
alpha for it because that would screw up the remaining color blending.
At least for text rendering I think I can salvage the situation, because SDL_ttf gives me
a software surface anyway, so I'm thinking about simply duplicating it, setting its rgb to ab,
and basically blending the alpha and color components in two separate passes.
But as for the (stretch_/)blt situation, I have no fucking clue... I think I'll leave it aside for now, as for
most games wrong blending doesn't seem to have much impact (most blts are done at full opacity).