I'm using the images as normal textures. The way OpenGL works is that you upload a texture image to the GPU (a texture object, which is then bound to a texture unit), and the sampler maps texture coordinates, produced by projecting the 3D model onto the screen, to colors. Those coordinates are floating-point values that rarely fall exactly on texel centers, so the hardware interpolates between neighboring texels.
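A minimal sketch of that setup, assuming desktop OpenGL and an 8-bit RGBA buffer; the names `upload_texture`, `pixels`, `width`, and `height` are mine, not from the original post:

```c
#include <GL/gl.h>

/* Upload an RGBA image as a 2D texture with linear (interpolating)
 * filtering. `pixels`, `width`, and `height` are assumed to come from
 * whatever image loader you use. */
GLuint upload_texture(const unsigned char *pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* GL_LINEAR is what triggers the interpolation between texels. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```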
For example, say the texture is 2x1 pixels: the left pixel is RGBA [0.5,0.5,0.5,0], the right pixel is [0.5,0.5,0.5,1], and I need the color at coordinates (0.5, 0.5) (the center of the image). Interpolation kicks in and produces [0.5,0.5,0.5,0.5]. Now if the left pixel is instead [0,0,0,0], as Paint.NET would save it, the interpolated sample becomes [0.25,0.25,0.25,0.5]. Even though the two images look identical at a 1:1 ratio, they interpolate differently. The color information of the transparent pixel matters here.
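You can reproduce the arithmetic on the CPU; this is just the component-wise linear interpolation that GL_LINEAR performs, not the driver's actual code:

```c
#include <stdio.h>

/* Linear interpolation between two RGBA texels, done per component.
 * t = 0.5 samples the midpoint between the two texel centers. */
static void lerp_rgba(const float a[4], const float b[4], float t, float out[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = a[i] + (b[i] - a[i]) * t;
}

int main(void)
{
    const float gray_transparent[4]  = {0.5f, 0.5f, 0.5f, 0.0f};
    const float black_transparent[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    const float gray_opaque[4]       = {0.5f, 0.5f, 0.5f, 1.0f};
    float out[4];

    /* Left pixel keeps its gray color: midpoint is [0.50,0.50,0.50,0.50]. */
    lerp_rgba(gray_transparent, gray_opaque, 0.5f, out);
    printf("gray left:  [%.2f,%.2f,%.2f,%.2f]\n", out[0], out[1], out[2], out[3]);

    /* Left pixel zeroed like Paint.NET: midpoint is [0.25,0.25,0.25,0.50]. */
    lerp_rgba(black_transparent, gray_opaque, 0.5f, out);
    printf("black left: [%.2f,%.2f,%.2f,%.2f]\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```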
Strangely, GPUs have no concept of color, blending, or even geometry; they are just big number crunchers. Thanks to this discussion I think I now know how to fix the problem: premultiply RGB by alpha after loading the images.
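A minimal sketch of that fix, assuming 8-bit RGBA pixel data in memory (the function name is mine):

```c
/* Premultiply RGB by alpha in an 8-bit RGBA buffer, applied once after
 * loading and before glTexImage2D. With premultiplied data, the RGB of
 * fully transparent pixels is zero by construction, so interpolation
 * can no longer pull in stray color from them. */
void premultiply_alpha(unsigned char *pixels, int width, int height)
{
    for (int i = 0; i < width * height; i++) {
        unsigned char *p = &pixels[i * 4];
        unsigned int a = p[3];
        p[0] = (unsigned char)(p[0] * a / 255);
        p[1] = (unsigned char)(p[1] * a / 255);
        p[2] = (unsigned char)(p[2] * a / 255);
    }
}
```

Note that premultiplied textures also change how blending should be set up: the source factor becomes GL_ONE instead of GL_SRC_ALPHA, i.e. `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`, since the alpha multiply has already been baked into the texels.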
One last thing: in the example from my previous post, if I set the alpha to 1, shouldn't the color information be preserved?