jsg

  1. Thanks, Eric. After posting, I went back through some pixel samples and concluded that the MSBs were being replicated into the LSB slots. (I was originally stymied in looking for such a pattern by getting confused as I switched between R and B (with 3 bits eliminated and restored) and green (with 2 such bits).) I appreciate your point regarding white, though I'd bet one would have to look very carefully to see it -- the eye naturally adjusts its "white balance" based on what it's looking at, so I'd guess that, when looking at a 16-bit-only display, for most images it would perceive as white those pixels that were originally white, even if they actually had a very slight green cast due to the extra bit. Anyway, does anybody know whether it is typical -- or even standard -- for 16-bit displays to render data in this fashion? That is, effectively inflating the data back to 24 bits, using this kind of LSB-stuffing algorithm, prior to rendering? Doing this may be a smart way for an app to convert a 16-bit picture to 24-bit for use on 24-bit systems, but does it reflect the actual results that are likely when rendering the 16-bit picture on a 16-bit display?
  2. Can you help me understand how this plugin actually functions, in terms of converting between 24-bit data and the 16-bit format? I took a sample image and saved it in RGB565 format. The file size seems correct for 16 bits per pixel. However, there's no obvious change in the image -- perhaps not that surprising at a high level. When I zoom in, I can see some very minor changes, but there really appear to be very few pixels changed. But here's the strange thing: when I reload the stored RGB565 file and use the color-sample tool to check the pixel color values, I see values that don't seem to make sense for 565 data. That is, I'm expecting all the red and blue values to be multiples of 8 (least significant 3 bits removed) and all the green values to be multiples of 4 (least significant 2 bits removed). But I see no indication that any bits have been removed. What am I missing here? Is some algorithm being used to stuff the least significant bits so that they aren't just zeroes? (If so, what is the method, and doesn't that defeat the purpose of being able to see what the image will look like on a 16-bit display?) Thanks for any insight you can provide. Jonathan
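The conversion the two posts are circling around can be sketched in a few lines. This is a minimal illustration (hypothetical helper names, not the plugin's actual code): packing truncates the low bits, and the two expansion variants show why round-tripped samples are, or are not, neat multiples of 8 and 4.

```python
def pack888_to_565(r8, g8, b8):
    """Quantize 8-bit channels to a packed RGB565 word by
    truncating the low bits (3 for red/blue, 2 for green).
    One plausible scheme; the plugin's actual rounding may differ."""
    return ((r8 >> 3) << 11) | ((g8 >> 2) << 5) | (b8 >> 3)

def expand565_zero_fill(pixel):
    """Naive expansion back to 24-bit: shift left and leave the
    vacated low bits zero.  Round-tripped samples then come out
    as multiples of 8 (R, B) and 4 (G)."""
    r = ((pixel >> 11) & 0x1F) << 3
    g = ((pixel >> 5) & 0x3F) << 2
    b = (pixel & 0x1F) << 3
    return (r, g, b)

def expand565_replicate(pixel):
    """Expansion by MSB replication ("LSB stuffing"): copy the top
    bits of each channel into the vacated low bits, so 0 maps to 0
    and the channel maximum maps to 255 exactly -- which is why
    round-tripped values are not neat multiples of 8 or 4."""
    r5 = (pixel >> 11) & 0x1F
    g6 = (pixel >> 5) & 0x3F
    b5 = pixel & 0x1F
    return ((r5 << 3) | (r5 >> 2),
            (g6 << 2) | (g6 >> 4),
            (b5 << 3) | (b5 >> 2))
```

For example, a red value of 200 survives a zero-fill round trip unchanged (200 = 25 << 3) but comes back as 206 under replication ((25 << 3) | (25 >> 2) = 200 | 6). Note also the point about white: pure 565 white (31, 63, 31) expands to exactly (255, 255, 255) under replication, whereas zero-fill tops out at (248, 252, 248).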