Thanks, Eric. After posting, I went back through some pixel samples and concluded that the MSBs were getting replicated into the LSB slots. (I was originally stymied in looking for such a pattern because I kept getting confused as I switched between R and B (with 3 bits eliminated and restored) and green (with 2 such bits).)
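For reference, here's a minimal sketch of the replication pattern I believe I'm seeing, assuming RGB565 input (the function names are just for illustration, not from any particular library):

    #include <stdint.h>
    #include <stdio.h>

    /* Widen a 5-bit channel (R or B) to 8 bits: shift left by 3,
     * then copy the top 3 MSBs into the vacated low bits. */
    static uint8_t expand5(uint8_t v5)            /* v5 in 0..31 */
    {
        return (uint8_t)((v5 << 3) | (v5 >> 2));
    }

    /* Widen the 6-bit green channel to 8 bits: shift left by 2,
     * then copy the top 2 MSBs into the vacated low bits. */
    static uint8_t expand6(uint8_t v6)            /* v6 in 0..63 */
    {
        return (uint8_t)((v6 << 2) | (v6 >> 4));
    }

    int main(void)
    {
        uint16_t px = 0xFFFF;                     /* full-scale "white" in RGB565 */
        uint8_t r = expand5((px >> 11) & 0x1F);
        uint8_t g = expand6((px >> 5)  & 0x3F);
        uint8_t b = expand5( px        & 0x1F);
        printf("R=%u G=%u B=%u\n", r, g, b);      /* prints R=255 G=255 B=255 */
        return 0;
    }

One nice property of pure replication is that full-scale 565 white expands to exactly (255, 255, 255), so the green cast would only show up in intermediate grays, where the 6-bit green channel can't always land on the same expanded value as the 5-bit red and blue.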
I appreciate your point regarding white, though I'd bet one would have to look very carefully to see it -- the eye naturally adjusts its "white balance" to whatever it's looking at. So I'd guess that in almost all cases, on a 16-bit-only display, a viewer would perceive as white those pixels that were originally white, even if they actually had a very slight green cast due to the extra bit.
Anyway, does anybody know whether it is typical -- or even standard -- for 16-bit displays to render data in this fashion? That is, effectively inflating the data back to 24-bit, using this kind of LSB-stuffing algorithm, prior to rendering?
Put another way: doing this may be a smart way for an app to convert a 16-bit picture to 24-bit for use on 24-bit systems. But does it reflect the actual results one is likely to get when rendering the 16-bit picture on a 16-bit display?