
Compare pixels for similarity to source pixel.



I am toying with writing a plugin that compares surrounding pixels to the source pixel.

I understand (I think) what it would take to do an exact comparison. However, what would I do to check whether a pixel is similar to the source based on a user-specified percentage? Would I sum each of the channels and simply do the math?

This would assume that I already know which pixels I'm comparing.

 

Take responsibility for your own intelligence. 😉 -Rick Brewster


Well, I know I'm only concerned with RGB, not A. I actually think I would want each channel to have an equal effect.

My problem is as follows:

Source: R = 125, G = 69, B = 78 → total = 272

Comparison target: R = 45, G = 215, B = 12 → total = 272

This leads me to believe that I would need to compare each channel separately...? I imagine this would increase the processing time greatly.

Is there a quicker alternative? Is more information needed about the desired operation?
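To make the problem concrete, here's a small standalone sketch (plain C#, no Paint.NET types; the tolerance value is an arbitrary illustration) comparing the summed-channel test against a per-channel test for the two colors above:

```csharp
using System;

// Two colors whose channel sums are identical (272) but which look nothing alike.
int[] source = { 125, 69, 78 };   // R, G, B
int[] target = { 45, 215, 12 };

// Sum-based "similarity": both totals are 272, so this test wrongly passes.
bool sumsMatch = (source[0] + source[1] + source[2]) ==
                 (target[0] + target[1] + target[2]);

// Per-channel test: every channel must lie within `tolerance` of the source channel.
// (An absolute tolerance for simplicity; a percentage band works the same way.)
int tolerance = 30;
bool channelsMatch =
    Math.Abs(source[0] - target[0]) <= tolerance &&   // R differs by 80 -> fails
    Math.Abs(source[1] - target[1]) <= tolerance &&
    Math.Abs(source[2] - target[2]) <= tolerance;

Console.WriteLine($"{sumsMatch} {channelsMatch}");   // True False
```

Three extra comparisons per pixel are cheap, so the per-channel test is the usual approach.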

 


int average = 0, x2 = 0, y2 = 0;

if (y != selection.Top)
{
    x2 = x;
    y2 = y + 1;
    CurrentPixel = src[x2, y2];
}

Trying to look at the pixel above the actual "current pixel", but I'm getting a "Coordinates out of range" error. Is what I want to do possible?

 


Yes, so long as y != selection.Bottom... (your y2 = y + 1 reads the pixel below, so it runs off the bottom edge, not the top). GetBilinearSampleWrapped/Clamped(x, y) is the easy way out (a bit more processing, but not terribly much); a bounds check is easy but repetitive.

As far as math goes:

Try finding limits on the average for the given percentage instead of finding the average then testing against limits-- i.e.

double lowerLimitR = src[,].R * (percent/100);

double upperLimitR = src[,].R * (1/ (percent/100));

Note that the values percent/100 and 1/(percent/100) can be calculated in OnSetRenderInfo to save processing time. Also note that, in the context of a Paint.NET tolerance, percent would have to be the user's input percentage subtracted from 100.
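As a standalone sketch of that limit idea (plain C#; the sample channel values are made up, and percent follows the post's convention of 100 minus the user's tolerance):

```csharp
int Amount1 = 20;                            // user tolerance slider, 0..100

// Calculate once (e.g. in OnSetRenderInfo), not once per pixel.
double percent = (100 - Amount1) / 100.0;    // 0.8; note 100.0 to avoid integer division
double percentInverse = 1.0 / percent;       // 1.25

byte srcR = 125;                             // source pixel's red channel
double lowerLimitR = srcR * percent;         // 100.0
double upperLimitR = srcR * percentInverse;  // 156.25

byte neighborR = 140;                        // a neighboring pixel's red channel
bool matchesR = neighborR > lowerLimitR && neighborR < upperLimitR;   // true
```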

Additionally, most processing time nowadays is spent on synchronous memory access: allocating thousands of blocks of memory is waaay more time-consuming than calculating some math functions. For example, selection operations slow down when a selection becomes complex (undefinable by a simple function), since every pixel location must be stored in memory. Thus, for something math-heavy like this, processing time isn't that huge an issue until you get to the Gaussian-blur level, which is on the order of radius^2 * k operations per pixel (k because I really don't remember what the code looked like when I saw it last).

If you want help writing code, I'd need to know what the result you're trying to achieve is. There's generally more than one way to do something, and your way likely isn't my way ;)

~~


I'm really just trying to learn C# for making plugins. I should probably start by learning C# as a whole... ;-)

My desire is to build a kind of noise cancellation plugin that does the following:

1. Averages all of the pixels surrounding src[x,y].

//  X X X
//  X O X
//  X X X
//  O = src[x,y]
//  X = pixels to consider in average (if they exist).

2. Read a user specified decimal value (as Amount1).

3. Replace src[x,y] with:

(src[x,y] * (1 - Amount1)) + (averaged value * Amount1)
//not quite pseudo-code...
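That blend, written out as runnable C# (plain int arrays standing in for ColorBgra channels; Amount1 is assumed here to be a 0..1 double, and the sample values are made up):

```csharp
using System;

double Amount1 = 0.5;                 // user-specified blend factor, 0..1

int[] srcPixel = { 125, 69, 78 };     // R, G, B of src[x,y]
int[] average  = { 101, 99, 70 };     // average of the surrounding pixels

int[] result = new int[3];
for (int c = 0; c < 3; c++)
{
    // result = src*(1 - Amount1) + average*Amount1, applied per channel
    result[c] = (int)Math.Round(srcPixel[c] * (1 - Amount1) + average[c] * Amount1);
}

Console.WriteLine(string.Join(",", result));   // 113,84,74
```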

I'm not the kind of person who has to do everything myself, so if someone can make this plugin work using simple code that I would likely understand, I will not be mad... I am more likely to learn from examples than from personal failures... ;-)

 


Sorry for double post...

The post above may seem like a different course than what I was originally describing.

This is what I wanted to do originally:

Test each of the 8 pixels immediately surrounding the src (if !null) for similarity to the src based on user defined percent/decimal value.

Keep track of how many of the surrounding pixels = !null.

Allow the user to set a requirement for how many of the surrounding pixels (adjusted to account for the null pixels) the src has to match for it not to be considered noise.

Pixels failing the second test are replaced with the average of the pixels they did not match.

--

Anyway, the idea from my previous post came about in an attempt to simplify things for now as I "learn the ropes".

 


Here's your "original" source:

#region UICode
int Amount1=20;	//[0,100]Tolerance
int Amount2=1;    //[1,8]Matching pixels
#endregion

void Render(Surface dst, Surface src, Rectangle rect)
{
   ColorBgra CurrentPixel;
   int totalB, totalG, totalR;
   int matches;
   double lowerLimitR, lowerLimitG, lowerLimitB;
   double upperLimitR, upperLimitG, upperLimitB;
   double percent, percentInverse;
   ColorBgra[] pixels = new ColorBgra[8]; // Array for containing surrounding pixels. Indices:
   // 0 1 2
   // 3 x 4 (x is the current pixel)
   // 5 6 7
    percent = (100 - Amount1) / 100.0;   // 100.0: integer division here would always yield 0
    percentInverse = 1 / percent;
   for (int y = rect.Top; y < rect.Bottom; y++)
   {
       for (int x = rect.Left; x < rect.Right; x++)
       {
           //Set limits:
           lowerLimitR = src[x,y].R * percent;
           lowerLimitG = src[x,y].G * percent;
           lowerLimitB = src[x,y].B * percent;
           upperLimitR = src[x,y].R * percentInverse;
           upperLimitG = src[x,y].G * percentInverse;
           upperLimitB = src[x,y].B * percentInverse;
           //Clear the loop variables
           totalB = 0;
           totalG = 0;
           totalR = 0;
           matches = 0;
           // Get our pixels.  Technically, you could use a kernel here,
           // But it's not worth it for 8 pixels.
           CurrentPixel = src[x,y];
           pixels[0] = src.GetBilinearSampleClamped(x-1, y-1);
           pixels[1] = src.GetBilinearSampleClamped(x  , y-1);
           pixels[2] = src.GetBilinearSampleClamped(x+1, y-1);
           pixels[3] = src.GetBilinearSampleClamped(x-1, y  );
           pixels[4] = src.GetBilinearSampleClamped(x+1, y  );
           pixels[5] = src.GetBilinearSampleClamped(x-1, y+1);
           pixels[6] = src.GetBilinearSampleClamped(x  , y+1);
           pixels[7] = src.GetBilinearSampleClamped(x+1, y+1);
           // Loop through the array and...
           foreach(ColorBgra s in pixels)
           {
               // Test if the pixel matches,
               if (s.B < upperLimitB && s.B > lowerLimitB &&
                   s.G < upperLimitG && s.G > lowerLimitG &&
                   s.R < upperLimitR && s.R > lowerLimitR)
                   matches++;
               // and add to our total that we will average if needed.
               totalB += s.B;
               totalG += s.G;
               totalR += s.R;
           }
           // Test for matches
           if (matches >= Amount2)
               dst[x,y] = CurrentPixel;
           else // Average if needed.  Using src[,].A so stuff doesn't look goofy if it is antialiased.
               dst[x,y] = ColorBgra.FromBgra(Int32Util.ClampToByte(totalB/8), Int32Util.ClampToByte(totalG/8), Int32Util.ClampToByte(totalR/8), CurrentPixel.A);
       }
   }
}

Now, notice the effect of this: it leaves a ring around any noise, of the noise's intensity averaged with the 7 pixels surrounding it. To combat this you could, for example, make a public (technically static, but CodeLab assumes static) boolean array outside the Render function, with the dimensions of the selection, so that all instances of Render() can access it; set array[x,y] to true wherever you find noise, and then make sure you don't repeat the dst[x,y] = ColorBgra.FromBgra(...) line if any of the surrounding pixels are flagged true.
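A standalone sketch of that two-pass mask idea, shrunk to a 1-D "image" (plain arrays stand in for the Surface and the selection, and the detection rule here is a made-up threshold, not the tolerance test from the code above):

```csharp
using System;

int[] src = { 10, 10, 200, 10, 10 };   // grayscale stand-in; 200 is the 'noise'

// Pass 1: detect noise and record it in a mask visible to every pass.
bool[] isNoise = new bool[src.Length];
for (int x = 1; x < src.Length - 1; x++)
    isNoise[x] = Math.Abs(src[x] - src[x - 1]) > 50 &&
                 Math.Abs(src[x] - src[x + 1]) > 50;

// Pass 2: replace each flagged pixel with the average of its UNflagged
// neighbors, so noise never contaminates the replacement value.
int[] dst = (int[])src.Clone();
for (int x = 1; x < src.Length - 1; x++)
{
    if (!isNoise[x]) continue;
    int sum = 0, count = 0;
    for (int dx = -1; dx <= 1; dx += 2)
        if (!isNoise[x + dx]) { sum += src[x + dx]; count++; }
    if (count > 0) dst[x] = sum / count;
}

Console.WriteLine(string.Join(",", dst));   // 10,10,10,10,10
```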

Edit:

Also... you had wonderful bits about null. Remember that null and undefined are two different things: you can set a variable to null if its type supports a null value, but attempting to access something that isn't in memory (such as an index outside the bounds of an array) will crash the program. GetBilinearSampleClamped saves you all the mess of bounds testing by simply returning the pixel closest to the point you're asking for that exists on the given surface (think of lines extending out from the edges of the surface, each carrying the value of the edge pixel where it starts).


Wow, this is great to see in real life...even if it doesn't do much. ;-)

I now know that my attempts were nowhere close to a working effect. Thank you very much for the code and the helpful tips/comments! I have learned quite a bit just following the effect's workflow.

I'm gonna play with this code and see if I can get it to dynamically grow the comparison region; turn it into a blur effect. lol.

EDIT:

Now, notice the effect of this: it leaves a ring around any noise, of the noise's intensity averaged with the 7 pixels surrounding it. To combat this you could, for example, make a public (technically static, but CodeLab assumes static) boolean array outside the Render function, with the dimensions of the selection, so that all instances of Render() can access it; set array[x,y] to true wherever you find noise, and then make sure you don't repeat the dst[x,y] = ColorBgra.FromBgra(...) line if any of the surrounding pixels are flagged true.

Is this something that would require me to loop through the src twice? Once to identify and store the location of the noise, then a second pass to remove it without using any of the noise pixels in the averaging of surrounding pixels...

 


Ultimately, to optimize the algorithm, you would have to run the matching test for the current pixel AND the 8 pixels surrounding it, figure out which pixel has the fewest matches around it, and do the noise removal (average of the surrounding pixels) on that pixel and not any of the others. What I originally proposed is more or less stupid in that it only half-works.


This is a really good idea! One of the problems with Eigen Blur is that it doesn't correctly handle pixels that "don't exist"... i.e. those which are white and fully transparent. I was thinking about this a week or two ago, and I presumed that the problem could be solved by weighting the contribution of each pixel by the product of the Gaussian filter's weight at that location and the pixel's alpha value, keeping track of the sum of all such weights over the window, and then dividing by the sum at the end. This would essentially be an extension of what Illnab1024 has done above. I haven't actually tested it though. I think I decided that it would yield poor results in some corner case, but I can't remember the details. You should keep experimenting with this!

By the way, a good size for 2D convolution kernels is 5x5. If you make them any larger, they will be very slow unless you separate them into a pair of 1D filters, which is a little difficult to do. Performance in the 5x5 case is roughly the same whether the filter is separated or not... I think that many things hardcode a 5x5 kernel size for this reason.
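The separability point can be checked directly: a Gaussian-like 2-D kernel is the outer product of a 1-D kernel with itself, so a 5x5 convolution (25 multiply-adds per pixel) can be replaced by a horizontal and a vertical 1-D pass (5 + 5 = 10). A sketch using the binomial approximation [1 4 6 4 1]/16 (an assumption for illustration; this is not the kernel any particular effect uses):

```csharp
using System;

double[] k1 = { 1, 4, 6, 4, 1 };
for (int i = 0; i < k1.Length; i++) k1[i] /= 16.0;   // weights now sum to 1

// The equivalent 5x5 kernel is the outer product k1 * k1^T.
double[,] k2 = new double[5, 5];
double sum = 0;
for (int i = 0; i < 5; i++)
    for (int j = 0; j < 5; j++)
    {
        k2[i, j] = k1[i] * k1[j];
        sum += k2[i, j];
    }

// sum == 1 (so the filter doesn't brighten or darken the image),
// and the center weight is (6/16)^2 = 0.140625.
Console.WriteLine($"{sum} {k2[2, 2]}");
```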

And if the size is fixed, and relatively small, you can unroll it, as Illnab1024 has done in the code above. The unrolled version will be very fast, and is still easy to understand. I'm not sure how much unrolling the compiler will do, it might depend on the machine.

Segment Image : Average Color (HSL) : Eigen Blur : []

Cool, new forum!


It would make sense that, as long as the number of iterations in a loop isn't based on a dynamic variable, the optimizer would unroll it fully.

Also, I was messing with convolution filter code a few days ago, and I don't exactly remember what I did as far as summing goes. It was something like: ensure the sum of all elements in the kernel is 1, then for each nonzero element do the math (summation), and then bring the result back to a byte (either wrapping via modulo 256, or clamping: <0 => 0, >255 => 255).
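Those two ways of getting back to a byte behave very differently on overshoot: modulo 256 wraps around, so a bright overshoot comes back as a dark pixel, while the <0/>255 version saturates. A quick sketch:

```csharp
using System;

static byte ClampToByte(int v) => (byte)(v < 0 ? 0 : v > 255 ? 255 : v);

int overshoot = 300;                      // e.g. a kernel pushed a channel past 255

byte wrapped = (byte)(overshoot % 256);   // 44: a dark speckle in a bright area
byte clamped = ClampToByte(overshoot);    // 255: saturates, which is what you want

Console.WriteLine($"{wrapped} {clamped}");   // 44 255
```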

Anyway, gaussian filter code is fun. Ha.


"Pixels that don't fully exist"

That would be any color with an alpha value of zero; if your code is correct, they will all be equivalent. But the math gets pretty cranky, especially when you're accumulating many color values to calculate one output color. Look at the various overloads of ColorBgra.Blend(); they will do the alpha weighting correctly.
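A sketch of that alpha weighting with plain tuples rather than ColorBgra (the values are made up; in Paint.NET itself the ColorBgra.Blend() overloads handle this):

```csharp
using System;

// Each entry: (R, A). A fully transparent pixel (A = 0) should contribute nothing.
(int R, int A)[] pixels = { (255, 0), (100, 255), (200, 255) };

double weightedR = 0, totalAlpha = 0;
foreach (var p in pixels)
{
    weightedR  += p.R * p.A;   // weight each channel by its pixel's alpha
    totalAlpha += p.A;
}

// Divide by the accumulated alpha, not the pixel count; a naive average
// of the three R values would give 185, skewed by the invisible pixel.
int avgR = totalAlpha > 0 ? (int)Math.Round(weightedR / totalAlpha) : 0;

Console.WriteLine(avgR);   // 150: the transparent white pixel had no influence
```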

The Paint.NET Blog: https://blog.getpaint.net/

Donations are always appreciated! https://www.getpaint.net/donate.html


Pixels that don't exist should simply not be a part of the signal. If only I had a napkin to draw a poor 3D graph of a signal.. Oh, wait,

"Song of Solomon" by Animals as Leaders spectrogram:

songofsolomon.png

The places of no signal would have no influence in the calculation of the waveform, were we to calculate the waveform, right?

I dunno.


Illnab1024: The dark values in the spectrogram correspond to low-amplitude coefficients in a frequency-domain representation of a certain block (maybe window?) of the original (time-domain) signal. Precise reconstruction of the waveform requires all of the coefficients. If those values aren't explicitly defined, we might implicitly assume that they're zero... but that would be rather similar to a ubiquitous form of lossless compression :(

Rick: Thanks for the tip! The BlendColorsWFP method is close to what I'm looking for.

barkbark00: You've identified a very important issue here: how can we run a filter on points near the boundary of the defined portion of a signal which isn't defined everywhere? My Eigen Blur effect essentially makes up values beyond the edge of the image. I treat the image as a periodic function and continue it a little ways past the bounds so that I can keep the filter well-behaved. Apparently this is a common thing to do. Some other effects that I've seen assume that those values are zero or use a nearby value from the image. Some avoid handling pixels near the boundary altogether. Others modify the filter near the edges to avoid having to make up values, as you propose. It's hard to read, but I think that Rick's Gaussian Blur does this. Each of these different approaches probably makes sense in at least one interpretation :D

Edit: Found a better link for link #2.

