# 2D Image Scaling Algorithms

## Recommended Posts

Hi folks, the next update is out there: I implemented the XBR algorithms as requested. But I'm still getting different output even though I converted the original C source code to C#. Check it out yourself, and if someone spots the differences in the code and knows how to fix them, please feel free to contact me.

##### Share on other sites


Hi,

Sorry, I couldn't answer you before; I've only just seen your PM. (That's because my child is taking all my time these days.)

So, could you post a picture comparing your result with mine?

Besides, I think some code could change in your implementation. I forgot to mention that some functions in that Kega implementation were hacked (relative to the original version) to speed things up. Since your software doesn't need such optimizations, I think you can use my original functions.

Here, in your code, you can exchange this:

```csharp
private static int RGBtoYUV(sPixel A) {
  var r = A.Red;
  var g = A.Green;
  var b = A.Blue;
  var y = (r << 4) + (g << 5) + (b << 2);
  var u = -r - (g << 1) + (b << 2);
  var v = (r << 1) - (g << 1) - (b >> 1);
  return y + u + v;
}

private static int df(sPixel A, sPixel B) {
  return Math.Abs(RGBtoYUV(A) - RGBtoYUV(B));
}

private static bool eq(sPixel A, sPixel B) {
  return df(A, B) < 155;
}
```



By this:

```c
int eq(unsigned int A, unsigned int B)
{
  unsigned int r, g, b;
  unsigned int y, u, v;

  b = abs(((A & pg_blue_mask ) >> 16) - ((B & pg_blue_mask ) >> 16));
  g = abs(((A & pg_green_mask) >>  8) - ((B & pg_green_mask) >>  8));
  r = abs( (A & pg_red_mask  )        -  (B & pg_red_mask  ));

  y = abs( 0.299*r + 0.587*g + 0.114*b);
  u = abs(-0.169*r - 0.331*g + 0.500*b);
  v = abs( 0.500*r - 0.419*g - 0.081*b);

  return ((48 >= y) && (7 >= u) && (6 >= v)) ? 1 : 0;
}

float df(unsigned int A, unsigned int B)
{
  unsigned int r, g, b;
  unsigned int y, u, v;

  b = abs(((A & pg_blue_mask ) >> 16) - ((B & pg_blue_mask ) >> 16));
  g = abs(((A & pg_green_mask) >>  8) - ((B & pg_green_mask) >>  8));
  r = abs( (A & pg_red_mask  )        -  (B & pg_red_mask  ));

  y = abs( 0.299*r + 0.587*g + 0.114*b);
  u = abs(-0.169*r - 0.331*g + 0.500*b);
  v = abs( 0.500*r - 0.419*g - 0.081*b);

  return 48*y + 7*u + 6*v;
}
```


You just need to convert this to your C#.

That RGBtoYUV isn't necessary in this second snippet.

##### Share on other sites

Left is my result, right is the reference image you provided.

XBR2x (checkerboards and 2×2 pixel blocks do not match yours)

XBR3x (your reference image did not have any interpolated pixels; possibly a wrong reference image?)

XBR4x (this seems to be identical to your version)

The biggest difference I see in 2x/3x is at the checkerboard texture. I don't know why this happens (yet).

Edited by Hawkynt

##### Share on other sites

Thanks for the pics. I'll look into the code and see if I can figure out what's different. Anyway, I think I already understand the 3xBR difference, because I have two implementations for it (one that blends colors and one that doesn't).

##### Share on other sites

OK, I released the next version. I reverse-engineered how your version without blending works and added three more filters to the library, which are basically yours without blending:

Xbr2x(NoBlend)

Xbr3x(NoBlend)

Xbr4x(NoBlend)

##### Share on other sites

Haha. The noblend variants only work well when you choose an odd scale factor (3x, 5x, 7x, ...). That's why those 2x and 4x noblend pictures you posted aren't good (they have some bad artifacts, jaggies). Anyway, you did a great job! I especially liked the 3x reverse-engineered version, even more than mine.

I'm looking at the 2x issue; that's the only difference from your implementation I haven't had time to figure out yet.

Edited by Hyllian

##### Share on other sites

I see you've changed the "eq" and "df" functions. Where's the implementation of "IsLike" and "IsNotLike"? I'm afraid you're using a hard-decision implementation instead of a soft one.

Edited by Hyllian

##### Share on other sites

Yeah, but these jaggies may look nice sometimes, depending on the image material and resolution (as with every filter, of course).

I also like the 3x version more than your original because it renders sharper 90° edges.

Looking forward to seeing where the bug in the 2x implementation is.

PS: The implementations for these are in sPixel.cs; they're mathematically equal to yours and were also used in the Hqnx filters and all the others I upgraded.

Edited by Hawkynt

##### Share on other sites

I have an implementation that renders sharper 90° edges too, but it causes some small artifacts in some pictures. I'm curious to see whether yours avoids them.

Can you scale this image using your 3x version and post here?

EDIT: Sorry, I posted the wrong image. It's corrected now, and it's a lossless PNG image.

Edited by Hyllian

##### Share on other sites

XBR3x

XBR3x(NoBlend)

PS: You could also resize the image on your own with the executable from 2dimagefilter@googlecode.

Edited by Hawkynt

##### Share on other sites

Yes, I can do that. Could you provide the image as a PNG to avoid having JPEG artifacts in it?

Done. Look again at the image.

##### Share on other sites

Yes, the same artifacts are there. Look at Yoshi's belly: there's a square block appended. Yoshi's nose has a square block too. My sharp 90° version produces those as well. They're minor artifacts, though.

This image is my test sample. If you try any other algorithm in your software, you'll see none comes close to xBR in accuracy.

Edited by Hyllian

##### Share on other sites

The white block is his belly button, and the green one on his nose is just the nostril seen from the other side *just kidding*

But it still looks good. So what do we want to do about the 2x checkerboard glitch?

I agree that the accuracy of your filter is really good, but most importantly: you forgot Sonic in your test image

Your filter has the widest kernel matrix of all the algorithms I implemented, so it could theoretically be possible to detect circles, not only lines. This is why I'm using the other test pattern: it has straight lines, 90° corners, 45° lines, and 22.5° lines which form a circle (the smiley), plus a checkerboard and gradients. All of these are tasks a perfect filter should master, and your matrix has potential for that. If you have time in the future to invent a new filter, this is the direction in which you could improve on the solid base of xBR.

If you still rely on your (more practical than theoretical) test image, you could try to detect Boo's round shape or the head of Toad. These should be similar to my (more theoretical) test pattern.

Btw: Where are you from?

Edited by Hawkynt

##### Share on other sites

> But it still looks good. So what do we want to do about the 2x checkerboard glitch?

I don't know yet. But it seems the glitch in 2x is affecting the 4x version too, though it's not easily noticeable. Look at the cross (lower right side) in that 4x comparison you posted earlier; the contour is a bit different. Probably some of those AlphaBlend function implementations differ from mine.

##### Share on other sites

> I agree that the accuracy of your filter is really good, but most importantly: you forgot Sonic in your test image
>
> Your filter has the widest kernel matrix of all the algorithms I implemented, so it could theoretically be possible to detect circles, not only lines. This is why I'm using the other test pattern: it has straight lines, 90° corners, 45° lines, and 22.5° lines which form a circle (the smiley), plus a checkerboard and gradients. All of these are tasks a perfect filter should master, and your matrix has potential for that. If you have time in the future to invent a new filter, this is the direction in which you could improve on the solid base of xBR.
>
> If you still rely on your (more practical than theoretical) test image, you could try to detect Boo's round shape or the head of Toad. These should be similar to my (more theoretical) test pattern.
>
> Btw: Where are you from?

What Sonic image is that? I have lots of test images here; the majority of them are pixel art I got from pixeljoint.

Ah, I'm from Brazil.

Edited by Hyllian

##### Share on other sites

I don't know yet. But it seems the glitch in 2x is affecting 4x version too, though not easily noticeable. Look at the cross (right down side) in that 4X comparison you posted earlier, the contour is a bit different. Probably some of those alphablend function implementations differ from mine.

You mean 3x, right? Because I cannot see any differences in the 4x cross at the lower right.

The alpha blend was modified with the code you gave me when we talked about performance hacks. From that point on, I realized what you were doing and simply used methods I already had. It seems to me that the kernel code is affected and switches wrongly, or it does not combine the right pixels.

Btw: This is Sonic from the SEGA console systems:

And I'm from Germany/Berlin.

Edited by Hawkynt

##### Share on other sites

> You mean 3x, right? Because I cannot see any differences in the 4x cross at the lower right.
>
> The alpha blend was modified with the code you gave me when we talked about performance hacks. From that point on, I realized what you were doing and simply used methods I already had. It seems to me that the kernel code is affected and switches wrongly, or it does not combine the right pixels.
>
> Btw: This is Sonic from the SEGA console systems:
>
> And I'm from Germany/Berlin.

No, I mean 4x. If you zoom in using nearest neighbor, you'll see that pixel values in the cross contours have different RGB values. I use GraphicsGale to analyze pixel values. (Only four pixels have different values.)

Ok, I'll get that Sonic pic and add it to my collection.

So you're from Berlin. I visited it in 2008. Loved it!

Edited by Hyllian

##### Share on other sites

I used PaintDotNet to find the differences in our results. I hope this helps identify which conditions in the source could be different from yours.

2x->ref

3x(NoBlend)->ref

4x->ref

##### Share on other sites

For me, the differences are in the implementation of the AlphaBlend functions. Your implementation correctly identifies the pixels which need interpolation. But then, when it uses its AlphaBlend functions to interpolate, some discrepancies appear.

##### Share on other sites

I zoomed in on that cross from the 4xBR pic comparison you made earlier. See how these pixels are different:

Yours:

Mine:

As you can see, there are some discrepancies. This happens because your AlphaBlend functions are not exactly like mine.

##### Share on other sites

But in the 3x version everything is done without interpolation...

##### Share on other sites

I don't see a problem with your 3x noblend implementation. The only difference is that you do not change pixels when sharp edges are found. That's not an issue; it's just another way to solve the problem.

But I think you should investigate those AlphaBlend functions to see why they perform slightly differently from mine.

##### Share on other sites

OK, could you tell me which weights you use for blending?

The formula is

```
c = (a*n + b*m) / (n + m)
```

I just need to know the values of n and m for each blend case to fix that.

Your code was optimized, so I thought these were yours (could you please confirm?):

- 32W → 7:1
- 64W → 3:1
- 128W → 1:1
- 192W → 1:3
- 224W → 1:7

##### Share on other sites

Could you please send me the source code of your standalone app so I can compare the sources?

##### Share on other sites

> Could you please send me the source code of your standalone app so I can compare the sources?

I'm afraid you wouldn't understand it. The sources are a real mess; there's a lot of useless or test-only code. That's my code playground.

Anyway, you can get it here