
Posts posted by Rick Brewster

  1. I would also lastly like to repeat here that if mr rick leader guy of paint.net would DARE TO SEND ME A PHOTO OF HIS REAL LIFE SELF i would love to draw him. If he is shy like a camel i would be happier than jesus to keep the photo confidential like I did with chris and I would only show people the cartoon version. I do not think that my previous drawings of him are accurate enough to produce some kind of accurate description of his real life appearance, please rick! I will even send you some candy by mail if it will persuade you.

    ... actually the earlier likeness you had of me wasn't too far off. You know, the one from the "versus squirrel" and/or "George" images.

  2. Something I know is that many times when blurring, there is no need to blur the alpha channel. A way to figure out whether the alpha channel of the whole image is all 255, 128, et cetera, would save a bit of time, although this could become useless for small images.

    So you're proposing to make two passes through the image, thus incurring twice the memory bandwidth. I doubt that would be faster; even if you were clairvoyant and could just assume that the image was all 255 alpha, you would only save up to 25% of the computations, but you'd still have the same memory bandwidth requirements. So it wouldn't really be that much faster.

    These days, memory is often quite slow relative to the insatiable demands of the code that's executing. And then the hard drive is glacially slow. Which is why developers like me abhor disk access and cherish L1/L2 cache hits :)
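
    For concreteness, here is a minimal sketch (not Paint.NET's actual code) of what such a pre-scan would look like, assuming 32-bit BGRA pixels with the alpha byte in the top 8 bits. The point is that even deciding whether the alpha channel is uniform means reading every pixel once, i.e. one extra trip over the whole image in memory:

```csharp
// Hypothetical pre-scan: returns true only if every pixel's alpha equals 'value'.
static class AlphaScan
{
    // Pixels packed as 32-bit BGRA; the alpha byte occupies the top 8 bits.
    public static bool AlphaIsUniform(uint[] pixels, byte value)
    {
        uint target = (uint)value << 24;
        foreach (uint p in pixels)            // one full pass over the image
        {
            if ((p & 0xFF000000u) != target)  // compare only the alpha byte
                return false;
        }
        return true;
    }
}
```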

  3. Paint.NET may or may not be able to achieve Photoshop-level speeds ... I don't know what algorithm they're using, or if it's even compatible with our effect framework. It's probably under heavily guarded lock and key :) Paint.NET's blurring seems very slow in comparison; it's just a fact.

    The algorithm you describe is a general convolution filter, and is how Paint.NET v2.1 and before implemented Gaussian Blur. The way we do this in v2.5 and later is to do a weighted vertical sum and then use those values to do a weighted horizontal sum. Many of these computations can be carried over from one pixel to the next, so we don't have to recompute a lot of stuff. There's some inefficiency I'm seeing that would be interesting to try to eliminate, what with a bunch of extraneous memory copying; I'd be curious to find out how much more speed I can wring out of this code.
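
    A rough sketch of that separable, two-pass idea, not a copy of BlurEffect.cs: it blurs a single channel, uses floating-point weights for clarity, and clamps at the image edges. A full 2D convolution costs O(r^2) multiplies per pixel, while the vertical-then-horizontal split costs O(r) per pass:

```csharp
using System;

static class SeparableBlurSketch
{
    // Blur one channel (e.g. the red values) of a width x height image.
    // 'weights' is a symmetric kernel of length 2*radius + 1 that sums to 1.
    public static float[] Blur(float[] src, int width, int height, float[] weights)
    {
        int r = weights.Length / 2;
        var tmp = new float[src.Length];   // result of the vertical pass
        var dst = new float[src.Length];   // final result

        // Pass 1: weighted vertical sums.
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                float sum = 0;
                for (int k = -r; k <= r; k++)
                {
                    int yy = Math.Clamp(y + k, 0, height - 1);  // clamp at the edges
                    sum += weights[k + r] * src[yy * width + x];
                }
                tmp[y * width + x] = sum;
            }

        // Pass 2: weighted horizontal sums over the vertically blurred values.
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                float sum = 0;
                for (int k = -r; k <= r; k++)
                {
                    int xx = Math.Clamp(x + k, 0, width - 1);
                    sum += weights[k + r] * tmp[y * width + xx];
                }
                dst[y * width + x] = sum;
            }

        return dst;
    }
}
```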

  4. C'mon man, have you even looked at BlurEffect.cs in the source code? The code I wrote implements all sorts of tricks and wizardry to be fast, you're underestimating me :) It has linear efficiency with respect to blur radius; it's O(n) and not O(n^2), where n is the blur radius. Of course things are precalculated. There's no way I'd be calculating square roots and exponents for each output pixel. It's all integer math. (It is, however, 64-bit math -- if you're on a 64-bit CPU + 64-bit OS, you will see a 300% speedup, http://blogs.msdn.com/rickbrew/archive/ ... 34394.aspx).

    The Photoshop guys just have some proprietary magic sauce that makes things super fast even with a large blur radius.
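
    To illustrate the "precalculate everything, integer math only" point, here is a hedged sketch: the Gaussian weights are computed once per radius as fixed-point integers, so the per-pixel loop needs nothing but multiplies, adds, and a final shift. The 16-bit fixed-point scale and the sigma rule of thumb are illustrative choices, not the exact scheme BlurEffect.cs uses:

```csharp
using System;

static class GaussianWeights
{
    // Build fixed-point Gaussian weights once per radius. The per-pixel loop can
    // then be pure integer math: sum += weights[k + radius] * channelValue, and
    // finally result = sum >> Shift.
    public const int Shift = 16;                      // 1.0 == 1 << 16

    public static long[] MakeFixedPoint(int radius)
    {
        double sigma = Math.Max(radius / 3.0, 0.5);   // common rule of thumb
        var raw = new double[2 * radius + 1];
        double total = 0;

        for (int k = -radius; k <= radius; k++)
        {
            raw[k + radius] = Math.Exp(-(k * (double)k) / (2 * sigma * sigma));
            total += raw[k + radius];
        }

        // Normalize and convert to integers up front; the blur itself never
        // touches floating point again. (Rounding can leave the sum a few counts
        // off from 1 << Shift; a real implementation would renormalize.)
        var weights = new long[raw.Length];
        for (int i = 0; i < raw.Length; i++)
            weights[i] = (long)Math.Round(raw[i] / total * (1 << Shift));
        return weights;
    }
}
```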

  5. Alpha Blending is an option of Paint.NET which, if activated, allows half-transparent (or at least not completely transparent) drawing over the picture without removing the existing colors, just modifying the alpha value.

    Bah, I can't explain that one very well :wink:

    Why not just quote the help file :) http://www.eecs.wsu.edu/paint.net/doc/2 ... Tools.html

    When you choose a color, you may specify an Alpha component. This part of the color determines how to blend it in with the pixels that are already on the layer when you draw. An alpha value of 0 means completely transparent, whereas an alpha value of 255 means completely opaque. Antialiasing also uses alpha blending for the edges of shapes and brushes.

    However, there are times when you do not want to blend a color in to the existing pixels, and instead want to replace the color on the canvas with exactly the color and alpha values you have specified.

    It's not the easiest one to understand, but not having it would be an enormous loss of functionality, trust me :) I use it a lot, especially when doing per-pixel editing.
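
    To make the two behaviors concrete, here is a simplified sketch using the standard "over" compositing rule for the blend case (Paint.NET's exact per-channel math may differ). With alpha blending on, the new color is mixed with what's already on the layer; with it off, the pixel is overwritten with exactly the chosen color and alpha:

```csharp
using System;

readonly struct Rgba
{
    public readonly byte R, G, B, A;
    public Rgba(byte r, byte g, byte b, byte a) { R = r; G = g; B = b; A = a; }
}

static class PixelOps
{
    // Alpha Blending ON: composite the new color over the existing pixel
    // (standard "over" operator, straight alpha).
    public static Rgba BlendOver(Rgba src, Rgba dst)
    {
        float sa = src.A / 255f;
        float da = dst.A / 255f;
        float outA = sa + da * (1 - sa);
        if (outA <= 0)
            return new Rgba(0, 0, 0, 0);

        byte Mix(byte s, byte d) =>
            (byte)Math.Round((s * sa + d * da * (1 - sa)) / outA);

        return new Rgba(Mix(src.R, dst.R), Mix(src.G, dst.G), Mix(src.B, dst.B),
                        (byte)Math.Round(outA * 255));
    }

    // Alpha Blending OFF: overwrite the pixel with exactly the chosen color
    // and alpha; the existing pixel is ignored.
    public static Rgba Replace(Rgba src, Rgba dst) => src;
}
```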

  6. Apps like Photoshop and The GIMP can work with large images because they implement sophisticated and custom tiling schemes. It's essentially the same as virtual memory paging, except that they get to split the image into arbitrary rectangular tiles. In Paint.NET, we rely on the OS's built-in virtual memory / paging, but that does 4 KB at a time and can only split the image into vertical stripes of tiles (that's simplifying though). Rectangular tiles can result in better locality, because if I draw something on the left side of a very large image I don't end up paging in stuff all the way over on the right of the image.

    The tradeoff is development simplicity: I can write less code that does more stuff with fewer bugs and publish it sooner, but in return it requires more of your system's memory at any given moment. Or, on 32-bit systems, it just may not work because of the limitations of the available virtual addressing.

    On that Xeon 5150 I really hope you are running 64-bit Windows :) Paint.NET runs about 40% faster on that chip in 64-bit mode (on the Athlons and Opterons we get a 60% bonus).
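
    A back-of-the-envelope sketch of the locality argument: the 4 KB figure is the OS page size, while the 256x256 tile is just an illustrative number, not a Photoshop or GIMP internal. In a plain row-major surface, two vertically adjacent pixels of a very wide image are an entire row apart in memory, so a small brush stroke touches one page per row; with square tiles, nearby pixels land in the same tile:

```csharp
static class Locality
{
    const int BytesPerPixel = 4;     // BGRA
    const int PageSize = 4096;       // OS virtual-memory page
    const int TileSize = 256;        // hypothetical square tile

    // Which 4 KB page a pixel lands on in a plain row-major surface.
    public static long PageIndex(int x, int y, int imageWidth) =>
        ((long)y * imageWidth + x) * BytesPerPixel / PageSize;

    // Which tile a pixel lands on in a tiled surface.
    public static (int TileX, int TileY) TileIndex(int x, int y) =>
        (x / TileSize, y / TileSize);
}

// On a 20000-pixel-wide image, pixels (0, 0) and (0, 1) are 80,000 bytes apart
// in row-major order (about 20 pages away), but they share the same 256x256 tile.
```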

  7. That bug/annoyance won't be fixed for the v2.6x releases. I call it a bug/annoyance because it feels like a bug, but it's really two conscious design decisions that clashed: "ctrl" makes the Move Selected Pixels tool copy rather than move, and "ctrl+arrow" moves in increments of 10 pixels instead of 1. So it's a design bug, it's just not high enough priority to fix for a v2.6x service release. I have filed a bug for v3.0 though.

    And the new 16x16 icon is awesome: it's much easier to see than the casually resized larger version. You can actually tell what it's supposed to be.

  8. Why would it take 50-60 MB of memory?

    A 23 MB JPEG decompresses to an enormous image. Memory usage is proportional to the number of pixels in an image, not its compressed file size. For example, for a 1000 x 1000 image, memory usage is proportional to 1000 x 1000 x number_of_layers. (Each layer has 1000x1000 pixels.) Even for single-layer images there is a relatively large memory overhead, because we also keep a composition buffer and a scratch buffer. So really, memory usage is proportional to (width x height x (layers + 2)). Also, multiply by 4 because each pixel takes 4 bytes of memory (red, green, blue, alpha).

    A 1000x1000 image takes the same amount of memory after decompression whether it was saved to disk as a low-quality 50KB JPEG or a high-quality 500KB JPEG.

    A 23MB JPEG is an enormous image. That's like what, 20000x15000 or something? Enormous images take enormous amounts of memory to load, even if the JPEG codec was able to squash it to a much smaller file on disk.
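
    A worked example of that estimate, width x height x (layers + 2) x 4 bytes, where the "+ 2" covers the composition and scratch buffers mentioned above; the 20000x15000 size is just the hypothetical figure from this post:

```csharp
using System;

class MemoryEstimate
{
    // width x height x (layers + 2) x 4 bytes per pixel
    static long EstimateBytes(long width, long height, int layers) =>
        width * height * (layers + 2) * 4;

    static void Main()
    {
        // 1000 x 1000, single layer: 1000 * 1000 * 3 * 4 = 12,000,000 bytes (~12 MB).
        Console.WriteLine(EstimateBytes(1000, 1000, 1));

        // Hypothetical 20000 x 15000 image (the sort of size a 23 MB JPEG might
        // decompress to): 3,600,000,000 bytes, roughly 3.6 GB for a single layer.
        Console.WriteLine(EstimateBytes(20000, 15000, 1));
    }
}
```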

  9. A 23 MB JPEG still has to be decompressed and stored in memory, and then it takes up much more than 23 MB. We store the entire image in memory along with a few extra buffers for compositing and other purposes. We do not use a sophisticated tiling scheme like Photoshop or The GIMP, and instead have relied on the slow transition to 64-bit to provide more virtual address space for the users who need to work with enormous images. This is a core part of Paint.NET's architecture, and it was purposefully designed this way.
