Rick Brewster

Administrator · 20,657 posts · 378 days won
Everything posted by Rick Brewster

  1. btw in PDN v5.1, I'm hoping to include a HistogramEffect2 that I've just got working. Instead of outputting a normalized array of floats for the chosen channel, it will output an array of Vector256&lt;long&gt; with direct value counts for all channels. It will also work on images up to 1M x 1M px (so, 1 terapixel 😂). I'm using this for porting Auto-Level, Levels, and Curves over to GPU rendering.
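(Not PDN's actual implementation — just a rough CPU-side sketch of the idea described above: a histogram that records direct per-channel value counts rather than a normalized float array for one chosen channel. The function name and BGRA tuple layout here are made up for illustration.)

```python
def channel_histograms(pixels, bins=256):
    """pixels: iterable of (b, g, r, a) tuples with integer values in [0, bins).

    Returns one direct-count array per channel, instead of normalized floats
    for a single chosen channel.
    """
    counts = [[0] * bins for _ in range(4)]  # one count array per channel
    for px in pixels:
        for channel, value in enumerate(px):
            counts[channel][value] += 1
    return counts

# Two opaque red pixels: channel 2 (red) gets two counts at value 255.
hist = channel_histograms([(0, 0, 255, 255), (0, 0, 255, 255)])
```

The GPU version described in the post packs the counts as Vector256&lt;long&gt; so all channels can be accumulated together; this sketch just shows the counting scheme.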
  2. 5.0.13 included a fix to enable this specific plugin to work again.
  3. It's also just a bad strategy to use if you're lobbying for a feature. I'm never going to add something because someone was grouchy and argumentative at me. @Tactilis has successfully lobbied for several things and uses a straightforward evidence-and-logic-based discussion strategy.
  4. Paint.NET is not the right software to use if you require a CMYK-based workflow. Simple as that.
  5. Just because you don't know the reason doesn't mean there isn't a reason. And nobody's forcing you to do anything. "There is no reason..." is a cheap debate tactic that attempts to shift the burden of proof to the other side, and I really recommend avoiding it. Software is complicated and things take time. Just because something is missing doesn't mean it's an intentional/desired design choice.
  6. Also the #1 reason for signatures not verifying is that your system date/time is wrong. I’d check that first. The #2 reason is that you haven’t been installing Windows Updates. They do send down updates to certificate stuff from time to time, and Paint.NET is now using a newer certificate authority (it’s one of Microsoft’s, via Azure Code Signing). So make sure you’re caught up with all system updates.
  7. You can also use the Pan (hand) tool. I tend to use Space + Left Mouse Button for panning, which is available when any tool is active and when View -> Zoom to Window is not enabled (it's always enabled after opening an image). After opening an image, press Ctrl+B to toggle off View -> Zoom to Window, then use Space + LMB to pan.
  8. Sounds like Effects -> Blur -> Surface Blur ?
  9. Looks like a mixup between linear and sRGB colors. Should be a straightforward fix.
  10. This is a small update that fixes some bugs, adds a new Latvian translation, and updates the bundled AvifFileType plugin.

      Get the Update

      There are two releases of Paint.NET:

      Microsoft Store release (recommended)
      • You can purchase it here. This helps fund development and is an alternative or supplement to sending in a donation.
      • If you already have it installed, the update should happen automatically once Microsoft certifies the update, usually within the next day or so.
      • To get the update immediately (once it's certified), you can follow the instructions listed here.

      Classic Desktop release
      • Download the installer from the website. This is the recommended download if you don't have Paint.NET installed. It can also be used to update the app.
      • If you already have it installed, you should be offered the update automatically within the next few days, but you can also get it immediately by going to ⚙ Settings -> Updates -> Check Now.
      • Offline Installers and Portable ZIPs are available over on GitHub.

      Change Log — changes since 5.0.12:
      • New: Latvian (lv) translation
      • Fixed the Colors window sometimes showing up at weird sizes if the system scaling (DPI) was changed between sessions
      • Fixed a crash in the Simulate Color Depth plugin (reported by @toe_head2001)
      • Updated the list of libraries/contributors in the About dialog with some libraries that got missed (mostly from Microsoft)
      • Fixed some clipped/invisible text in the installer when going through the Custom flow
      • Added the GPU driver version to Diagnostics info
      • Fixed HistogramEffect's Bins property https://forums.getpaint.net/topic/124026-getting-an-error-when-trying-to-use-histogrameffect-codelab-sample/?do=findComment&comment=618040
      • Updated the bundled AvifFileType plugin to version 1.1.30.0 (thanks @null54!)
  11. Okay, thanks; I've filed a bug for this, but I don't know when I'll be able to look at it.
  12. Already answered by @Tactilis above. Upgrade your PC so it has more memory (RAM). This may require buying a new PC, since it looks like you're using a laptop and those can't always have their RAM upgraded. I should probably fix PDN itself so it reports this to the user (you) as Out Of Memory instead of File Is Corrupted.
  13. I got it down to 9.5s by calculating 3 px at a time 😎
  14. Looks like a GPU driver or GPU configuration (control panel) problem, or you have some other software that’s interfering or causing the glitch. Like recording/broadcasting or overclocking software.
  15. (Correction to data above: I've been using a 12K x 8K image for performance testing, not 18K x 12K)
  16. I was able to convert this to a compute shader that calculates 2 pixels at a time: 10.5 seconds 😁 Increasing that to 4 pixels reduced performance, likely because of occupancy spillage.
  17. If you're using the MSI then you need to supply properties (DESKTOPSHORTCUT=0 included) using whatever syntax it is that msiexec uses for specifying properties.
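(A hypothetical invocation — the .msi filename below is a placeholder, and only DESKTOPSHORTCUT=0 comes from the post itself; check the unattended-installation docs for the actual property names. msiexec accepts public properties as NAME=value pairs after the package path.)

```shell
# /i installs the package, /qn runs quietly with no UI,
# and properties are passed as NAME=value pairs:
msiexec /i PaintDotNet.msi /qn DESKTOPSHORTCUT=0
```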
  18. Did you read the instructions? https://getpaint.net/doc/latest/UnattendedInstallation.html And the MSI isn't extractable because it's provided separately now. https://github.com/paintdotnet/release/releases In general, unless you're doing a deployment across a large network of PCs and know exactly what you're doing, it's not a good idea to use the MSI.
  19. The number of iterations is currently fixed, but that's an interesting idea
  20. It may also be worth having PDN use smaller tiles in this case. I'm not sure whether it should be an option specified in OnInitializeRenderInfo(), or if PDN should somehow auto-detect that the effect is running "too slow" and automatically adjust downwards. I think both should be used in this case. Using either of the two (multiple rendering passes, or smaller tiles) will help a lot, but lower-end hardware will really need both. Here's how the tile size is calculated, based on the total image size:
  21. I experimented with converting to/from linear space (e.g. WorkingSpaceLinear) -- and the results were substantially worse than with WorkingSpace. This is definitely an algorithm that should execute "within" the original color space.
  22. I've been able to optimize this further vs. the original shader (at full sampling) in this post, cutting the execution time by about 42% -- without using a compute shader (although that's my next step!) and while also improving quality. Here's how I did it:
      • Instead of starting at a pivot point of c=0.5, I use the output of shaders that calculate the min and max for the neighborhood square kernel area. This establishes the traditional lo, hi, and pivot values for the binary search. This is less precise than taking the min/max of the circle kernel area, but it executes substantially faster (linear instead of quadratic) because these are separable kernels. This has two side effects: 1) it increases precision in areas that have a smaller dynamic range, and 2) it supports any dynamic range of values, not just [0,1].
      • Binary search provides 1 bit per iteration. I also implemented 4-ary, 8-ary, and 16-ary variants. I kept only 4-ary enabled because it has the best mix of performance and can reach 8 bits of output in 4 iterations (instead of 8 iterations with binary search). The 8-ary can hit 9 bits in 3 iterations, which is more than we need. The 16-ary can hit 8 bits in 2 iterations, but because it uses so many registers it actually runs slower due to reduced shader occupancy.
      • The search now produces the wrong result when percentile=0 because it can only output the value from the localized min shader, which is often providing the min value for a pixel outside of the circular kernel. This means you get "squares" instead of "circles" in the output. I special-case this to use a different shader that finds the minimum value within the circular kernel. It's possible to incorporate this logic into the regular n-ary shader methods, but it significantly reduces performance.

      For my performance testing, I used a 12K x 8K image. I set the radius to 100 and the percentile to 75, and then used either "Full" sampling (with your original shader) or the default iteration count (for my shaders). Your original shader took 30.7 seconds, while my 4-ary implementation takes 17.8 seconds (with higher quality!).

      The next step for optimization would seem to be using a compute shader, which could calculate multiple output pixels at once. This should be able to bring that 17.8 down even further, meaning this might even be shippable as a built-in PDN effect! A quality slider that chooses full vs. half vs. etc. sampling would also enable faster performance (like your shader does).

      I'd also like to separate each iteration of the algorithm into its own rendering pass. This would definitely require a compute shader, as it would need to write out 2 additional float4s in order to provide the hi/lo markers (so the output image would be 3x the width of the input image, and a final shader would discard those 2 extra values). This would enable the effect to run without monopolizing the GPU as much and would help avoid causing major UI lag. I don't think it would improve performance, but I need to see how it goes.

      Here's the code I've got so far. It's using some PDN internal stuff (like PixelShaderEffect&lt;T&gt;), but you can still translate it to not use the internal stuff.
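(The shader source itself didn't survive the copy above. As a rough illustration only — this is not the actual HLSL — here is a hypothetical CPU-side sketch of the n-ary bracketing search the post describes: lo and hi start from the neighborhood min/max, and each iteration narrows the bracket to the sub-interval that contains the requested percentile's rank. With `ways=2` this is plain binary search at 1 bit per iteration; `ways=4` gains 2 bits per iteration.)

```python
def percentile_search(values, percentile, ways=4, iterations=4):
    """Approximate the `percentile` (0-100) of `values` by n-ary bracketing."""
    lo, hi = min(values), max(values)  # stands in for the separable min/max passes
    target_rank = percentile / 100.0 * len(values)
    for _ in range(iterations):
        step = (hi - lo) / ways
        # Narrow to the first sub-interval whose cumulative count reaches the rank.
        for i in range(1, ways + 1):
            boundary = lo + step * i
            if sum(1 for v in values if v <= boundary) >= target_rank:
                lo, hi = boundary - step, boundary
                break
    return 0.5 * (lo + hi)
```

On the GPU each "count values below boundary" pass reads the kernel neighborhood; note that 4 iterations of `ways=4` give the same 8 bits of output resolution as 8 iterations of `ways=2`, which is the trade-off discussed above.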