Posts posted by Rick Brewster

  1. As long as an IndirectUI effect is using the set of standard built-in types, and isn't trying to use raw objects in its properties, it should be auto-serializable.

     

    IndirectUI plugins can't do this on their own; their UI is declarative, with no room for customization beyond what's available in-box. So what works for Planetoid will not work for them.

     

    The reason I'd be targeting IndirectUI effects is that they make up, by far, the vast majority of effect/adjustment plugins out there (and the majority of the built-ins as well).

  2. The problem there is that you're marshaling data through a managed array (Int32[]). You can use something like NativeMemory to allocate unmanaged/unsafe memory and copy to/from it directly using NativeMemory.Copy().

     

    My preferred method would be:

    1. Allocate your output buffer using NativeMemory.Alloc()

    2. Wrap it in a class that holds the pointer and calls NativeMemory.Free() in its finalizer. I would just derive from SafeHandleZeroOrMinusOneIsInvalid; the runtime has special support for that class (and for SafeHandles in general). This makes it substantially less likely that you'll leak memory, and makes it very clear how to free the memory (a naked pointer has no allocator attached to it).

    3. Use layer.Surface.AsRegionPtr() to obtain a RegionPtr<ColorBgra>, which is a low-level primitive, kind of like Span<T> but for 2D bitmap-like regions of memory (pointer, width, height, stride).

    4. Create a RegionPtr<ColorBgra> for your output buffer, being sure to pass the object from step 2 as the "owner"

    5. Use CopyTo() to copy between the RegionPtrs

    6. Return that wrapper class you created in step 2 (instead of the Int32[])

     

    Edit: See the next comment. But the general idea here is still the right one: don't manually copy pixel by pixel. Use a bulk-copy mechanism, and be sure to respect the stride of a bitmap.
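
    Here's a rough sketch of steps 1-6. The SafeHandle and NativeMemory parts are standard .NET; the RegionPtr<ColorBgra> construction shown in the comments is an assumption about its signature, so check the Paint.NET API docs before relying on it.

    using System;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;
    using PaintDotNet; // ColorBgra, Surface, RegionPtr<T> (exact namespace is an assumption)

    // Step 2: a SafeHandle-derived wrapper that owns the unmanaged buffer and
    // frees it exactly once, even if you forget to Dispose it.
    internal sealed unsafe class NativeBuffer : SafeHandleZeroOrMinusOneIsInvalid
    {
        public NativeBuffer(nuint byteCount)
            : base(ownsHandle: true)
        {
            // Step 1: allocate the output buffer from unmanaged memory.
            SetHandle((IntPtr)NativeMemory.Alloc(byteCount));
        }

        protected override bool ReleaseHandle()
        {
            NativeMemory.Free((void*)handle);
            return true;
        }
    }

    // Steps 3-6, sketched as comments because the exact RegionPtr<ColorBgra>
    // constructor signature is assumed here:
    //
    //   int stride = layer.Surface.Width * sizeof(ColorBgra);
    //   var buffer = new NativeBuffer((nuint)(stride * layer.Surface.Height));
    //
    //   RegionPtr<ColorBgra> src = layer.Surface.AsRegionPtr();           // step 3
    //   var dst = new RegionPtr<ColorBgra>(                               // step 4
    //       buffer,                                  // owner from step 2
    //       (ColorBgra*)buffer.DangerousGetHandle(),
    //       layer.Surface.Width, layer.Surface.Height, stride);
    //
    //   src.CopyTo(dst);                                                  // step 5
    //   return buffer;                                                    // step 6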

  3. You don't need to add System.Drawing.Common. Paint.NET already brings in the .NET runtime+framework and the Windows Desktop frameworks. Just make sure you're referencing the WindowsDesktop package with these lines at the top of your .csproj:

    <PropertyGroup>
        <UseWPF>true</UseWPF>
        <ImportWindowsDesktopTargets>true</ImportWindowsDesktopTargets>
        <UseWindowsForms>true</UseWindowsForms>
    </PropertyGroup>

     

    Also if you need to load PNGs, don't bother with System.Drawing. It's garbage.

     

    Use IPngFileType instead: https://paintdotnet.github.io/apidocs/api/PaintDotNet.IPngFileType.html . @null54 is using this in one of his plugins. It lets you use PDN's built-in PNG FileType, which handles A LOT of things with regard to metadata and other oddities that you don't want to get bogged down figuring out.

     

    You get IPngFileType through your Services property, e.g. this.Services.GetService<IPngFileType>()
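
    For example (a minimal sketch; the Load call in the comments is an assumption about the loading API, so check the IPngFileType docs linked above):

    // Fetch the built-in PNG FileType via the service system.
    IPngFileType pngFileType = this.Services.GetService<IPngFileType>();

    // Hypothetical usage -- the exact loading call is an assumption:
    //   using FileStream stream = File.OpenRead(@"C:\plugins\texture.png");
    //   Document pngDocument = pngFileType.Load(stream);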

  4. 13 hours ago, xoxococo said:

    I just want to say I'm pissed about having to go to a sketchy virus-laden third-party download source for a version of Paint.NET that works on Win8.1.

    No one's forcing you to do anything. 

     

    13 hours ago, xoxococo said:

    This is a specialized rig, not a daily driver machine. Updating the OS is not an option for me in the context I need it because of certain software compatibilities.

    Then the problem isn't with Paint.NET; it's with your other software or hardware configuration. You can't blame Paint.NET here; that's just silly. You have had over 8 years to update your OS to something like Windows 10, and to upgrade/update/rewrite or find alternatives for your other software, and you have chosen not to. That isn't our problem.

     

    13 hours ago, xoxococo said:

    Please consider keeping at least one old version hosted here that is safe and works with old OSes. Even if it's with a disclaimer that it's as-is and doesn't have any support. 

    No, because that's literally supporting older versions of the app and unsupported OSes. Put another way, it's already been considered and decided against. This policy will not be changing.

     

    You're demanding that we support your use of an older version of the app on an older, outdated, unsupported version of Windows. Sorry but you don't get to make demands like that. Our time is our time, not your time.

  5. 1 hour ago, HenryH said:

     

    It's in the file. In both examples above, I copy-pasted the files.

     

    (I'm afraid of posting the huge Diag file here because it might contain personal info about me)

     

    It does not contain personal info about you.

     

    It's just text -- you can read through it yourself and if you see something you don't want to include, edit it.

  6. 4 hours ago, Tactilis said:

    Could you also paste here the text from: Settings | Diagnostics | Copy to clipboard

     

    We still need this ^^^

     

    I'm not able to reproduce this. It could be an issue with your GPU. I recommend you: 1) update to the latest version of your GPU driver, 2) make sure you've got all Windows Updates installed (so that any DirectX updates are installed), and 3) if all else fails, switch to CPU rendering in Settings -> Graphics. That last one is only if you're desperate, and should only be used temporarily, as it can result in some effects taking a VERY long time to run.

  7. 6 hours ago, _koh_ said:

    That's what I get when I press Ctrl+Shift+C in PDN, right? Each layer blended in integer.

    A bit difficult to explain, but when a layer has opacity = 16, it's like the layer is using 12-bit color instead of 8-bit, and when I use extreme parameters I can see the differences between FP-blended and INT-blended images. But I agree this is a "why would you even need this?" kind of thing.

    Yes, it's the same as if you had used Ctrl+Shift+C.

     

    The two benefits here are 1) your input pixels will match what PDN shows the user, and 2) you won't copy every layer to the GPU. If there are a lot of layers, #2 matters a LOT. Most people do not have 24GB GeForce 4090s or 48GB RTX 6000s 😂 Eventually I'll be increasing the precision of this, adding linear blending, etc., but that's not today. If you use Document.Image then you'll get any of those upgrades for free, so that's benefit #3.

     

    You can also do this by using "streaming" images, which won't copy the layers to the GPU; they'll stream on demand and not use any extra GPU memory. Performance will be lower, however. So if you really want to do the layer compositing yourself, in linear space, and also want to save GPU memory, I can show you how to do that.

     

  8. 50 minutes ago, _koh_ said:

    Do you need full stack trace?

    No, I know exactly what the typo is -- the wrapper's property is giving you an Int32 property accessor, but it needs to be an Int32-as-UInt32 property accessor. The underlying native effect property is UInt32, so I need to tell Direct2D that when retrieving the property, before reinterpret-casting it to Int32. It's a tiny change; it'll be in the next update.

  9. Also, this:

    private float3 If(bool3 c, float3 a, float3 b) => new(c.R ? a.R : b.R, c.G ? a.G : b.G, c.B ? a.B : b.B);

    should be replaceable with just:

    c ? a : b

    ... At least, that's how it can be done in HLSL directly (IIUC). But I don't think we have that support in ComputeSharp.D2D1's C#->HLSL transpiler yet. I'll send the request over to @sergiopedri to see if we can figure out a way to provide this.

  10. Yes, if your shader needs the output from HistogramEffect, this is the correct way to do it. You are certainly using a ton of memory, though! You could reduce the compatible DC to DevicePixelFormats.Pbgra32, which probably won't affect the results much but will reduce the memory use of that compatible DC by 75%. (You can also experiment with Prgba64Float, but I'm renaming that in the next update, so I'd advise against releasing a plugin with it; you can use new DevicePixelFormat(DxgiFormat.R16G16B16A16_Float, AlphaMode.Premultiplied) directly instead.) Shaders will still execute at Float32 precision; it's only the command list's output precision that should be affected.

     

    For much more memory savings, instead of creating your own composition image with Compo(), consider using Environment.Document.Image instead. That's a pre-made composition of the whole image (aka Document), taking into account per-layer blending and opacity. It does all of the composition on the CPU in the exact same way it's done in the canvas -- which means non-linear blending (your code is doing everything in linear, which is great, but produces a different result). This will save an ENORMOUS amount of GPU memory if you are working with multiple layers, since this only needs to be stored on the GPU as a single BGRA32 image (converted on-the-fly to RGBA128F).
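
    For instance (a minimal sketch; the variable name is just illustrative and the exact image type isn't shown):

    // Use the pre-made CPU-side composition of the whole document as your input,
    // instead of building your own composition with Compo():
    var documentImage = Environment.Document.Image;
    // ... feed documentImage into your effect chain where Compo()'s output went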

     

    It looks like Dec() is an sRGB->Linear conversion, and Enc() is Linear->sRGB. You can use SrgbToLinearEffect and LinearToSrgbEffect instead. Similarly, Mul() is PremultiplyEffect and Div() is UnPremultiplyEffect. Instead of handling these yourself, you could build an effect chain and split your shader into multiple shaders.

     

    So your graph would be:

     

    input -> UnPremultiplyEffect -> shader1 (Pad) -> SrgbToLinear -> shader2 (HDR, LDR) -> LinearToSrgb -> shader3 (Fit) -> PremultiplyEffect

     

    Only shader1 would need both inputs.

     

    This would result in several small, simple shaders instead of one bigger one, and is probably a more natural way to express this in D2D's effect graph system. Plus, you then don't have to implement Srgb<->Linear or (Un)Premultiply yourself, and can rely on D2D's and my optimizations in this area (for both perf and quality).
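
    For illustration, here's roughly how that graph could be wired up. This is a sketch, not a definitive implementation: it assumes the wrapper effects take the device context in their constructor, expose a strongly-typed Properties.Input.Set(...), and can be used as IDeviceImage inputs/outputs; CreateShader1/2/3 are hypothetical placeholders for however you instantiate your own ComputeSharp.D2D1 shader effects.

    // Hypothetical helper: wires up input -> UnPremultiply -> shader1 (Pad) ->
    // SrgbToLinear -> shader2 (HDR, LDR) -> LinearToSrgb -> shader3 (Fit) -> Premultiply.
    private IDeviceImage BuildGraph(IDeviceContext deviceContext, IDeviceImage input)
    {
        var unpremultiply = new UnPremultiplyEffect(deviceContext);
        unpremultiply.Properties.Input.Set(input);

        IDeviceImage pad = CreateShader1(deviceContext, unpremultiply); // shader1 (Pad); this is where the second input would also be passed

        var toLinear = new SrgbToLinearEffect(deviceContext);
        toLinear.Properties.Input.Set(pad);

        IDeviceImage hdrLdr = CreateShader2(deviceContext, toLinear);   // shader2 (HDR, LDR)

        var toSrgb = new LinearToSrgbEffect(deviceContext);
        toSrgb.Properties.Input.Set(hdrLdr);

        IDeviceImage fit = CreateShader3(deviceContext, toSrgb);        // shader3 (Fit)

        var premultiply = new PremultiplyEffect(deviceContext);
        premultiply.Properties.Input.Set(fit);

        return premultiply;                                             // final output of the graph
    }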

     
