_koh_

Posts posted by _koh_

  1. 3 hours ago, Rick Brewster said:

    It's not about whether the content itself is sRGB or not. It's the presentation surface itself -- if it's Flip mode and FP16 or R10G10B10A2, which is what triggers Windows Advanced Color, then the brightness management is different than other (e.g. classic) presentation surfaces.

     

    I was a bit surprised that browsers switch between modes like that depending on the content, especially when everything is dynamically loaded. I didn't know you could switch them on the fly.

     

    edit:

    Is it worth PDN doing the same? Like using INT8 just for sRGB content.

  2. I can reproduce this. I've been using HDR mode for a while, but I had never changed this setting.

    Seems like everything but the tool bar and status bar has the wrong color, but only in the capture; it's not visible on the screen.

    This setting doesn't change SDR brightness on the screen. SDR white is always mapped to the same brightness (80 nits maybe?). It only changes the relative HDR brightness.

     

    image.png.26243cbf1620417187b3fc674e10a4c7.png

  3. Just BitmapSourceEffect2 -> CreateImageFromBitmap()
    source code + dll LuminanceBlender.zip


    This B -> G gradation the effect created doesn't look linear to me, but when converted to luminance, it is linear in sRGB values.
    So while it's working as intended, my assumption that being visually linear in luminance makes a gradation look visually linear is a bit questionable.
    At least it keeps the brush size consistent this way.


    image.png.24662e21fa7ecdee03569f70a36c0c7d.png image.png.4999d96fd4dcd5384c933c4171e89983.png

     

    edit:

    If I use the RGB average instead of luminance, it matches sRGB blending for a black -> white gradation and matches linear blending for an R -> G -> B gradation, but it loses the consistent brush size.

     

    gray(0.2126f, 0.7152f, 0.0722f) -> gray(1/3f, 1/3f, 1/3f)
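
    For reference, the swap above is just the weights passed to the gray() matrix helper used in the effect code; a sketch only, where input stands for whatever IDeviceImage is being converted:

    var gray = (float R, float G, float B) => new Matrix5x4Float(R,R,R,0, G,G,G,0, B,B,B,0, 0,0,0,1, 0,0,0,0);
    // Rec. 709 luminance weights: keeps the brush size consistent.
    using var luminance = new ColorMatrixEffect(DC, input, gray(0.2126f, 0.7152f, 0.0722f), ColorMatrixAlphaMode.Straight);
    // Plain RGB average: matches sRGB blending for black -> white and linear blending for R -> G -> B, but loses the consistent brush size.
    using var average = new ColorMatrixEffect(DC, input, gray(1/3f, 1/3f, 1/3f), ColorMatrixAlphaMode.Straight);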

  4. 2 hours ago, Rick Brewster said:

    BitmapSourceEffect2 is a "low level" effect and will not handle certain cases, such as large bitmaps.

     

    Ah, thanks.

    When I switched to WorkingSpace + straight alpha, this one felt more explicit, but I didn't know about the other limitations.

     

    Feels like what I actually want in this case is LayerInfo.UncachedImage which respects RenderInfo.

  5. Use scRGB for composition instead of the linearized color space.
    I need scRGB to calculate luminance anyway, so this is better.
    Color space conversions per layer: 12 -> 9. LuminanceBlender.zip


    Feels like I could do the same for other cases, but scRGB can have values outside the 0-1 range, so it's not always going to work.
    Weighted averages handle that fine, so normal blending does work here, but some blend modes don't.
     

    protected override IDeviceImage OnCreateOutput(PaintDotNet.Direct2D1.IDeviceContext DC) {
        var gamma = (bool)Token.GetProperty(PropertyNames.Gamma).Value;
        var compo = (IDeviceEffect)new EmptyEffect(DC);
        using var enc = Environment.Document.ColorContext.CreateRef();
        using var dec = DC.CreateColorContext(DeviceColorSpace.ScRgb);
        foreach (var o in Environment.Document.Layers) if (o.Visible) {
            using var input = compo;
            using var image = (IDeviceEffect)new BitmapSourceEffect2(DC); image.SetValueRef(BitmapSourceProperty.BitmapSource, o.GetBitmapBgra32());
            using var blend = o.BlendMode != LayerBlendMode.Normal ? new MixEffect(DC, compo, image, o.BlendMode.ToMixMode(), MixAlphaMode.Straight) : image;
            using var layer = o.Opacity != 1 ? new HlslBinaryOperatorEffect(DC, blend, HlslBinaryOperator.Multiply, new Vector4(1, 1, 1, o.Opacity)) : blend;
            using var lcode = new ColorManagementEffect(DC, layer, enc, dec, ColorManagementAlphaMode.Straight);
            using var ccode = new ColorManagementEffect(DC, compo, enc, dec, ColorManagementAlphaMode.Straight);
            using var lconv = gamma ? Convert(lcode, dec, enc) : lcode;
            using var cconv = gamma ? Convert(ccode, dec, enc) : ccode;
            using var merge = new MixEffect(DC, cconv, lconv, LayerBlendMode.Normal.ToMixMode(), MixAlphaMode.Straight);
            using var mconv = gamma ? Convert(merge, enc, dec) : merge;
            using var mcode = new ColorManagementEffect(DC, mconv, dec, enc, ColorManagementAlphaMode.Straight);
            compo = mcode.CreateRef();
        }
        return compo;
    }
    
    private IDeviceImage Convert(IDeviceImage input, IDeviceColorContext src, IDeviceColorContext dst) {
        var gray = (float R, float G, float B) => new Matrix5x4Float(R,R,R,0, G,G,G,0, B,B,B,0, 0,0,0,1, 0,0,0,0);
        using var lumin = new ColorMatrixEffect(DC, input, gray(0.2126f, 0.7152f, 0.0722f), ColorMatrixAlphaMode.Straight);
        using var gamma = new ColorManagementEffect(DC, lumin, src, dst, ColorManagementAlphaMode.Straight);
        using var image = Shader<Render>([input, lumin, gamma]);
        return image.CreateRef();
    }

     

  6. Use the gamma curve of the image's color space to encode / decode the luminance.
    With this, grayscale blending will look the same as the original in any color space.
    source code + dll LuminanceBlender.zip


    It's a bit difficult to recreate this image with the built-in brush, so this effect may have some use after all.
    add new layer -> draw something -> run this effect -> merge it down
    image.png.afc96fcd8d744076ca13adf597c6f138.png

  7. Use gamma encoded luminance / linear just for normal blending.
    Going by the formula, blend modes other than normal work better in gamma encoded RGB anyway.
    source code + dll LuminanceBlender.zip

     

    Yeah... too many color space conversions.

    protected override IDeviceImage OnCreateOutput(PaintDotNet.Direct2D1.IDeviceContext DC) {
        var space = (int)Token.GetProperty(PropertyNames.Space).Value;
        var compo = (IDeviceEffect)new EmptyEffect(DC);
        using var enc = Environment.Document.ColorContext.CreateRef();
        using var dec = DC.CreateLinearizedColorContextOrScRgb(enc);
        using var cnv = DC.CreateColorContext(DeviceColorSpace.ScRgb);
        foreach (var o in Environment.Document.Layers) if (o.Visible) {
            using var input = compo;
            using var image = (IDeviceEffect)new BitmapSourceEffect2(DC); image.SetValueRef(BitmapSourceProperty.BitmapSource, o.GetBitmapBgra32());
            using var blend = o.BlendMode != LayerBlendMode.Normal ? new MixEffect(DC, compo, image, o.BlendMode.ToMixMode(), MixAlphaMode.Straight) : image;
            using var layer = o.Opacity != 1 ? new HlslBinaryOperatorEffect(DC, blend, HlslBinaryOperator.Multiply, new Vector4(1, 1, 1, o.Opacity)) : blend;
            using var lcode = new ColorManagementEffect(DC, layer, enc, dec, ColorManagementAlphaMode.Straight);
            using var ccode = new ColorManagementEffect(DC, compo, enc, dec, ColorManagementAlphaMode.Straight);
            using var lconv = space == 1 ? Convert(lcode, dec, cnv, ConvertGammaMode.LinearToSrgb) : lcode;
            using var cconv = space == 1 ? Convert(ccode, dec, cnv, ConvertGammaMode.LinearToSrgb) : ccode;
            using var merge = new MixEffect(DC, cconv, lconv, LayerBlendMode.Normal.ToMixMode(), MixAlphaMode.Straight);
            using var mconv = space == 1 ? Convert(merge, dec, cnv, ConvertGammaMode.SrgbToLinear) : merge;
            using var mcode = new ColorManagementEffect(DC, mconv, dec, enc, ColorManagementAlphaMode.Straight);
            compo = mcode.CreateRef();
        }
        return compo;
    }
    
    private IDeviceImage Convert(IDeviceImage input, IDeviceColorContext dec, IDeviceColorContext cnv, ConvertGammaMode mode) {
        var gray = (float R, float G, float B) => new Matrix5x4Float(R,R,R,0, G,G,G,0, B,B,B,0, 0,0,0,1, 0,0,0,0);
        using var iconv = new ColorManagementEffect(DC, input, dec, cnv, ColorManagementAlphaMode.Straight);
        using var lumin = new ColorMatrixEffect(DC, iconv, gray(0.2126f, 0.7152f, 0.0722f), ColorMatrixAlphaMode.Straight);
        using var gamma = new ConvertGammaEffect(DC, lumin, mode, 1, ConvertGammaAlphaMode.Straight);
        using var image = Shader<Render>([input, lumin, gamma]);
        return image.CreateRef();
    }
  8. Maybe these samples are better.
    sRGB / linear / gamma encoded luminance

    image.png.390dc8b2dacf580625d59964dfc1ac71.png

    image.png.b65da6d6062d52cac75fa90a2870b631.png

    image.png.14760f193099477e59d3a429dd3dd32a.png

     

    Basically it adjusts the linear RGB vector's length to match human perception while keeping its direction. One downside I can think of is that while luminance stays in the 0-1 range throughout the process, the RGB values won't: RGB(0, 0, 1) becomes something like (0, 0, 4.2). Normal blending has no problem with this, but it breaks some other blend modes.
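
    As a rough sketch of that arithmetic (standard sRGB encode formula and Rec. 709 luminance weights, not PDN's actual code):

    float Y = 0.2126f * 0f + 0.7152f * 0f + 0.0722f * 1f;       // luminance of pure blue ≈ 0.0722
    float encoded = 1.055f * MathF.Pow(Y, 1f / 2.4f) - 0.055f;  // sRGB-encoded luminance ≈ 0.298
    float scale = encoded / Y;                                   // ≈ 4.1
    // The vector keeps its direction but gets longer: (0, 0, 1) * scale ≈ (0, 0, 4.1).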

     

    Added the UI and a linear color blend mode. You still need a layer for each color because the built-in brush works in gamma-encoded RGB, but this one may have some non-drawing uses.
    source code + dll LuminanceBlender.zip

  9.   

    On 9/3/2024 at 3:40 PM, _koh_ said:

    Feels like
    when R:G:B is consistent, sRGB gradation looks linear
    when R+G+B is consistent, linear gradation looks linear
    this... kinda makes sense.

     

    I finally understood what I want blending to do in this case, so I made an effect which blends layers in a gamma-encoded-luminance color space instead of a gamma-encoded-RGB one. This way, it creates a visually linear gradation for any combination of colors.

     

    sRGB / linear / gamma encoded luminance

    image.thumb.png.6408b6fc60250b1e51128407da68d307.png

    image.thumb.png.d042e54101359881a112377d5682543b.png

    image.thumb.png.6e8678a61306f20255024d9304a95754.png

    These samples are like best / worst case scenarios for sRGB / linear, and this effect handles them fairly well. Actually, it 100% pixel-matches sRGB for grayscale blending.

     

    source code

    protected override void OnInitializeRenderInfo(IGpuImageEffectRenderInfo renderInfo) {
        renderInfo.ColorContext = GpuEffectColorContext.ScRgb;
        base.OnInitializeRenderInfo(renderInfo);
    }
    
    protected override IDeviceImage OnCreateOutput(PaintDotNet.Direct2D1.IDeviceContext DC) {
        var gray = (float R, float G, float B) => new Matrix5x4Float(R,R,R,0, G,G,G,0, B,B,B,0, 0,0,0,1, 0,0,0,0);
        var image = (IDeviceImage)new EmptyEffect(DC);
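        // Composite each visible layer in turn; the Render shader rescales its RGB so the luminance is sRGB-encoded before blending.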
        foreach (var o in Environment.Document.Layers) if (o.Visible) {
            using var idst = image;
            using var iimg = DC.CreateImageFromBitmap(o.GetBitmapBgra32(), null, BitmapImageOptions.DoNotCache);
            using var ilum = new ColorMatrixEffect(DC, iimg, gray(0.2126f, 0.7152f, 0.0722f));
            using var imul = new LinearToSrgbEffect(DC, ilum);
            using var ivec = Shader<Render>([iimg, ilum, imul]);
            using var imix = o.BlendMode != LayerBlendMode.Normal ? new MixEffect(DC, idst, ivec, o.BlendMode.ToMixMode()) : ivec;
            using var isrc = o.Opacity != 1 ? new OpacityEffect(DC, imix, o.Opacity) : imix;
            image = new MixEffect(DC, idst, isrc, LayerBlendMode.Normal.ToMixMode());
        }
        using var ovec = image;
        using var olum = new ColorMatrixEffect(DC, ovec, gray(0.2126f, 0.7152f, 0.0722f));
        using var omul = new SrgbToLinearEffect(DC, olum);
        using var oimg = Shader<Render>([ovec, olum, omul]);
        return oimg.CreateRef();
    }
    
    [D2DInputCount(3), D2DInputSimple(0), D2DInputSimple(1), D2DInputSimple(2)]
    [D2DShaderProfile(D2D1ShaderProfile.PixelShader50), D2DGeneratedPixelShaderDescriptor]
    internal readonly partial struct Render() : ID2D1PixelShader {
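        // Per component: rescale input 0 by input 2 / input 1 (converted luminance / original luminance); Max() guards against division by zero.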
        public float4 Execute() => D2D.GetInput(0) * Hlsl.Max(D2D.GetInput(2), 1e-6f) / Hlsl.Max(D2D.GetInput(1), 1e-6f);
    }

     

    dll + full source code LuminanceBlender.zip
    You need PDN 5.1 to build & run this one.

  10. 13 hours ago, Rick Brewster said:

    Using companded values does work very well for certain calculations, but not for blending or sampling.

     

    While I understand that linear blending gives physically accurate results, when I use a brush I'm trying to change the image color by a perceivable amount, and I usually have an intended result in mind. So getting the physically accurate average of the given colors / alphas isn't my priority; it only needs to behave in a predictable / controllable manner.

     

    sRGB scale / linear scale

    image.png.bed539863feda63b31104a9c9fed4fbc.png image.png.367202619b90ed55cbb537dc85ec6262.png


    Our eyes are more sensitive to darker tones, and gamma-encoded color spaces are adjusted for that.
    Using a brush is like adjusting sliders along these scales, so it's easier to get the intended color with an sRGB brush than with a linear one.

    I may change my mind once I've actually used a linear brush, but this is how I see things for now.
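
    For reference, a minimal sketch of the sRGB transfer function pair being discussed (the standard piecewise formulas, not PDN's code):

    // sRGB encode (linear -> companded) and decode (companded -> linear), per component.
    static float SrgbEncode(float c) => c <= 0.0031308f ? 12.92f * c : 1.055f * MathF.Pow(c, 1f / 2.4f) - 0.055f;
    static float SrgbDecode(float c) => c <= 0.04045f ? c / 12.92f : MathF.Pow((c + 0.055f) / 1.055f, 2.4f);
    // e.g. SrgbDecode(0.5) ≈ 0.214: the middle of the sRGB scale is only ~21% linear light,
    // which is roughly why equal sRGB steps feel closer to equal perceptual steps.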

     

    edit:

    Some tests. Umm, I can't decide.

     

    B -> R gradation looks better in linear. sRGB(0.5, 0, 0.5) looks visibly darker than (1, 0, 0) or (0, 0, 1) as expected.

     

    sRGB / linear

    image.png.8d15cf684f97010a15ca7f91fe9ae476.png image.png.14422e890a951ea1e5c81d8445af9097.png

    Brush size looks consistent in sRGB.

     

    sRGB / sRGB inverse

    image.png.830e29a3bd8be95bfbfd5c5f3483fe3a.png image.png.d24cbdd71d2b38eea2aa846d5501f26e.png

     

    linear / linear inverse

    image.png.1386e31b9d6f4fcf0fef94809a270029.png image.png.0595e29e1e580fb6c6f6e0cd1f04f59f.png

     

    edit2:

    Feels like
    when R:G:B is consistent, sRGB gradation looks linear
    when R+G+B is consistent, linear gradation looks linear
    this... kinda makes sense.

     

    I wonder how a Lab gradation would look. Would it cover both cases?
    I know its purpose but have never actually tested it.
     

  11. 7 minutes ago, Red ochre said:

    No, I can't?... I don't see that option?... that's why I asked.

     

    Umm. I have that property in my CodeLab.

     

    image.png.f6e658b1e16a7a6f6db525843cf46e1c.png

     

    8 minutes ago, Red ochre said:

    Yes, I could write the formula for a Bitmap ellipse but I'm trying to use D2D?

     

    I'm saying you can do this to clamp the sample coordinates.

     

    int mcxi = (int)Math.Clamp(mcx, selection.Left, selection.Right);
    int mcyi = (int)Math.Clamp(mcy, selection.Top, selection.Bottom);
  12. 2 hours ago, Red ochre said:

    How do I rotate an ellipse?

     

    You can use deviceContext.Transform for this.

    But this transforms the whole image, so setting the center of the ellipse to (0,0) and moving the ellipse through the transform matrix might be simpler.
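
    Something like this, as a sketch only. I'm assuming deviceContext.Transform accepts a System.Numerics.Matrix3x2 (or something convertible to it); angleRadians, the center coordinates, and the DrawEllipse call are placeholders:

    var center = new Vector2(ellipseCenterX, ellipseCenterY);       // center of the ellipse (placeholder values)
    var rotation = Matrix3x2.CreateRotation(angleRadians, center);  // rotate about the center instead of the origin

    var oldTransform = deviceContext.Transform;                     // remember the current transform
    deviceContext.Transform = rotation;
    deviceContext.DrawEllipse(ellipse, brush, strokeWidth);         // placeholder draw call
    deviceContext.Transform = oldTransform;                         // restore so later drawing is unaffected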

     

    Also you can use Math.Min(), Math.Max(), Math.Clamp() for if (x > a) x = a kind of things.

  13. On 8/31/2024 at 4:32 AM, Rick Brewster said:

    this does come at the cost of reducing the quality and correctness of blending and sampling.

     

    How the brush color interacts with the image color is a human-perception thing in my view, so the sRGB brush is my preferred choice so far.
    One thing I'm sure of is that manually using a linear brush would be a bit cumbersome. Using an sRGB brush is kinda like using a log scale for convenience.

  14. 36 minutes ago, Red ochre said:

    I did have problems with the colour
    change from 'companded' to 'linear' but have found a conversion that seems to give results equal to the original code

     

    I've never used GpuDrawingEffect, but having this method in your class sets the color space to the original instead of linear, I believe.
    And yeah, if you want to mimic the built-in brush and layer blending, you have to use the original color space.

     

    protected override void OnInitializeRenderInfo(IGpuDrawingEffectRenderInfo renderInfo) {
        renderInfo.ColorContext = GpuEffectColorContext.WorkingSpace;
        base.OnInitializeRenderInfo(renderInfo);
    }
  15. On 8/29/2024 at 6:35 AM, Red ochre said:

    4. Can GpuDrawingEffects be added to a BitmapEffect and does this need to done in a separate method?
    ... say you wanted to mess with the colour of the Src pixel by pixel then 'draw' something on top using Direct2D.

     

    Classic Effect + GDI drawing -> Bitmap Effect + D2D drawing is almost a 1:1 conversion.

     

    Surface -> IBitmap

    ColorBgra -> ColorBgra32

    GDI drawing -> D2D drawing

     

    BoltBait's tutorial covers the rest, so here is just the GDI -> D2D part.

     

    GDI+

    surface ??= new Surface(src.Size);
    using var gfx = new RenderArgs(surface).Graphics;
    using var pen = new Pen(color, width);
    surface.Clear();
    gfx.SmoothingMode = SmoothingMode.AntiAlias;
    gfx.DrawCurve(pen, points);

     

    D2D

    bitmap ??= Environment.ImagingFactory.CreateBitmap<ColorPrgba128Float>(Environment.Document.Size);
    using var bdc = Services.GetService<IDirect2DFactory>().CreateBitmapDeviceContext(bitmap);
    using var brush = bdc.CreateSolidColorBrush(color);
    using (bdc.UseBeginDraw()) {
        bdc.Clear();
        bdc.DrawCurve(points, brush, width);
    }

     

    D2D only takes pre-multiplied bitmaps, so you need to use either ColorPrgba128Float or ColorPbgra32.

     

    You still need to blend the result with the src and copy it to the dst, but I don't know if there is a simple way to do this when the pixel formats are different.

    This is a relatively simple thing to do in a GPU effect, but I was doing the per-pixel blending as part of the per-pixel rendering in my CPU effect.
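
    As a rough sketch of that per-pixel blend (SampleDrawn / SampleSrc / WriteDst are hypothetical helpers; the drawn pixel is premultiplied float RGBA, src / dst are straight 0-1 RGBA, and any color space differences are ignored):

    Vector4 drawn = SampleDrawn(x, y);                              // ColorPrgba128Float pixel: premultiplied RGBA
    Vector4 src = SampleSrc(x, y);                                  // ColorBgra32 pixel converted to straight 0-1 RGBA
    Vector4 srcPremul = new Vector4(src.X * src.W, src.Y * src.W, src.Z * src.W, src.W);

    Vector4 outPremul = drawn + srcPremul * (1f - drawn.W);         // normal (source-over) blend in premultiplied space
    float a = outPremul.W;
    Vector4 outStraight = a > 0f
        ? new Vector4(outPremul.X / a, outPremul.Y / a, outPremul.Z / a, a)
        : Vector4.Zero;

    WriteDst(x, y, outStraight);                                    // convert back to ColorBgra32 here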
