
  1. another new update. changes: various bug fixes (including an issue which caused notable artifacts at lower quality levels, ...); layers may now be auto-cropped to omit transparent pixels; switched to a purely fixed-point RDCT (improves performance and makes it more deterministic; this may also have been in the prior version, I forget); more experimental features; minor tweaks to the alternate-VLC format (added inline meta-tagging); the ability to blend transparent areas with cyan, such that the base image can have a transparent cyan background (also tested with magenta, but magenta resulted in larger output than cyan, which is why cyan was chosen); note that this process is not strictly lossless.
     experimental features: a range coder (a variant of an "arithmetic coder"), which may reduce file sizes vs Huffman, but does not yet work reliably; support for alternate block-filters derived from PNG, which may help improve compression for certain types of image data (particularly large flat-colored areas and sharp lines, edges, or text, which the DCT transform handles poorly). the filter choice is made per-block, so the codec may choose whichever filter does a better job on a block-by-block basis (in my tests, this has helped matters notably on a few test images).
     incomplete experimental features: more compact encodings for the Huffman and quantization tables (so the tables may themselves be entropy-coded, albeit with Rice codes); I was also working on incorporating a few of the video features from the C codec into the C# codec, granted PDN doesn't deal with video.
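for the curious, the per-block filter choice works roughly like the following sketch; the filter set and the cost heuristic here are simplified stand-ins (my illustration, not the actual plugin code), but the idea is the same: try each candidate predictor on an 8x8 block and keep whichever leaves the smallest residuals:

```c
#include <stdlib.h>

/* Illustrative filter set: no prediction, predict-from-left, predict-from-up.
   The real codec chooses between the DCT path and PNG-derived filters;
   these names and the cost metric are just for the sketch. */
enum { FILT_NONE, FILT_LEFT, FILT_UP };

/* Cost heuristic: sum of absolute residuals after applying the predictor. */
int block_cost(const unsigned char *blk, int stride, int filt)
{
    int cost = 0;
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++) {
            int cur  = blk[y * stride + x];
            int pred = 0;
            if (filt == FILT_LEFT && x > 0) pred = blk[y * stride + x - 1];
            if (filt == FILT_UP   && y > 0) pred = blk[(y - 1) * stride + x];
            cost += abs(cur - pred);
        }
    }
    return cost;
}

/* Pick, per block, whichever filter produces the cheapest residuals. */
int choose_filter(const unsigned char *blk, int stride)
{
    int best = FILT_NONE;
    int best_cost = block_cost(blk, stride, FILT_NONE);
    for (int f = FILT_LEFT; f <= FILT_UP; f++) {
        int c = block_cost(blk, stride, f);
        if (c < best_cost) { best_cost = c; best = f; }
    }
    return best;
}
```

on a flat-colored block, the predictive filters win easily over raw values, which is basically why they help on flat areas and sharp edges where the DCT does poorly.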
  2. although... a person can just trace with the paintbrush and close the loop by, you know, having the lines touch. apart from the little line on the end, there is no other significant difference: either the line is straight and ugly, or the lines are pretty much touching anyway, so it isn't really clear what difference it makes. provided the region is closed, things like bucket-fill will work either way. granted, if the tool became something more useful, for example a set of control-points with a line following a spline, then it might be a little more useful (a person could then grab and move control-points to more accurately trace something).
  3. new update again... status: changed the DLL and namespace names (to something hopefully a little more sensible); a number of minor tweaks; added a few "experimental" features, most of which had the problem of not exactly working very well; the thing now has license terms (MIT).
     experimental features: basically, I was fiddling with ways to try to make the compression better, both via 64x64 "megablocks" and by experimenting with various alternate schemes for encoding the blocks (mostly for the sake of supporting a larger value range). neither led to favorable results (both resulted in larger output files).
  4. I had wished for alpha-masking before... then I just thought of it, checked, and found that there is a plugin for this, which is good (though it could possibly be a little better as a layer-effect...). much better than trying to cut out the image using the eraser tool, at least.
  5. in case anyone is looking here and hasn't seen the newer thread (which has a newer version), it is here:
  6. I might use it if it weren't so lame... (the paintbrush can do mostly the same thing...). there are many more useful tools, and tools I sort of wish I could have instead (airbrush, variable eraser, and smudge-tool, anyone?...).
  7. status update again. changes for today: layer-visibility preservation; a basic save-time settings UI; lossless mode enabled, plus a few image-encoding settings; some tweaks to lossy-mode quality handling (changes to the algorithm for generating quantization tables); a few other bug fixes.
     issues: I really don't know how to properly implement the cancel button; as-is, it just throws an exception. I would also like to know if there is any way to distinguish Save from Save-As (so that I don't need to pop up the settings form again for a plain Save). the save-settings box currently looks very unprofessional IMO.
     probably needed: code cleanups, a more sensible project name, ...
     tested: earlier, the lossless mode compressed a little worse than JPEG-XR, but it at least generally produces smaller images than naive PNG. for the images I have tested, optimized PNGs seem to compress smaller than either my format or JPEG-XR for lossless coding. (well, yeah, I know, it all probably looks kind of pointless to people, but oh well...). (edited/corrected: something was written here which didn't make sense; spell-check botch?...).
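for reference, the usual way a 0..100 quality level gets turned into a quantization table is IJG-style scaling of a base table, roughly as below; this is a sketch of that common approach, not necessarily the exact algorithm the plugin now uses:

```c
/* Scale a 64-entry base quantization table by a 1..100 quality setting,
   patterned after the IJG libjpeg scaling rule. Lower quality -> bigger
   quantizer steps -> smaller files and more artifacts. */
void scale_quant_table(const int *base, int *out, int quality)
{
    if (quality < 1)   quality = 1;
    if (quality > 100) quality = 100;

    /* quality 50 leaves the base table as-is; below 50 scales up sharply */
    int scale = (quality < 50) ? 5000 / quality : 200 - 2 * quality;

    for (int i = 0; i < 64; i++) {
        int q = (base[i] * scale + 50) / 100;
        if (q < 1)   q = 1;    /* a 0 step would divide out all detail */
        if (q > 255) q = 255;  /* keep within an 8-bit table entry */
        out[i] = q;
    }
}
```

the earlier low-quality artifacts plausibly come down to exactly this kind of scaling getting too aggressive at the low end, which is why the table-generation algorithm is worth tweaking.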
  8. status update. changes: basically, it now handles layers (it can save/restore images with multiple layers, in which case the base image stores a down-rendered version); flipped the images (the prior version was apparently storing the images upside-down).
     issues: doesn't seem to work correctly on some images; loading/saving images with lots of layers is slow; doesn't currently remember layer settings (kind of annoying); some issues with poor image quality and artifacts.
     hopeful: add a UI option for saving images losslessly (and choosing options); have it remember layer settings (visibility, blending, ...); something to crop away unused parts of the image. I will have to look around some, as I am not currently sure how the save-options UIs are typically implemented.
  9. yes, this looks helpful; I am working on this now... admittedly, I am not entirely sure how much point other people will see in a JPEG variant with alpha and layers (and optional lossless encoding), but it is probably at least useful for the sorts of things I am doing.
     also, I am probably going to make "::" special in the layer names, partly because the format has two levels of layers: "tag layers" and "component layers". a tag-layer is a conceptual layer (a typical graphics-editing layer), whereas a component-layer is more related to extended component graphics, mostly for things like normal-maps and luma-maps (mostly related to 3D rendering). partly this choice is because PDN seems to allow it (and it probably isn't too likely to be run into by accident). for example, "My Layer::Norm" means "normal-map for My Layer". a layer without any component-layer indication will be interpreted as an RGBA tag-layer (no suffix will be used for the RGBA components, and the RGBA layer will essentially be required, probably with the list of RGBA layers being used to know which tag-layers exist).
     I am thinking about whether the bump-map would be better put in its own layer in the editor (for editing, it makes more sense to have the bump-map as its own layer). note: this is for a domain-specific format, after all...
     not sure if there is any way to indicate additional information for layers or images (basically, textual globs). I will assume probably not (no big deal here).
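to illustrate the "::" convention, a quick sketch in C (the real plugin code is C#, and the function name and buffer handling here are just my assumptions):

```c
#include <stdio.h>
#include <string.h>

/* Split a PDN layer name like "My Layer::Norm" into a tag-layer name and a
   component-layer suffix. Returns 1 if a "::" suffix was present, 0 for a
   plain RGBA tag-layer (empty component). */
int split_layer_name(const char *name, char *tag, size_t tagsz,
                     char *comp, size_t compsz)
{
    const char *sep = strstr(name, "::");
    if (!sep) {
        /* no suffix: interpret as an RGBA tag-layer */
        snprintf(tag, tagsz, "%s", name);
        snprintf(comp, compsz, "%s", "");
        return 0;
    }
    size_t n = (size_t)(sep - name);
    if (n >= tagsz) n = tagsz - 1;   /* truncate overly long tag names */
    memcpy(tag, name, n);
    tag[n] = '\0';
    snprintf(comp, compsz, "%s", sep + 2);  /* text after the "::" */
    return 1;
}
```

so "My Layer::Norm" splits into tag "My Layer" and component "Norm", while "My Layer" alone is just the RGBA tag-layer.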
  10. fair enough, but then again, I am a lone developer as well, and wrote most of my own codecs (for my 3D / game-engine project). then again, I am mostly developing off in C land, where it is much more common for people to write their own code (for pretty much everything...). I basically noticed that .NET provided built-in image codecs while throwing together a plugin for "JPEG + alpha channels" (mentioned here: ), and made the observation: PNG support is provided by .NET and has the same characteristic not-very-good-ness, leading to the prediction that PDN is probably using it. the mystery then is why MS made such a not-very-good PNG codec... (maybe squeezing another 20-40% off the file sizes just wasn't really a big priority?...). but, alas...
  11. EDIT: newer version, as of 2012-12-01:
      original post: an experimental plugin has been written and is available here: it comes with source code as well... (I will for now give permission for people to "do whatever with it"). the original codec code was written in plain C; I just sort of grabbed a big chunk of it and crudely reworked it into C# (I tried at first using C++/CLI, but couldn't get Paint.NET to recognize it). this was more quick-and-dirty, so the code admittedly kind of sucks. it was mostly done between last night and earlier today (as of 2012-11-11).
      basically, it is a currently very limited subset of the extensions described here: http://cr88192.dyndn.../index.php/JPEG the format support, as available, is basically just JPEG but with alpha-channel support (as an additional embedded mono JPEG layer). if/when I figure out how layers work in PDN, I could possibly also add features for supporting the extended layers. currently all options are hard-coded; if/when a GUI is added, I may also add more options (the ability to choose the lossy quality level, and/or to enable lossless encoding). the main thing it can do is have alpha channels while typically being smaller than PNG (this depends somewhat on the image though); in my case it is mostly a format intended for things like texture-maps and animated textures.
      edit/add (clarification): the format is still basically compatible with existing JPEG decoders; they just won't see any of the additional layers. there is a minor issue related to file extensions though: if ".jpg" is used, PDN seems to use the default decoder regardless of which decoder is selected in the open box, so ".btj" is used instead.
      second note: an idea here is also that the base image can be used essentially as a "rendered down" version of the rest of the image contents (a multi-layer image can show up as a flattened version of itself if fed into a standard decoder), whereas a simpler single-layer image would just use the base image as its basic RGB layer. likewise, being an extension to the existing format, preexisting images are still valid. also, the format supports a lossless encoding, which is also "basically mostly compatible" with existing decoders. it is based on the SERMS RDCT and RCT, and basically follows this paper: http://www.eecs.qmul...CSP04-iJPEG.pdf
      questions / comments?...
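for anyone curious what the RCT part looks like: it is the JPEG-2000-style reversible color transform, which round-trips RGB exactly in integer math. a minimal standalone sketch (not the plugin's actual source):

```c
/* Forward RCT: RGB -> (Y, Cb, Cr) using only integer ops.
   Y approximates luma; Cb/Cr are simple color differences. */
void rct_forward(int r, int g, int b, int *y, int *cb, int *cr)
{
    *y  = (r + 2 * g + b) >> 2;  /* floor((R + 2G + B) / 4) */
    *cb = b - g;
    *cr = r - g;
}

/* Inverse RCT: recovers the exact original RGB values.
   Note: relies on >> being an arithmetic (floor) shift for negative
   values, which holds on all mainstream compilers. */
void rct_inverse(int y, int cb, int cr, int *r, int *g, int *b)
{
    *g = y - ((cb + cr) >> 2);
    *r = cr + *g;
    *b = cb + *g;
}
```

the trick is that the floor-division error in Y is exactly reproduced by the floor in the inverse, so G (and hence R and B) come back bit-exact, which is what makes a lossless mode possible on top of an otherwise JPEG-like pipeline.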
  12. based on further investigation, I suspect MS may be to blame for the lackluster PNG compression... (with PDN using the .NET-provided "System.Drawing" APIs, rather than implementing the PNG-saving code itself?...). if so, fair enough...
  13. an experimental PDN plugin has been written and is available here: it comes with source code as well... (I will for now give permission for people to "do whatever with it"). basically, after a failed attempt to write a plugin using C++/CLI (I couldn't get Paint.NET to recognize it), I ended up doing a hack-job rewrite of a lot of the encoder/decoder code into C# (the code was originally written in C, and this was more of a quick-and-dirty rewrite, so never mind if it isn't exactly "good style"). currently, a lot of stuff doesn't work; getting the rest working would require figuring out how to load/save layers in PDN. I make no claims about if/how well it works, only that it seems to be working on my end... things like the quality level are also currently hard-coded (I would need to make some sort of GUI for choosing options).
  14. I am using it already... the issue though is: why should we need a plugin for something which should presumably be handled (at least moderately well) by the default encoder?... like, it doesn't need to produce the smallest possible files, but it ideally shouldn't be spitting out files around 2x larger than what other comparable (not-very-fancy) encoders produce. this implies there is actually a problem in the encoder somewhere (or the use of a low compression-level setting for deflate...).
  15. I am not sure how many have noticed (or if this issue has already been addressed), but the default PNG compression in Paint.NET is pretty bad: both my own PNG encoder and GIMP manage to produce PNG files often around 1/2 the size. (my encoder is nothing really fancy either: it usually just defaults to Paeth filtering, and its deflate encoder uses greedy search, so nothing fancy there either). I am not sure of the reason, but can whatever the issue is be addressed?... yes, I know about the OptiPNG plugin, but that isn't really the issue; rather, there seems to be a problem with the default encoder, as there is no good reason I know of why the output should be this large. if it is something simple, like turning up the compression level for the deflater or similar, well then, this is the request.
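for reference, the Paeth filter mentioned above is the one from the PNG spec: each byte is predicted from its left, up, and up-left neighbors, picking whichever neighbor is closest to left + up - upleft. a quick sketch:

```c
#include <stdlib.h>

/* Paeth predictor per the PNG specification (filter type 4).
   left   = byte to the left of the current one
   up     = byte directly above
   upleft = byte above-and-left */
int paeth_predict(int left, int up, int upleft)
{
    int p  = left + up - upleft;  /* initial gradient estimate */
    int pa = abs(p - left);
    int pb = abs(p - up);
    int pc = abs(p - upleft);
    if (pa <= pb && pa <= pc) return left;   /* ties prefer left, then up */
    if (pb <= pc) return up;
    return upleft;
}
```

the encoder then stores the difference between each byte and this prediction, and deflate compresses the residuals; since even this simple default beats PDN's output by a wide margin, the default encoder really should be able to do better.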