Everything posted by verdy_p

  1. "You might want to look at the code a little closer. I do test the firstTime flag both inside and outside the critical section. The outside test is simply for efficiency. As far as the claim it must be static, I'm not so sure that's true. While I only have a single core system, so I can't test it myself, I believe non-static variables defined at the class level are properly shared between different cores."

Non-static member variables will be shared ONLY if the render class is not instantiated several times. Nothing in PaintDotNet guarantees that there will not be separate instances of this class, one for each worker thread (even if the flag may be reset in the OnSetRenderInfo event handler). We have no clear information about how many renderer class instances will be allocated to process our effects, notably on multicore CPUs or on multi-CPU systems (including distributed systems that may run on several hosts). We lack a specification for getting a strong unique reference to the whole task processing a single source image. We have no such reference to create per-task variables; the render object only has members for isolated worker threads, and there's currently no mechanism to make a task-unique object globally visible (including through proxies to synchronized remote instances).

Given that .Net is not made to run only on a single host (the virtualized environment is independent of the host itself, which may be multiple), we are in a gray area here. We can just suppose that PaintDotNet DLLs will not be used in multi-hosted VMs, but this may simply be wrong if PaintDotNet itself runs in a virtual OS: a .Net VM in a guest OS VM running on a multi-host application server using MS VirtualPC, MS Virtual Server, Sun VirtualBox, Citrix, or other OS virtualization software. The host OSes supporting the guest OS where PaintDotNet is started may even be something other than Windows (Linux, MacOSX, Solaris, BSD), possibly over a heterogeneous network, working together to offer a multi-CPU and multicore virtual environment with virtual shared memory and various levels of memory barriers for read/write ordering consistency. A thread may even start running on one host, then be yielded to continue on another: the VM will manage the consistent transport of the thread's current state across several guest OSes (possibly virtualized themselves), guest OS processes and threads, .Net VM processes and threads, and even across lightweight threads like fibers and Win7 tasks, provided the code does not use "unsafe" bindings to native memory or threads/processes.

As long as there's no such guarantee, all that can be done is to use a static class member variable and synchronize it within the safe (virtual) .Net memory model (things will be even more complicated if you bind your code to native memory or binary code, which will prohibit the .Net environment from safely moving a .Net thread to another CPU or guest OS). Clearly, the worker threads that run our render function cannot state anything about which task they are part of; all we can track are the references of the source and destination images, something that does not really and correctly identify a distinct task made of several threads for separate ROIs.

The fact that PaintDotNet was designed to hide such linkage/tracking info from our render routines is, in my opinion, an API DESIGN ERROR: we should have a parameter in the Render() routine pointing to the object containing the description of the task effectively shared by the multiple render threads, and a way to control its lifetime independently of the lifetime of the worker threads (OnSetRenderInfo is not enough, as it just tracks the initialization of several threads that are only supposed to run in the same .Net process, and even that is not guaranteed). Finally, note that the main Render() routine has the vocation of being recompiled at run time to run on a GPU core instead of the CPU, because it would be much faster there and could benefit from more massive parallelization (typical GPUs today include about 32 parallel processing units, usable for example from DirectX, Direct3D, OpenCL, OpenGL, nVidia's PhysX, and why not automatically from virtual .Net environments as well). This could allow our render DLLs to perform real-time effects, including for animations or for video editing (if PaintDotNet integrates support for multi-framed images and frame numbering by synchronous timestamps or by frame counters, all in the same media format, either in a file or in a real-time stream).
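For illustration, here is a minimal sketch of the only pattern I consider safe today: keep the one-time computation in static fields guarded by a static lock object, so it does not matter how many instances of the effect class the host decides to create. The class, member and helper names below are placeholders, not part of the real PaintDotNet API.

// Minimal sketch (all names are placeholders): the shared state is static, so it
// does not matter how many renderer instances PaintDotNet creates.
using PaintDotNet;

public class MyEffectRenderer
{
    private static readonly object sharedLock = new object();
    private static bool averageComputed;       // shared by every instance and thread
    private static ColorBgra averageColor;

    private static ColorBgra ComputeAverage(Surface src)
    {
        // placeholder: sum every pixel once and divide by the pixel count (see post 4)
        return ColorBgra.FromBgra(0, 0, 0, 255);
    }

    protected void EnsureAverage(Surface src)
    {
        if (!averageComputed)                  // cheap test outside the lock
            lock (sharedLock)
            {
                if (!averageComputed)          // re-test inside the critical section
                {
                    averageColor = ComputeAverage(src);
                    averageComputed = true;
                }
            }
    }
}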
  2. In fact, what you are trying to compute is the centroid of the vertices of a polygon on the unit circle, by first converting its coordinates from polar to cartesian, and then converting this center back to polar to get the angle. I think you can be much faster, because the actual hue value in HSV is not a true angle but a position measured along the perimeter of a hexagonal color wheel. So instead of using a circle, convert the angles to positions along the hexagonal perimeter: no trigonometry (sin, cos, atan) is needed for that, just linear algebra and range tests.

Why a hexagon? Look at the color cube as it is effectively distorted slightly by the CIE color model, then rotate it as if you were hanging it from a thin thread attached to the white corner (the black corner lying vertically at the lowest altitude): the lightness is the altitude of points in this distorted cube. Rotate this distorted cube around the thread while looking at it horizontally in the plane passing through its center: the saturated colors will all pass in front of your eyes as the points nearest to you. These points are effectively arranged along a hexagon in your horizontal plane. The rotation angle of the cube is what you generally call the "hue" angle (but it is NOT the very approximate "angle" used in HSV, which is really a series of tangent values for distinct ranges of the effective angles). But you absolutely don't need to compute angles in order to project (vertically) any color point of the distorted cube onto your horizontal plane: you can use the projected positions on that plane to compute a mean point in this plane, and its position can be used to compute any color in the color cube that has the same hue: this is just a half-plane delimited by the thread and passing through this mean point (you cannot define this half-plane if the mean point is on the thread: it is simply grey, and it can have any arbitrary hue value). All this computing does not require any trigonometry, just linear transforms to compute the means you want (and you don't even need to use several ranges). However, you may need non-linear transforms to effectively distort the color cube when taking the gamma value into account: this requires logarithms or exponentials, depending on the direction of the transform (between linear-energy RGB and the CIE-based perceptual lighting model).

The usual assumption that colors form a perfect circular wheel is wrong, given that it does not respect the effective gamut of the sRGB color model, which is more restricted. In fact it is really a sort of ellipsoid (when considering 3 color pigments) which is notably distorted by the exponential gamma factor (usual display devices or rendering surfaces use a 1.2 gamma value, not 1.0, and they are even more distorted by some electronic adjustments that create near-black values using a small linear segment instead). If you applied the same transform to the theoretical sRGB color cube, you would get a sort of potato shape, with the curved facets of a parallelepiped plus additional facets with different curvatures created by the linear adjustments near black, but still curved by the perceptual gamma. For this reason, I absolutely don't like the color circle and what it produces ("hue" pseudo-angles).

I much prefer the color hexagon, which is more exact but still much simpler to compute than the color circle: this is what is effectively used in the YUV (or the related YCbCr, based on another set of fundamental pigments) color models, which also use other white and black points within the color cube, to align the vertical grey thread on which the distorted cube (used by the rendering device) is hanging.

However, the exact behavior of your "mean hue" function should really weight the hue values according to their significance: colors along the grey line (from black to white, as they are defined in the chosen rotated color cube, depending on the CIE-based color model you choose) should be weighted 0, as their hue is not significant at all (so they can be safely ignored), and colors with the maximum saturation should be weighted 100%. One simple way of computing the weight of hue values is to first normalize S between 0%=0 and 100%=1, so that it can be used directly as the weight when computing the weighted average of hue values (it can be used in your method that remaps angles to cartesian positions on a circle, or in a faster method using the hexagon directly without computing angles). But you can use S values directly if they are already in a constant range (independent of the other color components), because anyway you'll not only have to sum the weighted coordinates of the hue points, but also the weights (S), before dividing the sum of weighted coordinates by the sum of weights. The only case where this weighted average will not return a point is when the sum of weights is null. This can only happen if all pixels in your sample are grey (with saturation zero; note that there can't be negative saturation, zero is its strict minimum), and even in this case the hue value can be arbitrary, given that the computed mean color will also be grey.

I intuitively think that you'll get more meaningful results than just computing an unweighted "mean hue" from only the H components: the number of black/grey/white points in your sample will have absolutely no influence on the computed mean hue, and the most saturated colors will play the most important role in this mean, even if there's only one pixel with a non-grey color and all other pixels are grey. One good test of the algorithm would be to blur an image with this HSV mean, computed over a spot with a small radius, on an image that displays for example an antialiased polygon filled with plain blue and drawn over a plain white or black background: you should not see any random colors along the border of this polygon (caused by antialiased pixels with very small saturation, or by the white or black background); the computed mean hue values should remain blue in all cases along the polygon border. Such a defect can happen very easily with your algorithm, which is also not numerically stable. Another test should be done with an antialiased thin line using near-grey colors, and with an antialiased black-filled polygon, both drawn on a plain white surface: here also, you should not see any visibly unstable colors along the line or polygon borders after applying the same blur function.
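To make the weighting concrete, here is a minimal sketch of a saturation-weighted mean hue. It uses the circle mapping (with trigonometry) only because it is shorter to write; the hexagonal mapping described above would avoid the trigonometric calls. Hue is assumed to be in degrees and saturation normalized to [0,1]; the method name and tuple type are only for this example.

// Sketch only: weighted circular mean of hues, each hue weighted by its saturation.
using System;
using System.Collections.Generic;

static class HueMath
{
    static double? WeightedMeanHue(IEnumerable<(double H, double S)> pixels)
    {
        double sumX = 0, sumY = 0, sumW = 0;
        foreach (var (h, s) in pixels)
        {
            double a = h * Math.PI / 180.0;
            sumX += s * Math.Cos(a);   // weight each hue point by its saturation
            sumY += s * Math.Sin(a);
            sumW += s;
        }
        if (sumW == 0) return null;    // all pixels are grey: the hue is undefined
        double mean = Math.Atan2(sumY / sumW, sumX / sumW) * 180.0 / Math.PI;
        return (mean + 360.0) % 360.0;
    }
}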
  3. That pseudo-code is incorrect: you MUST NOT test the "firstTime" variable only before getting the exclusive lock! If you test it outside the lock (to see whether you need to take it), you must test it again inside the critical section. The correct code is:

if (!myLock.averageComputed)
    lock (myLock)
    {
        if (!myLock.averageComputed)
        {
            // Compute the average color of the full source image
            myLock.averageColor = average(sourceSurface);
            myLock.averageComputed = true;
        }
    }
// draw in the destination ROI in destSurface using myLock.averageColor

Then you must make sure that there's only one "myLock" object. It must be declared as static, initialized in its constructor with the averageComputed field or property set to false. If you want your effect to be usable several times on distinct regions, you'll need a "reset" or "run now" button in the parameters dialog that, when pressed, simply resets this "myLock.averageComputed" flag to false, because static objects remain in their current state in memory as long as the effect DLL is loaded in PaintDotNet.
  4. Note that your intuitive average function is wrong, independently of the color space in which you want to compute it (RGB or HSV). Suppose that the color space was just a single grey scale and you are computing the average of 4 pixels; what you are trying to compute is: average(average(average(g[0,0], g[0,1]), g[1,0]), g[1,1]). What you get from such a formula is not the linear average of greys but a weighted average, where the last pixel takes half the total weight and the 3 previous ones together share the other half. In other words, the first two pixels will have weight 1, the third will have weight 2, and the last one weight 4.

On a much larger image, you'll see that the result depends mostly on the color of the bottom-right corner of the image: change *only* that bottom-right pixel to white or to black and you'll see that it has a *very* large effect on your final average. Now change only the top-left pixel between white and black, and it will have practically no effect on the computed average. Suppose for example that the 4 grey values are (in your preferred colorspace): 16, 32, 64, 128, in that order; the averages you get from them will be successively 16, 24, 44, and finally 86. But if the pixels are in the reverse order, you'll get successively 128, 96, 64, and finally 40. The results are very different because the pixels are not weighted identically. In fact, with this false method for computing averages, the more pixels you have, the bigger this difference becomes! When the computed average value must fit into an 8-bit integer, ONLY the last 8 pixels at the end of the bottom row will matter, and all the rest of the source image will be completely ignored: if the source image is a large image that is almost completely plain white, with only its last 8 pixels plain black, the "average" you'll get will be plain black! That is clearly not an average.

A true average needs to be done differently: you must sum the color components separately within their planes, and count the number of pixels whose components you take. You cannot use an intermediate average unless you keep its relative weight (which grows linearly when you scan the image from top-left to bottom-right, while the weight of newly added pixels does not grow but remains constant). So what you really want is:
- convert all pixels from RGB to HSV (or to any color space in which you want to compute the average);
- sum all the pixels you want to take, in separate planes, with one summing variable for each color component (sumH, sumS, sumV);
- finally divide each component by the number of pixels taken in the sum; you get the HSV color (sumH/count, sumS/count, sumV/count);
- ONLY then, convert that final HSV color to RGB.

But looking at the HSV<=>RGB conversion code you use, it looks like it is a completely linear transform, basically a rotation and skew within a 3D space: computing the average in a linearly transformed intermediate space will not be different from computing it directly in the original RGB space. The only way the HSV average would differ is if your transform were not fully linear: for example if you used a gamma transform, or if the HSV values were clipped to remain in a constrained space, using if() tests or min() or max(). But here, you don't seem to use any gamma transform (in fact, in true photography or TV colorspaces, the gamma function is used only within a subrange of the component values, and the bordering values, near the min and/or max of the range, are transformed by a linear segment instead, to avoid saturation effects). Even in that case you'll have some saturation, and the component will be clipped into the validity domain: this really happens in your case because you're in fact using a part of the YCbCr colorspace transform (typically used in videos on DVD and HDTV).

But your main problem is elsewhere: your code is not guaranteed to use a fully computed average before rendering a band. You cannot control the order in which the various threads will run: some will start running, then be delayed; another will start and finish filling its ROI using a not-yet-computed average; then the first thread will come back, finish computing the average and use it. In fact you'll see that some threads use the computed "average" and some don't, using only partially computed averages stored in your global class. Your example is the perfect illustration of why Render() routines MUST NOT depend on results computed by other concurrent threads: each call to Render() MUST be able to compute its ROI completely from only the parameters it receives (the full source image and the custom plugin parameters), it MUST only draw within the dest rectangle, and nothing it performs may have any impact on the runs in concurrent threads. Your plugin is the perfect example of why we need multipass effects, because otherwise each thread has to perform the same computation several times from the same source image: each one MUST compute the full-image average before writing the result only into its rectangle of interest of the destination image.

So the first solution given to your problem is the good one (except that it does not use the colorspace transform and uses simple blending, with unnecessarily unsafe code just to try to make it faster: but it effectively computes the same final average/blended color by computing it several times, i.e. once over the full image within each thread). One way to go is then for each thread to try taking a lock in order to decide which one will be the main thread in charge of computing the average. The others will wait for that exclusive lock, and when they get it, they will see that the average has already been computed, take it, and release the lock (allowing other threads to do the same and get the same average value). Then all threads will draw into the ROI of the destination image using the average they got from their critical section. What you'll see: nothing from any thread, as long as the first thread that took the lock has not finished computing the average. When that thread has finished, it must store the average and set an "already computed" flag to inform the others that they don't need to compute it (when they get the lock), and only then release the lock and start drawing concurrently with the other threads, which will also use the same computed value. You should look at the other forum thread that discusses how to create multipass effects, because this is exactly what you need.

So you have at least three things to debug:
- correctly managing the multiple passes and avoiding unnecessary recomputation of the same average;
- computing the average correctly (which you don't, as you combine pixels with very different weights, producing only weighted averages in which the bottom-right pixels count as much as ALL the rest of the image);
- correcting your colorspace transform.

In a simple first approach, you should start by not trying to compute the average only once: yes, it will be slower (about 50 times slower for the average-color computation, but not that long overall, because the other part of the computing time is spent filling the destination ROIs, which you won't perform several times), but this is the first thing to correct. Then correct your colorspace transform (use value-range clipping, or gamma correction, i.e. exponential<=>logarithm, including the linear segments near the extreme values): RGB to YCbCr (or YUV), and YCbCr to RGB. The effective transforms are NOT just a simple linear multiplication of a vector by a matrix, because the YCbCr and YUV color models are not the result of a simple linear matrix transform (there are distinct ranges of values, and gamma correction changes things radically; but you don't need to use the HSV mapping, which just uses polar coordinates and needs slow trigonometric functions that don't always respect our human perception of colors). Then write the mutex code to prevent multiple concurrent threads from computing the same FULL-image average.
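To illustrate the "sum, then divide once" method, here is a minimal sketch of an unweighted average over a whole surface, assuming the usual PaintDotNet.Surface indexer and the ColorBgra type (the averaging is done directly on the RGBA channels here; the same structure applies to any color space):

// Sketch only: every pixel carries the same weight; sum the channels, divide once.
using PaintDotNet;

static ColorBgra AverageColor(Surface src)
{
    long sumB = 0, sumG = 0, sumR = 0, sumA = 0;
    long count = (long)src.Width * src.Height;
    for (int y = 0; y < src.Height; y++)
        for (int x = 0; x < src.Width; x++)
        {
            ColorBgra p = src[x, y];
            sumB += p.B; sumG += p.G; sumR += p.R; sumA += p.A;
        }
    return ColorBgra.FromBgra(
        (byte)(sumB / count), (byte)(sumG / count),
        (byte)(sumR / count), (byte)(sumA / count));
}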
  5. A minor thing to change immediately in CodeLab would be to have a "Pause" button, so that the code is not executed while we are editing it. Sometimes I type code and, because it runs immediately even though it is not complete, it can turn into an infinite loop. This has caused CodeLab, and in fact all of PaintDotNet, to become unresponsive, and I had to kill the process. For this reason, I generally don't edit code in CodeLab itself but in a safer external editor, and I copy/paste from the external editor into the CodeLab window. But this is irritating (note that when pasting code into CodeLab, there's a lengthy scrollbar operation running just to recolor the text; note also one minor bug in this coloring: it recolors keywords found in comments). One way to avoid the immediate execution is to start editing after leaving a syntax error, for example an "=" sign alone on the first line at the beginning of the script. To effectively run the script, just remove that equals sign.

I wonder if CodeLab could not simply monitor the content of a file (edited and saved from an external editor) and automatically refresh itself by loading it after it has been closed by the other application, without bothering to display and color it in its window. It would just perform the compilation and execution in this case: then, if the code immediately crashes or hangs in an infinite loop, nothing is lost if we kill PaintDotNet from the task manager. However, the status window should first display the correct line numbers in case of compilation errors (not the line numbers after the automatically generated and hidden lines, so that the errors can be easily found in the external editor), and we wouldn't even need the code frame in this case: the status frame could take the full height of the window below the menu and the CodeLab window could be reduced in height; or the code window could be replaced by the automatically generated parameter controls if there's no compilation error.
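Just to sketch the file-monitoring idea: FileSystemWatcher is standard .NET, but RecompileAndRun() is a hypothetical CodeLab hook and the path/filename are arbitrary examples; nothing here is existing CodeLab behavior.

// Sketch only, under the assumptions stated above.
using System;
using System.IO;

class ScriptWatcher
{
    static void RecompileAndRun(string source) { /* hypothetical CodeLab entry point */ }

    static void Main()
    {
        var watcher = new FileSystemWatcher(@"C:\Scripts", "MyEffect.cs")
        {
            NotifyFilter = NotifyFilters.LastWrite,
            EnableRaisingEvents = true
        };
        // Reload the script only after the external editor has written it.
        watcher.Changed += (sender, e) => RecompileAndRun(File.ReadAllText(e.FullPath));
        Console.ReadLine();   // keep watching until the user quits
    }
}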
  6. Possibly this could be something quite similar to the multimedia codecs API (audio/image/video producers and sinks, which you can also structure into a directed acyclic graph). Strictly speaking it is not fully acyclic, because there can be feedback streams; however, feedback is only possible between separate synchronisation points (generally a fixed number of time ticks or some number of frames), so within a single synchronisation point the directed graph is acyclic, and it is connected to the next processing loop after crossing the sync point, making some outputs become the inputs of the next step.
  7. Given that effect DLLs export their entry points, nothing should prohibit automating them in a custom script, as long as these DLLs can resolve their own entries in the PDN library (which also exports some public APIs that effect renderers can use to get other environment properties). I wish there were an effect description language, where you decompose an effect as a series of surfaces with given dimensions and relative positions (plus possibly some tiling mode for coordinates outside the color planes' width), and with a given number of color planes (not just the 32-bit BGRA color model). Then we would describe the renderers that we want to use: how many surfaces and which type of color model (how many color planes and which bit depth) they accept, as well as their list of parameters. The described scene would then combine a series of input surfaces into an ordered graph, taking some surfaces as input, positioning them relative to each other, and supplying the reference to the effect or combinator to apply. The output would be one or more surfaces. The graph would be directed and acyclic. Then the script would order the rendering and submit the tasks to each effect renderer, managing the various surfaces and splitting them according to CPU/GPU resources and a list of worker threads. The input and output ends of the acyclic graph could be left open to create a new composite effect, fully automated.

The script language could also provide some facilities to compute the parameters: dimensioning and positioning the images; aligning them according to some constraints; computing the parameters of some effects, or allowing them to be computed from elements located in some of the source surfaces or from general input parameters. It should allow describing these parameters, in order to be able to build a GUI for parameter dialogs. Some parameters should also be bindable by default to some objects in the PaintDotNet interface (such as the current selection region, the color chooser, the ordered list of loaded images, or a new image that PaintDotNet will allocate and prepare in its GUI). Some other facilities should be given, notably the possibility to bind some input or output surfaces to files named from string parameters, and the possibility to bind string parameters (such as filenames) to command-line parameters (just assign them a distinctive symbolic name) or to a file selector control ("Load image", "Save image as") if the string value of the named parameter is not specified.

Much later, probably, it should also be possible to assemble a series of images/surfaces into a video, an icon file, an animated GIF or PNG, or other file formats that contain ordered series of images, and to insert/embed other metadata where these formats need it, including possibly some synchronized audio streams. This would result in a great tool for assembling or processing videos, for example to assemble scrolling text or subtitles, insert logos on top of all images, apply color transformations or cleaning/resizing filters, or build composite mosaics of videos. And maybe the script language could be XML-based, for later inclusion in a web page (with a browser plugin). It could use other web standards as well, such as CSS and Javascript (including other languages that can work on the XML DOM and retrieve and post various resources from/to the web, such as some of the ML-style languages developed for working with 3D models, using the capabilities of the local GPU if available). The available effects would be specified with a URL, and could even be implemented as a web service. Maybe I'm dreaming too much here! This would be very far from the initial PaintDotNet application, but PaintDotNet would be one element in that universe of graphic services, for working on local resources; at least the best rendering effects should be standardized for use in various graphic tools (this will be possible only if we can build generic effects, renderers, image producers and image output feeds without linking them necessarily to some version of the PaintDotNet API). Ideally, the interface provided by effects should also be independent of the language or platform used to access them (so we could as well use some great effects available in other graphics apps supporting such an interface, such as PaintShop, The GIMP, video editing studios, or much simpler media file editors/correctors/animators...).

But the starting point would be to generalize the API for custom effects: Render(Surface[] in, Surface[] out, Rectangle rectScene, Rectangle ROI); and, conceptually: Describe(Properties[] in, Properties[] out), where the output Properties would be used to indicate the number of required parameters and their type (surfaces, strings, filenames/URLs), indicate the number of additional work surfaces and the number of output surfaces, and describe the composition tree, as well as a list of method entries implementing the Render() interface above that would need to be instantiated for use in worker threads, plus a list of rectangles in the described scene (passed as the rectScene parameter, which would indicate the valid coordinates within all the input surfaces), PaintDotNet doing the job of mapping these scene coordinates to the actual coordinates in the input images (or using some composition mode like tiling, mirroring, or a default color outside, according to the scene description). The custom renderer would then compute the ROI area within each output surface. Also, the surfaces should offer more color mapping modes than just the 32-bit BgraColor (PaintDotNet would itself perform the conversions needed to connect the output surface format of one effect to the input of another, using its own set of effects). No temporary surface would need to be passed: you would just need to be able to describe the effects DLL so that it describes several chained effects, including special effects that would just add some other empty surfaces, or drop one, in which case the Render() entry would perform absolutely nothing itself.
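To make that proposal concrete, here is a minimal sketch of what such a generalized interface could look like. None of this exists in the current PaintDotNet API; every type name other than Surface and Rectangle is a hypothetical placeholder.

// Sketch only, under the assumptions stated above.
using System.Drawing;
using PaintDotNet;

public interface IComposableEffect
{
    // Describe the inputs/outputs, parameters and composition tree so the
    // host can build a parameters dialog and schedule the worker threads.
    void Describe(EffectProperties[] inputs, EffectProperties[] outputs);

    // Render only the roi area of the output surfaces, reading anywhere in
    // rectScene from the input surfaces; never write outside roi.
    void Render(Surface[] inputs, Surface[] outputs,
                Rectangle rectScene, Rectangle roi);
}

// Hypothetical placeholder for the description objects mentioned above.
public sealed class EffectProperties { /* parameter and surface metadata */ }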
  8. For those who were annoyed at always seeing the overly long source code, I've hidden it by default; you now need to unroll it. This can help when browsing between messages. Anyway, I've also fixed minor details in the source, notably after publishing the fully inlined version of the fast line() routine, when I saw that the early version, despite being correct, did not use the usual rounding mode: when two candidate pixels are at equal distance from the theoretical line, the one chosen was not always the one with the highest coordinate, i.e. always the one at the bottom or the left here (this rounding rule is generally used in most 3D renderers, as it is also coherent with the binary rounding mode used with floating-point coordinates). The early code was still correct (lines were drawn with the same pixels, independently of the direction of drawing), but it used another rounding rule in one symmetric pair of octants.

Some 2D/3D engines may use another floating-point rounding mode, such as the IEEE-recommended mode (rounding half-integers to their nearest even integer); it is not used here because it would complicate the test to perform when e==0, but this rounding mode is typically used when rendering mathematical objects or statistics (as it better distributes and balances the quantization errors). If you use this fast renderer to build an antialiasing renderer, you won't notice any visible difference, as it will only have an extremely small impact on the overall quantization error, only within small differences of the alpha channel, and only at the very few positions in the final image where the theoretical diagonal lines pass exactly through the middle between quantized pixel centers. This IEEE rounding mode is generally not chosen for 2D images, as it shows some "granularity" (in 4x4-pixel square cells) on frequently used lines such as those with exact slope 1/2 or 2 (but this granularity is invisible when it is used to produce antialiased images). It may be used when the image is in fact the quantization of an analog signal: this rounding mode may slightly increase the signal/noise ratio (the effect is a bit less than 1 dB in the worst case) by allowing a more precise reconstruction of the analog signal's amplitude (it may be used, for example, to feed a JPEG/MPEG image compressor with fewer data losses, a slightly better compression rate, and a slightly higher dynamic range and precision of contrasts).
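For reference, here is a tiny illustration of the two rounding rules discussed above, using plain .NET rounding; this snippet is independent of the renderer itself.

// Round-half-up versus IEEE round-half-to-even on the midpoint values.
using System;

double[] halves = { 0.5, 1.5, 2.5, 3.5 };
foreach (double v in halves)
{
    double up   = Math.Floor(v + 0.5);                     // half rounded up:   1, 2, 3, 4
    double even = Math.Round(v, MidpointRounding.ToEven);  // half-to-even:      0, 2, 2, 4
    Console.WriteLine($"{v}: up={up}, even={even}");
}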
  9. First delete the font from the Fonts control folder to uninstall it. Make sure that you don't have a program (such as a web browser with the font in its preference settings, a word processor, or a word processor extension in your email agent) holding a lock on it: close the browser, the email agent, and any graphics application that includes a font selection menu. Make sure that this font is not used in your current theme (revert to a standard Windows theme to make sure it is not used by window captions or as a dialog font). Also exit your instant messenger completely: some of its themes may use various fonts. Close the programs running with an icon in the system tray: some of them use various fonts automatically, according only to their metrics or characteristics, or can use any installed font as a possible fallback. Some applications are very unfriendly and maintain a lock on ALL fonts they have seen once while enumerating the Windows fonts folder, for as long as they are running, even if they never needed them to render text. Make sure you exit those apps (generally these are the applications that maintain a permanent menu for selecting fonts, instead of generating the menu items on the fly only when needed and maintaining a lock only on the visible menu items; this is supposed to speed up the display of these menus, but actually it just makes the startup of those apps very long and has the very negative effect of using too many GDI or memory resources for as long as they are running, so the system will actually be slower: upgrade those apps, because Windows is smarter and can use a dynamic LRU cache for fonts).

If the font is required by Windows or by the display driver (because it is one of the core fonts needed by the shell or the default Windows theme, because it is one of the few default fonts used as fallbacks when a font does not contain some character, or because it is used by the shell, such as the special font containing some window button icons), you won't be able to delete it: these are normally only the few core fonts owned by Microsoft and preinstalled with your version of Windows (and updated only through Windows Update). Once you have successfully deleted the font, you may need to reboot to completely unlock it on some older versions of Windows (due to the GDI font cache): this is generally needed for most legacy bitmap fonts (in the deprecated .FON format).

Then install the new font. Avoid all fonts in the legacy .FON format (they are actually DLLs, which you may have difficulty removing from memory, due to the system's DLL cache, which keeps these DLLs open for possible later reuse unless you have tweaked your system with registry settings to automatically close DLLs that are no longer in use by any application). Some of these fonts include the fonts used in the DOS-compatible text console: close any console window if one is open; note that some hidden consoles may be used by background applications such as those running in your taskbar.
  10. I don't know what "OpenFont" fonts are; did you mean "OpenType" fonts? Do you mean that OpenType fonts do not work? And why? Is it because most of them embed restrictions that prevent Windows from enumerating their glyphs? Well, there do exist OpenType fonts without those restrictions. The only difference is that they can support many more glyph formats and can embed additional tables (such as "feature" tables enabling fine typography, or simply absolutely required to support complex scripts which need contextual ligatures and complex glyph positioning). Does this mean that PaintDotNet cannot use these ligature tables, and thus cannot access their glyphs if they are not directly mapped to Unicode codepoints? Or does it mean that PaintDotNet will not use fonts that have more than 256 glyphs mapped without reference to Unicode? Or does it mean that PaintDotNet does not know how to process and render glyphs contained in OpenType fonts that use Postscript-style Bezier splines, and that it will only accept curves described with TrueType-style Bezier splines?

Note: if you can process cubic splines, you can immediately process quadratic splines, because quadratic splines are a subset of cubic splines:
* the conversion from quadratic to cubic is exact; the extra control points needed for the cubic geometry can be inferred automatically, so you don't need to convert these fonts;
* the reverse conversion requires an approximation, by splitting some curves with heavy curvature into more control points and then simplifying pairs of control points into a single one: glyphs using only quadratic splines require computing the geometry of more vertices and control points, and the converted font files will be larger;
* for this reason, most modern font files are now made with cubics instead of quadratics;
* some more recent font formats (used in some font design tools) use NURBS, which give more consistent results and are much easier to position precisely and fit to the wanted metrics when designing the glyph shapes: NURBS are later approximated to cubics for building compatible fonts.

Very basic OpenType fonts can also contain bitmap glyphs: as they look very bad except at a single resolution, this is just used for compatibility, by converting automatically some old and unsupported font formats; but they exist in Windows anyway, notably for Chinese and Japanese users, who can rapidly produce their own ideographic shapes in a basic bitmap editor and save them in a personal font directly usable with their IME and keyboard layout. Many old bitmap fonts have been converted automatically into approximate vector fonts (TrueType or OpenType). OpenType fonts may also contain hinting instructions: these hints, inserted in the glyph instructions, are generally not enumerated by Windows (their use in a font requires a licence to implement them in a renderer, because their definition and interpretation in a renderer is patented, notably the hinting instructions for ClearType), but with some fonts you cannot get consistent results for showing glyphs at small sizes with good-looking shapes (and typographical "color", i.e. consistent weights and metrics) if you ignore these hints.

I suppose that PaintDotNet will not process these hinting instructions, but that should not prohibit using these fonts to extract unhinted glyphs and build drawing shapes from them (for example to build Direct3D or OpenGL object templates, or simple 2D geometric transforms like rotated text in GDI+, which also ignores these hinting instructions when glyphs are rotated at arbitrary angles or when the text is scaled to sizes larger than what the OpenType renderer cache will keep in memory for prerendered glyphs). So can you be more specific about which font capabilities Paint.NET does not already support through direct calls to GDI+ or through the .Net graphics environment and libraries? Is it because of limitations in some version of the .Net CLR on some older versions of Windows?
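As a concrete illustration of the exact quadratic-to-cubic conversion (Bezier degree elevation) mentioned in the list above, here is a minimal sketch; PointF is the standard System.Drawing type and the helper name is only for this example.

// Sketch: the cubic control points are exact linear combinations of the quadratic ones.
using System.Drawing;

static PointF[] QuadraticToCubic(PointF q0, PointF q1, PointF q2)
{
    PointF c1 = new PointF(q0.X + 2f / 3f * (q1.X - q0.X), q0.Y + 2f / 3f * (q1.Y - q0.Y));
    PointF c2 = new PointF(q2.X + 2f / 3f * (q1.X - q2.X), q2.Y + 2f / 3f * (q1.Y - q2.Y));
    return new[] { q0, c1, c2, q2 };   // same curve, no approximation error
}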
  11. If you close this thread, I will not post an antialiasing version of this fast renderer (I have one, but it is tuned differently, for filling polycurves, including various splines — Bezier curves of second and third degree, and NURBS — and including non-convex areas or areas with holes using the even/odd filling rule, as in SVG; it also supports 3D with orthographic or perspective projections, and it can fill with bitmap brushes; however it still does not support MIPMAPs, and it is NOT a full 3D engine: it does not use z-buffers, does not compute hidden surfaces, and performs no animation, which is done elsewhere on top of the renderer, with much more complex code or with hardware vertex shaders computing the geometry). [Initially my code was written in Java (porting it to C# or J# for DotNet is trivial), because both Java2D/Java3D and GDI/GDI+/DirectDraw/Direct3D lack some graphic primitives (for common shapes), and both lack sufficient precision (or have too limited numeric ranges); they cause problems when rendering, for example, very complex vector graphics (such as high-precision SVG geographic maps, or CAD models), or mathematical functions (too many approximations produce completely wrong images).]

Antialiasing is another, unrelated problem: any non-antialiasing rendering engine can be used to create antialiased images. This is a basic transform, which can be optimized only to avoid unneeded work when the fill colors (or brushes) are plain; but when using complex brushes such as scaled bitmaps or computed shades, you have no choice but to fully render the image at a higher resolution, possibly working in small bands to reduce the memory used for the intermediate high-resolution image, and then rescale it to the final resolution (including the alpha plane). So you still need a very fast non-antialiasing renderer (as fast as possible) to render the intermediate image, which is MUCH bigger than your final display resolution (a good and precise antialiaser for imagery applications will use intermediate images that are 256 times bigger, i.e. 16 times in each dimension, for all color/alpha planes; popular antialiasers only support 4x4 or 8x8 subsampling, because they are too slow or too CPU-intensive to compute the intermediate image, exactly because their non-antialiasing engine is not fast enough, is underoptimized, or uses too many external memory resources).
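Here is a minimal sketch of the second half of that pipeline, the box-filter downscale of an n×n supersampled surface, assuming the usual PaintDotNet.Surface and ColorBgra types (for strict correctness with transparency, the colors should be premultiplied by alpha before averaging):

// Sketch only: hiRes is n times larger than loRes in each dimension.
using PaintDotNet;

static void Downsample(Surface hiRes, Surface loRes, int n)
{
    int samples = n * n;
    for (int y = 0; y < loRes.Height; y++)
        for (int x = 0; x < loRes.Width; x++)
        {
            int b = 0, g = 0, r = 0, a = 0;
            for (int sy = 0; sy < n; sy++)
                for (int sx = 0; sx < n; sx++)
                {
                    ColorBgra p = hiRes[x * n + sx, y * n + sy];
                    b += p.B; g += p.G; r += p.R; a += p.A;
                }
            loRes[x, y] = ColorBgra.FromBgra(
                (byte)(b / samples), (byte)(g / samples),
                (byte)(r / samples), (byte)(a / samples));
        }
}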
  12. This is not the subject of this thread. This is a basic tutorial explaining, with examples, what can be done when creating an extension. By itself it is not an extension, as it contains no parameter controls. This is code to play with, which you can compile immediately yourself using the CodeLab plugin (which also does nothing by itself: you have to write your code). Almost nothing in this forum is a ready-made plugin: you're in the wrong forum for that. If you don't know what CodeLab is or what the PaintDotNet API is, look elsewhere. There's absolutely nothing complicated here in understanding what it is about.

And yes, I need "superfast" renderers (and you need them too when you have to build complex images using many filters that combine with each other). But once again, restart from the initial message: it is not really about performance, but about "rectangles of interest" and why it is important not to draw outside of them. Many bugs in effects are caused by a lack of understanding of why they are important; the post shows what they are, then gives code that works correctly as expected, but that will not work with some minor modifications that are explained but that you could incorrectly assume to be safe. Then it proposes a basic (but generic) Renderer class that can be used to build shapes (using self-explanatory methods similar to the terminology used in SVG: you certainly know what SVG is, but you can't do anything with it without "programming" the shapes you want into an XML script, so you'll write the SVG code manually, or use a visual editor to build complex things like geographic maps). But speed is not the only subject of this class: you also need precision in the result. This code is precise up to the half pixel and tested in all the particular cases (within the range limits of 32-bit integers, much more than what you need in an actual image). It is followed by an example which is very simple but can be used to test it fully.

All this is supposed to be compiled and experimented with immediately from the CodeLab plugin, where the effect runs instantly with the modifications you make in its editor. You don't need a separate DLL, and you don't even need a complete development kit. CodeLab is enough to compile this into a working DLL (but you also don't need to build it as a permanent DLL: this is just code to play with). (And I don't like distributing executable DLLs, which are potentially unsafe; I prefer sources, for security, unless the author is clearly identifiable and does not want to be known as a malware author.) In fact almost all free effects should be distributed as CodeLab source scripts, and can be posted here as sources instead of DLLs hosted on random sites: I won't download and use most of these discussed (possibly buggy and malicious) DLLs.
  13. Wrong! This code does not use the FPU; everything is integer-only, the multiplications and divisions are between integers and return integers only, and the .Net compiler already optimizes integer divisions and multiplications by constant powers of 2 into binary shifts. You won't gain a single CPU cycle here. So I used simple multiplications just to avoid obscuring the code in this basic tutorial, when there's absolutely no benefit here. This is .Net code, not a non-optimizing basic C compiler.
  14. Neither I nor BoltBait used any of those words. So "shut up" yourself too (why did you need to insult both of us?). I just wanted to show that there's room for improvement in the sample code posted by BoltBait, and he admits that. I have definitely not bashed him. (And in fact such code has another use in my own implementation, which computes many more complex geometries with really lots of lines and curves in the same scene.) In the background, at the lowest implementation level where most of the CPU time is spent in an inner loop, you need such extra optimization (which will also be beneficial when implementing antialiasing on top of the basic renderer: this is why it is written in a separate class, for easier reusability), but of course you won't need it to draw a simple shape. He says that the code will not be useful without antialiasing. But antialiasing renderers are always built on top of a non-antialiasing renderer, which does not necessarily need any modification itself, because you can build another class using it. So the non-antialiasing renderer should be really fast in order to build other bricks on top of it. And antialiasing was not the subject of this thread. The code was posted in order to demonstrate something else, with enough facilities to allow easy modifications.
  15. "I agree that your line drawing code is much better than my thin line drawing code (MUCH faster). But, my point is... there is not much call to draw aliased anything (lines or text). So, any useful code must include antialiased drawing. It doesn't matter how fast your code is, if no one's using it."

Maybe I'll post a modified version (simplified a lot) to render antialiased thin lines (using a wide surface for each 1-pixel-high scan line), but I definitely cannot post here the complete code which computes the geometry of filled polygons, or which optimizes the rendering time by subsampling only the pixels that really need to be subsampled.
  16. "I'd rather have good looking slow lines than ugly fast lines."

Lines computed with the Bresenham algorithm are definitely NOT ugly (and even in your case, they are much better than your really ugly "thin line" implementation). They are perfect up to half-pixel precision (and even below that if you use subpixel sampling for antialiasing). I did not post the antialiasing code here, because computing it really fast requires much more advanced techniques than just needlessly computing a larger image and downsampling it afterwards (which would require lots of memory and would be really slow).

Yes, but changing the width of the line (possibly with subpixel widths if antialiasing is enabled) requires another algorithm, to fill polygons (whose geometry must first be computed). You currently use brushes to make thick lines, and using brushes is also really slow, as your plugin demonstrates. You can use the Bresenham algorithm to compute the perfect edges of polygons (in fact you'll have to compute and maintain a list of trapezoids, and detect when a current edge needs to be terminated and when new edges need to be added to the active list of trapezoids). The most difficult part is computing the line caps when they are rounded: making them fit perfectly requires complex maths and lots of tuning for the optimization. The simplest case is square line caps, then the next simplest is zero miters, then arbitrary miters (with an additional requirement: you should respect the miter limit, i.e. their maximum length from the vertex position, relative to the line width). More complex cases also occur with dashed lines, which you must subdivide into series of rectangles, and for which the miters have to be dashed as well, creating non-convex polygons. All this is required if you want to fully implement SVG graphics (supporting dashed lines or rounded caps is not required by the simplest SVG rendering profile). Finally, you'll want to support fill brushes with patterns stored in external images (and make sure that they are synchronized): this is a lot of complex code to write, and without GPU support it will be really slow unless you also use the Bresenham algorithm for subsampling or supersampling, if you want to avoid floating point and still maintain perfect pixel accuracy or perfect subpixel alpha values. That could not be posted in this forum.

But using GDI+ to draw thick lines is really slow, and your implementation for thin solid lines without antialiasing is really ugly and slow too. In addition, it does not work as expected when your polyline color uses alpha transparency (the code above too: it does not really apply the alpha transparency and simply overwrites the pixels, ignoring their previous content; but if it did compute the transparency, some pixels would be covered multiple times, and the polyline could end up with variable transparency, depending on how many edges pass through the same pixel). A full implementation that draws lines (including thin lines) correctly requires a polygon filler that computes the edge geometry, instead of just relying on the central position of the vertices. That's something I could not post here (it would be much too long and would no longer be a tutorial).
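For what "applying the alpha transparency" means for a single pixel, here is a minimal source-over blending sketch, assuming non-premultiplied ColorBgra values and an opaque destination for simplicity (the general case must also divide the color channels by the resulting alpha):

// Sketch only, under the assumptions stated above.
using PaintDotNet;

static ColorBgra BlendOver(ColorBgra dst, ColorBgra src)
{
    int a = src.A;
    int outB = (src.B * a + dst.B * (255 - a)) / 255;
    int outG = (src.G * a + dst.G * (255 - a)) / 255;
    int outR = (src.R * a + dst.R * (255 - a)) / 255;
    int outA = a + dst.A * (255 - a) / 255;   // stays 255 when dst is opaque
    return ColorBgra.FromBgra((byte)outB, (byte)outG, (byte)outR, (byte)outA);
}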
  17. This example of how to draw stars or regular thin polygons is extremely buggy (it does not respect the rectangles of interest and can produce banding effects due to multithreading) and is much slower than what you get with the Renderer class I gave above. Even the author admits these limitations, and he could rewrite it much more simply with this basic Renderer class. Also, the code that follows to draw a line is really slow, as it uses floating-point instructions (and it does not render lines the same way depending on direction or precision limits, and it also draws too many pixels, so some diagonal lines will have variable visual width). The Bresenham algorithm, for drawing any polynomial curve of small finite degree, has been known since at least the 1960s (and seeing someone use a floating-point loop based on the line's generic parametric equations {x=x(t), y=y(t)}, which can be fully solved to eliminate the t variable completely, is extremely poor).
  18. Note that using external clipping regions in the current graphics object will make your renderer much slower. And it does not work when writing directly to the surface using the indexed array syntax. (Also, I've found that the antialias feature of System.Drawing.Drawing2D gives unreliable/unpredictable results which vary across versions and have various precision limits, even though it may get hardware acceleration from the GPU if you have the correct display drivers, i.e. those which do not have their own limitations.) The number of bugs explodes when using Direct3D, because the display drivers incorrectly compile your 2D/3D code into GPU instructions: the precision requirements are not always honored, and even those minimum requirements are not sufficient for some effects (so an effect will work on some systems and not on others). The most serious bugs found in hardware acceleration performed by display drivers are often in the geometry: in the precise positioning of pixels, in the computed alpha values for antialiasing, and in the computation of splines and arcs (this can give spurious pixels on borders when polygons that are supposed to join perfectly have their edges not perfectly matched, or redrawn several times with alpha transparency enabled). Antialiasing for text is also often inconsistent, as it depends on the user's ClearType settings. For imagery applications this is not much fun, and it can prohibit computing images on multiple heterogeneous systems unless they use the same version of Windows, DirectX, and display drivers, and the same GPU hardware.
  19. I am not sure that this had to go here, because it is basically part of the tutorials and makes use of an existing plugin (CodeLab), which is also described in the original forum, with other simple tutorials, where I posted my message (which shows another aspect that will be useful even for the most basic custom effects built with CodeLab). I did not use the beginners' tutorial forum, however, and this was mostly related to creation (according to the forum description, which was about how to create some objects or effects). What do you think of the suggested improvements for PaintDotNet or CodeLab? And why are even the smallest rectangles banded across multiple threads, even on single-core, single-CPU systems? Is it to make sure that effect testers will effectively use the bounding rectangle correctly and see what bad effects can be produced if it is ignored? Are there projects to include GPU support by converting effect renderers into hardware GPU pixel shaders? Would that require a new interface for CodeLab and others?
  20. I did not want to show complete code, only a demonstration of what can happen when programming some effects while ignoring the clipping rectangle (third parameter of the Render() routine). This is code to play with, because I've noticed that various effects posted in these forums forget to test it, and are either slow or show unexpected "banding" effects. This also includes some of the basic examples shown in the CodeLab documentation, which are needlessly slow (and also because I wonder if PaintDotNet could be improved to recompile effects into hardware GPU pixel shaders, or if CodeLab could automatically assert that concurrent threads cannot read from or write to surface areas controlled by the others).
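For readers wondering what "respecting the clipping rectangle" looks like in practice, here is a minimal CodeLab-style skeleton (the per-pixel computation is left as a placeholder): it may read anywhere in the source, but it only ever writes inside rect.

// Sketch of a well-behaved CodeLab Render(): write only inside rect.
void Render(Surface dst, Surface src, Rectangle rect)
{
    for (int y = rect.Top; y < rect.Bottom; y++)
    {
        for (int x = rect.Left; x < rect.Right; x++)
        {
            ColorBgra p = src[x, y];
            // ... per-pixel effect computation goes here ...
            dst[x, y] = p;   // never write outside rect: other threads own those bands
        }
    }
}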
  21. This is for example how you can optimize the line() primitive in the Renderer class, avoiding the test of each individual pixel to see whether it is in the drawing rectangle. In this version, all the modifications of the destination image and the clipping tests are within rectFill(), which is also used directly for drawing horizontal and vertical segments. For simplicity and brevity, the calls to rectFill() are not inlined here, and you may experience worse performance than with the previous version, mostly because of the overhead of method calls, but also because rectFill() still needs to read some class members which do not change while drawing lines, and which could be cached in local variables (the reference to the dest surface and the 4 coordinates of the drawRect). A true optimization would have to inline these 19 calls to rectFill(), or at least the 8 calls within each for() loop used to draw diagonal segments in one of the 8 octants.

Hidden Content:

public void line(PaintDotNet.ColorBgra c, int x, int y, int x2, int y2)
{
    int n, e, o;
    if ((x2 -= x) == 0)
        rectFill(c, x, y, x + 1, y2);
    else if (x2 > 0)
        if ((y2 -= y) == 0)
            rectFill(c, x, y, x + x2, y + 1);
        else if (y2 > 0)
            if (x2 >= y2)
            {
                for (o = x, y2 *= 2, x2 = (n = e = x2) * 2; n > 0; --n)
                {
                    x++;
                    if ((e -= y2) <= 0) { e += x2; rectFill(c, o, y, o = x, ++y); }
                }
                rectFill(c, o, y, x, y + 1);
            }
            else
            {
                for (o = y, x2 *= 2, y2 = (n = e = y2) * 2; n > 0; --n)
                {
                    y++;
                    if ((e -= x2) <= 0) { e += y2; rectFill(c, x, o, ++x, o = y); }
                }
                rectFill(c, x, o, x + 1, y);
            }
        else if (x2 >= (y2 = -y2))
        {
            for (o = x, y2 *= 2, x2 = (n = e = x2) * 2; n > 0; --n)
            {
                x++;
                if ((e -= y2) < 0) { e += x2; rectFill(c, o, y, o = x, --y); }
            }
            rectFill(c, o, y, x, y - 1);
        }
        else
        {
            for (o = y, x2 *= 2, y2 = (n = e = y2) * 2; n > 0; --n)
            {
                y--;
                if ((e -= x2) <= 0) { e += y2; rectFill(c, x, o, ++x, o = y); }
            }
            rectFill(c, x, o, x + 1, y);
        }
    else if ((y2 -= y) == 0)
        rectFill(c, x, y, x + x2, y + 1);
    /* Note: the tests of sign of e MUST invert the condition where it's zero,
       otherwise some pixel positions will depend on the direction of drawing */
    else if (y2 > 0)
        if ((x2 = -x2) >= y2)
        {
            for (o = x, y2 *= 2, x2 = (n = e = x2) * 2; n > 0; --n)
            {
                x--;
                if ((e -= y2) <= 0) { e += x2; rectFill(c, o, y, o = x, ++y); }
            }
            rectFill(c, o, y, x, y + 1);
        }
        else
        {
            for (o = y, x2 *= 2, y2 = (n = e = y2) * 2; n > 0; --n)
            {
                y++;
                if ((e -= x2) < 0) { e += y2; rectFill(c, x, o, --x, o = y); }
            }
            rectFill(c, x, o, x - 1, y);
        }
    else if ((x2 = -x2) >= (y2 = -y2))
    {
        for (o = x, y2 *= 2, x2 = (n = e = x2) * 2; n > 0; --n)
        {
            x--;
            if ((e -= y2) < 0) { e += x2; rectFill(c, o, y, o = x, --y); }
        }
        rectFill(c, o, y, x, y - 1);
    }
    else
    {
        for (o = y, x2 *= 2, y2 = (n = e = y2) * 2; n > 0; --n)
        {
            y--;
            if ((e -= x2) < 0) { e += y2; rectFill(c, x, o, --x, o = y); }
        }
        rectFill(c, x, o, x - 1, y);
    }
}

Note that in the previous version, as well as this one, the last point of the segment is not plotted. When you use successive lineTo() calls in the main function using this class, that final pixel will be plotted when the next segment plots its first pixel. When the polyline is closed by closePath(), it does not need to plot the last pixel, because it closes automatically on the first pixel of the first drawn segment (so all pixels of the polygon are effectively plotted); you may also use an additional lineTo() to join the first vertex, but it is not needed because the position of the first vertex is remembered.
If your polyline is not closed, you may use endPath() instead of closePath() to terminate the path and plot the last pixel (if you want it). The polyline paths need to be open using moveTo() prior to any call to lineTo(), closePath(), or endPath(); otherwise all succeeding calls to lineTo() calls will be ignored up to next moveTo() call (this may be used to hide completely a polyline, by making the call to moveTo() conditional). A fully inlined version (for those that are interested) follows, which includes the full optimisation (it also includes other comments): Hidden Content: private void line(PaintDotNet.ColorBgra c, int x, int y, int x2, int y2) { int n, e, o; // Implementation note: we MUST NOT draw outside this bounding box // (otherwise random bands will be visible) int X1 = this.rectDraw.Left, X2 = this.rectDraw.Right, Y1 = this.rectDraw.Top, Y2 = this.rectDraw.Bottom; PaintDotNet.Surface s = this.dstSurf; // UL - U UR // \ 3 | 2 / // - 4 \ | / 1 // L ------0------ R // 5 / | \ 8 + // / 6 | 7 \ // DL D + DR // When processing diagonals, e is the double offset from theoretical // line minus one, measured along the normal to the main progress axis, // multiplied by the length of segment projected on the main axis. If // two pixel candidates are at equal distance from the theoretical line, // always use the one with the highest normal coordinate for rounding it // up (so if e is zero, according tooctant, either increment this normal // coordinate with the slowest progression, or don't decrement it). if ((x2 -= x) == 0) { // moving vertically (D/0/U) if (X1 <= x && x < X2) if (y <= y2) { // moving vertically down (D/0) if (y < Y2 && y2 > Y1) { if (y2 > Y2) y2 = Y2; if (y < Y1) y = Y1; while (y < y2) s[x, y++] = c; } } else { // moving vertically up (U) if (++y2 < Y2 && ++y > Y1) { if (y > Y2) y = Y2; if (y2 < Y1) y2 = Y1; while (y2 < y) s[x, y2++] = c; } } } else if (x2 > 0) { // moving to the right (7/DR/8/R/1/UR/2) if ((y2 -= y) == 0) { // moving horizontally to the right(R) if (Y1 <= y && y < Y2 && x < X2 && (x2 += x) > X1) { if (x2 > X2) x2 = X2; if (x < X1) x = X1; while (x < x2) s[x++, y] = c; } } else if (y2 > 0) { // moving down to the right (7/DR/8) if (x2 >= y2) { // diagonal in the 8th octant (DR/8) for (o = x, y2 <<= 1, x2 = (n = e = x2) << 1; n > 0; --n) { x++; if ((e -= y2) <= 0) { // always increment y if equal distance if (Y1 <= y && y < Y2 && o < X2 && x > X1) { int p = (x <= X2) ? x : X2; if (o < X1) o = X1; while (o < p) s[o++, y] = c; } o = x; y++; e += x2; } } if (Y1 <= y && y < Y2 && o < X2 && x > X1) { int p = (x <= X2) ? x : X2; if (o < X1) o = X1; while (o < p) s[o++, y] = c; } } else { // diagonal in the 7th octant (7) for (o = y, x2 <<= 1, y2 = (n = e = y2) << 1; n > 0; --n) { y++; if ((e -= x2) <= 0) { // always increment x if equal distance if (X1 <= x && x < X2 && o < Y2 && y > Y1) { int p = (y <= Y2) ? y : Y2; if (o < Y1) o = Y1; while (o < p) s[x, o++] = c; } o = y; x++; e += y2; } } if (X1 <= x && x < X2 && o < Y2 && y > Y1) { int p = (y <= Y2) ? y : Y2; if (o < Y1) o = Y1; while (o < p) s[x, o++] = c; } } } else { // moving up to the right (1/UR/2) if (x2 >= (y2 = -y2)) { // diagonal in the 1st octant (UR/1) for (o = x, y2 <<= 1, x2 = (n = e = x2) << 1; n > 0; --n) { x++; if ((e -= y2) < 0) { // never decrement y if equal distance if (Y1 <= y && y < Y2 && o < X2 && x > X1) { int p = (x <= X2) ? x : X2; if (o < X1) o = X1; while (o < p) s[o++, y] = c; } o = x; y--; e += x2; } } if (Y1 <= y && y < Y2 && o < X2 && x > X1) { int p = (x <= X2) ? 
x : X2; if (o < X1) o = X1; while (o < p) s[o++, y] = c; } } else { // diagonal in the 2nd octant (2) for (o = y, x2 <<= 1, y2 = (n = e = y2) << 1; n > 0; --n) { y--; if ((e -= x2) <= 0) { // always increment x if equal distance if (X1 <= x && x < X2 && o >= Y1 && y < Y2) { int p = (y >= Y1 - 1) ? y : Y1 - 1; if (o >= Y2) o = Y2 - 1; while (o > p) s[x, o--] = c; } o = y; x++; e += y2; } } if (X1 <= x && x < X2 && o >= Y1 && y < Y2) { int p = (y >= Y1 - 1) ? y : Y1 - 1; if (o >= Y2) o = Y2 - 1; while (o > p) s[x, o--] = c; } } } } else { // moving to the left (3/UL/4/L/5/DL/6) if ((y2 -= y) == 0) { // moving horizontally to the left (L) if (Y1 <= y && y < Y2 && x > X1 && (x2 += x) < X2) { if (x2 < X1) x2 = X1; if (x > X2) x = X2; while (x > x2) s[x--, y] = c; } /* Below, the sign tests of e MUST invert the condition where it's zero, otherwise some pixel positions will depend on the direction of drawing. */ } else if (y2 > 0) { // moving down to the left (5/DL/6) if ((x2 = -x2) >= y2) { // diagonal in the 5th octant (5/DL) for (o = x, y2 <<= 1, x2 = (n = e = x2) << 1; n > 0; --n) { x--; if ((e -= y2) <= 0) { // always increment y if equal distance if (Y1 <= y && y < Y2 && o >= X1 && x < X2) { int p = (x >= X1 - 1) ? x : X1 - 1; if (o >= X2) o = X2 - 1; while (o > p) s[o--, y] = c; } o = x; y++; e += x2; } } if (Y1 <= y && y < Y2 && o >= X1 && x < X2) { int p = (x >= X1 - 1) ? x : X1 - 1; if (o >= X2) o = X2 - 1; while (o > p) s[o--, y] = c; } } else { // diagonal in the 6th octant (6) for (o = y, x2 <<= 1, y2 = (n = e = y2) << 1; n > 0; --n) { y++; if ((e -= x2) < 0) { // never decrement x if equal distance if (X1 <= x && x < X2 && o < Y2 && y > Y1) { int p = (y <= Y2) ? y : Y2; if (o < Y1) o = Y1; while (o < p) s[x, o++] = c; } o = y; x--; e += y2; } } if (X1 <= x && x < X2 && o < Y2 && y > Y1) { int p = (y <= Y2) ? y : Y2; if (o < Y1) o = Y1; while (o < p) s[x, o++] = c; } } } else { // moving up to the left (3/UL/4) if ((x2 = -x2) >= (y2 = -y2)) { // diagonal in the 4th octant (UL/4) for (o = x, y2 <<= 1, x2 = (n = e = x2) << 1; n > 0; --n) { x--; if ((e -= y2) < 0) { // never decrement y if equal distance if (Y1 <= y && y < Y2 && o >= X1 && x < X2) { int p = (x >= X1 - 1) ? x : X1 - 1; if (o >= X2) o = X2 - 1; while (o > p) s[o--, y] = c; } o = x; y--; e += x2; } } if (Y1 <= y && y < Y2 && o >= X1 && x < X2) { int p = (x >= X1 - 1) ? x : X1 - 1; if (o >= X2) o = X2 - 1; while (o > p) s[o--, y] = c; } } else { // diagonal in the 3rd octant (3) for (o = y, x2 <<= 1, y2 = (n = e = y2) << 1; n > 0; --n) { y--; if ((e -= x2) < 0) { // never decrement x if equal distance if (X1 <= x && x < X2 && o >= Y1 && y < Y2) { int p = (y >= Y1 - 1) ? y : Y1 - 1; if (o >= Y2) o = Y2 - 1; while (o > p) s[x, o--] = c; } o = y; x--; e += y2; } } if (X1 <= x && x < X2 && o >= Y1 && y < Y2) { int p = (y >= Y1 - 1) ? y : Y1 - 1; if (o >= Y2) o = Y2 - 1; while (o > p) s[x, o--] = c; } } } } }
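As a short usage sketch of the path behaviour described above (using the moveTo()/lineTo()/closePath()/endPath() primitives defined in the Renderer class shown further down in this thread): the difference between a closed and an open polyline is only which terminator you call, and for an open polyline endPath() plots the final pixel that line() deliberately leaves unplotted. The Renderer instance named draw and the color c are assumed to exist already.

// Assumed already constructed elsewhere: Renderer draw = new Renderer(dstSurf, rectDraw);
// and a ColorBgra value c used as the stroke color.
draw.setColor(c);

// Closed triangle: closePath() joins back to the first vertex, so every pixel
// of the outline is plotted exactly once.
draw.moveTo(10, 10);
draw.lineTo(90, 10);
draw.lineTo(50, 80);
draw.closePath();

// Open polyline: endPath() only plots the final pixel of the last segment,
// which line() deliberately leaves unplotted (call it only if you want that pixel).
draw.moveTo(10, 90);
draw.lineTo(50, 120);
draw.lineTo(90, 90);
draw.endPath();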
  22. Some related suggestions for the PaintDotNet or CodeLab developers: allow custom effect plugins to specify the properties they need and that should be precomputed in the PaintDotNet or CodeLab environment. This custom properties routine should tell PaintDotNet or CodeLab whether the effect needs the selection area and in which format it can process it, if it's not a simple bounding box. In that case, PaintDotNet or CodeLab could itself compute the selection area in an alpha plane (with a specified bit depth: 1-bit if the effect does not use antialiasing, 8-bit if it does), and the Render() routines would receive another parameter containing the precomputed selection surface as a single color plane of alpha values. As there would be several Render() routines, one for each surface combinator, they should be described using a standard interface; the list of combinators would just have to create and return instances of each renderer, each implementing a createInstance() routine returning an instance of a class supporting one of the supported interfaces (for example, one interface for a Render() routine combining two surfaces, another for a routine just transforming one surface; if the renderer can modify one of the source surfaces, that must also be specified, because it influences how the worker threads are ordered hierarchically, and also how the effect can be applied to images in the PaintDotNet environment: how many images must be selected to apply the effect, and how many additional images must be created). Maybe a generic interface would just pass an ordered array of source surfaces in one parameter and an array of target surfaces in a second parameter. The scene to draw could also be described using multiple surfaces (possibly more than two) managed by PaintDotNet itself, to which a list of "combiners" (each a specified Render() routine) would be applied hierarchically to produce the final surface(s): the target of the effect could be multiple image layers, and there would also be one or several surfaces with one or more color planes of variable depth. PaintDotNet would then split the rendering jobs conveniently, according to per-thread memory constraints (the memory constraints apply to the size of bands in each described virtual surface, and a rendering routine would not be able to draw outside this virtual area, which would still be passed as the existing third parameter of the existing Render() routine). Finally, there should be a way to specify that a Render() routine only uses computing primitives that can be safely converted to the local GPU shader language, for much faster processing. Some .Net operators, types, or instructions would then become unavailable, and there should be a way to use surface formats other than just Bgra surfaces, to match one of the supported GPU surface formats (the DirectX/Direct3D API would do the rest, notably recompiling the .Net-written Render() routines into a valid GPU pixel shader when possible, or falling back to the software pixel shaders implemented in DirectX). A purely hypothetical sketch of what such a combiner interface could look like follows.
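The following is only a hypothetical sketch of the kind of interface suggested above; nothing like it exists in the current PaintDotNet SDK, and every name in it (ISurfaceCombiner, ModifiesSources, selectionAlpha) is invented for illustration.

// Hypothetical only: a "combiner" contract matching the suggestion above.
public interface ISurfaceCombiner
{
    // True if the combiner writes into one of its source surfaces; this would
    // influence how the host orders the worker threads hierarchically.
    bool ModifiesSources { get; }

    // sources are read-only, targets are written only inside rectDraw;
    // selectionAlpha would be a precomputed selection plane (or null when the
    // selection is a plain rectangle).
    void Render(PaintDotNet.Surface[] sources,
                PaintDotNet.Surface[] targets,
                PaintDotNet.Surface selectionAlpha,
                System.Drawing.Rectangle rectDraw);
}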
An effect should also be allowed to control how the jobs are split into bands: some parts of the scene may require more processing and more passes (combining more surfaces) than others, so there could also be a property allowing a custom effect to specify how to split the scene into a minimum set of rectangles; PaintDotNet would then subdivide these rectangles into bands when needed (according to multithreading constraints or to the GPU capabilities requested by each combiner). Currently, it just assumes that the whole image surface has the same complexity, which is a poor strategy: rendering a tree with many leaves on one part of the scene is much more complex than rendering a house, a wall, or a simple sphere. When the objects on the image can be separated into areas where they do not collide, there's no need to allocate and compute large surfaces covering the whole scene: the effect could specify that the final image has areas with a simpler structure, and the more complex rendering routine (using more planes to compute the transparent areas to combine) would be used only on the bounding rectangles where objects effectively collide. A hypothetical sketch of such a complexity hint follows.
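Again purely hypothetical, just to picture the suggestion: the effect could expose a method returning the regions it considers expensive, and the host would remain free to subdivide them further. The interface name and signature are invented here.

// Hypothetical only: a complexity hint that a host could use when splitting work.
public interface IComplexityHint
{
    // Returns rectangles that the effect considers expensive to render; the host
    // may subdivide each of them into bands, but should not merge them with the
    // cheaper areas around them.
    System.Drawing.Rectangle[] GetExpensiveRegions(System.Drawing.Rectangle selectionBounds);
}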
  23. Note: this Renderer class is a very simplified version of a more complete class that I am writing for rendering SVG images: it currently supports all of SVG Tiny and most of SVG 2.0 (with the exception of scripts, which I'll implement later). It also supports strokes of variable width, and it renders not just segments but also other splines of variable order (which I am currently optimizing for orders 2 and 3), and it can fill polygons as well as shapes bounded by splines. It also supports fonts described as either TrueType or PostScript shapes (using quadratic or cubic curves). It also supports precise rendering of all conics, and I will later include support for NURBS (which are the best splines for fitting natural models); I also hope that one day the SVG graphics format and OpenType fonts will support them, instead of just cubic Béziers, which take too many curves to fit the basic conics and most math functions exactly. For now I've just written specialized incremental routines for ellipses and for arcs of polynomial curves (with a "reasonable" finite degree up to 256), drawn over a horizontal or vertical axis (with a normal secondary axis). It also supports subpixel rendering (with 256 alpha values), optimized specially to use alpha values only where they are needed near the border of shapes, providing antialiased lines. The version above does not implement it, but you can implement it easily from the code above by drawing into a temporary 1-bit surface with dimensions multiplied by 8 (for 64 alpha values, which you can remap to a byte value with a simple multiplication) or by 16 (for 256 alpha values, requiring a 4-times bigger 1-bit surface), and using a cache for counting subpixels; a small sketch of that counting step is given after this paragraph. My more optimized version does not use a large 1-bit surface but computes the alpha value of each pixel directly in a single top-to-bottom, left-to-right pass over the destination surface, counting the matching 1-bit subpixels only where needed, on the unaligned partially filled border of lines, and using a very small work buffer of only 16 bytes (for alpha counters at the same horizontal position); this buffer is used only when at least one polygonal edge traverses the middle of the 16x16x1-bit subpixel square making up a final pixel, and not used at all when I can predict that either 0 or 256 1-bit subpixels will be covered by the surface to fill (according to the geometry). I have also avoided all tessellation of polygons into a list of convex trapezoids: it is not needed, even if I used it in a prior version; it does not scale well with shapes described by curved borders, it requires more memory, it is in fact more complex to do correctly, and it is definitely not faster when you can compute the geometry incrementally, with the correct (and much smaller) data structures, directly from the original vertex coordinates, transformed only into a list of curved edges of which only the extremum points have been precomputed by splitting some edges, without even computing where they may intersect (this is determined incrementally while drawing, by reordering on the fly the list of active edges processed from top to bottom with a single scanline, and from right to left on the current scan line).
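A minimal sketch of the oversampled-coverage idea described above (this is not the author's optimized single-pass version): assuming the shape has been rasterized into a 16x-oversampled boolean mask, each destination pixel's alpha is simply the count of covered subpixels in its 16x16 cell, clamped to a byte.

// Sketch only: derive one alpha byte per destination pixel from a 16x-oversampled
// 1-bit mask (mask dimensions are 16 * width by 16 * height of the destination).
static byte CoverageToAlpha(bool[,] mask, int px, int py)
{
    int count = 0;                                // 0..256 covered subpixels
    for (int sy = 0; sy < 16; sy++)
        for (int sx = 0; sx < 16; sx++)
            if (mask[px * 16 + sx, py * 16 + sy])
                count++;
    return (byte)(count >= 256 ? 255 : count);    // map 0..256 onto 0..255
}

With the 8x variant mentioned above (64 subpixels per pixel), the count would instead be remapped with a simple multiplication, for example count * 4 clamped to 255.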
The antialiasing also supports ClearType-like positioning of color planes (yes, this means that colors are transformed near the border of shapes, using monochromatic subpixels, because each color plane is computed separately, as if it were slightly offset horizontally by about -1/3 pixel for red and about +1/3 pixel for blue, when using an LCD display with RGB subpixel order); a small sketch of that per-plane offset follows. I will later optimize this ClearType-like rendering to take the color model into account, so that white points are more exactly preserved, something that even Windows does not consider in its implementation of ClearType subpixels: it does not correctly use the separate gamma settings and the separate orientation of the color planes, which have separate angles, even when the three subpixels making up a pixel are theoretically of equal size (something that is not completely true: the exact relative positioning of the color planes is not really 1/3 pixel, as the green subpixel is a bit narrower than the blue one and the red subpixel is a bit larger than the blue one, and this influences both the color model and the exact computation of alpha values for each color plane). But I will respect the sRGB color model strictly. I'm also currently extending it to include 3D drawing primitives, using 4D transform matrices to support various projections. Don't expect to see this posted here any time soon.
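A minimal sketch of the per-plane offset just described, assuming a hypothetical coverage(x, y) function that returns the shape coverage (0..1) at a fractional sample position; each channel of the destination pixel is blended with its own alpha, taken at a slightly shifted horizontal position (RGB subpixel order).

// Sketch only: ClearType-like blending where each color plane gets its own alpha.
static PaintDotNet.ColorBgra BlendSubpixel(
    PaintDotNet.ColorBgra dst, PaintDotNet.ColorBgra src,
    System.Func<double, double, double> coverage, int x, int y)
{
    double aR = coverage(x - 1.0 / 3.0, y);   // red plane sampled ~1/3 pixel to the left
    double aG = coverage(x, y);               // green plane at the nominal position
    double aB = coverage(x + 1.0 / 3.0, y);   // blue plane sampled ~1/3 pixel to the right
    dst.R = (byte)(dst.R + (src.R - dst.R) * aR);
    dst.G = (byte)(dst.G + (src.G - dst.G) * aG);
    dst.B = (byte)(dst.B + (src.B - dst.B) * aB);
    return dst;
}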
  24. If you use the CodeLab plugin, you will note that it offers some sample scripts, but a requirement for these plugins is not correctly explained. Your renderer routine should be declared like this: void Render( PaintDotNet.Surface dstSurf, PaintDotNet.Surface srcSurf, System.Drawing.Rectangle rectDraw) It is critically important to take the third parameter (here named rectDraw) into account, i.e. you must not draw on the destination surface outside of this box, otherwise you will see some undesirable "band" effects (it will affect the results of concurrent threads working on the same surface to draw in the surrounding bands). I think that this is nearly a bug of Paint.Net (and one source of bugs in many complex filters where those bands become visible) that can cause undesirable effects of your custom effect built with "CodeLab"; you code must be adapted to effectively clip its rendering. In addition, you must not read any pixel outside this box from the destination surface (because its content can change without notice as other concurrent threads running your Render routine are also drawing to these areas). Only the source surface is safe and can be read everywhere (but you must not write to the source surface because concurrent threads, also running your Render routine, will also need to read also from the same unmodified image). You can however safely write and read pixels (any number of times) in the destination surface as long as they are in the specified rectangle. If you consider this parameter, you can also significantly improve the rendering time of your custom filter (by avoiding unnecessary computing on areas that should not be drawn multiple times and that must be kept untouched). The following program demonstrate how your plugin will receive rendering orders with rectDraw using variable size (but it does not demonstrate which thread number is drawing each band, I don't know if the list of rendering threads and the active thread number it is accessible from PaintDotNet.Effect's environment): Hidden Content: class Renderer { private PaintDotNet.Surface dstSurf; private System.Drawing.Rectangle rectDraw; public Renderer( PaintDotNet.Surface dstSurf, System.Drawing.Rectangle rectDraw) { this.dstSurf = dstSurf; this.rectDraw = rectDraw; this.openPath = false; } /* Fill the rectangle from (x, y) inclusive to (x2, y2) exclusive * (if both points are equal, fill nothing). */ public void rectFill(PaintDotNet.ColorBgra c, int x, int y, int x2, int y2) { int x1, y1; if (x > x2) { x1 = x2 + 1; x2 = x + 1; } else x1 = x; if (y > y2) { y1 = y2 + 1; y2 = y + 1; } else y1 = y; // Implementation note: we MUST NOT draw outside this bounding box // (otherwise random bands will be visible) int X1 = this.rectDraw.Left, X2 = this.rectDraw.Right, Y1 = this.rectDraw.Top, Y2 = this.rectDraw.Bottom; if (x1 < X2 && x2 > X1 && y1 < Y2 && y2 > Y1) { // At least some part is in the bounding box, cap to that box if (x1 < X1) x1 = X1; if (x2 > X2) x2 = X2; if (y1 < Y1) y1 = Y1; if (y2 > Y2) y2 = Y2; // Fill the remaining part in that surface PaintDotNet.Surface dstSurf = this.dstSurf; for (y = y1; y < y2; y++) for (x = x1; x < x2; x++) dstSurf[x, y] = c; } } /* Plot the border of rectangle from (x, y) inclusive to (x2, y2) exclusive * (if both points are equal, plot nothing). 
*/ public void rect(PaintDotNet.ColorBgra c, int x, int y, int x2, int y2) { if (x != x2 && y != y2) { int tmp; if (x > x2) { tmp = x + 1; x = x2 + 1; x2 = tmp; } if (y > y2) { tmp = y + 1; y = y2 + 1; y2 = tmp; } rectFill(c, x, y, x2, ++y); if (y != y2) { rectFill(c, x, --y2, x2, y2 + 1); if (y != y2) { rectFill(c, x++, y, x, y2); if (x != x2) rectFill(c, x2 - 1, y, x2, y2); } } } } }; void Render( PaintDotNet.Surface dstSurf, PaintDotNet.Surface srcSurf, System.Drawing.Rectangle rectDraw) { PaintDotNet.Effects.EffectEnvironmentParameters env; PaintDotNet.ColorBgra cFg, cBg; //float bw; //PaintDotNet.PdnRegion rgnSel; //System.Drawing.Rectangle rectSel; env = this.EnvironmentParameters; cFg = env.PrimaryColor; cBg = env.SecondaryColor; //bw = env.BrushWidth; //rgnSel = env.GetSelection(dstSurf.Bounds); //rectSel = rgnSel.GetBoundsInt(); Renderer draw = new Renderer(dstSurf, rectDraw); // display draw bands int x1 = rectDraw.Left; int y1 = rectDraw.Top; int x2 = rectDraw.Right; int y2 = rectDraw.Bottom; draw.rectFill(cBg, x1, y1, x2, y2); PaintDotNet.ColorBgra c = new PaintDotNet.ColorBgra(); c.G = (byte)(y1*51); c.B = (byte)(y1*53); c.R = (byte)(y1*59); draw.rect(c, x1, y1, x2, y2); } Run it from a new empty image, it will fill it with a series of rectangle with a border of varying color, filled with white (the current background color selected in the main interface of Paint.net. Note that the rectangles have variable height (this depends on the number of processor cores you have, and on the width of the surface to render, and on the system load on each CPU, because Paint.Net splits the rendering into multiple tasks to perform the effect in the selected rectangle, which by default is the whole image). If you drop the test in rectFill() of the limits for rectDraw, you will see that the rectangles are partly overriden, but not completely. And that the final image you get when running the plugin may change over time, even when your CPU has a single CPU core (generally the first band at the top is higher than the remaining ones). One consequence of the "banding" effect produced by Paint.Net when it calls your Render() function (and demonstrated here) is that almost all Render() routines should not make the computed colors of pixels dependant of the position of the drawing rect (whose position in the image or in the selection area if you use it is unpredictable)... unless you really want to make those bands visible as above, where the position of the rectangle to draw is used to compute the color of the 1-pixel borders of rectangles, in order to make them distinctive. In all cases, your Render routine must strictly avoid drawing outside bands, or you'll see unpredictable overlapping effects (due to concurrent threads), notably if your render routine renders some pixels in multiple overlapping passes (for example filling the whole rectangular area in white, then drawing some other parts in black) ! This undesirable effect will occur even when running on a single core CPU where only one thread is active : why does Paint.Net splits the area to draw in multiple bands is a mystery for me: it should use a single rendering thread, and not more than the number of active CPU cores... But may be it uses the capabilities of the GPU to run your render routine on it with parallel threads (I don't know the internals of Paint.Net, if it can recompile your dotNet Render() routine into a shader routine for the GPU)... ---- Now suppose you want to draw straight segments in your plugin. 
You could think about using a Bresenham algorithm to compute the position of pixels between two positions: as long as the positions of the two limiting vertices are effectively in the image limits, you could think that you can draw anywhere on the image, ignoring the third "rectDraw" parameter in the plugin. DON'T DO THAT ! This program will show you what you can do to draw line segments fast, but still respect the rectangle (the actual rendering code is in the defined class "Renderer" which is slightly optimized, but not completely to demoonstrate the problem): Here is the implementation of a generic Renderer class: Hidden Content: class Renderer { private PaintDotNet.Surface dstSurf; private System.Drawing.Rectangle rectDraw; public Renderer( PaintDotNet.Surface dstSurf, System.Drawing.Rectangle rectDraw) { this.dstSurf = dstSurf; this.rectDraw = rectDraw; this.openPath = false; } /* Fill the rectangle from (x, y) inclusive to (x2, y2) exclusive * (if both points are equal, fill nothing). */ public void rectFill(PaintDotNet.ColorBgra c, int x, int y, int x2, int y2) { int x1, y1; if (x > x2) { x1 = x2 + 1; x2 = x + 1; } else x1 = x; if (y > y2) { y1 = y2 + 1; y2 = y + 1; } else y1 = y; // Implementation note: we MUST NOT draw outside this bounding box // (otherwise random bands will be visible) int X1 = this.rectDraw.Left, X2 = this.rectDraw.Right, Y1 = this.rectDraw.Top, Y2 = this.rectDraw.Bottom; if (x1 < X2 && x2 > X1 && y1 < Y2 && y2 > Y1) { // At least some part is in the bounding box, cap to that box if (x1 < X1) x1 = X1; if (x2 > X2) x2 = X2; if (y1 < Y1) y1 = Y1; if (y2 > Y2) y2 = Y2; // Fill the remaining part in that surface PaintDotNet.Surface dstSurf = this.dstSurf; for (y = y1; y < y2; y++) for (x = x1; x < x2; x++) dstSurf[x, y] = c; } } /* Plot the border of rectangle from (x, y) inclusive to (x2, y2) exclusive * (if both points are equal, plot nothing). */ public void rect(PaintDotNet.ColorBgra c, int x, int y, int x2, int y2) { if (x != x2 && y != y2) { int tmp; if (x > x2) { tmp = x + 1; x = x2 + 1; x2 = tmp; } if (y > y2) { tmp = y + 1; y = y2 + 1; y2 = tmp; } rectFill(c, x, y, x2, ++y); if (y != y2) { rectFill(c, x, --y2, x2, y2 + 1); if (y != y2) { rectFill(c, x++, y, x, y2); if (x != x2) rectFill(c, x2 - 1, y, x2, y2); } } } } /* Plot a line between (x, y) inclusive to (x2, y2) exclusive * (if both points are equal, plot nothing). 
*/ public void line(PaintDotNet.ColorBgra c, int x, int y, int x2, int y2) { int n, e; // Implementation note: we MUST NOT draw outside this bounding box // (otherwise random bands will be visible) int X1 = this.rectDraw.Left, X2 = this.rectDraw.Right, Y1 = this.rectDraw.Top, Y2 = this.rectDraw.Bottom; PaintDotNet.Surface s = this.dstSurf; if ((x2 -= x) >= 0) if ((y2 -= y) >= 0) if (x2 >= y2) for (y2 *= 2, x2 = (n = e = x2) * 2; n > 0; --n) { if (X1 <= x && x < X2 && Y1 <= y && y < Y2) s[x, y] = c; x++; if ((e -= y2) <= 0) { e += x2; y++; } } else for (x2 *= 2, y2 = (n = e = y2) * 2; n > 0; --n) { if (X1 <= x && x < X2 && Y1 <= y && y < Y2) s[x, y] = c; y++; if ((e -= x2) <= 0) { e += y2; x++; } } else if (x2 >= (y2 = -y2)) for (y2 *= 2, x2 = (n = e = x2) * 2; n > 0; --n) { if (X1 <= x && x < X2 && Y1 <= y && y < Y2) s[x, y] = c; x++; if ((e -= y2) < 0) { e += x2; y--; } } else for (x2 *= 2, y2 = (n = e = y2) * 2; n > 0; --n) { if (0 <= x && x < X2 && Y1 <= y && y < Y2) s[x, y] = c; y--; if ((e -= x2) <= 0) { e += y2; x++; } } else /* Note: the tests of sign of e MUST invert the condition where it's zero, otherwise some pixel positions will depend on the direction of drawing */ if ((y2 -= y) >= 0) if ((x2 = -x2) >= y2) for (y2 *= 2, x2 = (n = e = x2) * 2; n > 0; --n) { if (X1 <= x && x < X2 && Y1 <= y && y < Y2) s[x, y] = c; x--; if ((e -= y2) <= 0) { e += x2; y++; } } else for (x2 *= 2, y2 = (n = e = y2) * 2; n > 0; --n) { if (X1 <= x && x < X2 && Y1 <= y && y < Y2) s[x, y] = c; y++; if ((e -= x2) < 0) { e += y2; x--; } } else if ((x2 = -x2) >= (y2 = -y2)) for (y2 *= 2, x2 = (n = e = x2) * 2; n > 0; --n) { if (X1 <= x && x < X2 && Y1 <= y && y < Y2) s[x, y] = c; x--; if ((e -= y2) < 0) { e += x2; y--; } } else for (x2 *= 2, y2 = (n = e = y2) * 2; n > 0; --n) { if (X1 <= x && x < X2 && Y1 <= y && y < Y2) s[x, y] = c; y--; if ((e -= x2) < 0) { e += y2; x--; } } } private PaintDotNet.ColorBgra color, fill; public void setColor(PaintDotNet.ColorBgra c) { this.color = c; } public void setFill(PaintDotNet.ColorBgra c) { this.fill = c; } private bool openPath; private int xStart, yStart; private int xCurrent, yCurrent; public void closePath() { lineTo(this.xStart, this.yStart); this.openPath = false; } public void endPath() { if (!this.openPath) return; rectFill(this.color, this.xCurrent, this.yCurrent, this.xCurrent + 1, this.yCurrent + 1); this.openPath = false; } public void moveTo(int x, int y) { if (this.openPath) return; this.xStart = this.xCurrent = x; this.yStart = this.yCurrent = y; this.openPath = true; } public void lineTo(int x2, int y2) { if (!this.openPath) return; line(this.color, this.xCurrent, this.yCurrent, x2, y2); this.xCurrent = x2; this.yCurrent = y2; } } And here is an example of how you can use it (insert the class above in the code before this Render routine): Hidden Content: void Render( PaintDotNet.Surface dstSurf, PaintDotNet.Surface srcSurf, System.Drawing.Rectangle rectDraw) { PaintDotNet.Effects.EffectEnvironmentParameters env; PaintDotNet.ColorBgra cFg, cBg; //float bw; //PaintDotNet.PdnRegion rgnSel; //System.Drawing.Rectangle rectSel; env = this.EnvironmentParameters; cFg = env.PrimaryColor; cBg = env.SecondaryColor; //bw = env.BrushWidth; //rgnSel = env.GetSelection(dstSurf.Bounds); //rectSel = rgnSel.GetBoundsInt(); Renderer draw = new Renderer(dstSurf, rectDraw); int xMax = dstSurf.Width - 1; int yMax = dstSurf.Height - 1; PaintDotNet.ColorBgra c = new PaintDotNet.ColorBgra(); // background color c.R = (byte)192; c.G = (byte)192; c.B = (byte)192; c.A = 
(byte)255; draw.rectFill(c, 0, 0, xMax + 1, yMax + 1); // number of horizontal and vertical subdivisions of the surface rectangle const int N = 40; // draw a distinctive (and invertible) light color in one direction c.R = (byte)255; c.G = (byte)255; c.B = (byte)255; c.A = (byte)255; draw.setColor(c); for (int i = N; i >= 0; --i) { draw.moveTo( 0, 0); draw.lineTo(xMax , yMax * i / N); draw.lineTo( 0, yMax ); draw.lineTo(xMax * i / N, 0); draw.lineTo(xMax , yMax ); draw.lineTo( 0, yMax * i / N); draw.lineTo(xMax , 0); draw.lineTo(xMax * i / N, yMax ); draw.closePath(); draw.moveTo(xMax * i / N, 0); draw.lineTo(xMax , yMax * i / N); draw.lineTo(xMax * (N - i) / N, yMax ); draw.lineTo( 0, yMax * (N - i) / N); draw.closePath(); } // Redraw in inverted dark color in the reverse direction: // There should be no light pixels remaining on the border of lines c.R = (byte)(~c.R); c.G = (byte)(~c.G); c.B = (byte)(~c.B); c.A = (byte)(255); draw.setColor(c); for (int i = N; i >= 0; --i) { draw.moveTo( 0, 0); draw.lineTo(xMax * i / N, yMax ); draw.lineTo(xMax , 0); draw.lineTo( 0, yMax * i / N); draw.lineTo(xMax , yMax ); draw.lineTo(xMax * i / N, 0); draw.lineTo( 0, yMax ); draw.lineTo(xMax , yMax * i / N); draw.closePath(); draw.moveTo(xMax * i / N, 0); draw.lineTo( 0, yMax * (N - i) / N); draw.lineTo(xMax * (N - i) / N, yMax ); draw.lineTo(xMax , yMax * i / N); draw.closePath(); } } Note that the code to draw lines is tuned so that lines draw identically, independently of their direction, and this is what the main Render() routine tests: you should NOT see any white pixels on the left or right of the black lines, which are simply drawn in the reverse direction over the grey surface. You'll also note (if you set the constant N to a high value like 200, when your image size is about 799x599) that the expected dot patterns will not exhibit the perfect symmetries of the "moiré" that you would expect when the image size has even coordinates (so that its center is on the center of a pixel): this is the effect of the dissymmetry introduced on purpose here, to plot diagonal lines with exactly the same pixels independently of their forward or backward direction. If you make the tests of the sign of "e" equivalent between the case where delta x is positive and the case where it is negative, you'll get symmetric patterns, but the positions of the pixels drawn will not always be equivalent in the octants where the line is nearer to the horizontal, and you'll see white pixels remaining on the border of lines when the black lines are drawn in the reverse direction. Now, if you comment out the test on X1/X2/Y1/Y2 within the "rectFill" primitive of the Renderer class, you will see the bad effect that the PaintDotNet "banding" can produce (because of concurrent threads): some parts will be drawn, some not, because the effective surface after each call of your main Render() function is swapped concurrently and there's no protection of pixels outside of the rectDraw parameter: one thread running the Render() routine will erase, with rectFill(), some parts of the lines currently being drawn by another concurrent thread also running the same Render() routine.
This code (in the Renderer class provided) is also a much faster demonstration of how to draw lines or fill rectangles in a custom plugin: you can effectively test the rectDraw rectangle to avoid drawing multiple times in the currently selected area (due to the way Paint.Net splits it into bands): you can set N=800 for example in the main Render() function, and it will effectively draw 16*800 = 12800 segments over the surface. Note: this code does not use the selection rectangle: there's no need to test it in the Renderer class, but you can use it in the Render() function to compute the geometry of the drawing or effect that your plugin will apply. As this Render() is just a demo for the Renderer class, to show how you can use it safely, it does not use the selection (but it could). If your Render function must use the current selection as a rectangle, make sure you first compute its bounding box to allow faster processing (and don't forget to clip this selection rectangle to the rectDraw rectangle, and return immediately without drawing anything if it then becomes empty, using a clipping test like the one in the rectFill() primitive above; a small sketch of this is given below). But if the current selection is an arbitrary polygon and your custom effect uses complex transforms, it is highly recommended to first render this polygon into a new internal surface (with a single alpha channel) created in the Render() function (and bounded to the rectDraw rectangle to minimize the memory needed), and then draw your filter using the precomputed alpha surface: it will be much faster than testing the position of each pixel to see whether it's within the selected area. I just wonder why Paint.Net (or the CodeLab plugin) does not internally precompute the selection polygon as an alpha plane, and why this selection surface is not passed as a parameter to the CodeLab plugin (it could be a null reference if the selection is a simple rectangle, in which case you would just use the bounding box that you get from the environment). This would be a nice improvement to CodeLab. I also suggest that Paint.Net or CodeLab be fixed to assert that plugins are NOT able to draw outside of the specified rectangle. Currently the surface outside of the band rectangle is NOT protected (so, as long as the coordinates given to index the dstSurf array are within the array limits, you can modify any portion of this surface; you can also read pixels from these locations, even if they are being updated by concurrent threads). So if you run your plugin on a multicore CPU, your effect may affect areas currently under modification by another thread running the same Render() function. I hope that this sample code will help. Notes: the Renderer.line() primitive can be optimized further to avoid scanning every pixel of a segment to check whether it lies within the bounds of the draw rectangle, but it would be more complex. For this reason, don't compute a geometry in your main Render() function that uses coordinates outside of the image, even though it is clipped by the Renderer class using the bounds check of the draw rectangle. If all coordinates are in the image, you reasonably don't need this optimization for most geometries, as it will not bring significant gains.
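A minimal sketch of the "clip the selection's bounding box to rectDraw and bail out early" step mentioned above, using only the environment calls already shown (commented out) in the demo code; these lines would go near the top of the Render() routine.

// Sketch only: compute the selection bounding box, clip it to this call's band,
// and skip all work when nothing of the selection falls inside the band.
PaintDotNet.PdnRegion rgnSel = this.EnvironmentParameters.GetSelection(dstSurf.Bounds);
System.Drawing.Rectangle rectSel = rgnSel.GetBoundsInt();
rectSel.Intersect(rectDraw);          // becomes empty if the band misses the selection
if (rectSel.IsEmpty)
    return;                           // nothing to draw in this band
// ... otherwise restrict all further drawing to rectSel ...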
In addition, there would be the possible danger of incorrectly rounding the coordinates of the start and end of segments, and you would also have to compute the correct content of the "e" variable (which stores twice the distance between the computed pixel position and the segment, and which is initialized here to the smaller of the absolute values of delta x and delta y; this variable controls the effective line slope and where the secondary coordinate is incremented or decremented): you would not only need to compute and correctly round the coordinates of the ends of the clipped segment, but you would also have to compute their distance from the theoretical segment joining the ends of the initial unclipped segment. Final note: this basic and quite fast Renderer class can be used to create all sorts of basic shapes, including the arrows shown in the CodeLab samples (which are clearly NOT optimized to avoid multiple redraws, and which draw outside of the draw box!). With this class, it's really easy to draw circles or to render math functions (see the small sketch below). However, this basic class won't allow you to create shapes that are not drawn with basic 1-pixel lines in a single solid color (possibly with a single alpha value), or to fill anything other than rectangles aligned with the coordinate system: you'll need to add support for filling polygons, or you'll have to use the strategy of drawing many parallel lines spaced by at most 1 pixel (which may be slow if the area to fill is large and the lines are not parallel to the coordinate axes). In addition, it is not optimized to use the faster rectFill() when drawing horizontal and vertical lines in Renderer.line(); that's something that is easy to add to this class, and you can do it as a beginner's exercise, because it is easy in this case to clip the lines to the bounding box instead of testing each pixel. You can also optimize the tests in this generic version by drawing horizontal or vertical runs with Renderer.rectFill() only when one of the two coordinates changes conditionally after testing the sign of "e" (you can do it yourself and see how it improves the rendering time, because it avoids many conditional jumps in the compiled code). (If you want the solution for the further optimized version that uses rectFill() as much as possible for drawing everything in the lineTo() primitive, it is posted in this thread as well.)
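As a small usage sketch (not part of the original class): drawing a circle with the path primitives by approximating it with straight segments. The Renderer instance, color, center, radius and segment count are all assumptions of this example.

// Sketch only: approximate a circle with N straight segments using the
// Renderer path primitives shown earlier in this thread.
static void DrawCircle(Renderer draw, PaintDotNet.ColorBgra c,
                       int cx, int cy, int radius, int segments)
{
    draw.setColor(c);
    draw.moveTo(cx + radius, cy);
    for (int i = 1; i < segments; i++)
    {
        double a = 2.0 * System.Math.PI * i / segments;
        draw.lineTo(cx + (int)System.Math.Round(radius * System.Math.Cos(a)),
                    cy + (int)System.Math.Round(radius * System.Math.Sin(a)));
    }
    draw.closePath();   // joins back to the first vertex, plotting the remaining pixels
}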