verdy_p

  1. You might want to look at the code a little closer. I do test the firstTime flag both inside and outside the critical section; the outside test is simply for efficiency.

As for the claim that it must be static, I'm not so sure that's true. While I only have a single-core system, so I can't test it myself, I believe non-static variables defined at the class level are properly shared between different cores. But non-static member variables will be shared ONLY if the render class is not instantiated several times, and nothing in PaintDotNet guarantees that there will not be separate instances of this class, one for each worker thread (even if the flag may be reset in the OnSetRenderInfo event handler). We have no clear information about how many renderer class instances will be allocated to process our effects, notably on multicore CPUs or on multi-CPU systems (including distributed systems that may run on several hosts).

We lack a specification for getting a strong unique reference to a whole task processing a single source image. We have no such reference for creating per-task variables; the render object only has members for isolated worker threads, and there's currently no mechanism for making a unique task object globally visible (including through proxies to synchronized remote instances). Given that .Net is not designed to run only on a single host (the virtualized environment is independent of the host itself, which may be multiple), we are in a gray area here. We could just assume that PaintDotNet DLLs will never be used in multihosted VMs, but that may simply be wrong if PaintDotNet itself runs in a virtual OS: a .Net VM in a guest OS VM running on a multihost application server using MS VirtualPC, MS Virtual Server, Sun VirtualBox, Citrix, or other OS virtualization software. The host OSes supporting the guest OS where PaintDotNet is started may even be something other than Windows, including Linux, MacOSX, Solaris, or BSD, possibly spread over a heterogeneous network, working together to offer a multi-CPU and multicore virtual environment with virtual shared memory and various levels of memory barriers for read/write ordering consistency. A thread may even start running on one host and then be yielded to continue on another: the VM will manage the consistent transport of that thread's current state across several guest OSes (possibly virtualized themselves), guest OS processes, guest OS threads, .Net VM processes, and .Net VM threads, and possibly even across other lightweight threads such as fibers and Win7 tasks, provided the code does not use "unsafe" bindings to native memory or threads/processes.

As long as there's no such guarantee, all that can be done is to use a static class member variable and synchronize access to it within the safe (virtual) .Net memory model (things become even more complicated if you bind your code to native memory or binary code, which prevents the .Net environment from safely moving a .Net thread to another CPU or guest OS). Clearly, the worker threads that run our render function cannot tell which task they are part of; all we can track are the references of the source and destination images, which do not really and correctly identify a distinct task made of several threads for separate ROIs.
The fact that PaintDotNet was designed to hide such linkage/tracking info from our render routines is, in my opinion, an API DESIGN ERROR: we should have a parameter in the Render() routine pointing to the object that describes the task effectively shared by the multiple render threads, and a way to control its lifetime independently of the lifetime of the worker threads (OnSetRenderInfo is not enough, as it just tracks the initialization of several threads that are merely supposed to run in the same .Net process, and even that is not guaranteed). Finally, note that the main Render() routine is meant to be recompilable at run time to run on a GPU core instead of the CPU, because it would be much faster there and could benefit from more massive parallelization (typical GPUs today include about 32 parallel processing units, usable for example from DirectX, Direct3D, OpenCL, OpenGL, or nVidia's PhysX, and why not automatically accessible to virtual .Net environments as well). This could allow our render DLLs to perform real-time effects, including for animations or for video editing (if PaintDotNet integrates support for multiframed images and frame numbering by synchronous timestamps or by frame counters, all in the same media format, either in a file or in a real-time stream).
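To illustrate the static-versus-instance point above, here is a minimal C# sketch (hypothetical class names, not the actual PaintDotNet API): if the host allocates one effect instance per worker thread, an instance field gives every thread its own unshared copy, while a static field is shared by all instances in the same .Net process (strictly, per AppDomain), which is why the flag and its lock must be static.

    // Hypothetical classes illustrating the sharing difference; not
    // part of the real PaintDotNet plugin API.
    public class InstanceFlagEffect
    {
        // One copy PER INSTANCE: if the host creates one instance per
        // worker thread, each thread sees its own, unshared flag.
        private bool firstTime = true;
    }

    public class StaticFlagEffect
    {
        // One copy per .Net process (per AppDomain, strictly speaking),
        // shared by ALL instances, however many the host allocates.
        private static readonly object initLock = new object();
        private static volatile bool initialized;

        public void EnsureInitialized()
        {
            if (!initialized)              // cheap test outside the lock
            {
                lock (initLock)
                {
                    if (!initialized)      // re-test inside the critical section
                    {
                        // ... compute the shared per-task data here ...
                        initialized = true;
                    }
                }
            }
        }
    }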
  2. In fact, what you are trying to compute is the center of the vertices of a polygon on the unit circle, by first converting its coordinates from polar to cartesian, and then converting this center back to polar to get the angle. I think you can be much faster, because the actual hue value in HSV is not a true angle but a position measured along the perimeter of a hexagonal color wheel. So instead of using a circle, convert the angles to positions on the hexagonal perimeter: no trigonometry (sin, cos, atan) is needed for that, just linear algebra and range tests.

Why a hexagon? Look at the color cube as it is effectively distorted slightly by the CIE color model, and rotate it as if you were hanging it from a thin thread attached to the white corner (the black corner lying vertically at the lowest altitude): the lightness is the altitude of points in this distorted cube. Rotate this distorted cube around the thread while looking at it horizontally in the plane passing through the cube's center: the saturated colors will all pass in front of your eyes as the points nearest to you. These points are effectively arranged along a hexagon in your horizontal plane. The rotation angle of the cube is what you generally call the "hue" angle (but it is NOT the very approximate "angle" used in HSV, which is really a series of tangent values over distinct ranges of the effective angles). But you absolutely don't need to compute angles in order to project (vertically) any color point of the distorted cube onto your horizontal plane: you can use the projected positions on that plane to compute a mean point in this plane, and its position can be used to compute any color in the color cube that has the same hue: this is just a half-plane delimited by the thread and passing through this mean point (you cannot define this half-plane if the mean point is on the thread: such a color is simply grey, and it can have any arbitrary hue value). None of this computation requires any trigonometry, just linear transforms to compute the means you want (and you don't even need to handle several ranges). However, you may need non-linear transforms to effectively distort the color cube when taking the gamma value into account: this requires logarithms or exponentials, depending on the direction of the transform (between RGB linear energy and the CIE-based perceptual lighting model).

The usual assumption that colors form a perfect circular wheel is wrong, given that it does not respect the effective gamut of the sRGB color model, which is more restricted. In fact it is really a sort of ellipsoid (when considering 3 color pigments) which is notably distorted by the exponential gamma factor (usual display devices or rendering surfaces use a 1.2 gamma value, not 1.0, and they are further distorted by electronic adjustments that create near-black values using a small linear segment instead). If you applied the same transform to the theoretical sRGB color cube, you would get a sort of "patatoid" with curved facets of a parallelepiped plus additional facets with different curvatures created by the linear adjustments near black, but still curved by the perceptual gamma. For this reason, I absolutely don't like the color circle and what it produces ("hue" pseudo-angles).
I much prefer the color hexagon, which is more exact yet still much simpler to compute than the color circle: this is what is effectively used in the YUV (or related YCbCr, based on another set of fundamental pigments) color models, which also use different white and black points within the color cube to align the vertical grey thread on which the distorted cube (used by the rendering device) is hanging.

---- However, the exact behavior of your "mean hue" function should really weight the hue values according to their significance: colors along the grey line (from black to white, as defined in the chosen rotated color cube, depending on the CIE-based color model you pick) should be weighted 0, as their hue is not significant at all (so they can be safely ignored), and colors with the maximum saturation should be weighted 100%. One simple way of computing the weight of hue values is to first normalize S between 0%=0 and 100%=1, so that it can be used directly as the weight when computing the weighted average of hue values (this works with your method that remaps angles to cartesian positions on a circle, or with a faster method using the hexagon directly, without computing angles). But you can use S values directly if they are already in a constant range (independent of the other color components), because in any case you'll not only have to sum the weighted coordinates of the hue points, but also the weights (S), before dividing the sum of weighted coordinates by the sum of weights. The only case where this weighted average will not return a point is when the sum of weights is zero. This can only happen if all pixels in your sample are grey (with saturation zero; note that there can't be negative saturation, zero is its strict minimum), and even in this case the hue value can be arbitrary, given that the computed mean color will also be grey.

I intuitively think that you'll get more meaningful results than by computing a non-weighted "mean hue" from only the H components: the number of black/grey/white points in your sample will have absolutely no influence on the computed mean hue, and the most saturated colors will play the most important role in this mean, even if there's only 1 pixel with a non-grey color and all other pixels are grey. One good test of the algorithm would be to blur an image with this HSV mean, computed over a spot with a small radius, on an image that displays, for example, an antialiased polygon filled with plain blue drawn over a plain white or black background: you should not see any random colors along the border of this polygon (caused by antialiasing pixels that have very small saturation, or by the white or black background); the computed mean hue values should remain blue everywhere along the polygon border. Such a defect can happen very easily with your algorithm, which is also not numerically stable. Another test should be done with an antialiased thin line using near-grey colors, and with an antialiased black-filled polygon, both drawn on a plain white surface: here also, you should not see any visibly unstable colors along the line or polygon borders after applying the same blur function.
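A minimal sketch of this saturation-weighted mean hue, using the circle mapping for clarity (the hexagonal mapping would avoid the trigonometry); this is illustrative code, not the poster's, assuming hue in degrees and S already normalized to [0,1]:

    using System;

    static class HueAverage
    {
        // Returns the weighted mean hue in degrees, or null when every
        // sample is grey (the sum of weights is zero).
        public static double? WeightedMeanHue(double[] hues, double[] saturations)
        {
            double sumX = 0, sumY = 0, sumW = 0;
            for (int i = 0; i < hues.Length; i++)
            {
                double w = saturations[i];           // grey pixels weigh 0
                double a = hues[i] * Math.PI / 180;  // degrees -> radians
                sumX += w * Math.Cos(a);
                sumY += w * Math.Sin(a);
                sumW += w;
            }
            if (sumW == 0) return null;              // all grey: hue is arbitrary
            double mean = Math.Atan2(sumY / sumW, sumX / sumW) * 180 / Math.PI;
            return (mean + 360) % 360;               // normalize to [0,360)
        }
    }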
  3. That pseudo-code is incorrect: you MUST NOT test the "firstTime" variable only before getting the exclusive lock! If you test it outside the lock (to see whether you need to take it at all), you must test it again inside the critical section. The correct code is:

    if (!myLock.averageComputed)
    {
        lock (myLock)
        {
            if (!myLock.averageComputed)
            {
                // Compute the average color of the full source image
                myLock.averageColor = average(sourceSurface);
                myLock.averageComputed = true;
            }
        }
    }
    // draw in the destination ROI of destSurface using myLock.averageColor

Then you must make sure that there's only one "myLock" object. It must be declared as a static, initialized in its constructor with the averageComputed field or property set to false. If you want your effect to be usable several times on distinct regions, you'll need a "reset" or "run now" button in the parameters dialog that, when pressed, simply resets this "myLock.averageComputed" flag to false, because static objects remain in their current state in memory as long as the effect DLL is loaded in PaintDotNet.
  4. Note that your intuitive average function is wrong, independently of the color space in which you want to compute it (RGB or HSV). Suppose that the color space were just a single grey-scale plane and you were computing the average of 4 pixels; what you are trying to compute is:

    average(average(average(g[0,0], g[0,1]), g[1,0]), g[1,1])

What you get from such a formula is not the linear average of greys but a weighted average, where the last pixel takes half the total weight and the 3 previous ones together share the other half. In other words, the first two pixels have weight 1, the third has weight 2, and the last one has weight 4. On a much larger image, you'll see that the result depends mostly on the color of the bottom-right corner of the image: change *only* that bottom-right pixel to white or to black and you'll see that it has a *very* large effect on your final average. Now change only the top-left pixel between white and black, and it will have absolutely no effect on the computed average.

Suppose for example that the 4 grey values are (in your preferred colorspace): 16, 32, 64, 128, in that order. The running averages you get from them will be successively 16, 24, 44, and finally 86. But if the pixels come in the reverse order, you'll get successively 128, 96, 64, and finally 40. The result is very different because the pixels are not weighted identically. In fact, with this false method for computing averages, the more pixels you have, the more important this difference becomes! When the computed average value must fit into an 8-bit integer, ONLY the last 8 pixels at the end of the bottom row have any importance, and all the rest of the source image is completely ignored: if the source image is a large image that is almost completely plain white, with only the last 8 pixels plain black, the "average" you'll get will be plain black! This is clearly not an average.

A true average needs to be computed differently: just sum the color components separately within their planes, and count the number of pixels whose components you take. You cannot reuse an intermediate average unless you keep its relative weight (which grows linearly as you scan the image from top-left to bottom-right, while the weight of each newly added pixel does not grow but remains constant). So what you really want is:
- convert all pixels from RGB to HSV (or to any color space in which you want to compute the average);
- sum all the pixels you want to include, in separate planes, with one summing variable per color component (sumH, sumS, sumV);
- finally divide each component by the number of pixels taken in the sum; you get the HSV color (sumH/count, sumS/count, sumV/count);
- ONLY then, convert that final HSV color to RGB.

But looking at the HSV<=>RGB conversion code you use, it looks like it is a completely linear transform, basically a rotation and skew within a 3D space: computing the average in a linearly transformed intermediate space will be no different from computing it directly in the original RGB space. The HSV average would only differ if your transform were not fully linear: for example if you used a gamma transform, or if the HSV values were clipped to remain in a constrained space, using if() tests or min() or max().
But here, you don't seem to use any gamma transform (in fact, in true photography or TV colorspaces, the gamma function is used only within a subrange of the component values, and the bordering values, near the min and/or max of the range, are transformed by a linear segment instead, to avoid saturation effects). Even so, you'll have some saturation, and the components will be clipped into the validity domain: this really happens in your case, because you're in fact using part of the YCbCr colorspace transform (typically used in videos on DVD and HDTV).

But your main problem is elsewhere: your code is not guaranteed to use a precomputed average before rendering a band. You cannot control the order in which the various threads run; one may start running, get delayed, then another will start and finish filling its ROI using a not-yet-computed average, and then the first thread will come back, finish computing the average, and use it. You'll find that some threads use the fully computed "average" and some don't, using only partially computed averages stored in your global class.

Your example is the perfect illustration of why Render() routines MUST NOT depend on results computed by other concurrent threads: each call to Render() MUST be able to compute its ROI completely, from ONLY the parameters it receives (the full source image and the custom plugin parameters), and it MUST draw only within the dest rectangle. Nothing it performs should have ANY impact on concurrently running threads. Your plugin is also the perfect example of why we need multipass effects, because otherwise each thread has to perform the same computation several times on the same source image: each one MUST compute the full-image average before writing the result into its own rectangle of interest in the destination image.

So the first solution given to your problem is the good one (except that it does not use the colorspace transform and uses simple blending, with unnecessarily unsafe code just to try making it faster: but it effectively computes the same final average/blended color by computing it several times, i.e. once over the full image within each thread). One way to go is then for each thread to try taking a lock in order to decide which one will be the main thread in charge of computing the average. The others will wait for that exclusive lock, and when they get it, they will see that the average has already been computed, take it, and release the lock (allowing the remaining threads to do the same and get the same average value). Then all threads will draw into their ROI of the destination image using the average they got from their critical section. What you'll see: nothing from any thread, as long as the first thread that took the lock has not finished computing the average. When this thread has finished, it must store the average and set an "already computed" flag informing the others that they don't need to compute it (when they get the lock), and only then release the lock and start drawing concurrently with the other threads, which will use the same computed value. You should look at the other forum thread that discusses how to create multipass effects, because this is exactly what you need.
So you have at least three things to debug:
- correctly managing the multiple passes and avoiding unnecessary recomputation of the same average;
- computing the average correctly (which you currently don't, as you combine pixels with very different weights, producing only weighted averages in which the bottom-right pixels count as much as ALL the rest of the image);
- correcting your colorspace transform.

In a simple first approach, you could start by NOT trying to compute the average only once: yes, it will be slower (about 50 times slower for the average-color computation, though not that much overall, because the other part of the computing time is spent filling the destination ROIs, which you won't perform several times), but this is the first thing to correct. Then correct your colorspace transform (use the value-range clipping, or gamma correction, i.e. exponential<=>logarithm, including the linear segments near the extreme values): RGB to YCbCr (or YUV), and YCbCr to RGB. The effective transforms are NOT just a simple linear multiplication of a vector by a matrix, because the YCbCr and YUV color models are not the result of a simple linear matrix transform (there are distinct ranges of values, and gamma correction changes things radically; but you don't need the HSV mapping, which just uses polar coordinates and needs slow trigonometric functions that don't always respect our human perception of colors). Then write the mutex code that prevents multiple concurrent threads from computing the same FULL image average.
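Following the summation recipe from the list above, here is a minimal sketch of a true unweighted average (hypothetical Hsv struct, not the thread's code): one accumulator per color plane, and a single division by the pixel count at the end, instead of chained pairwise averages.

    // True average: sums per plane, divided once by the count.
    // Assumes a non-empty array already converted from RGB to HSV.
    struct Hsv { public double H, S, V; }

    static Hsv TrueAverage(Hsv[] pixels)
    {
        double sumH = 0, sumS = 0, sumV = 0;
        foreach (Hsv p in pixels)
        {
            sumH += p.H;   // one accumulator per color plane
            sumS += p.S;
            sumV += p.V;
        }
        int count = pixels.Length;
        return new Hsv { H = sumH / count, S = sumS / count, V = sumV / count };
    }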
  5. A minor thing to change immediately in CodeLab would be a "Pause" button, so that the code is not executed while we are editing it. Sometimes I type code, and the fact that it runs immediately, even though it is not complete, can cause it to turn into an infinite loop. This has caused CodeLab, and in fact all of PaintDotNet, to become unresponsive, and I had to kill the process. For this reason, I generally don't edit code in CodeLab itself, but in a safer external editor, and I copy/paste everything from the external editor into the CodeLab window. But this is irritating (note that when pasting code into CodeLab, there's a lengthy scrolling operation running just to recolor the text; note also one minor bug in this coloring: it recolors keywords found inside comments). One way to avoid the immediate execution is to start editing after leaving a syntax error, for example a "=" sign alone on the first line at the beginning of the script. To effectively run the script, just remove that equals sign.

I wonder whether CodeLab could simply monitor the content of a file (edited and saved from an external editor) and automatically refresh itself by loading it after it has been closed by the other application, without bothering to display and recolor it in its window. It would just perform the compilation and execution in that case: if the code immediately crashes or hangs in an infinite loop, nothing is lost if we kill PaintDotNet from the task manager. However, the status window should then display the correct line numbers for compilation errors (not the line numbers including the automatically generated hidden lines), so that they can easily be found in the external editor. We wouldn't even need the code frame in this case: the status frame could take the full height of the window below the menu, and the CodeLab window could be reduced in height; or the code window could be replaced by the automatically generated parameter controls if there's no compilation error.
  6. Possibly this could be something quite similar to the multimedia codecs API (audio/image/video producers and sinks, which you can also structure into a directed acyclic graph). Strictly speaking it is not fully acyclic, because there can be feedback streams; however, it remains acyclic in practice, because feedback is only possible across synchronization points (generally a fixed number of time ticks or some number of frames). So within a single synchronization interval the directed graph is acyclic, and it connects to the next processing loop after crossing the sync point, where some outputs become the inputs of the next step...
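A tiny sketch of that idea (hypothetical types, not a real codec API): a feedback edge stores the output of the previous interval and only exposes it as an input after the sync point, so the graph traversed within one interval stays acyclic.

    // Hypothetical sketch: feedback across sync points stays acyclic
    // within each interval, because a feedback edge only ever yields
    // the PREVIOUS interval's frame.
    class Frame { /* pixel data, timestamp, ... */ }

    class FeedbackEdge
    {
        private Frame previous;           // produced in the last interval

        public Frame Read() => previous;  // consumed as an input now

        public void CrossSyncPoint(Frame producedThisInterval)
        {
            // Only at the sync point does this interval's output become
            // the next interval's input.
            previous = producedThisInterval;
        }
    }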
  7. Given that effect DLLs export their entry points, nothing should prohibit automating them in a custom script, as long as these DLLs can resolve their own entries in the PDN library (which also exports some public APIs that effect renderers can use to get other environment properties). I wish there were effectively some effect description language, in which you decompose an effect as a series of surfaces with given dimensions and relative positions (plus possibly a tiling mode for coordinates outside the color planes' width), and with a given number of color planes (not just the 32-bit BGRA color model). Then we would describe the renderers we want to use: how many surfaces and which type of color model (how many color planes and which bit depth) they accept, as well as their list of parameters. The described scene would then combine a series of input surfaces into an ordered graph, taking some surfaces as input, positioning them relative to each other, and supplying the reference to the effect or combinator to apply. The output would be one or more surfaces. The graph would be directed and acyclic. The script would then order the rendering and submit the tasks to each effect renderer, managing the various surfaces and splitting them according to CPU/GPU resources and a list of worker threads. The input and output ends of the acyclic graph could be left open, to create a new, fully automated composite effect.

The script language could also provide some facilities for computing the parameters: dimensioning and positioning the images; aligning them according to some constraints; computing the parameters of some effects, or allowing them to be computed from elements located in some of the source surfaces, or from general input parameters. It should allow describing these parameters so that a GUI for parameter dialogs can be built. Some parameters should also be bindable by default to objects in the PaintDotNet interface (such as the current selection region, the color chooser, the ordered list of loaded images, or a new image that PaintDotNet will allocate and prepare in its GUI). Some other facilities should be provided, notably the possibility to bind some input or output surfaces to files named by string parameters, and the possibility to bind string parameters (such as filenames) to command-line parameters (just assign them a distinctive symbolic name) or to a file selector control ("Load image", "Save image as") if the string value of the named parameter is not specified.

Much later, probably, it should also be possible to assemble a series of images/surfaces into a video, an icon file, an animated GIF or PNG, or other file formats that contain ordered series of images, and insert/embed other metadata where these formats need it, possibly including synchronized audio streams. This would result in a great tool for assembling or processing videos, for example to assemble scrolling text or subtitles, to insert logos on top of all images, to apply color transformations or cleaning/resizing filters, or to build composite mosaics of videos. And maybe the script language could be XML-based, for later inclusion in a web page (with a browser plugin).
It could use other web standards as well, such as CSS and Javascript (including other languages that can work on the XML DOM and retrieve and post various resources from/to the web, such as some of the ML languages developed for working with 3D models, using the capabilities of the local GPU if available). The available effects would be specified with a URL, and could even be implemented as a web service. Maybe I'm dreaming too much here! This would be very far from the initial PaintDotNet application, but PaintDotNet would be one element in that universe of graphics services, working on local resources; at the very least, the best rendering effects should be standardized for use in various graphics tools (this will be possible only if we can build generic effects, renderers, image producers, and image output feeds without necessarily linking them to some version of the PaintDotNet API). Ideally, the interface of effects should also be independent of the language or platform used to access them (so we could as well use some great effects available in other graphics apps supporting such an interface, such as PaintShop, The GIMP, video editing studios, or much simpler media file editors/correctors/animators/...).

But the starting point would be to generalize the API for custom effects:

    Render(Surface[] in, Surface[] out, Rectangle rectScene, Rectangle ROI);

and conceptually:

    Describe(Properties[] in, Properties[] out);

where the output Properties would be used to indicate the number of required parameters and their type (surfaces, strings, filenames/URLs), indicate the number of additional work surfaces and the number of output surfaces, and describe the composition tree, as well as a list of method entries implementing the Render() interface above which would need to be instantiated for use in worker threads, plus a list of rectangles in the described scene (passed as the rectScene parameter, indicating the valid coordinates within all the input surfaces), PaintDotNet doing the job of mapping these scene coordinates to the actual coordinates of the input images (or using some composition mode like tiling, mirroring, or a default color outside, according to the scene description). The custom renderer would then compute the ROI area within each output surface. The surfaces should also offer more color mapping modes than just the 32-bit BgraColor (PaintDotNet would itself perform the conversions needed to connect the output surface format of one effect to the input of another, using its own set of effects). No temporary surface would need to be passed: you would just need to be able to describe the effects DLL so that it describes several chained effects, including special effects that merely add some empty surfaces, or drop one, in which case the Render() entry would do nothing at all itself.
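A hedged C# sketch of what that generalized interface might look like (every type and member name here is hypothetical; none of this is the actual PaintDotNet API):

    // Hypothetical sketch of the generalized effect API proposed above.
    using System.Drawing;             // for Rectangle

    class Surface { /* pixel planes, dimensions, color model, ... */ }
    class Properties { /* parameter types, surface counts, graph, ... */ }

    interface IGenericEffect
    {
        // Lets the host discover parameters, work/output surfaces and
        // the composition graph, e.g. to build the parameters dialog.
        void Describe(Properties[] inputs, Properties[] outputs);

        // Must draw only inside `roi` of the output surfaces;
        // `rectScene` bounds the valid scene coordinates shared by
        // all input surfaces.
        void Render(Surface[] inputs, Surface[] outputs,
                    Rectangle rectScene, Rectangle roi);
    }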
  8. For those who were annoyed at always seeing the overly long source code, I've hidden it by default; you now need to unroll it. This should help when browsing between messages. Anyway, I've also fixed minor details in the source, notably after publishing the fully inlined version of the fast line() routine, when I saw that the early version, although correct, did not use the usual rounding mode: when two candidate pixels are at equal distance from the theoretical line, the one chosen was not always the one with the highest coordinate, i.e. always the one at the bottom or left here (this rounding rule is generally used in most 3D renderers, as it is also coherent with the binary rounding mode used for floating-point coordinates). The early code was still correct (lines were drawn with the same pixels, independently of the direction of drawing), but it used another rounding rule in one symmetric pair of octants.

Some 2D/3D engines may however use another floating-point rounding mode, such as IEEE's recommended mode (rounding half-integers to their nearest even integer); it is not used here, as it would complicate the test performed when e==0, but this rounding mode is typically used when rendering mathematical objects or statistics (as it better distributes and balances the quantization errors). If you use this fast renderer to build an antialiasing renderer, you won't notice any visible difference, as it will have only an extremely small impact on the overall quantization error, only within small differences of the alpha channel, and only at the very few positions in the final image where theoretical diagonal lines pass through the exact middle between quantized pixel centers. This IEEE rounding mode is generally not chosen for 2D images, as it shows some "granularity" (in 4x4-pixel square cells) on frequently used lines such as those with exact slope 1/2 or 2 (though this granularity is invisible when producing antialiased images). It may be used when the image is in fact the quantization of an analog signal: this rounding mode can slightly increase the signal/noise ratio (by a bit less than 1dB in the worst case) by allowing a more precise reconstruction of the analog signal's amplitude (it may be used, for example, to feed a JPEG/MPEG image compressor with fewer data losses, slightly better compression rates, and slightly higher dynamics and precision of contrasts).
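To make the tie-break concrete, here is a generic first-octant midpoint/Bresenham sketch (illustrative only, not the optimized routine from this thread): the tie is the case where the decision variable is exactly zero (the midpoint lies exactly on the ideal line), and choosing `>=` instead of `>` rounds all ties toward the higher coordinate.

    using System;

    // Generic first-octant line sketch (0 <= dy <= dx), NOT the
    // inlined fast routine posted in this thread.
    static void LineFirstOctant(int x0, int y0, int x1, int y1,
                                Action<int, int> plot)
    {
        int dx = x1 - x0, dy = y1 - y0;
        int d = 2 * dy - dx;              // decision variable (doubled error)
        for (int x = x0, y = y0; x <= x1; x++)
        {
            plot(x, y);
            if (d >= 0)                   // ">= 0": ties go to the higher y
            {                             // (the rule discussed above);
                y++;                      // "> 0" would pick the lower y
                d -= 2 * dx;              // on exact ties instead.
            }
            d += 2 * dy;
        }
    }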
  9. First delete the font from the Fonts control folder to uninstall it. Make sure that no program is holding a lock on it (such as a web browser via its font preference settings, a word processor, or a word processor extension in your email agent): close the browser, the email agent, and any graphics application that includes a font selection menu. Make sure that this font is not used in your current theme (revert to a standard Windows theme to make sure it is not used by window captions or as a dialog font). Also exit your instant messenger completely: some of its themes may use various fonts. Close the programs running with an icon in the system tray: some of them use various fonts automatically, chosen only by their metrics or characteristics, or can use any installed font as a possible fallback.

Some applications are very unfriendly and keep a lock on ALL fonts they have seen once while enumerating the Windows fonts folder, for as long as they are running, even if they never needed them to render text. Make sure you exit those apps (generally these are the applications that maintain a permanent menu for selecting fonts, instead of generating the menu items on the fly, only when needed, and locking only the visible menu items; this is supposed to speed up the display of these menus, but it actually makes those apps start very slowly and consume too many GDI or memory resources while they run, so the system actually gets slower: upgrade those apps, because Windows is smarter and can use a dynamic LRU cache for fonts).

If the font is required by Windows or by the display driver (because it is one of the core fonts needed by the shell or the default Windows theme, or one of the few default fonts used as fallbacks when a font lacks some character, or used by the shell, such as the special font containing the window button icons), you won't be able to delete it: these are normally only the few core fonts owned by Microsoft and preinstalled with your version of Windows (and updated only through Windows Update). Once you have successfully deleted the font, you may need to reboot to completely unlock it on some older versions of Windows (due to the GDI font cache): this is generally needed for most legacy bitmap fonts (in the deprecated .FON format).

Then install the new font. Avoid all fonts in the legacy .FON format (they are actually DLLs, which you may have difficulty removing from memory due to the system's DLL cache, which keeps these DLLs open for possible later reuse unless you have tweaked your system with registry settings to automatically close DLLs that are no longer used by any application). Some of these fonts include the fonts used by the DOS-compatible text console: close any console window that is open, and note that some hidden consoles may be used by background applications such as those running in your taskbar.
  10. I don't know what "OpenFont" fonts are; did you mean "OpenType" fonts? Do you mean that OpenType fonts do not work? And why? Is it because most of them embed restrictions that prevent Windows from enumerating their glyphs? Well, OpenType fonts without those restrictions do exist. The only difference is that they can support many more glyph formats and can embed additional tables (such as "feature" tables enabling fine typography, or simply absolutely required to support some complex scripts that need contextual ligatures and complex glyph positioning). Does this mean that PaintDotNet cannot use these ligature tables, and thus cannot access glyphs that are not directly mapped to Unicode codepoints? Or does it mean that PaintDotNet will not use fonts that have more than 256 glyphs mapped without reference to Unicode? Or does it mean that PaintDotNet does not know how to process and render glyphs in OpenType fonts that contain Postscript-style Bézier splines, and will only accept curves described with TrueType-style Bézier splines?

Note: if you can process cubic splines, you can immediately process quadratic splines, because all quadratic splines are a subset of cubic splines (see the sketch at the end of this post):
* The conversion from quadratic to cubic is exact; you don't need to add control points to compute the geometry, as they can be inferred automatically. You don't need to convert these fonts.
* The reverse conversion requires an approximation, by splitting curves with heavy curvature, adding more control points, and then simplifying pairs of control points into single ones: glyphs using only quadratic splines require computing the geometry of more vertices and control points, and the converted font files are larger.
* For this reason, most modern font files are now made with cubics instead of quadratics.
* Some more recent font formats (used in some font design tools) use NURBS, which give more consistent results and are much easier to position precisely and fit to the wanted metrics when designing glyph shapes: the NURBS are later approximated by cubics to build compatible fonts.

Very basic OpenType fonts can also contain bitmap glyphs: as these look very bad except at a single resolution, this is mostly used for compatibility, by automatically converting some old, unsupported font formats; but they exist in Windows anyway, notably for Chinese and Japanese users, who can rapidly produce their own ideographic shapes in a basic bitmap editor and save them in a personal font directly usable with their IME and keyboard layout. Many old bitmap fonts have been converted automatically into approximate vector fonts (TrueType or OpenType). OpenType fonts may also contain hinting instructions: these hints, inserted in the glyph instructions, are generally not enumerated by Windows (using them in a renderer requires a licence, because their definition and interpretation in a renderer is patented, notably the hinting instructions for ClearType), but with some fonts you cannot get consistent results when showing glyphs at small sizes with good-looking shapes (and typographical "color", i.e. consistent weights and metrics) if you ignore these hints.
I suppose that PaintDotNet will not process these hinting instructions, but that should not prevent it from using such fonts to extract unhinted glyphs and build drawing shapes from them (for example to build Direct3D or OpenGL object templates, or simple 2D geometric transforms like rotated text in GDI+, which also ignores these hinting instructions when glyphs are rotated at arbitrary angles or when the text is scaled to sizes larger than what the OpenType renderer cache will keep in memory for prerendered glyphs). So can you be more specific about which font capabilities Paint.NET does not already support through direct calls to GDI+ or through the .Net graphics environment and libraries? Is it because of limitations in some version of the .Net CLR on some older versions of Windows?
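The sketch promised above: the standard exact degree elevation from a quadratic Bézier (P0, Q, P2) to a cubic, with C1 = P0 + 2/3·(Q − P0) and C2 = P2 + 2/3·(Q − P2). Point2D is a hypothetical helper struct:

    // Exact degree elevation: every quadratic Bezier is also a cubic.
    struct Point2D
    {
        public double X, Y;
        public Point2D(double x, double y) { X = x; Y = y; }
    }

    static void QuadraticToCubic(Point2D p0, Point2D q, Point2D p2,
                                 out Point2D c1, out Point2D c2)
    {
        // c1 = p0 + 2/3 * (q - p0);  c2 = p2 + 2/3 * (q - p2)
        c1 = new Point2D(p0.X + 2.0 * (q.X - p0.X) / 3.0,
                         p0.Y + 2.0 * (q.Y - p0.Y) / 3.0);
        c2 = new Point2D(p2.X + 2.0 * (q.X - p2.X) / 3.0,
                         p2.Y + 2.0 * (q.Y - p2.Y) / 3.0);
        // The cubic (p0, c1, c2, p2) traces exactly the same curve.
    }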
  11. If you close this thread, I will not post an antialiasing version of this fast renderer (I have one, but it is tuned differently, for filling polycurves, including various splines (Bézier curves of second and third degree, and NURBS), and including non-convex areas or areas with holes using the even/odd filling rule, as in SVG; it also supports 3D with orthographic or perspective projections, and it can fill with bitmap brushes; however it still does not support MIPMAPs, and it is NOT a full 3D engine, as it does not use z-buffers, does not compute hidden surfaces, and performs no animation, which is done elsewhere on top of the renderer, with much more complex code or using hardware vertex shaders to compute the geometry). [Initially my code was written in Java (porting it to C# or J# for DotNet is trivial), because both Java2D/Java3D and GDI/GDI+/DirectDraw/Direct3D lack some graphic primitives (for common shapes), and both lack sufficient precision (or have too limited numeric ranges) and cause problems when rendering, for example, very complex vector graphics (such as high-precision SVG geographic maps, or CAD models) or mathematical functions (too many approximations produce completely wrong images).]

Antialiasing is a separate problem: any non-antialiasing rendering engine can be used to create antialiased images. It is a basic transform, which can be optimized only to avoid unneeded work when the fill colors (or brushes) are plain; but when using complex brushes such as scaled bitmaps or computed shades, you have no choice but to fully render the image at a higher resolution, possibly working in small bands to reduce the memory used for the intermediate high-resolution image, and then rescale it to the final resolution (including the alpha plane). So you'll still need a very fast non-antialiasing renderer (as fast as possible) to render the intermediate image, which is MUCH bigger than your final display resolution (a good, precise antialiaser for imagery applications uses intermediate images that are 256 times bigger, i.e. 16 times in each dimension, for all color/alpha planes; popular antialiasers only support 4x4 or 8x8 subsampling, because they are too slow or too CPU-intensive to compute the intermediate image, precisely because their non-antialiasing engine is not fast enough, is underoptimized, or uses too much external memory).
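A minimal sketch of the downsampling step just described (hypothetical buffers; one color/alpha plane shown): render with the fast aliased renderer at n times the target resolution, then box-average each n×n cell into one output pixel, treating the alpha plane the same way.

    // Box-filter downsample of one plane rendered at n*n subsamples
    // per output pixel (illustrative; assumes hi dimensions are
    // exact multiples of n).
    static byte[,] Downsample(byte[,] hi, int n)
    {
        int w = hi.GetLength(0) / n, h = hi.GetLength(1) / n;
        var lo = new byte[w, h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                int sum = 0;
                for (int sy = 0; sy < n; sy++)       // average the n*n
                    for (int sx = 0; sx < n; sx++)   // subsamples
                        sum += hi[x * n + sx, y * n + sy];
                lo[x, y] = (byte)((sum + n * n / 2) / (n * n)); // rounded mean
            }
        return lo;
    }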
  12. This is not the subject of this thread. This is a basic tutorial giving some examples of what can be done when creating an extension. By itself it is not an extension, as it contains no controls for parameters. It is code to play with, which you can compile immediately yourself using the CodeLab plugin (which also does nothing by itself: you have to write your code). Almost nothing in this forum section is a ready-made plugin: you're in the wrong forum for that. If you don't know what CodeLab is or what the PaintDotNet API is, look elsewhere. There's absolutely nothing complicated here in understanding what it is about.

And yes, I need "superfast" renderers (and you need them too when you have to build complex images using many filters that can be combined with each other). But once again, go back to the initial message: it is not really about performance, but about "rectangles of interest" and why it is important not to draw outside of them. Many bugs in effects are caused by a lack of understanding of why they matter; the message shows what they are, then gives code that works correctly as expected, but that would not work with some minor modifications that are explained but that you could incorrectly assume to be safe. Then it proposes a basic (but generic) Renderer class that can be used to build shapes (using self-explanatory methods similar to the terminology used in SVG: you certainly know what SVG is, but you can't do anything with it without "programming" the shapes you want into an XML script, so you'll write the SVG code manually, or you'll use a visual editor to build complex things like geographic maps). But speed is not the only subject of this class: you also need precision in the result. This code is precise up to the half-pixel and tested in all the particular cases (within the range limits of 32-bit integers, much more than what you need in an actual image). It is followed by an example which is very simple but can be used to test it fully.

All this is meant to be compiled and experimented with immediately in the CodeLab plugin, where the effect runs instantly with the modifications you make in its editor. You don't need a separate DLL, and you don't even need a complete development kit. CodeLab is enough to compile this into a working DLL (but you don't need to build it as a permanent DLL: this is just code to play with). (And I don't like distributing executable DLLs, which are potentially unsafe; I prefer sources, for security, unless the author is clearly identifiable and does not want to be known as a malware author.) In fact almost all free effects should be distributed as CodeLab source scripts, and could be posted here as sources instead of DLLs hosted on random sites: I won't download and use most of these discussed (possibly bogus and malicious) DLLs.
  13. Wrong! This code does not use the FPU; everything is integers only, the multiplications and divisions are between integers and return integers only, and the .Net compiler already optimizes integer divisions and multiplications by constant powers of 2 into binary shifts. You won't gain a single CPU cycle here. So I used simple multiplications just to avoid obscuring the code in this basic tutorial, where there's absolutely no benefit in doing otherwise. This is .Net code, not the output of a non-optimizing basic C compiler.
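For the record, a small illustration (not from the thread) of why hand-replacing these operations with shifts buys nothing and can even change the semantics:

    using System;

    static void ShiftDemo(int x)
    {
        int product  = x * 8;   // the JIT emits a plain left shift (x << 3)
        int quotient = x / 8;   // rounds toward zero: for x = -9, gives -1
        int shifted  = x >> 3;  // rounds toward -infinity: for x = -9, gives -2
        // The JIT compiles x / 8 into a shift plus a small sign
        // adjustment, so writing x >> 3 by hand saves nothing and
        // changes the result for negative x.
        Console.WriteLine($"{product} {quotient} {shifted}");
    }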
  14. Neither I nor BoltBait used all those words. So "shut up" yourself too (why did you need to insult both of us?). I just wanted to show that there was room for improvement in the sample code posted by BoltBait, and he admits that; I have definitely not bashed him. (And in fact such code has another use in my own implementation, which computes many more complex geometries with really lots of lines and curves in the same scene.) In the background, at the lowest implementation level, where most of the CPU time is spent in an inner loop, you need such extra optimization (which will also be beneficial when implementing antialiasing on top of the basic renderer: this is why it is written in a separate class, for easier reusability), but of course you won't need it to draw a simple shape. He says that the code will not be useful without antialiasing. But antialiasing renderers are always built on top of a non-antialiasing renderer, which does not necessarily need any modification itself, because you can build another class using it. So the non-antialiasing renderer should be really fast, in order to build other bricks on top of it. And antialiasing was not the subject of this thread: the code was posted to demonstrate something else, with enough facilities to allow easy modifications.
  15. I agree that your line drawing code is much better than my thin line drawing code (MUCH faster). But, my point is... there is not much call to draw aliased anything (lines or text). So, any useful code must include antialiased drawing. It doesn't matter how fast your code is if no one's using it. Maybe I'll post a modified version of the code (simplified a lot) to render antialiased thin lines (using a wide surface for each 1-pixel-high scan line), but I definitely cannot post here the complete code that computes the geometry of filled polygons, or that optimizes the rendering time by subsampling only the pixels that really need to be subsampled.
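Not the code promised above, but as an illustration of the general idea, here is the standard Wu-style approach (Xiaolin Wu's algorithm, gentle-slope case only, integer endpoints; not verdy_p's renderer): the fractional part of the ideal y coordinate splits the coverage between the two pixels of each column, and that coverage is used as the alpha of the line color.

    using System;

    // Standard Wu-style antialiased thin line, gentle-slope case only
    // (assumes x0 != x1 and |dy| <= dx). `plot` receives a coverage
    // in [0,1], usable as the alpha of the line color.
    static void WuLineGentle(int x0, int y0, int x1, int y1,
                             Action<int, int, double> plot)
    {
        if (x1 < x0) { (x0, x1) = (x1, x0); (y0, y1) = (y1, y0); }
        double gradient = (double)(y1 - y0) / (x1 - x0);
        double y = y0;
        for (int x = x0; x <= x1; x++)
        {
            int yi = (int)Math.Floor(y);
            double frac = y - yi;          // distance past the upper row
            plot(x, yi,     1 - frac);     // coverage of the nearer row
            plot(x, yi + 1, frac);         // remainder goes to the next row
            y += gradient;
        }
    }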