
MJW


Everything posted by MJW

  1. Most adorable kitten! And a very impressive job. But if you superimpose the lines over the image, they don't really line up that well in some places, so it's not really an accurate tracing. And perhaps I'm mistaken, but I doubt it was easy to do. It certainly wouldn't have been easy for me. The point is, if there were a cubic spline that only required adding nodes along the image path, such things would be quick and easy, not feats to be admired.
  2. Posted on wrong thread, so I moved it.
  3. You might try the HSV Eraser plugin I wrote. For an image with a highly contrasting background, like the PhotoShop example, it would probably work quite well. The plugin's "Portion of Non-Erased Color to Preserve" control is another way of handling the white fringing. When that value is set to 1, erased pixels are made transparent in relation to how nearly they match the Match Color. The Clone Stamp method used in the video seems a bit crude to me. The reason there are fringes is mostly due to the fact that there aren't just hair and non-hair pixels; there are pixels that are part hair and part background. Making them all-hair pixels results in a somewhat clumpy look to my eye. If you use the HSV Eraser, you can't generally just run it on the image, since there are probably interior foreground pixels that match the Match Color. One method I suggest is:
     - Add a new lowest layer, and set the entire layer to a contrasting color so you'll be able to tell what's erased.
     - Duplicate the image.
     - Make the middle layer invisible.
     - In the top layer, use the Magic Wand, the Eraser, and whatever else you want to erase the background and all the areas that might be either background or foreground. It's better to leave the foreground as intact as possible, but no great care is necessary. When in doubt, erase! The purpose of this layer is to replace foreground pixels that are removed by the HSV Eraser. It will be the topmost layer and visible while you use the HSV Eraser on the middle layer.
     - Make the middle layer visible again and make it the active layer.
     - Select the background color you want to remove as the Primary Color with the Color Picker.
     - Run Adjustments>HSV Eraser.
     - Set "Portion of Non-Erased Color to Preserve" to 1.
     - Set the Match Color source to User Match Color. Press the User Match Color reset (the arrow button) to set it to the Primary Color.
     - Adjust the tolerances and perhaps the Match Color. Sometimes it's better to move the Match Color further away from the hair color, even though it may no longer quite match the background. The Match Color should not be adjusted to be too far from the background color, however, as the method of preserving the non-erased color depends on knowing the background color mixed into the pixel. In the example below, I set the Match Color to white even though the background is off-white. Hopefully, you'll find values that will erase the background but leave the hair.
     - When satisfied, merge the top two layers.
     Also, as general advice, keep in mind you only need to worry about the pixels near the edges of the foreground and background. Pixels away from the edges, clearly within the foreground or background areas, are easy to fix up. Here is one of the pictures you posted, against a green background:
  4. Bring the image into PDN in its original size. Create a new canvas the size you want with File>New. Go back to the original image, and use Edit>Select All, then Edit>Copy. Go to the new canvas, and use Edit>Paste. A menu will appear, warning you the image is bigger than the canvas. Choose the "Keep canvas size" option (which is the middle option). The image will appear on the canvas, with a selection rectangle showing its extent. You will be able to move the image around.
  5. I wasn't criticizing it; I was pointing out it's not a counterexample to my assertion that the current version of ShapeMaker doesn't provide a convenient method of producing complex shapes. That it was intentionally simple doesn't change that.
  6. Most anything can be done with enough persistence. People make models of the Eiffel Tower out of toothpicks. The apple's nice, but those are all rather simple shapes. Shapes that basic should be so easy to produce that they'd hardly be worth sharing. It's like using an Etch-a-Sketch: you can draw some pretty pictures, but it'd be a lot easier to use a pencil.
  7. There may be, in a theoretical sense, overlap, because, as I've mentioned, they're all piece-wise cubic equations. From a practical point of view, there's no overlap that I can see between what I propose be added to ShapeMaker, and what's already there. The most basic requirement to easily make good shapes is the ability to conveniently draw a smooth curve that follows a particular path. How can this be done in ShapeMaker? I don't see a way. Here's an example. Draw a smooth curve or load an image with a smooth curve, then run ShapeMaker and try to trace along the curve. At best, it's very difficult. With cubic splines, it could be done by simply clicking points along the curve, and depending on the end conditions, adjusting the tangents at the endpoints.
  8. RE-EDIT: Now I understand the "not-a-knot" condition (which I had never given any thought to before yesterday). It has nothing to do with splines in which the starting and ending point are joined. Those are periodic end conditions. In not-a-knot splines, the cubic function between the second and third nodes is the same as the one between the first and second nodes. Likewise, for the other end. So the second and next-to-last nodes are not knots.
  9. Excuse my bluntness, but I don't see how the smooth quadratic Bezier as currently implemented in the ShapeMaker is even useful. It seems to draw a parabola between the current two points, based on them and the previous point, resulting in a wildly oscillating curve. That's a good link, though it also uses free-end conditions for the extra two conditions.
  10. I don't think they're different at all. They're just piecewise cubic curves. Give me a cubic equation defined on [0,1], and I can give you the control points that would produce it; likewise, give me the four control points, and I can give you the cubic equation they generate. EDIT: I believe:

     B(t) = (1 - t)^3 P0 + 3 (1 - t)^2 t P1 + 3 (1 - t) t^2 P2 + t^3 P3
     B(t) = P0 + 3 (P1 - P0) t + 3 (P2 - 2 P1 + P0) t^2 + (P3 - 3 P2 + 3 P1 - P0) t^3

     The first is from Wikipedia. I'm less certain of the second, which I derived quickly from the first. ANOTHER EDIT: I believe, if B(t) = C0 + C1 t + C2 t^2 + C3 t^3, then:

     P0 = C0
     P1 = 1/3 C1 + C0
     P2 = 1/3 C2 + 2/3 C1 + C0
     P3 = C3 + C2 + C1 + C0

     I can't guarantee I made no mistakes, but no matter what, it can be done. YET ANOTHER EDIT: A C2 cubic spline is nothing more than a smooth cubic Bezier curve with the non-endpoint control points selected to achieve C2 continuity. STILL ANOTHER EDIT: This article may help explain it: Smooth Bézier Spline Through Prescribed Points. I do have a slight beef, however. The article uses the additional conditions B0''(0) = 0 and Bn-1''(1) = 0. Those are referred to as "natural" or "free" end conditions, because that's what would occur, more or less, with a mechanical spline if the endpoints were left alone and not forced into any particular slopes. It allows the curve to straighten out near the ends. Gerald Farin, in Curves and Surfaces for Computer Aided Geometric Design, says that results in poorly shaped curves (and I believe I've read that in other places as well). The better choice is probably to let the user determine tangents at the endpoints, or determine them by Bessel end conditions, which means using the tangents of the parabolas that pass through the first three and last three points. There are also quadratic end conditions, where the first two second derivatives are equal, and the last two second derivatives are equal.
ONE MORE EDIT: Removed misleading not-a-knot comment, which I correct in a later comment.
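The control-point/coefficient conversions in the post above are easy to sanity-check. Here's a minimal Python sketch (the helper names bezier_to_poly and poly_to_bezier are my own, not from any ShapeMaker code), treating one coordinate at a time:

```python
# Round-trip check of the Bezier <-> polynomial-coefficient relations above.
# Works on a single coordinate; apply to x and y separately.

def bezier_to_poly(p0, p1, p2, p3):
    """Return (c0, c1, c2, c3) with B(t) = c0 + c1*t + c2*t^2 + c3*t^3."""
    return (p0,
            3 * (p1 - p0),
            3 * (p2 - 2 * p1 + p0),
            p3 - 3 * p2 + 3 * p1 - p0)

def poly_to_bezier(c0, c1, c2, c3):
    """Return the Bezier control points (p0, p1, p2, p3) of the same cubic."""
    return (c0,
            c1 / 3 + c0,
            c2 / 3 + 2 * c1 / 3 + c0,
            c3 + c2 + c1 + c0)

# Arbitrary control points survive a round trip through coefficient form.
pts = (1.0, 4.0, -2.0, 5.0)
assert bezier_to_poly(*pts) == (1.0, 9.0, -27.0, 22.0)
assert poly_to_bezier(*bezier_to_poly(*pts)) == pts
```

So, as the post says, the two representations carry exactly the same information.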
  11. I assumed that the spline segments would be converted to the smooth cubic Bezier format once the user indicates the operation is complete. The representation as a C2 cubic spline is only necessary while the control points are being moved around. The nodes have to be dealt with as a group as long as they're being edited, because changing any of them changes the others. Once editing is complete, they're just a sequence of cubic Bezier curves which, due to C1 continuity, necessarily share control points at the nodes. The C2 continuity is a consequence of automatically generating the control points, rather than having the user specify them.
  12. Has any consideration been given to adding cubic splines with second-order continuity at the nodes, and control of the derivatives at the endpoints? I realize that requires solving a set of linear equations for every change, but it's a tridiagonal system, so unless there are a very large number of points in the curve, I'd expect it could be easily accomplished with good performance.
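The tridiagonal solve mentioned above is the standard Thomas algorithm, which runs in O(n) per update. A minimal Python sketch (the function name is my own, and this is an illustration of why the solve is cheap, not any actual ShapeMaker code):

```python
# Thomas algorithm: O(n) solve of a tridiagonal system A x = d, where
# a = sub-diagonal (a[0] unused), b = main diagonal, c = super-diagonal
# (c[-1] unused). Assumes the system is well-conditioned (no pivoting).
def solve_tridiagonal(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8] has solution x = [1,2,3].
x = solve_tridiagonal([0.0, 1.0, 1.0], [2.0, 2.0, 2.0],
                      [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
assert all(abs(v - e) < 1e-12 for v, e in zip(x, [1.0, 2.0, 3.0]))
```

Two linear passes over the nodes, so even curves with hundreds of points should update interactively.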
  13. HSV is designed with an additive color model in mind, because computer images use additive colors. If you want to think of HSV in terms of mixing Black, White, and Colored paint (with White + Black + Colored = 1 unit of paint), then, assuming S and V range from zero to one:

     Black = 1 - V
     White = (1 - S) * V
     Colored = S * V

     (H is the color of the colored paint.) Computing HSV from colored paint:

     V = 1 - Black = White + Colored
     S = Colored / (White + Colored)

     If you want to think of it as mixing colored paint with gray paint:

     Colored = S * V
     Gray = 1 - S * V
     Darkness of gray paint (0 for white, 1 for black): (1 - V) / (1 - S * V)
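The paint-mixing formulas above translate directly into code. A small Python sketch (hsv_to_paint and paint_to_hsv are made-up names for illustration, not PDN APIs):

```python
# The paint-mixing view of HSV: S and V in [0, 1] split one unit of paint
# into black, white, and colored portions, and back again.

def hsv_to_paint(s, v):
    black = 1.0 - v
    white = (1.0 - s) * v
    colored = s * v
    return black, white, colored          # always sums to 1 unit of paint

def paint_to_hsv(black, white, colored):
    v = white + colored                   # equivalently 1 - black
    s = colored / (white + colored) if (white + colored) else 0.0
    return s, v

black, white, colored = hsv_to_paint(0.25, 0.8)
assert abs(black + white + colored - 1.0) < 1e-12
s, v = paint_to_hsv(black, white, colored)
assert abs(s - 0.25) < 1e-12 and abs(v - 0.8) < 1e-12
```

(The guard on S handles pure black, where the amount of non-black paint is zero and S is undefined.)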
  14. To my surprise, IndirectIU clamps the value to the range when run with a smaller image. That's good news. When run with repeat-effect on a smaller window, it appears to use both the old value and range (which is sort of what I expected). That should not be a problem.
  15. I'm concerned about something. Suppose the plugin is used on a 1000x1000 window, and the control is set to 800. Then, in the same PDN session, the next time it's used on a 500x500 window. The control value is usually the same as the previous setting, so since there seems to be no way to programmatically reset the current control value, unless IndirectIU clamps it itself, it would seem like the value would be out of range. Past experience makes me think IndirectIU might throw an exception.
  16. Thank you, ArgusMagnus and BoltBait! I looked at the CodeLab generated code, but couldn't see how to get the surface size from the arguments passed to the property routines. I should have thought of using EnvironmentParameters (though I wouldn't have been sure the surface size was valid at the time OnCreateConfigUI and OnCreatePropertyCollection are called). I'll probably give it a try today.
  17. Is there some rule or whatever to make the range of a control 0 to width-1 or height-1? I was thinking of writing some plugins in VS instead of CodeLab to achieve that result, then realized I didn't even know whether it could be done. Having arbitrary, and necessarily large, ranges for controls whose only valid values are within the canvas lacks polish. While I'm at it, is there any organized documentation of IndirectUI? I only know that controls can be linked and disabled because Simon Brown started a thread about it.
  18. There's no need for the if statement; in fact, you certainly don't want it. You can read pixels from anywhere within the src buffer. If you couldn't, you wouldn't be able to write the plugin. Not allowing reads outside the rect boundaries would mean most of the reads you need to do wouldn't be done. The restriction is on writing to the dst buffer: in the Render routine, you can never write to the dst buffer outside the rect limits (and you can never write to the src buffer). src and dst are the canvas; rect is not the canvas. It's a rather arbitrary region, called an ROI, within the canvas that PDN has handed to a particular invocation of Render for processing. Read BoltBait's excellent tutorial, which he linked to earlier. A change you should probably make is to replace "dst[x,y] = src[XX,YY]" with "dst[x,y] = src.GetBilinearSample((float)X, (float)Y)". Instead of taking a point sample from the nearest pixel, GetBilinearSample linearly interpolates between the four surrounding pixels. For situations like in this plugin, that almost always produces a better result. EDIT: Notice it's "src.GetBilinearSample((float)X, (float)Y)" with the doubles X and Y, not "src.GetBilinearSample((float)XX, (float)YY)" with the ints XX and YY. There's hardly ever a good reason to call GetBilinearSample with arguments that have been truncated or rounded to integer values. Also, to answer your question: No, a plugin can't change the canvas size.
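For anyone curious what GetBilinearSample does internally, here's a rough Python sketch of bilinear sampling on a grayscale image (an illustration of the technique, not PDN's actual implementation; edge handling is simplified to clamping, and coordinates are assumed non-negative):

```python
# Bilinear sampling: interpolate between the four pixels surrounding a
# fractional coordinate. img is a 2D list of gray values, img[row][col].
def bilinear_sample(img, x, y):
    x0, y0 = int(x), int(y)                   # top-left of the 2x2 cell
    x1 = min(x0 + 1, len(img[0]) - 1)         # clamp at the right edge
    y1 = min(y0 + 1, len(img) - 1)            # clamp at the bottom edge
    fx, fy = x - x0, y - y0                   # fractional offsets in [0, 1)
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Sampling dead center of a 2x2 image averages all four pixels.
img = [[0.0, 10.0],
       [20.0, 30.0]]
assert abs(bilinear_sample(img, 0.5, 0.5) - 15.0) < 1e-12
```

A point sample would instead snap to one of the four pixels, which is what causes the blocky artifacts mentioned above.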
  19. Notice that the version I suggested (mapping square to quadrilateral) is correct for what is wanted. When rescaled to account for the canvas dimensions, it takes each point in the rectangular window and transforms it to the quadrilateral to determine what color to map to the window coordinate.
  20. I admit, I didn't consider whether it would work within CodeLab. I'd be happy to live with, and work around, that limitation, but I'm not sure if it would make the solution a little too messy to be released. EDIT: I realize this is making it more and more complex, but might it be possible that when CodeLab initiates running the plugin, it could call the InitializeRender function first? I can't recall how that stuff works (and perhaps never knew).
  21. I didn't use diagrams; I just solved the equations. First, I knew that it would be a homogeneous matrix, since that's a well-known fact, so I just had to find the matrix. I assumed a solution of the form:

     / Mx * x1 - x0   My * x3 - x0   x0 \
     | Mx * y1 - y0   My * y3 - y0   y0 |
     \ Mx - 1         My - 1         1  /

     That may not be an obvious choice at first glance, but I chose it because you can quickly verify that it gives the correct results for (0,0)->(x0,y0), (1,0)->(x1,y1), and (0,1)->(x3,y3). So I just need to find Mx, My so that (1,1)->(x2,y2). I need:

     (Mx * x1 + My * x3 - x0) / (Mx + My - 1) = x2
     (Mx * y1 + My * y3 - y0) / (Mx + My - 1) = y2

     Which is:

     Mx * (x1 - x2) + My * (x3 - x2) = x0 - x2
     Mx * (y1 - y2) + My * (y3 - y2) = y0 - y2

     Those are just simultaneous equations that can be solved by Cramer's Rule. EDIT: Note I didn't solve the same problem (sort of) solved in the PDF file. That was a quadrilateral to another quadrilateral. I just solved a unit square to a quadrilateral. The quadrilateral-to-quadrilateral mapping could be solved by finding the matrix that maps a quadrilateral to a unit square; then the solution is simply to map the first quadrilateral to a unit square, then map the unit square to the second quadrilateral. The quadrilateral-to-unit-square matrix can be found by inverting the square-to-quadrilateral matrix. Maybe I'll add a comment later explaining how that can be done fairly easily.
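The solution above is straightforward to implement. A Python sketch (function names are my own) that solves for Mx and My by Cramer's Rule, builds the homogeneous matrix, and checks that all four corners land where they should:

```python
# Map the unit square to a quadrilateral, corner order:
# (0,0)->q0, (1,0)->q1, (1,1)->q2, (0,1)->q3.
def square_to_quad(q0, q1, q2, q3):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = q0, q1, q2, q3
    # Cramer's Rule on:
    #   Mx*(x1-x2) + My*(x3-x2) = x0-x2
    #   Mx*(y1-y2) + My*(y3-y2) = y0-y2
    det = (x1 - x2) * (y3 - y2) - (x3 - x2) * (y1 - y2)
    mx = ((x0 - x2) * (y3 - y2) - (x3 - x2) * (y0 - y2)) / det
    my = ((x1 - x2) * (y0 - y2) - (x0 - x2) * (y1 - y2)) / det
    return [[mx * x1 - x0, my * x3 - x0, x0],
            [mx * y1 - y0, my * y3 - y0, y0],
            [mx - 1.0,     my - 1.0,     1.0]]

def transform(m, x, y):
    # Homogeneous transform, then divide through by the third component.
    xp = m[0][0] * x + m[0][1] * y + m[0][2]
    yp = m[1][0] * x + m[1][1] * y + m[1][2]
    wp = m[2][0] * x + m[2][1] * y + m[2][2]
    return xp / wp, yp / wp

quad = [(0.0, 0.0), (4.0, 0.0), (3.0, 3.0), (0.0, 4.0)]
m = square_to_quad(*quad)
for (u, v), (ex, ey) in zip([(0, 0), (1, 0), (1, 1), (0, 1)], quad):
    px, py = transform(m, u, v)
    assert abs(px - ex) < 1e-9 and abs(py - ey) < 1e-9
```

The determinant is zero only for degenerate quadrilaterals (e.g. three collinear corners), which have no valid perspective mapping anyway.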
  22. The perspective matrix that maps {(0,0), (1,0), (1,1), (0,1)} to {(x0,y0), (x1,y1), (x2,y2), (x3,y3)} is:

     / x' \   / T230 * x1 - T123 * x0   T012 * x3 - T123 * x0   T123 * x0 \ / x \
     | y' | = | T230 * y1 - T123 * y0   T012 * y3 - T123 * y0   T123 * y0 | | y |
     \ w' /   \ T230 - T123             T012 - T123             T123      / \ 1 /

     where

            | x2-x3  x0-x3 |        | x1-x2  x3-x2 |        | x0-x1  x2-x1 |
     T230 = |              | T123 = |              | T012 = |              |
            | y2-y3  y0-y3 |        | y1-y2  y3-y2 |        | y0-y1  y2-y1 |

     (x'', y'') = (x'/w', y'/w')

     That is:

     x' = (T230 * x1 - T123 * x0) * x + (T012 * x3 - T123 * x0) * y + T123 * x0
     y' = (T230 * y1 - T123 * y0) * x + (T012 * y3 - T123 * y0) * y + T123 * y0
     w' = (T230 - T123) * x + (T012 - T123) * y + T123

     (The determinants can be written quite a few different ways.) These are what are called homogeneous coordinates. The coordinates to be transformed are assumed to have a 1 as the third component. Once the coordinates are transformed, the x and y values are divided by the third component to complete the transformation.
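The closed-form version above also translates directly into code. A Python sketch (names are mine) that builds the matrix from the three determinants and verifies the corner mapping:

```python
# Perspective matrix from the determinant formulas: maps
# (0,0)->q0, (1,0)->q1, (1,1)->q2, (0,1)->q3.

def det2(ax, ay, bx, by):
    """2x2 determinant; (ax, ay) and (bx, by) are the two columns."""
    return ax * by - ay * bx

def perspective_matrix(q0, q1, q2, q3):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = q0, q1, q2, q3
    t230 = det2(x2 - x3, y2 - y3, x0 - x3, y0 - y3)
    t123 = det2(x1 - x2, y1 - y2, x3 - x2, y3 - y2)
    t012 = det2(x0 - x1, y0 - y1, x2 - x1, y2 - y1)
    return [[t230 * x1 - t123 * x0, t012 * x3 - t123 * x0, t123 * x0],
            [t230 * y1 - t123 * y0, t012 * y3 - t123 * y0, t123 * y0],
            [t230 - t123,           t012 - t123,           t123]]

def transform(m, x, y):
    # Homogeneous transform followed by the divide by w'.
    xp = m[0][0] * x + m[0][1] * y + m[0][2]
    yp = m[1][0] * x + m[1][1] * y + m[1][2]
    wp = m[2][0] * x + m[2][1] * y + m[2][2]
    return xp / wp, yp / wp

quad = [(1.0, 1.0), (5.0, 1.0), (4.0, 4.0), (1.0, 5.0)]
m = perspective_matrix(*quad)
for (u, v), (ex, ey) in zip([(0, 0), (1, 0), (1, 1), (0, 1)], quad):
    px, py = transform(m, u, v)
    assert abs(px - ex) < 1e-9 and abs(py - ey) < 1e-9
```

Note the matrix is only determined up to a scale factor, which the division by w' cancels; that's why the determinants can be written in several equivalent ways.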
  23. I'm working on some other things at the moment, but when I get a chance (assuming no one beats me to it), I'll try to write a plugin to help with this problem.
  24. For this situation, fractional pixels are possible. A plugin that performed the inverse perspective transformation needed would probably use a function (or method, if you prefer) called "GetBilinearSample" which allows fractional pixel values. The color is interpolated between the actual pixels in the source buffer. This results in some degree of blurring, but for most images it would be very acceptable.
  25. I noticed that the Fill8 flood fill routine still uses the old, ugly method of pushing phony spans at the beginning, which doesn't even always work right if the seed pixel isn't filled. I never used the Fill8 routine much, so I guess I never got around to fixing it. Darn! Now I've got to figure out how the silly thing works again.