Everything posted by MJW

  1. I mostly post the source for those who want to see how it works, or want to use the algorithms for something else, but you're certainly welcome to build it yourself. It's a little difficult, though, to look at the error printout and figure out what caused it, since it seems to build correctly for me. You'll have to provide a few more details on what you did, and where in the process of building you got the error. Also, what image and selection were active when you were in CodeLab? Perhaps that had something to do with it. Can you successfully run the pre-built version on the same image and selection?
  2. Build the code or run the code? I already built it, so building it shouldn't be necessary.
  3. So there would be a working version, I pretty much directly ported NormalMapPlus to CodeLab: NormalMapPlus.zip In order to understand the code better, I renamed some of the variables. I built the CodeLab version using the updated source. I may regret that if I made any mistakes during the renaming process. Beyond changing some variable names, the code is essentially unchanged. Let me know about any problems. The source code:
  4. Thank you, Prensa. That was a very helpful description. I'll have to think about what would be the best solution, though I have some ideas.
  5. Thanks, though I actually know what a normal map is. What I'd like to know from someone who uses the effect: 1) What's used as the effect input image? Is it a grayscale image where brightness represents height, or do the RGB values actually represent 3D XYZ values? I can't see the sense in the second possibility, but I may misunderstand something. 2) Why is there a lighting calculation? Normals are eventually used to do lighting, but I can't see the sense in "lighting" the image that's supposed to be the generated normals. I guess my question is: when computing a normal map, how is the lighting control used? In looking at the code, I see some things that don't strike me as quite kosher, which doesn't give me a warm, fuzzy feeling that everything the code does makes sense.
  6. What this plugin does appears to be reasonably simple, but I'm not certain exactly what it does. If those who use it could describe what it should do and what they use it for, I might be able to write a new plugin that does that. I can't really see what it's useful for. Is the lighting control actually used, or is it just set to (1, 0, 0) so the unaltered coordinates are returned? Is applying the transformation to the alpha channel used? EDIT: On second thought, maybe I'll just make an updated version that does the same thing, without worrying about the whats and wherefores. I don't think it would be difficult. Whatever is the problem with this plugin, I don't think it's related to the actual functionality. Still, if anyone wants to explain how the plugin is used, that might be helpful. EDIT 2: Let me explain what confuses me. Because the main operation appears to be differentiation, my impression is that the plugin converts some sort of height map into surface-normal vectors. So I'd assume the normal vectors would be returned, in integer form, in the RGB components. I'd also assume the height map would either be represented as a 24-bit number using RGB, a 32-bit number using ARGB, or some 8-bit or so representation using either one of the color components, or the sum of the color components, or something similar. However, the code seems to be treating each of the ARGB components separately as source (height?) values. Also, if the purpose is to compute normal vectors, what is the lighting for? Seemingly, there are things I misunderstand. My question is: in actual use, what is the source image, and what is the desired output?
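     [Editor's note: the height-map-to-normal interpretation described in EDIT 2 can be sketched as below. This is a minimal illustration of the general technique (central-difference differentiation, then normalization, then the usual RGB encoding), not the plugin's actual code; all names are hypothetical.]

     ```python
     # Sketch: derive surface normals from a grayscale height map.
     # height: 2D list of floats in [0, 1]; returns per-pixel unit normals.

     def height_to_normal(height, scale=1.0):
         h, w = len(height), len(height[0])
         normals = []
         for y in range(h):
             row = []
             for x in range(w):
                 # Central differences, clamped at the image borders.
                 dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
                 dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
                 # A normal perpendicular to the local slope vectors.
                 nx, ny, nz = -dx * scale, -dy * scale, 1.0
                 length = (nx * nx + ny * ny + nz * nz) ** 0.5
                 row.append((nx / length, ny / length, nz / length))
             normals.append(row)
         return normals

     def encode_rgb(n):
         # Map components from [-1, 1] into [0, 255], the common
         # normal-map encoding (flat surfaces become (128, 128, 255)).
         return tuple(round((c * 0.5 + 0.5) * 255) for c in n)
     ```

     A perfectly flat height map yields the straight-up normal (0, 0, 1), which encodes to the familiar light-blue (128, 128, 255) of a flat normal map; note there is no lighting step anywhere in this pipeline, which is the point of the question above.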
  7. Though I had in mind a more or less full outside view of a beverage in a glass, I like your entry, and see no reason to exclude it. I think for this particular OOTF topic, originality and variety are encouraged. While on the subject of entry acceptability, I want to mention something unrelated to your entry, but in regard to several entries from others in past contests. I don't want to be too much of a stick-in-the-mud (good luck with that!), but I'd like to remind everyone that the objective of OOTF is to produce a reasonably realistic version of the actual object -- not some stylized design based on the object, and not a different object that incorporates the theme object. That means no rainbow-hued LeRoy-Neiman-esque renditions of a shot of scotch, and no martini-glass lapel pins. As toe_head2001 said in his introductory comments: "The object is to be taken literally. It's not meant to be interpreted." The entries to which I allude were generally attractive and well executed; they just didn't fit within the constraints of the competition.
  8. Thank you, and congratulations to Pixey and MadJik. I wish I'd thought of adding dewdrops like on Pixey's entry. I really like the way MadJik (and Woody, too) handled the angled-view shadow. Thanks to toe_head2001 for hosting, and especially for updating my entry's link when Photobucket made the original image inaccessible.
  9. I think the glass should probably be appropriate to the beverage: a glass mug for beer; a wine glass for wine; a shot glass for whiskey; a tall fluted tumbler for Coke; etc. If you have a favorite beverage, that might be a good choice. Provided they are normally associated with the chosen beverage, ice cubes, drinking straws, swizzle sticks, and miniature umbrellas are allowed.
  10. It's my heavy burden (perhaps a slight overstatement) to select a new theme for the next round. While I think rendering a coin would be an interesting challenge, I see a problem. The main component of a coin is the head or tail image, and that seems like it might be a little too drawing-intensive for OOTF. I think the emphasis should be on using PDN, not on realistic sketching with a mouse (or for those that have one, a pen tablet ). So my inclination is to not choose a coin, but I could be convinced otherwise, and would be happy to hear arguments on either side. (I suppose we could do the inverse side of a Canadian coin; then we could recycle our maple leaves.)
  11. Thanks, and congratulations to Pixey, Dipstick, and to the other entrants. Thanks to Pixey for hosting! (Boo to Photobucket for causing problems with their ridiculous new terms-of-service.) I liked all the entries. I was particularly partial to Woody's somewhat moody sig. I'm sort of surprised there weren't a few more entries. I thought it was quite a fun theme that allowed for a variety of approaches. I see Canada Day is also in July. Must be the time of year for independence.
  12. As AndrewDavid suggests, that's a limitation of ScriptLab. As useful as it is, it can only save a sequence of effects applied to a single layer. You can't record the changes to multiple layers, and you can't record any operations besides Adjustments and Effects. For example, you can't save Tool operations, such as selections. No user-written plugin could do that. (It would certainly be nice if the user could highlight a sequence of History entries, and save them as a command!) (The fact that ScriptLab can do as much as it can surprises me. One of these days I'm going to have to study the code and figure out how it works.)
  13. You might want to try Red ochre's ClipDisplace plugin. Use the image (or perhaps an inverted version) in the clipboard as the displacement map, and apply it to the image.
  14. To expand on what Iron67 said, "weird" is not a technical description. Something more specific would be helpful.
  15. My guess is there's no remotely easy way to do what you want to do. I think it would help, though, if you showed a version with a number of the elements you want to modify. Seeing a single element, out of context, makes what you are trying to do unclear. Handling each element in relation to the surrounding region seems to me to be a major part of the problem. Also, are all the pixels in all the elements to be shifted the same direction? You say the pixels are shifted right, but is that actually so? Are they shifted right, or are they shifted right and down, in the direction the gradient changes? From the single-element image, I can't tell. Are the "elements" always gradients, as in the example, where the boundary is defined by the point at which they're white, or are they defined in some other fashion? Perhaps if you explained why you want to do this, the method to achieve it would be clearer.
  16. Being the traditionalist I am, I wish that the competition would continue to be called Sig of the Week, even if the entry period is normally extended to two weeks. In the past, the entry period has often been extended -- on occasion, up to three weeks, I believe -- without changing the name.
  17. What you propose strikes me as a less effective, far less flexible method of achieving what can be done quite easily with layers and edge feathering. I write software; I write plugins; but I don't have the time to write plugins that I don't think are needed.
  18. Using layers and gradients (or feathered edges) is probably the way to go. It isn't particularly time-consuming. If you aren't willing to spend the time to get a good result, you probably won't get a good result.
  19. Because light normally comes from above, in most cases beveling will look indented if the bright area is on the lower side and the dark area on the upper side, and raised if the bright area is on the upper side and the dark area on the lower side.
  20. This plugin is admittedly quite similar to TR's Custom Palette Matcher, but may be useful. It's in the Color menu. The Plugin: Recolor Using Palette.zip As the Help menu says: Recolor Using Palette replaces each pixel color with the nearest color from a specified range of the current palette. The Euclidean distance is used; this minimizes the sum of the squared differences of the red, green, and blue color components. The controls are:
      • Starting Palette Index: The number of the first palette color to use. Palette colors are numbered from 1 to 96.
      • Ending Palette Index: The number of the last palette color to use.
      • Ignore Opacity When Comparing Colors: When this option is enabled, only the red, green, and blue color components are compared; the alpha value is ignored. The original alpha is preserved in the replaced color. When this option is disabled, the image and palette colors are adjusted to account for their alpha values before being compared. The distance between colors is the distance between the colors when blended with the background color that maximizes the distance. The original alpha is replaced by the alpha of the nearest palette color.
      • Show Original Image: When enabled, the original, unmodified image is displayed.
      Instead of managing palettes like TR's effect, it relies on PDN's built-in palette handling, which I think is fairly easy to use. Recolor Using Palette allows a subrange of the current palette to be used. (Note the color-entry numbering is one-based, not zero-based.) One interesting (which is not necessarily the same as useful) feature is that it can take opacity into account when doing the comparison. When opacity is used in the comparison, the entire palette color, including alpha, replaces the image color; otherwise, the image alpha is preserved, and only the red, green, and blue are replaced.
      [Screenshots: the UI; then a demonstration using the default palette -- the original image, the result with Ignore Opacity When Comparing Colors enabled, and the result with it disabled.] (To be useful in the disabled mode, the palette would likely need to be better suited to the image than it is in this case.) Ignore Opacity When Comparing Colors is disabled by default.
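     [Editor's note: the nearest-color rule the Help text describes -- squared Euclidean distance over red, green, and blue, with an option to ignore alpha -- can be sketched as below. This is an illustration of the stated rule, not the plugin's source; the function and parameter names are hypothetical, and the disabled-opacity branch is simplified (the plugin's actual comparison blends against the worst-case background, which is more involved).]

     ```python
     # Sketch: replace a pixel with the nearest palette color by squared
     # Euclidean distance, per the plugin's Help description.

     def nearest_palette_color(pixel, palette, ignore_opacity=True):
         """pixel: (r, g, b, a); palette: list of (r, g, b, a) tuples.
         Returns the replacement color."""
         def dist_sq(c1, c2):
             return sum((a - b) ** 2 for a, b in zip(c1, c2))

         if ignore_opacity:
             # Compare RGB only; preserve the pixel's original alpha.
             best = min(palette, key=lambda p: dist_sq(p[:3], pixel[:3]))
             return best[:3] + (pixel[3],)
         # Simplified: compare full RGBA, and take the palette entry's
         # alpha along with its color.
         return min(palette, key=lambda p: dist_sq(p, pixel))
     ```

     For example, with a two-color palette of opaque red and opaque blue, a half-transparent reddish pixel maps to red either way, but only the ignore-opacity mode keeps the pixel's own alpha.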
  21. I'm not sure what exactly you want to do. It would help if you were more specific. A basic approach to doing something like that is to make the regions you want the gradient in transparent, then put the gradient you want in a layer below the original image layer, so the gradient shows through the transparent areas. You can then flatten the image to combine the layers. The best method for making the regions transparent depends on the image. The simplest is probably to use the Magic Wand. The Magic Wand will often produce a jaggy edge, so other methods may be better.
  22. Also, a Google image search will find quite a few compass insignias. There might be copyright issues with some, though a compass image is generic enough that I'm not sure a general one could be copyrighted.
  23. One method would be to use the Magic Wand with a low tolerance to select the background color, then erase it, and fill in the region with the gradient you want. You can extend a gradient by using the Rectangular Selection tool, then stretching the selection sideways with the Move Selected Pixels tool. However, if that's the best quality version of the compass image you have, you're probably wasting your time. I doubt it will ever look good against any sort of contrasting background. It's too pixelated. You'd be better off making a new compass, which shouldn't be too difficult.
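     [Editor's note: the first step above -- erase pixels near the background color so a gradient underneath can show through -- can be sketched as below. This is a hypothetical illustration of a global tolerance match, not the Magic Wand itself, which flood-fills from the clicked point rather than matching the whole image.]

     ```python
     # Sketch: make all pixels within a per-channel tolerance of the
     # background color fully transparent.

     def erase_background(pixels, bg_color, tolerance):
         """pixels: list of rows of (r, g, b, a) tuples.
         Returns a new image with near-background pixels cleared."""
         def close(c1, c2):
             return all(abs(a - b) <= tolerance
                        for a, b in zip(c1[:3], c2[:3]))

         return [[(0, 0, 0, 0) if close(p, bg_color) else p for p in row]
                 for row in pixels]
     ```

     With a low tolerance, only pixels very near the background color are cleared, which is why hard, pixelated edges (like the compass image discussed above) tend to leave a fringe behind.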