Fading Between Textures as a Function of Ray Length
March 22nd, 2010, 23:19
David Marks and Ruediger showed a couple of different ways of fading between two textures as a function of depth using the State Ray Length node. This is slightly different from the "Erosion" node posted in a previous thread, where fading between two textures occurs as a function of x and y position. I took some snaps, but they were so blurry they were useless. Could you post those networks again?
March 25th, 2010, 09:31
This sounds interesting. What do you plan on using this for?
Sounds much like what mip-mapping already does for you, though.
March 25th, 2010, 19:45
Mip-mapping, as I understand it, is an automated process based on preset distances that basically uses a lower-resolution version of your image when the surface is farther away. It might not be based on distance so much as on pixels in the image versus pixels on the screen; I don't know the details. Also automated is the filtering of a texture within Maya when you create a new file read node. I find that those kinds of automated maps/filters do nasty things to my images, especially where I am crossing UV borders. There are lots of workarounds; for example, I will frequently turn filters off and/or bake out pixels beyond UV borders. But what I really want is to be able to control the distances at which these transitions occur based on what I see, rather than on what the 3D program is programmed to do automatically.
My immediate application is a painted hair texture. I am working on a collaborative project where, for whatever reason, they do not want to render hair. Instead they would like a texture that looks like hair. I am a pretty good painter, so I can do that, i.e. paint a hair texture that looks great. HOWEVER, as soon as you zoom in on it, eeek, it looks terrible, definitely not like hair. So one solution is to use another, higher-resolution texture up close. But it doesn't have to be two images that are exchanged; it could be normal maps, spec maps, other effects, hairs on alpha planes. I have lots of ideas for fixing the problems that occur with zooming in.
I have been watching your posts, so kudos, and I know that you are close. It is just a matter of creating the controllers to switch the images when you want to, instead of when the 3D program wants to. I think there would be a math multiply node or two after ray length, with a controller on them, so that you can control the start and stop points. You also want to be able to control the steepness of the gradient. I have seen it done a couple of times, and in a couple of ways, and for whatever reason it isn't sticking.
Basically you want to create a mix map based on ray length (the node you have been playing with), and mix the textures based on that map. Ray lengths within a certain range are compressed to white, those beyond a certain length are compressed to black, and things in between are a gradient between white and black. Then that black/white map is pumped into a Math Color Mix node as the mix channel, with the two images in the A and B channels. I think I could even work out a four-channel mix node, or lines of channels for different layers of a shader. I can say it verbally, but my skills aren't yet good enough to make it happen.
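The compress-to-white/black mix map described above is essentially a smoothstep of the ray length. Here is a minimal Python sketch of the math only — the function and parameter names are mine for illustration, not Mental Mill node names:

```python
def smoothstep(edge0, edge1, x):
    """0 below edge0, 1 above edge1, smooth Hermite blend in between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def mix_by_ray_length(ray_length, fade_start, fade_end, near_color, far_color):
    """Blend two texture samples based on how far the ray traveled."""
    w = smoothstep(fade_start, fade_end, ray_length)  # the black/white mix map
    return tuple((1.0 - w) * a + w * b for a, b in zip(near_color, far_color))

# Rays shorter than fade_start get the near texture, rays longer than
# fade_end get the far texture, and rays in between get the gradient.
print(mix_by_ray_length(5.0, 10.0, 20.0, (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
# → (1.0, 0.0, 0.0), the pure near texture
```

Controlling the start/stop points and the steepness of the gradient then just means exposing fade_start and fade_end (and the gap between them) as parameters.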
It seems like a very powerful tool, one of the things I would use almost every day if I could get it going. I know exactly what I would want to do with it. For example, I would like to move some of the tasks of compositing into the shader realm, earlier in the pipeline so to speak. If I figured out the controls, I would want to compress the color range, decrease the contrast, and decrease the saturation with distance in a scene, except for places where I want the viewer to look, where I might do the same things to a lesser extent, or reverse those processes. I want the control that compositors normally have in my hands at the shader level. I am taking a Nuke class now, not so much because I want to be a compositor, but because I would eventually like to translate some of their techniques to the shader. I think it would be better done at the shader level because the key information, i.e. ray length, is already part of the shader.
I have been trying to get my school (AAU, San Francisco, California) to teach shaders using Mental Mill, but right now they are only offering a class in Houdini that dives into this area, taught by a great shader expert, Will Anielewicz, who I know KNOWS his stuff. Google him and see. I have tried to get him to teach a Mental Mill class or sponsor independent study on the topic, but he is too busy. So I might just have to take his Houdini class to learn how to do the things I want to do with a shader. I have been trying to figure out a way around the first Houdini class that comes before it, but there doesn't seem to be one. I have looked at the Houdini interface, which is also node based, but I really like Mental Mill's previews at each node, where I can see the effect of the node in-line. It makes visual sense to me. I have watched Ruediger and David Marks take my verbal descriptions and put them into action, and seen that instant visual cue as to whether what you just did was correct or not. I just need to get better at understanding and manipulating the nodes. It would probably help if I had a little more programming experience, but I know I could handle the math part, as I used to be a civil engineer. I am just missing some training.
I tried looking at a book by Kopra, which did help me understand the details of the different illumination models, but it didn't get me very far with Mental Mill itself. That book has yet to be written, I think.
In the meantime, I am trying to learn by using the forum, so I would appreciate any help on setting this up.
Last year I had the GDC freebie of Mental Mill, which only worked for a short period of time. I finally bought it a couple of weeks ago, so hopefully my learning process won't be so stunted this year.
March 26th, 2010, 09:02
That's a lengthy post :)
First question: what are you going to use this for in the end? Just for Maya previewing/presentations, or in an actual game/demo environment?
I'm quite certain that you can set your mip-maps manually, so you could actually insert a totally different image at different mip-map levels. You could use this to desaturate the lower you go, or for the other ideas you mentioned.
I'm not sure how you can tweak at what distance mip-maps switch levels, but I'm fairly certain you can. At least in a game engine where you have full control.
I did something similar with my Z-depth experiments in Maya using ray length. The annoying part was that you had to set a near distance and far distance in order to get it working. This would break as soon as you changed the camera distance. You could work out a way to connect a measuring node to the camera and probably work out a more dynamic way to do it. But is it worth all the hassle? Will you need it? Or are you just experimenting and want to learn something new?
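The near/far remap described above is just a linear normalization of the ray length — and the clamp is exactly what breaks when the camera moves outside the preset range. A Python sketch of that math (names are mine, not Maya node names):

```python
def normalized_depth(ray_length, near, far):
    """Remap a ray length into the 0..1 range between near and far distances."""
    t = (ray_length - near) / (far - near)
    # Clamping flattens everything outside near..far to pure 0 or 1,
    # which is why the result breaks when the camera distance changes.
    return min(max(t, 0.0), 1.0)

print(normalized_depth(15.0, 10.0, 20.0))  # halfway between near and far → 0.5
```

A more dynamic version, as the post suggests, would compute near/far from a distance measured off the camera each frame instead of hard-coding them.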
I think a few of the things you mentioned such as desaturating at distance is usually done as a post-process effect in an actual game. You would get Z-depth information for the entire frame render and use this as a sort of "mask" for effects such as desaturating, fog, etc.
March 27th, 2010, 04:21
I understand that most game engines already give you some of these options. I am not working in a game engine right now, but instead on animation/VFX, where I am rendering. I have played with Havok, but not other game engines, so I don't know their details.
I understand that the normal process in movies/VFX is to run a depth pass. I would like to see it in my viewer, and have those controls as sliders in my scene. I want to see it all, right now, and be able to control it.
In the long run, yes, I would like to learn these skills. I can think of a jillion applications, even better than what they currently do in game engines. I just have to build up my skills.
In the case of a hair texture, what works at one distance won't necessarily work at another.
I plan on trying to work on this tomorrow, so maybe I will get it going then. Been busy painting...
March 28th, 2010, 20:26
I have something going: the start of the Transition.
The State Ray Length node gives the range of ray lengths in the scene, which might vary quite a bit. I convert them to the 0 to 1 range with a Math Float Divide, which I have relabeled Scene Scale. That is fed into the end input of the Math Float Smoothstep.
The start of the Math Float Smoothstep is a Math Float Absolute, which I have re-labeled as Transition Distance. The Math Float Smoothstep is converted to color with a Conversion Floats to Color, and that becomes my mixmap.
The mixmap feeds into a Math Color Mix as the mix. The two Texture lookup 2d nodes are the Color 1 and Color 2 of the Math Color Mix, the two textures I am trying to mix. The resulting mix is going into a phong.
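The node chain described above (State Ray Length → divide by scene scale → smoothstep → color mix) can be sketched in Python. The functions stand in for the Mental Mill nodes; the numbers are made up for illustration:

```python
def float_divide(ray_length, scene_scale):
    """The relabeled 'Scene Scale' divide: normalize ray length to roughly 0..1."""
    return ray_length / scene_scale

def float_smoothstep(edge0, edge1, x):
    """Math Float Smoothstep: 0 below edge0, 1 above edge1, smooth in between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def color_mix(mix, color1, color2):
    """Math Color Mix: blend two RGB tuples by the mix value."""
    return tuple((1.0 - mix) * a + mix * b for a, b in zip(color1, color2))

# A ray 30 units long in a 100-unit scene, fading between 0.2 and 0.4
d = float_divide(30.0, 100.0)        # 0.3 of the scene scale
m = float_smoothstep(0.2, 0.4, d)    # about halfway through the fade
pixel = color_mix(m, (0.8, 0.7, 0.6), (0.2, 0.2, 0.2))  # near vs far texture sample
```

The result feeding the phong is the blended color; changing the smoothstep edges moves the transition, and widening the gap between them softens it.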
The transition is abrupt, so I have to figure out how to get it to fade. Possibly adding a ramp compressed to a value on either side of the transition?
March 29th, 2010, 01:52
RBurke showed me a better, much more efficient way, attached. So if you are trying to learn how to do it right, look at that one. He also suggested looking at his posts on terrain shaders for RGBA mix nodes and position/slope based shaders.
This setup doesn't make intuitive sense to me, which is probably why it didn't stick. My intuition, looking at the LERP (linear interpolation), would be to put in start and stop distances rather than start and stop files. But it works, and very efficiently. So I have incorporated RBurke's setup into a phenomenon, where the parameters make more sense to me, also attached. Within it there is a scene scale input parameter, which is the size of your scene, plus start fade and stop fade parameters in the same units as your scene. That way you should be able to import it into any scene and use it. There are also a couple of file read nodes, so it is easy to change the files out.
March 29th, 2010, 08:10
Are you setting a near/far distance with this solution or how do you manage this part?
March 29th, 2010, 12:12
The first image is sort of conceptual, what RBurke sent to me. It is what I needed to get started, so thanks Ryan.
The second image has the near/far distances set up within it, as well as the scene scale, all wrapped up in a phenomenon. I have redone that image and posted it again here; the new version has better labels and is more carefully arranged so you can see what I am doing.
The Math Float Divide node right after the State Ray Length normalizes the ray length values by the scene scale. So if your scene is 100 meters, you input a value of 100 m, and the values from the ray length node are all divided by 100. This normalizes the floats coming out of the ray length node to the 0 to 1 range, at least as long as you set the scene scale correctly for your scene.

Once you do that, the distances at which the fade starts and stops are the actual distances you want, in real scene units. You could measure with a measuring tool, or, more likely, futz with the values once you have imported the phenomenon into your scene until it looks the way you want. But the nice thing about it is that it is a real value that relates to your scene one to one. Say you wanted the fade to start at 10 meters and end at 20 meters inside a 100 meter scene: you would input 10 and 20 as the distances at which the fade starts and ends, and these are converted to fractions, 0.1 and 0.2, by dividing by the scene scale.
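The arithmetic in that paragraph, worked through as a short Python sketch (the 100 m scene and the 10 m / 20 m fade distances are the example values from above):

```python
scene_scale = 100.0   # the scene is 100 meters across
fade_start_m = 10.0   # fade begins 10 m from the camera
fade_end_m = 20.0     # fade is complete at 20 m

# Inside the phenomenon, everything is divided by the scene scale,
# so the real-world fade distances become fractions of the scene.
fade_start = fade_start_m / scene_scale   # 0.1
fade_end = fade_end_m / scene_scale       # 0.2

# A ray 15 m long normalizes to 0.15 — exactly mid-fade,
# so the smoothstep would return 0.5 there.
ray_norm = 15.0 / scene_scale

print(fade_start, fade_end, ray_norm)
```

The appeal of the setup is exactly this one-to-one mapping: the user enters distances in scene units, and the division hides the normalization.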
March 30th, 2010, 09:14
Ah I see.
I see how the world scale can be useful. I only used a near/far for my zdepth.
Good job. Now go make more cool effects with your depth info and share the results with us ;)