
Thread: Point-based shaders

  1. #71
    Join Date
    Dec 2005
    Location
    Chicago
    Posts
    2,397

    You should be able to re-use the data between each eye in stereo without recalculation.

    FG right now is done from a center camera and computed once for both eyes, but point-based should prove more stable over an animation if implemented correctly.

    SSS, etc., could also be done once per stereo pair.
    "Don't let anyone drive you crazy when you know it's in walking distance."

    "Don't argue with an idiot, they will drag you down to their level and beat you over the head with experience."

    http://elementalray.wordpress.com/

  2. #72
    Join Date
    Oct 2007
    Location
    New York
    Posts
    381

    hi max,

    with the point baking, is it computing the whole surface or just what is seen through the camera? if the object was close up in the first frame and then turned and moved away from the camera, would this require an animated bake of the point cloud?

    or can you just bake the point cloud once, in one frame, from any camera, and have it be perfectly reusable for any camera angle/object position?

    cheers,

    rich

  3. #73
    Join Date
    Nov 2008
    Posts
    233

    hi richard,

    that depends on what you're baking and, more generally, on how the point-based workflow manages your maps. for example, if you bake direct illumination with shadows, you of course need to re-bake anything involved in the animation. however, there are certain cases where you may need to re-bake only the point cloud for the animated objects. think about IBL: there's no direct light and each map is independent from the others, so you may end up re-baking only the animated stuff. the GI shader node can look up multiple maps on the fly and assemble from them a whole new point cloud, plus the data structures to support it.

    max
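for illustration, the "assemble from multiple maps" step could look like this minimal C++ sketch (struct and function names are made up, not the GI node's actual API): a static cloud baked once is merged with a per-frame cloud covering only the animated objects.

```cpp
#include <vector>

// Hypothetical minimal baked point sample (illustrative layout).
struct PointSample {
    float pos[3];
    float radius;
    float irradiance[3];
};

// Merge a bake-once cloud for the static geometry with a per-frame cloud
// for the animated objects, as a GI node reading multiple maps might do.
std::vector<PointSample> merge_clouds(const std::vector<PointSample>& static_cloud,
                                      const std::vector<PointSample>& animated_cloud)
{
    std::vector<PointSample> merged;
    merged.reserve(static_cloud.size() + animated_cloud.size());
    merged.insert(merged.end(), static_cloud.begin(), static_cloud.end());
    merged.insert(merged.end(), animated_cloud.begin(), animated_cloud.end());
    return merged;
}
```

only the animated cloud changes per frame; the static map is read once and reused.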

  4. #74
    Join Date
    Nov 2008
    Posts
    233

    here, as mentioned above, I'm testing mray FG for the first bounce and point-based light gathering for the second and final one, which may be a very interesting approach for this kind of scene. it seems simple, but both FG and the point-based stuff may have problems rendering it easily: the former because of the HDR range, the latter because of the 'directional' lighting. so I use them both.

    the thing is that FG is a two-step approach, formally called lazy irradiance evaluation: an initial 'lazy' step where irradiance samples are placed onto the scene (here at a 1/128 vertex ratio); then, for each point on a surface, we evaluate irradiance: if the cache (the lazy samples) contains valid entries, interpolate between them; if not, compute a new irradiance entry. the main problems are that we get back a blotchy solution, or an un-detailed one, or something that takes forever to render. so I use the irradiance caching to store a first rough light bounce and point-based light gathering to smooth it out, where I can keep the cube raster at low resolution (for a faster render) since I have a more evenly lit scene. the final result time (3mins 45secs) has both FG and point-based evaluated. in the same time, a 2-bounce FG would still have very poor quality.
    Attached Images
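the interpolate-or-sample logic above can be sketched roughly like this (illustrative C++; the record layout, falloff weight, and validity radius are assumptions, not mental ray's internals):

```cpp
#include <cmath>
#include <vector>

// Illustrative lazy irradiance cache (names and layout are made up).
struct IrradianceRecord {
    float pos[3];
    float irradiance;   // scalar for simplicity; real caches store RGB
    float valid_radius; // radius within which this record may be reused
};

struct IrradianceCache {
    std::vector<IrradianceRecord> records;

    // Lazy evaluation: interpolate from cached records that cover this
    // point; otherwise compute a new record and store it for later reuse.
    float lookup(const float p[3]) {
        float sum = 0.0f, wsum = 0.0f;
        for (const IrradianceRecord& r : records) {
            float dx = p[0]-r.pos[0], dy = p[1]-r.pos[1], dz = p[2]-r.pos[2];
            float d = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (d < r.valid_radius) {
                float w = 1.0f - d / r.valid_radius; // simple falloff weight
                sum += w * r.irradiance;
                wsum += w;
            }
        }
        if (wsum > 0.0f)
            return sum / wsum;                       // cache hit: interpolate
        IrradianceRecord nr = { {p[0], p[1], p[2]},
                                compute_irradiance(p), 1.0f };
        records.push_back(nr);                       // cache miss: new record
        return nr.irradiance;
    }

    // Stand-in for the expensive hemisphere-sampling step.
    static float compute_irradiance(const float p[3]) {
        return 0.5f + 0.1f * p[1]; // dummy value for the sketch
    }
};
```

a real cache also checks normals and error gradients before reusing a record; here a plain distance test stands in for all of that.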

  5. #75
    Join Date
    Nov 2008
    Posts
    233

    here is the 2-bounce mray FG for comparison, under the same amount of time (480 rays, 2 for density and 12 for interpolation). looking at the results, I think we're an order of magnitude faster than plain mray FG, as it would require brute force to get to the same quality (if anyone has time to test that, I can pass you the scene).
    Attached Images
    Last edited by maxt; March 2nd, 2012 at 03:07.

  6. #76
    Join Date
    Nov 2008
    Posts
    233

    here instead is the same technique as above (mray FG + point-based FG), where I increase the cube raster to 24-pixel resolution and the solid-angle tolerance to 0.05 to get rid of some 'streaks' you may have noticed on the wall under the left soldier's arm.
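the solid-angle tolerance acts roughly like this (an illustrative C++ sketch under my own assumptions, not the shader: a real PBGI gather rasterizes contributions onto cube faces and refines via an octree; here we only accumulate small contributors and flag large ones for refinement):

```cpp
#include <cmath>
#include <vector>

const float kPi = 3.14159265f;

// One splat-able point from the baked cloud (illustrative layout).
struct GatherPoint { float pos[3]; float radius; float energy; };

// Solid angle subtended by a disc of radius r seen from distance d
// (small-disc approximation: area / d^2).
inline float disc_solid_angle(float r, float d) {
    return kPi * r * r / (d * d);
}

// Accumulate energy from points whose solid angle is below the tolerance;
// larger (closer) contributors would trigger octree descent in a real
// gather, which this sketch only counts.
float gather(const float recv[3], const std::vector<GatherPoint>& cloud,
             float solid_angle_tol, int* needs_refine)
{
    float total = 0.0f;
    *needs_refine = 0;
    for (const GatherPoint& gp : cloud) {
        float dx = gp.pos[0]-recv[0], dy = gp.pos[1]-recv[1], dz = gp.pos[2]-recv[2];
        float d = std::sqrt(dx*dx + dy*dy + dz*dz);
        float sa = disc_solid_angle(gp.radius, d);
        if (sa <= solid_angle_tol)
            total += gp.energy * sa; // cheap splat onto the cube raster
        else
            ++*needs_refine;         // too large on screen: refine instead
    }
    return total;
}
```

raising the tolerance means more points get splatted directly (faster, blurrier); lowering it forces more refinement, which is what removes the streaks at extra cost.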

    just take care that this is still not SSE-enhanced, as I still have to integrate the full SSE code for mental ray (i.e. SSE currently works only for the pre-baked standalone pbgi), and I'm too tired to start right now.. that's why I keep rendering queer stuff .

    for simple scenes like this one, with a point cloud of around 5M points (edit: of course, being almost at pixel resolution, that amount of points would also suffice for displacement or some added complexity), I'm generally able to render at 1K resolution. in-core memory consumption is around 500MB for the whole point-based stuff. cheers.
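a quick back-of-envelope on those numbers: 500MB over 5M points is about 100 bytes per point, enough for a plausible (entirely hypothetical) baked-point payload with room left over for the octree and cube rasters.

```cpp
#include <cstddef>

// Hypothetical per-point payload; the real layout is not published here.
struct BakedPoint {
    float pos[3];        // 12 bytes
    float normal[3];     // 12 bytes
    float radius;        //  4 bytes
    float irradiance[3]; // 12 bytes
};

// Budget implied by the figures in the post above.
const std::size_t kPointCount    = 5u * 1000u * 1000u;        // ~5M points
const std::size_t kBudgetBytes   = 500u * 1024u * 1024u;      // ~500MB in-core
const std::size_t kBytesPerPoint = kBudgetBytes / kPointCount; // ~104 bytes
```

so the raw samples account for well under half the budget; the rest plausibly goes to the acceleration structures.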
    Attached Images
    Last edited by maxt; March 2nd, 2012 at 05:52.

  7. #77
    Join Date
    Apr 2009
    Posts
    310

    looks great!
    i like the detailed shadowing of the solution. compared to FG it's way faster.
    it also shows what's wrong with FG and what could be improved in general for this method.
    can't wait to use these shaders.

  8. #78
    Join Date
    Nov 2008
    Posts
    233

    hey, just to keep the thread alive, nothing fancy: a 'point tessellator' based on REYES split & dice.

    on the camera-facing triangle you may recognize three main (bilinear) patches (basically four-sided polys), which are then subdivided into microgrids; the little points are the micropolygon vertices after the microgrid has been diced.

    in the REYES pipeline it is there (on the micropolygon vertices) that dUdV are computed and displacement is applied, before everything is passed to the shading grid, which takes care of sampling (and before that the hiding pass may have happened). note that you can see the subdivisions only because I render the micropolygon borders too (which won't happen in practice); you can also see how the tessellator is view-dependent by looking at the point density.

    for our point renderer we'll stop at this stage, take the centroid of each micropolygon and its radius, and go straight to baking. using the REYES approach for point generation gives the point cloud the same pixel resolution it has in prman. the point generation itself is fully SSE-compliant (a patch is actually a transposed 4 (sse vector) x 3 matrix) and takes 'microseconds' (in the lightmapping pass).
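for illustration, a minimal C++ sketch of the dice step on one bilinear patch (the grid size, struct names, and the half-diagonal radius heuristic are my assumptions, not the actual tessellator): each micropolygon yields one bake point with a centroid and a covering radius.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Evaluate a bilinear patch at (u,v); corners ordered
// c[0]=(0,0) c[1]=(1,0) c[2]=(0,1) c[3]=(1,1).
static Vec3 bilerp(const Vec3 c[4], float u, float v) {
    Vec3 p;
    p.x = (1-u)*(1-v)*c[0].x + u*(1-v)*c[1].x + (1-u)*v*c[2].x + u*v*c[3].x;
    p.y = (1-u)*(1-v)*c[0].y + u*(1-v)*c[1].y + (1-u)*v*c[2].y + u*v*c[3].y;
    p.z = (1-u)*(1-v)*c[0].z + u*(1-v)*c[1].z + (1-u)*v*c[2].z + u*v*c[3].z;
    return p;
}

struct BakePoint { Vec3 centroid; float radius; };

// Dice an n*n microgrid over the patch and emit one bake point per
// micropolygon: its centroid, plus half the cell diagonal as radius.
std::vector<BakePoint> dice_patch(const Vec3 corners[4], int n) {
    std::vector<BakePoint> pts;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            float u0 = (float)i / n,     v0 = (float)j / n;
            float u1 = (float)(i+1) / n, v1 = (float)(j+1) / n;
            Vec3 a = bilerp(corners, u0, v0);
            Vec3 b = bilerp(corners, u1, v1);
            BakePoint bp;
            bp.centroid = bilerp(corners, 0.5f*(u0+u1), 0.5f*(v0+v1));
            float dx = b.x-a.x, dy = b.y-a.y, dz = b.z-a.z;
            bp.radius = 0.5f * std::sqrt(dx*dx + dy*dy + dz*dz);
            pts.push_back(bp);
        }
    return pts;
}
```

in a view-dependent dicer, n would be chosen from the patch's screen-space size so the points land near pixel resolution.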
    Attached Images
    Last edited by maxt; May 8th, 2012 at 18:28.

  9. #79
    Join Date
    Dec 2005
    Location
    Chicago
    Posts
    2,397

    Out of curiosity (and practicality): are you avoiding anything that might make this hard to use later because it is based on something Pixar owns? Everyone is litigation-happy nowadays.

  10. #80
    Join Date
    Nov 2008
    Posts
    233

    I don't think anything about REYES is patented (probably mainly because it was 1987); it was also originally proposed as a worldwide standard for 3d.
