Hi all.
First, I would like to say thanks for all the input so far. Second, I would like to ask everybody to stay civil.
But the actual reason for this post is to outline how we understand the various requests and to propose a first plan for how we want to go forward. You may wonder why we are asking so much for your input instead of just implementing something. The answer is that we are not artists and don't really know what your various workflows look like. We also can't just implement heaps of stuff and then hope that some of the features actually make sense, because of the constraints of GPU programming (limited memory, limited kernel complexity, etc.). So when we (i.e. mostly Thomas) continue adding more functionality to render passes, we need to do it in a way that achieves the most with as few additional features, buffers, etc. as possible.
After some pondering, we came to the conclusion that there are actually 4 groups of images or results that people want to see coming out of a rendering with render passes:
- Beauty passes: Splitting up the contributions of an image by the material/BRDF type and by direct and indirect paths. This allows further tweaking of the rendering by modifying/filtering those contributions.
- Info passes: Providing additional information like normals, Z-depth etc. to compositing tools to do some more advanced compositing/image processing.
- Lighting passes: Splitting up the contributions of an image by light sources to allow tweaking the lighting after rendering.
- Layers: Splitting up the scene into separate parts and then combining renderings of those parts with images from other sources (photos, drawings, renderings from other engines, you name it) via compositing.
The first two categories are pretty clear to us and basically cover what we already have; we are going to improve that system. Lighting passes are also pretty clear to us, and it will probably be straightforward to implement them.
The last category (layers) is the one that has proved to be the most controversial, and lots of different ways have been proposed to do basically the same thing (at least as we understand it): getting an image (or multiple images when render passes are used) that shows only part of the scene plus its "side-effects" on the rest of the scene. What does that mean? Let's look at this simple example:
Imagine you want to compose the ring and the box onto some background. In that case you probably also want to be able to take over the shadows, the caustics and the reflection on the floor, but not the floor itself. Those are the "side-effects" I mean, i.e. effects the geometry of a layer has on the parts of the scene that are not in the layer. If you just cut out the geometry, leave out the side-effects and compose it onto some other image, the result will always look wrong. And none of this is doable with simple masks derived from an object/material ID pass.
So what can we do? The idea is to select a sub-set of the scene geometry and call it a layer. Our plan would be to give each material/object layer node a layer ID pin, which would just be a number, and all geometry that has the same layer ID assigned would be part of the same layer. Then in the render target you would specify the layer you want to render (by selecting its ID) and whether the selection should be inclusive (i.e. everything in the layer) or exclusive (i.e. everything NOT in the layer). To get the ring and the box of the example above, I would either put the ring and the box into the same layer and select that layer inclusively, or put the floor into a different layer and select the floor layer exclusively. During rendering, everything that is not part of the rendered layer would behave similarly to a matte material, i.e. it would be invisible but would still "receive" the side-effects of the layer geometry.
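To make the selection logic a bit more concrete, here is a minimal Python sketch. Everything in it (the SceneObject class, split_scene, the inclusive flag) is purely illustrative and not an actual API; it only shows how an inclusive or exclusive layer selection would split the scene into layer geometry and matte geometry.

```python
# Purely illustrative sketch of the proposed layer selection; the names
# (SceneObject, layer_id, split_scene, inclusive) are assumptions, not an API.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    layer_id: int

def split_scene(objects, render_layer_id, inclusive=True):
    """Split the scene into layer geometry (rendered normally) and the rest,
    which would behave like a matte: invisible in the beauty pass, but still
    receiving the shadows, caustics and reflections caused by the layer."""
    def in_layer(obj):
        return (obj.layer_id == render_layer_id) == inclusive
    layer = [obj for obj in objects if in_layer(obj)]
    matte = [obj for obj in objects if not in_layer(obj)]
    return layer, matte

# The example above: ring and box share layer 1, the floor gets layer 2.
scene = [SceneObject("ring", 1), SceneObject("box", 1), SceneObject("floor", 2)]

# Selecting layer 1 inclusively or layer 2 exclusively gives the same split.
layer, matte = split_scene(scene, render_layer_id=1, inclusive=True)
assert split_scene(scene, render_layer_id=2, inclusive=False) == (layer, matte)
print([o.name for o in layer], [o.name for o in matte])  # ['ring', 'box'] ['floor']
```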
What would the output look like? If you rendered this setup, the beauty pass would contain the geometry of the layer only (the second image is its alpha channel):
We would then provide an additional layer shadow pass, which would roughly look like this:
And eventually a layer reflection pass, which would roughly look like this:
It's not 100% clear to me how a reflection pass is usually done. In the example above I would use it as an additive layer, i.e. it wouldn't have an alpha mask. For the image above I also had to add some offset, because otherwise parts of the layer would have negative values, which Gimp can't handle. This is just meant as an example to give you an idea of what I mean; we will figure out the details later.
All three passes together can then be composited onto some background image (a rough sketch of these steps follows below):
With shadows (alpha blending - could be made multiplicative, too):
With reflections (addition):
With the layer (alpha blending):
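For what it's worth, here is a rough NumPy sketch of those three compositing steps. The tiny constant-colour arrays, the pass contents and the 0.5 reflection offset are just assumptions based on the description above, not a fixed format.

```python
import numpy as np

H, W = 4, 4  # tiny stand-in images; real passes would come from the rendered files
background   = np.full((H, W, 3), 0.80)   # background plate
shadow_rgb   = np.zeros((H, W, 3))        # layer shadow pass (black shadows)
shadow_alpha = np.full((H, W, 1), 0.30)   # shadow density used as alpha
reflection   = np.full((H, W, 3), 0.55)   # layer reflection pass, stored with a 0.5 offset
layer_rgb    = np.full((H, W, 3), 0.40)   # beauty pass containing only the layer
layer_alpha  = np.full((H, W, 1), 1.00)   # its alpha channel

# 1) Shadows: alpha blending over the background
#    (a multiplicative darkening would work as well).
comp = background * (1.0 - shadow_alpha) + shadow_rgb * shadow_alpha

# 2) Reflections: plain addition, after removing the stored 0.5 offset.
comp = comp + (reflection - 0.5)

# 3) Layer: alpha blending the beauty pass of the layer on top.
comp = comp * (1.0 - layer_alpha) + layer_rgb * layer_alpha

comp = np.clip(comp, 0.0, 1.0)
```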
Please let us know if that makes sense to you.
Cheers,
Marcus