Hi Beppe,

bepeg4d wrote:
wow, great improvement in the compositing workflow!
What I have understood is that with the deep channel it is like having a dynamic z-depth pass, without needing to mask objects. The different layers are correctly stacked using the z distance, is this correct?
ciao beppe
I think your understanding is correct - just some extra clarification from my side.
In a deep image, each pixel can hold multiple samples at different depths (so-called depth samples). As Goldorak already said, each sample usually stores a colour and an alpha value. A pixel with depth samples is called a deep pixel. You get multiple depth samples when:
- Several objects are visible through the same pixel. In a regular image, a single averaged value is stored for that pixel. In a deep image, the individual colour of each object is stored together with its depth. Because all the depth information is still available, you can composite without artifacts, even when the render has motion blur or depth-of-field (see the small sketch after this list).
- An atmospheric effect like fog is visible through the pixel. The deep pixel then correctly records the attenuation along the ray, which allows you to composite objects into and through volumes.
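To make the idea more concrete, here is a minimal sketch of how deep pixels can be merged and flattened. It assumes a simplified sample layout (depth, premultiplied colour, alpha) and hypothetical names (DepthSample, merge, flatten); it is not the API of any particular renderer or file format such as deep OpenEXR.

    # Minimal deep-pixel compositing sketch (simplified, hypothetical structures).
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class DepthSample:
        depth: float                       # distance from the camera
        color: Tuple[float, float, float]  # premultiplied RGB of this sample
        alpha: float                       # coverage/opacity of this sample

    def merge(a: List[DepthSample], b: List[DepthSample]) -> List[DepthSample]:
        """Merge two deep pixels (e.g. from separate renders) by depth.
        Because every sample keeps its own depth, the layers interleave
        correctly without holdout mattes or a separate z-depth pass."""
        return sorted(a + b, key=lambda s: s.depth)

    def flatten(samples: List[DepthSample]) -> Tuple[Tuple[float, float, float], float]:
        """Collapse one deep pixel into a flat RGBA value by combining the
        samples near-to-far with the standard 'over' operator, so nearer
        samples correctly occlude farther ones."""
        out_rgb = [0.0, 0.0, 0.0]
        out_a = 0.0
        for s in sorted(samples, key=lambda s: s.depth):
            remaining = 1.0 - out_a  # how much is still visible at this depth
            for c in range(3):
                out_rgb[c] += s.color[c] * remaining
            out_a += s.alpha * remaining
        return ((out_rgb[0], out_rgb[1], out_rgb[2]), out_a)

    # Example: a semi-transparent red object in front of an opaque green one.
    foreground = [DepthSample(2.0, (0.5, 0.0, 0.0), 0.5)]
    background = [DepthSample(5.0, (0.0, 1.0, 0.0), 1.0)]
    print(flatten(merge(foreground, background)))  # -> ((0.5, 0.5, 0.0), 1.0)

A fog contribution would just be more samples with partial alpha spread along the ray, so the same near-to-far combination reproduces the attenuation when you flatten or insert an object into the volume.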
You can find less theoretical explanations here and here. Another cool application of deep images is nice lens bokeh as shown here.
cheers,
Thomas