You do more compositing than most of us Frank?? With pngs??? Lol
No compositing??? Now I'm mystified!
So a few people would still rather render multiple sessions for every black and white matte pass they need, just like using the info channel back then to render out passes manually each and every time? I used to work like this in LW, making objects black or excluding them from the alpha with constant black, and rendering each scene out over and over again, ending up with a lot of PNG mask sequences. Then I learned 3ds Max and C4D and found a better way to deal with masks: PROPER IMPLEMENTATION of object/material tags/IDs and EXRs. Unfortunately LW doesn't have this natively without exrTrader and dpont's plugins, and most people don't understand it and are stuck in LW's old legacy ways. Having Octane support a legacy toolset is a waste, since they are choosing only one approach to implement. It's better to add support for Janus or exrTrader in this case, to modernize LW to today's level.
God, I wish greenlaw, gerardestrada and cageman were here so the LW group wouldn't look outdated.
Render Passes Discussion
http://render.otoy.com/features.php
Object Visibility Options (On/Off for shadow casting and camera visibility)
Does this work on standalone?
It doesn't work in the LightWave plugin.
These two rectangular shadows should not be there. They're reflection cards with cast shadow off.
They're showing up in the render and the alpha. The ground plane uses the matte shadow material.


Related to render passes
- linvanchene
obsolete post edited and removed by user
Agree with these remarks. Vue's render passes are pretty good, and it has a Coverage pass.

linvanchene wrote: I want to render out some simple black and white masks for the objects or surfaces I select.
- for the whole scene without background
- for selected objects
- for selected surfaces
- for selected materials
Please have a look at what e-on vue is doing with multipasses.
Return some hours later and find every single pass labeled correctly in the different folders I selected for each pass.
When rendering animations, every second wasted restarting from scratch just to render out an info kernel pass is time wasted.
All the available information like alpha masks and z-depth should not need to be recalculated again separately.
Please add an option to select a different save location for every pass type
You do not want to have 10'000 files with all different passes in the same folder.
Regarding black and white masks, they are OK as long as you don't have to prepare another scene, change materials or object properties in it to make objects black and white, constant black, etc., and then re-render the whole scene from scratch for every object you want to mask. This was the issue I had with LightWave's prehistoric compositing toolset using constant black and constant value materials, because it didn't support material/object IDs natively. Supporting that LW toolset is going backwards.
If each material/object ID can be named manually as you suggest, then we need a proper implementation of a material/object ID toolset in Octane, not only the random auto-assignment we have now with no control.
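Just to make the point concrete: once a proper object/material ID pass is written into the EXR, every mask becomes a trivial comp operation instead of another render session. A minimal sketch in Python/numpy with made-up IDs (loading the EXR itself is left out):

```python
import numpy as np

# Hypothetical object-ID pass as it would come out of a multilayer EXR:
# every pixel stores the integer ID of the object (or material) it belongs to.
object_id_pass = np.array([[0, 0, 3],
                           [0, 3, 3],
                           [7, 7, 3]])

def id_mask(id_pass, target_id):
    """Black and white mask for a single ID -- no extra render session needed."""
    return np.where(id_pass == target_id, 1.0, 0.0)

print(id_mask(object_id_pass, 3))  # mask for the object with ID 3
print(id_mask(object_id_pass, 7))  # mask for the object with ID 7
```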
Re-rendering an info kernel pass before was really a waste of time, I agree.
The way Z-depth is rendered is a bit time-consuming even now, but that might be a GPU limitation.
They must fix that shadow issue for unseen objects. The matte shadow material is busted.
- ristoraven
Sorry, I had to drop this here.
Probably the oldest joke in the render biz, post-'90s era.

I'm in with the guys who would like to assign an alpha of 0 to an object or surface. Sure, the other techniques mentioned by the Pros are valid as well. I don't see why there is angst amongst some users that we may want something that they see as outdated.
Hi all.
First, I would like to say thanks for all the input so far. Second, I would like to ask everybody to stay civil.

But the actual reason for this post is to outline how we understand the various requests and to propose a first plan for how we want to go forward. You may wonder why we are asking so much for your input instead of just implementing something. The answer is that we are not artists and don't really know what your various workflows look like. We also can't just implement heaps of stuff and then hope that some of the features actually make sense, because of the constraints of GPU programming (limited memory, limited kernel complexity, etc.). So when we (i.e. mostly Thomas) continue adding more functionality to render passes, we need to do it in a way that achieves the most with as few additional features, buffers, etc. as possible.
After some pondering, we came to the conclusion that there are actually 4 groups of images or results that people want to see coming out of a rendering with render passes:
- Beauty passes: Splitting up the contributions of an image by the material/BRDF type and by direct and indirect paths. This allows further tweaking of the rendering by modifying/filtering those contributions.
- Info passes: Providing additional information like normals, Z-depth etc. to compositing tools to do some more advanced compositing/image processing.
- Lighting passes: Splitting up the contributions of an image by light sources to allow tweaking the lighting after rendering.
- Layers: Splitting up the scene into separate parts and then using them to combine renderings of those parts with other images from other sources (photos, drawings, renderings from other engines, you name it) using compositing.
The last category (layers) is the one that proved to be the most controversial, and lots of different ways have been proposed to do basically the same thing (at least as we understand it): getting out an image (or multiple images when render passes are used) that shows only part of the scene plus its "side effects" on the rest of the scene. What does that mean? Let's look at a simple example: imagine you want to compose the ring and the box onto some background. In that case you probably also want to be able to take over the shadows, caustics and the reflection on the floor, but not the floor itself. Those are the "side effects" I mean, i.e. effects the geometry of a layer has on the rest of the scene that is not in the layer. If you just cut out the geometry, but leave out the side effects, and compose it onto some other image, the result will always look wrong. And none of these things are doable with simple masks derived from some object/material ID pass.
So what can we do? The idea is to select a subset of the scene geometry and call it a layer. Our plan would be to give each material/object layer node a layer ID pin, which would just be a number, and all geometry that has the same layer ID assigned would be part of the same layer. Then in the render target you would specify the layer you want to render (by selecting its ID) and whether it is inclusive (i.e. everything in the layer) or exclusive (i.e. everything NOT in the layer). To get the ring and the box of the example above, I would put the ring and the box into the same layer and select that layer inclusively, or put the floor into a different layer and then select the floor layer exclusively. During rendering we would then consider everything that is not part of the rendered layer to behave similarly to a matte material, i.e. it would be invisible but would still "receive" the side effects of the layer geometry.
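To spell out the selection rule we have in mind (purely illustrative, all names hypothetical, not final API):

```python
# Hypothetical layer IDs, as they would be assigned via the layer ID pin
# on each material/object layer node.
scene_layers = {
    "ring":  1,
    "box":   1,
    "floor": 2,
}

def render_mode(obj, selected_layer, inclusive=True):
    """'normal' -> rendered as usual (part of the selected layer);
    'matte'  -> invisible, but still receiving the layer's shadows,
    reflections and caustics (the side effects)."""
    in_layer = scene_layers[obj] == selected_layer
    return "normal" if in_layer == inclusive else "matte"

# Inclusive render of layer 1: ring and box render normally, the floor acts as a matte.
# An exclusive render of layer 2 would give the same result.
for obj in scene_layers:
    print(obj, render_mode(obj, selected_layer=1, inclusive=True))
```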
What would the output look like? If you render this setup, the beauty pass would contain the geometry of the layer only (the second image is its alpha channel). We would then provide an additional layer shadow pass, which would roughly look like this, and eventually a layer reflection pass, which would roughly look like this. It's not 100% clear to me how a reflection pass is usually done. In the example above I would use it as an additive layer, i.e. it wouldn't have an alpha mask. In the image above I also had to add some offset; otherwise parts of the layer would have negative values, which Gimp can't handle. This is just meant to be an example to give you an idea of what I mean. We will figure out the details later.
All three passes together can then be composited onto some background image: with shadows (alpha blending; could be made multiplicative, too), with reflections (addition), and with the layer itself (alpha blending). Please let us know if that makes sense to you.
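To make the compositing order concrete, here is a rough numpy sketch (tiny dummy buffers so it runs; the variable names are just placeholders for the passes above, all assumed to be linear float):

```python
import numpy as np

H, W = 4, 4  # tiny dummy resolution so the sketch actually runs

# Placeholder passes; in practice these come from the rendered EXR layers.
background     = np.full((H, W, 3), 0.18, dtype=np.float32)  # backplate
layer_rgb      = np.zeros((H, W, 3), dtype=np.float32)       # layer beauty pass
layer_alpha    = np.zeros((H, W, 1), dtype=np.float32)       # its alpha channel
shadow_rgb     = np.zeros((H, W, 3), dtype=np.float32)       # layer shadow pass
shadow_alpha   = np.zeros((H, W, 1), dtype=np.float32)
reflection_rgb = np.zeros((H, W, 3), dtype=np.float32)       # layer reflection pass

def over(fg_rgb, fg_alpha, bg_rgb):
    """Plain 'over' alpha blend of a foreground onto a background."""
    return fg_rgb * fg_alpha + bg_rgb * (1.0 - fg_alpha)

comp = background.copy()
comp = over(shadow_rgb, shadow_alpha, comp)  # shadows: alpha blend (or multiply)
comp = comp + reflection_rgb                 # reflections: straight addition
comp = over(layer_rgb, layer_alpha, comp)    # the layer itself: alpha blend
```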
Cheers,
Marcus
riggles wrote: Marcus, I would think that with OTOY having Lightstage and working with different studios on projects, the Octane team would have access to VFX house supervisors, TDs, or artists to bounce ideas off. No?

We are here in New Zealand and not in Los Angeles, so all the big film stuff is happening far away. And although we get some feedback, we also focus on the current customers, who are definitely not working like film studios.

-> Could you please have a look at the layer proposal above and tell me if it makes sense to do it that way and, if not, what is missing? We really need to nail this down so we can start working on it. Thanks.
Hi Marcus,
I've had a look at the layer proposal; it is similar to what I do at the moment, except using 'hidden from camera':
Render objects, with BG/floor hidden from camera.
Render floor, with object hidden from camera.
Shadow multiplied over the BG colour/image in photoshop.
The layer proposal gives finer grained control however, which I think will be useful.
Do you need to work in 32bpc space to get the ref/shadow/caustic layers to add together and look 100% like they would in a single beauty pass?
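For reference, the math I mean, sketched in numpy (hypothetical names; my assumption is that the additive reflection/caustic layers only sum back to the single-pass beauty when this is done on linear values, hence the 32bpc question):

```python
import numpy as np

H, W = 4, 4  # dummy size so the sketch runs

# Hypothetical inputs, all linear 32-bit float.
bg_plate    = np.full((H, W, 3), 0.5, dtype=np.float32)  # BG colour/image
shadow_pass = np.ones((H, W, 3), dtype=np.float32)       # 1.0 = unshadowed, <1.0 = shadowed
object_rgb  = np.zeros((H, W, 3), dtype=np.float32)      # object render, floor hidden from camera
object_a    = np.zeros((H, W, 1), dtype=np.float32)      # its alpha

comp = bg_plate * shadow_pass                           # shadow multiplied over the BG
comp = object_rgb * object_a + comp * (1.0 - object_a)  # object composited on top
```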
Cheers,
Andrew