Re: Rendering big scenes, overcoming gpu limitations
Posted: Tue May 29, 2012 1:19 am
by vagos21
Karba wrote:
Jaberwocky wrote:
How about this as a way out of the problem.
Use a back-face culling algorithm in Octane, calculated from the camera position. All items in the scene that are not in view, or not affecting the rays, are eliminated before the scene is sent by the CPU to the GPU.
That would in effect:
A) limit the number of polys to be rendered and speed up the rendering process no end.
B) limit the number of maps sent to the GPU, thus keeping it below the CUDA limit even if the whole scene had excess maps.
Just an idea.

It will not work properly.
Even if we don't see some parts directly, we can see them indirectly (reflections, refractions, global illumination, shadows and so on).
Exactly, this is the problem with unbiased (and even biased) renderers: you can't just start cutting away polygons and surfaces like this. What about the shadows those polys would cast? In animation this would also cause a lot of crazy flickering, with geometry added and removed dynamically every frame. It all has to fit on the GPU. You could at least hide objects based on a camera-to-object distance algorithm, but still... it wouldn't be nice to hide a large building.

Re: Rendering big scenes, overcoming gpu limitations
Posted: Tue May 29, 2012 11:17 am
by Jaberwocky
I do understand the issues.
Example:
You have a building filled with furnishings and are rendering an external view of it. How does all that interior geometry affect the situation if there are no windows on the camera-facing side? Or conversely, why use up GPU render time on all those plants outside, say around the back of the building, if you're rendering an internal view of a room on the opposite side?
I know this is an extreme example... it's just to illustrate my point.
Re: Rendering big scenes, overcoming gpu limitations
Posted: Tue May 29, 2012 4:14 pm
by palhano
Here is what we would do to render scenes of any size with whatever hardware is available:
Let's say you have 3 GTX470s installed on one computer, 2 GTX580s on another and 1 GTX680 on a 3rd machine.
1) Assign objects to different layers with the layers panel in the 3dsmax application.
2) Use the interactive preview of the Octane 3dsmax plug-in to make sure the layers do not exceed 1.2GB, which is the memory limit of the smallest card (in this case the GTX470). I would keep the pieces even smaller so that they load faster on the gpus.
3) Unhide one layer at a time and save one scene state for each layer.
4) From the batch render panel, choose all scene states and submit them to net render, saving the pictures for later compositing.
This way you would have all layers rendered on the three machines using all gpus available.
I tested on a single machine without net render and it works fine, but I could not make it work on net render; it needs further tests.
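Step 2 above is essentially a packing problem: group objects into layers so that no layer exceeds the memory of the smallest card. A minimal sketch of a greedy first-fit approach in Python; the object names and per-object sizes are hypothetical placeholders, since real footprints would have to come from the plug-in or exporter.

```python
# Greedy first-fit packing: group objects into layers so that no layer's
# estimated GPU footprint exceeds the smallest card's budget.
# Object names and sizes below are hypothetical placeholders.

def pack_layers(objects, budget_mb):
    """objects: list of (name, size_mb); returns a list of layers (lists of names)."""
    layers = []  # each entry: [remaining_mb, [names]]
    for name, size in sorted(objects, key=lambda o: o[1], reverse=True):
        if size > budget_mb:
            raise ValueError(f"{name} alone exceeds the {budget_mb} MB budget")
        for layer in layers:
            if layer[0] >= size:          # fits in an existing layer
                layer[0] -= size
                layer[1].append(name)
                break
        else:
            layers.append([budget_mb - size, [name]])  # open a new layer
    return [names for _, names in layers]

# Example: pack against a 1.2 GB card, leaving headroom for the framebuffer.
objects = [("building", 700), ("furniture", 450), ("plants", 500), ("props", 200)]
print(pack_layers(objects, budget_mb=1000))
```

Sorting largest-first before packing keeps the layer count low; the same idea works whether the budget is per-card memory or a smaller chunk chosen for faster uploads.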
Questions:
1) Is there some issue with Octane in net render?
2) What about support for saving the depth channel?
3) When using the batch render one can choose from previously saved presets, something like assigning the render to the first gpu in the GPU Config Panel of the Octane 3dsmax plug-in and saving the preset as, let's say, gpu1, then assigning the render to the second one and saving a preset called gpu2, and so forth. Well, it's not working; it looks like the plugin only reads that setting the first time it opens and never changes the settings again.
Re: Rendering big scenes, overcoming gpu limitations
Posted: Tue May 29, 2012 4:52 pm
by palhano
Karba,
The workflow I am testing assumes that at some point Octane Render will have full support for render elements and will be able to save channels like alpha, depth, reflections, shadow and so forth, so that the final picture can be composited in any available compositor.
Is render pass support planned for release any time soon?
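Once per-layer renders with alpha channels exist, putting them back together is the standard Porter-Duff "over" operation that any compositor applies. A minimal per-pixel sketch in plain Python (this is just the math, not Octane's or any compositor's API):

```python
# Porter-Duff "over": composite a foreground pixel (with straight alpha)
# over a background pixel. Pixels are (r, g, b, a) tuples in 0..1.

def over(fg, bg):
    fr, fgc, fb, fa = fg
    br, bgc, bb, ba = bg
    out_a = fa + ba * (1.0 - fa)          # combined coverage
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)       # fully transparent result
    blend = lambda f, b: (f * fa + b * ba * (1.0 - fa)) / out_a
    return (blend(fr, br), blend(fgc, bgc), blend(fb, bb), out_a)

# A half-transparent red layer over an opaque blue background:
print(over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))  # (0.5, 0.0, 0.5, 1.0)
```

Applied per pixel and per layer, front to back or back to front, this is all the "later compositing" step needs once each layer is saved with its alpha channel.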
Re: Rendering big scenes, overcoming gpu limitations
Posted: Wed May 30, 2012 12:46 am
by palhano
Karba wrote:
palhano wrote:
The 3dsMax plugin allows us to easily send parts of the scene to the gpu (octane render viewport) when used together with the layers panel in the 3dsmax application.
All you have to do is assign objects to different layers and then hide/unhide layers before you hit the refresh button on the octane render viewport. This way you send portions of a huge scene that will fit into the gpu memory no matter which card you use.
My questions are:
1) Is it possible to automate this procedure so that it would send the layers sequentially to the renderer and save the pics with an alpha channel for further compositing?
2) On a multi-gpu installation, can you implement some way to send different layers to different gpus so that we can forward the output of the boards to a realtime compositor?
Best regards
1) It is possible. You are just the first to ask for it. Let's see whether there are enough people who want it too.
2) It is not possible.
For the second question we need something like Roeland describes in his command line tutorial (see the post
http://www.refractivesoftware.com/forum ... 21&t=20465), where a parameter -g 0 is used to assign the render to the first gpu, then -g 1 to send it to the second gpu, and so forth.
What do you think? At least could we render with different gpus, or any time you send a scene to one card will the others do nothing?
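The -g idea from Roeland's tutorial can be sketched as a simple round-robin assignment of layer scenes to GPUs, building one command line per job. The binary name "octane" and the per-layer .ocs files below are assumptions for illustration; only the -g flag comes from the tutorial referenced above.

```python
# Round-robin assignment of layer scenes to GPUs, building one command
# line per job in the style of the -g flag from the command line tutorial.
# The binary name "octane" and the .ocs file names are hypothetical.

def assign_jobs(layer_files, num_gpus):
    """Return a list of (gpu_index, argv) pairs, one per layer scene."""
    jobs = []
    for i, scene in enumerate(layer_files):
        gpu = i % num_gpus                       # cycle through available GPUs
        jobs.append((gpu, ["octane", "-g", str(gpu), scene]))
    return jobs

for gpu, argv in assign_jobs(["layer0.ocs", "layer1.ocs", "layer2.ocs"], num_gpus=2):
    print(gpu, " ".join(argv))
```

With three layers and two cards, layers 0 and 2 land on gpu 0 and layer 1 on gpu 1, so both cards stay busy instead of idling while one renders.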
Re: Rendering big scenes, overcoming gpu limitations
Posted: Wed May 30, 2012 1:04 am
by Karba
palhano wrote:
[...] What do you think? At least could we render with different gpus, or any time you send a scene to one card the others will do nothing?
I can't start up several copies of Octane to run the jobs in parallel.
Re: Rendering big scenes, overcoming gpu limitations
Posted: Wed May 30, 2012 7:04 pm
by palhano
Karba wrote:
I can't start up several copies of Octane to run the jobs in parallel.
OK about that...
But suppose for a while that there is a button in the plugin interface to save all user-selected options, including the gpu parameter. Would we still need to export the scene to obj format in order to render it from the command line?
The reason for asking is that we can launch multiple instances of Octane Render from the command line.
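Launching several instances from the command line and waiting for all of them is easy to script. A minimal sketch; in real use each argv would be an Octane command line with a different -g value (as in the tutorial referenced earlier), while here a harmless Python one-liner stands in for the renderer so the sketch is self-contained.

```python
import subprocess
import sys

# Launch several renderer instances at once and wait for all of them.
# In real use each argv would be an Octane command line with a different
# -g value; a Python one-liner stands in for the renderer here.

def run_parallel(commands):
    """Start every argv in `commands` concurrently; return their exit codes."""
    procs = [subprocess.Popen(argv) for argv in commands]
    return [p.wait() for p in procs]

# Stand-in for e.g. ["octane", "-g", "0", "layer0.ocs"], ["octane", "-g", "1", ...]:
commands = [[sys.executable, "-c", f"print('gpu {g} done')"] for g in range(2)]
print(run_parallel(commands))
```

Because each process is an independent instance, each one can be pinned to a different card, which is exactly the "several copies in parallel" the plugin itself cannot do.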
Re: Rendering big scenes, overcoming gpu limitations
Posted: Fri Jun 01, 2012 7:57 am
by petermax
+1, this is VERY much needed.
But what I find even more important is circumventing the limit of just 64 textures: could textures be auto-combined in the background, before processing by the GPU?
This is the main problem I'm facing atm.
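The "auto combine" idea amounts to packing many textures into one atlas and remapping each material's UVs into its tile, so many source textures occupy a single GPU texture slot. A minimal sketch of the layout math for equal-size textures on a grid; the actual pixel copying and plugin hooks are not shown and would be up to the exporter.

```python
import math

# Combine many equal-size textures into one grid atlas and remap UVs,
# a sketch of the "auto combine" idea for beating a 64-texture limit.
# Only the tile layout math is shown; pixel copying is left to the exporter.

def atlas_layout(num_textures):
    """Return (cols, rows) of the smallest square-ish grid that fits."""
    cols = math.ceil(math.sqrt(num_textures))
    rows = math.ceil(num_textures / cols)
    return cols, rows

def remap_uv(u, v, tex_index, num_textures):
    """Map a 0..1 UV in texture `tex_index` into the shared atlas."""
    cols, rows = atlas_layout(num_textures)
    col, row = tex_index % cols, tex_index // cols
    return (col + u) / cols, (row + v) / rows

# 100 textures collapse into a single 10x10 atlas: one GPU texture slot.
print(atlas_layout(100))           # (10, 10)
print(remap_uv(0.5, 0.5, 0, 100))  # centre of the first tile
```

The real-world caveats (mixed resolutions, tiling textures that wrap past 0..1, mip-map bleeding between tiles) are exactly why this is easier done automatically by the software than by hand.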