
2.24.* : out-of-core feature not working [SOLVED]

Posted: Wed Nov 18, 2015 11:59 am
by papillon
I've been testing both the stable and alpha versions of the 2.24 C4D plugin on a Mac Pro, OS X 10.11.1, with 24GB of RAM and a GeForce GTX 970 4GB.
The mesh used is about 1.5M quads and uses two 4K .jpg textures.
In Activity Monitor, once I commit the send to the Octane Live Preview, I can see memory usage increase by up to 8GB, which I believe is due to the Octane data being allocated in system RAM because the GPU's 4GB has been consumed.
Nevertheless, the operation ends in a render failure. Log file attached. If you need further information please let me know; I'd be glad if this could be fixed.

PS: just a note that while the Octane standalone application reports the correct amount of system memory, the plugin's slider ranges up to 64GB regardless of the *real* amount of system memory...

Re: 2.24.* : out-of-core feature not working

Posted: Wed Nov 18, 2015 5:03 pm
by bepeg4d
Hi,
I have a setup similar to yours and I use the out-of-core feature every day. It saves me, because currently I have only a GTX 560 1GB in my Mac. Here is an example (apologies for the messy scene, it's just for testing purposes) with 4x 8K textures and 1.8M polys:
Screen Shot 2015-11-18 at 17.49.19.png
I expect that you can manage much bigger scenes with your setup :roll:
Are you also using a monitor connected to your GTX 970, losing some VRAM?
I have a dedicated single-slot GT 120 for the monitor only and it helps a lot.
Have you already tried the latest versions?
Version 2.24.1-R3
Version 2.24.2-TEST4.2
ciao beppe

Re: 2.24.* : out-of-core feature not working [solved]

Posted: Wed Nov 18, 2015 5:12 pm
by papillon
Hi Beppe,

thanks for the response. Actually I (dumbly) neglected the fact that VRAM would be consumed by the GTX 970 being the only GPU in the system, with two monitors attached to it.
So I added a GeForce 750 2GB, which is now the main adapter driving the monitors, and left the GeForce 970 4GB alone for GPU rendering.
So far it works like a charm, and I was able to fit the whole scene into VRAM. I've read that out-of-core is used to allocate textures only, while the geometry has to fit into the GPU's VRAM, so I guess that was my issue before.
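For anyone curious how out-of-core textures can work under the hood, here is a minimal CUDA sketch of the general technique, not Octane's actual code: the texture lives in pinned (page-locked) host RAM mapped into the GPU's address space, so the kernel fetches texels over PCIe while the frame stays in VRAM. All names and sizes are placeholders of mine.

Code:
// Out-of-core textures, minimal sketch: the texture is kept in pinned
// host RAM and mapped into the GPU address space; a kernel can read
// texels over PCIe while device VRAM holds only the frame.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void shade(const uchar4 *tex, int texW, int texH,
                      uchar4 *frame, int frameW, int frameH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= frameW || y >= frameH) return;
    // Each texel read crosses the PCIe bus: slower than VRAM, but it
    // works even when the texture does not fit on the card.
    int tx = x * texW / frameW;
    int ty = y * texH / frameH;
    frame[y * frameW + x] = tex[ty * texW + tx];
}

int main()
{
    const int texW = 4096, texH = 4096;   // one 4K RGBA texture
    const int frameW = 1920, frameH = 1080;

    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Texture in pinned host memory, mapped for device access.
    uchar4 *hostTex = nullptr, *devTexView = nullptr;
    cudaHostAlloc((void **)&hostTex, sizeof(uchar4) * texW * texH,
                  cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&devTexView, hostTex, 0);

    uchar4 *devFrame = nullptr;           // the frame stays in VRAM
    cudaMalloc((void **)&devFrame, sizeof(uchar4) * frameW * frameH);

    dim3 block(16, 16), grid((frameW + 15) / 16, (frameH + 15) / 16);
    shade<<<grid, block>>>(devTexView, texW, texH, devFrame, frameW, frameH);
    cudaDeviceSynchronize();
    printf("render: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(devFrame);
    cudaFreeHost(hostTex);
    return 0;
}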
I marked this as solved since it was an issue on my end.

Re: 2.24.* : out-of-core feature not working [SOLVED]

Posted: Thu Nov 19, 2015 10:01 am
by bepeg4d
I'm glad you have solved it :)
Yes, the out-of-core feature is only for textures for now, so both the geometry and the film buffer have to be loaded into VRAM.
In the upcoming v3, the film buffer will no longer be in VRAM, giving us even more freedom.
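Just to give a rough sense of scale, with my own back-of-the-envelope numbers (assuming one float RGBA buffer per pass, which is only an assumption): a single 3840x2160 buffer is 3840 x 2160 pixels x 4 channels x 4 bytes ≈ 127MB, so ten render passes at that resolution already eat over 1.2GB of VRAM before any geometry or textures are loaded. Moving that to system RAM frees a lot of space on a 4GB card.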
Happy GPU rendering ;)
ciao beppe

Re: 2.24.* : out-of-core feature not working [SOLVED]

Posted: Thu Nov 19, 2015 10:16 am
by glimpse
bepeg4d wrote: In the upcoming v3, the film buffer will no longer be in VRAM, giving us even more freedom.
What? The film buffer will not be in VRAM? How come?..

Re: 2.24.* : out-of-core feature not working [SOLVED]

Posted: Thu Nov 19, 2015 11:25 am
by bepeg4d
glimpse wrote:
bepeg4d wrote: In the upcoming v3, the film buffer will no longer be in VRAM, giving us even more freedom.
What? The film buffer will not be in VRAM? How come?..
From Marcus's post about v3 development:
viewtopic.php?f=9&t=50956
  • Moved film buffers to the host and tiled rendering

    The second major refactoring in the render core was the way we store render results. Until v3, each GPU had its own film buffer where part of the calculated samples were aggregated. This has various drawbacks: for example, a CUDA error usually means you lose the samples calculated by that GPU, and a crashing or disconnected slave means you lose its samples. Another issue is that large images mean a large film buffer, especially if you enable render passes.

    To solve the above (and more) issues, we decided to move the film buffer into host memory. It doesn't sound exciting, but it has some major consequences. The biggest one we had to fight with was finding a way to deal with the huge amount of data the GPUs produce, especially in multi-GPU setups or when network rendering is used.

    As a solution, we introduced tiled rendering for all integration kernels except PMC and added a bunch of additional options to allow you to tweak the way how integration work is distributed. One side effect is that we (hopefully) have solved the problem that slaves got starved off work while they were sending very large results back to the master (like stereo panos for the GearVR). Another side effect is that info passes are now rendered in parallel.