Yesterday I was rendering a project that took over 7GB of VRAM, and I thought that having Out of Core rendering enabled, with enough system RAM assigned to it, would do the trick.
Apparently, OOC either does not work, or I did something wrong, or I misunderstand this feature.
What I thought until yesterday was that when a GPU does not have enough physical VRAM to hold the scene, it should be able to fall back on the system RAM allocated to it and still render the scene, just at reduced speed.
I'm using network rendering: the main workstation runs DAZ and has two Titan X cards (12GB VRAM each), and there is a network Octane node with a legacy GTX Titan (6GB VRAM) and a GTX 780 (3GB VRAM). Both the main workstation and the node have OOC enabled, with 8GB of system RAM assigned to it.
However, with the scene I was rendering yesterday, both Titan X cards were crunching away happily since the scene fit into their physical VRAM, but the network node crashed with an Out Of Memory error. With OOC enabled and 8GB of system RAM assigned, I would have expected the shortfall (roughly 1GB on the 6GB Titan and 4GB on the 3GB 780) to be paged from system RAM instead.
Could anyone enlighten me on this? Is it a bug, or does OOC simply not work like this? And if so, how does it actually work? What has to be done for this feature to kick in?
Thanks for any info.