Rendering time using the out-of-core feature
Posted: Fri Feb 05, 2016 11:42 am
I want to share my experience with rendering times when using the "out-of-core" switch, and to find out whether other users have seen similar results.
I tested a scene that needs 5 GB of VRAM on a multi-GPU system (each card has 4 GB of VRAM).
One GPU sits inside the PC in a PCIe slot; the second is connected to a motherboard PCIe x1 slot through a PCIe extender like this one:
http://www.ebay.it/itm/PCI-E-Express-1x ... 1771037549
So I ran render-time tests of the scene under two conditions:
1) using only the internal 4 GB GPU, with the "out-of-core" switch enabled and the 5 GB scene;
2) using only the external 4 GB GPU, with the "out-of-core" switch enabled and the 5 GB scene;
and I compared both against a baseline render of the same scene with some texture sizes reduced so that VRAM usage stays below 4 GB, in other words without using the out-of-core feature at all.
First of all, I ran an OctaneBench test and found no difference between my system and others using the same GPU without the PCIe extender, so I can confirm there is no loss of rendering speed with this kind of adapter as long as the scene fits into the GPU's VRAM.
In the first test (internal GPU only) the rendering time increased by about 35% with out-of-core enabled.
In the second test (the external GPU on the riser) I measured roughly a 10x increase in rendering time!
So, if anyone has found similar results, can we say that these adapters make the external cards practically unusable once you exceed the GPU's VRAM?
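
My guess about the cause (just a back-of-envelope sketch, not something I measured directly): the out-of-core data lives in system RAM and has to be streamed to the GPU over the PCIe link during rendering, so the x1 riser's bandwidth becomes the bottleneck. Here is a rough estimate, assuming nominal PCIe 2.0 figures of about 8 GB/s for an x16 slot and about 0.5 GB/s for an x1 link, and assuming about 1 GB of the scene spills out of VRAM (all of these numbers are assumptions, your hardware may differ):

[code]
# Back-of-envelope estimate of how long it takes to stream the
# out-of-core portion of the scene over the PCIe link.
# All figures are assumptions (nominal PCIe 2.0), not measurements.

OOC_DATA_GB = 1.0      # assumed: ~1 GB of the 5 GB scene does not fit in 4 GB VRAM
PCIE_X16_GBPS = 8.0    # approx. usable bandwidth of a PCIe 2.0 x16 slot
PCIE_X1_GBPS = 0.5     # approx. usable bandwidth of a PCIe 2.0 x1 riser

def transfer_seconds(data_gb, bandwidth_gbps):
    """Time to move data_gb gigabytes once over a link of bandwidth_gbps GB/s."""
    return data_gb / bandwidth_gbps

t_x16 = transfer_seconds(OOC_DATA_GB, PCIE_X16_GBPS)
t_x1 = transfer_seconds(OOC_DATA_GB, PCIE_X1_GBPS)

print(f"x16 slot : {t_x16:.2f} s per pass over the out-of-core data")
print(f"x1 riser : {t_x1:.2f} s per pass over the out-of-core data")
print(f"x1 riser is ~{t_x1 / t_x16:.0f}x slower for the same transfers")
[/code]

If the renderer has to fetch the out-of-core textures many times per frame, that x16-vs-x1 gap would show up directly in render time, which would be consistent with the ~10x slowdown I saw on the riser.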