Hi!
I recently noticed a problem with my newest PC workstation while using Cinema 4D R19 (or R20) with Octane 2020. It's about render time/performance.
The specs for that machine are:
- 2x RTX 2080 Ti 11GB
- Intel Core i9-9900KF 3.60 GHz
- 64 GB RAM
- Windows 10 64-bit
And I have another workstation with:
- 1x GTX 1080 Ti 11GB + 1x GTX 1080 6GB
- Intel Core i7-7700K 4.20 GHz
- 32 GB RAM
- Windows 10 64-bit
If I run the Octane Benchmark 2020, I get a score of 701.37 on the RTX station and 334.5 on the GTX station, so everything seems fine there.
But whatever scene I render in Cinema 4D with the latest version of the plugin, with the EXACT same scene, same C4D version and same render settings, my RTX station renders every frame almost two times slower than my GTX station...
In fact, it seems that the RTX cards are simply not running at full speed when used from C4D. During the benchmark I could see a huge difference in fan speed, noise, etc. compared to a normal Octane scene render. I really don't get it.
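Just to put rough numbers on the gap (my own back-of-the-envelope math, assuming frame time scales inversely with the OctaneBench score):

# Rough expectation from the benchmark scores alone
rtx_score = 701.37          # OctaneBench 2020, 2x RTX 2080 Ti
gtx_score = 334.5           # OctaneBench 2020, GTX 1080 Ti + GTX 1080

expected_ratio = rtx_score / gtx_score   # RTX station should be ~2.1x faster
observed_ratio = 0.5                     # in C4D it is ~2x slower instead

print(f"expected speed-up: ~{expected_ratio:.1f}x")
print(f"observed speed-up: ~{observed_ratio:.1f}x (roughly {expected_ratio / observed_ratio:.0f}x off)")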
Here are some screenshots of the benchmarks and actual renders:
- Octane Benchmark: https://www.dropbox.com/s/l4qyyypzyn146 ... k.jpg?dl=0
- C4D R19 Octane 2020 renders: https://www.dropbox.com/s/0jjlu13kvam54 ... 0.jpg?dl=0
- RTX & GTX settings (with all GPUs activated): https://www.dropbox.com/s/y6vnv110m0xo4 ... s.png?dl=0
I've also tried several C4D scenes, same issue. And all my drivers are up to date on both stations.
Hopefully this is just a tiny thing to adjust, because it hurts to pay twice the price of a 1080 Ti and get that result...
Thanks for reading,
Hi,
how much VRAM are you using for the test scene?
Please note that the RTX option slightly increases VRAM consumption, so if the scene is near the limit of available VRAM, it is possible that with the RTX option active Out-of-core also kicks in, reducing rendering speed and overall performance.
Have you tried running the test with a less complex scene?
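Just to illustrate the idea (this is only a rough sketch of the condition, not Octane's actual internal logic; all the numbers below are placeholders):

# Simplified view of when Out-of-core kicks in (all values in GB, example numbers)
card_vram    = 11.0   # RTX 2080 Ti
gpu_headroom = 0.5    # reserved via the Out-of-core settings (example value)
scene_vram   = 10.0   # geometry + textures the scene needs (example value)
rtx_overhead = 0.8    # extra VRAM used when the RTX option is on (example value)

if scene_vram + rtx_overhead > card_vram - gpu_headroom:
    print("part of the scene spills to system RAM -> Out-of-core, slower rendering")
else:
    print("everything fits in VRAM -> full speed")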
ciao Beppe
Hi!
For that particular scene, I'm at 4.2 GB used and 3.9 GB free: https://www.dropbox.com/s/60xx9b0t52xjy ... 9.png?dl=0
Yes, I did. I took some time yesterday to run a lot of tests. In fact I have scenes that, as expected, render faster on the RTX station and just a few that render slower on it, like that one. I have this issue on two other scenes, and what they have in common is that they are shared files that were used by other people before I started working on them.
And they're not particularly heavy scenes, so I don't get it. Unfortunately, I can't share them since they're for a client.
I'll try to dig deeper into the settings of those scenes.
Thanks,
Not quite: from your screenshot, as suspected, you have ~3.7GB of Out-of-core active, so VRAM is already exhausted and part of the scene has been moved to system RAM, losing efficiency.
If you sum used VRAM + Out-of-core, you end up with only ~200MB of free VRAM, which is too little.
In my opinion you should just need to decrease VRAM usage by ~1GB to have the scene completely loaded in VRAM and render at full speed.
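A quick way to see it from your numbers (a minimal sketch using the values in your screenshot; units are GB):

# VRAM budget check from the reported numbers (GB)
used_vram     = 4.2   # shown as "used"
reported_free = 3.9   # shown as "free"
out_of_core   = 3.7   # scene data currently spilled to system RAM

usable      = used_vram + reported_free    # ~8.1 GB the scene can address
needed      = used_vram + out_of_core      # ~7.9 GB if nothing were spilled
free_margin = usable - needed              # ~0.2 GB left -> far too little

print(f"free margin if fully resident in VRAM: ~{free_margin:.1f} GB")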
Please try this: reduce the Parallel sampling value in the Kernel settings until Out-of-core is no longer active, set the Max tile value to 2x of that, and try again.
You need to apply the same settings on the GTX machine as well for a correct comparison.
ciao Beppe
Sorry, I forgot to ask you to share a screenshot of the Out-of-core panel in c4doctane/Settings.
I suspect that the headroom value is too high.
ciao Beppe
Well, I think I've found a small part of the solution.
In the Octane device settings, I always set the render priority to HIGH so that I get full performance from the cards.
But for whatever reason, on R20, when I set it to HIGH priority and then close the window, if I re-open it right after I can see that the setting has gone back to LOW by itself. I don't get why; it just won't keep my priority change.
So I ran that very same test on R19 (with HIGH priority active), and the result for the scene I mentioned earlier is here: https://www.dropbox.com/s/kel7wcevdtlyu ... 2.jpg?dl=0
So the results are getting better (from 1min50 to 1min30), but the GTX station is still faster on this particular scene.
About VRAM and RAM usage, I'm kind of a noob in that area. It seems to me that two RTX cards with 11GB of VRAM each, compared to an 11GB + 6GB pair, should handle the scene the same way, or a bit better.
Here are some screenshots:
1 : https://www.dropbox.com/s/2jzpmssa22447 ... 6.png?dl=0
2 : https://www.dropbox.com/s/rm78g3iwivujq ... 4.png?dl=0
So the solution would be to reduce the scene's VRAM usage by about 1GB? But I suppose I'll still have that big performance gap between the cards, no?
I've reduced the Parallel sampling to half its previous value (from 16 to 8) and increased the Max tile value (from 32 to 64), and the results are almost exactly the same.
Thanks for your time and your help
Please set the GPU headroom to 300MB in the Out-of-core panel, then set Parallel sampling to 4 and Max tile to 8 in the Kernel settings, and do the same on both machines, thanks.
Out-of-core must be inactive while rendering on both machines, or the comparison is not valid.
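If it helps to keep track, here is the checklist as a small sketch (plain Python, just notes; the names simply mirror the panels mentioned above, plus the HIGH priority you mentioned earlier):

# Settings to apply identically on both stations before comparing times
test_settings = {
    "gpu_headroom_mb":   300,     # Out-of-core panel
    "parallel_sampling": 4,       # Kernel settings
    "max_tile":          8,       # Kernel settings
    "render_priority":   "HIGH",  # device settings, as you set earlier
}

# While rendering, Out-of-core usage must stay at 0 on both machines,
# otherwise the RTX vs. GTX timing comparison is not valid.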
ciao Beppe