So I found out today that Out of Core slows down rendering quite significantly.
With my current scene, when it's in full use, both cards' VRAM is filled to roughly 4.6 GB, and 5.5 GB is out of core. The rendering speed fluctuates around 5 Ms/sec. I realized it was rendering slower when the computer felt surprisingly quiet - and indeed both GPUs were around 60 °C under load (when usually they are 75-80 °C).
So I reduced the geometry somewhat so that OoC wouldn't kick in - the new scene fits into 8.1 GB of VRAM. The rendering speed is 30 Ms/sec, everything else exactly the same as before (except fewer grass blades in the grass). So a 6x speed-up - I knew it was going to be slower with OoC, but this much? Is that normal?
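For a quick sanity check, the figures from the post can be put together like this (plain arithmetic using only the numbers stated above, nothing Redshift-specific):

```python
# Numbers taken from the post above.
vram_used_gb = 4.6     # per-card VRAM in use
out_of_core_gb = 5.5   # data spilled to system RAM
speed_ooc = 5.0        # Ms/sec with out-of-core active
speed_fit = 30.0       # Ms/sec once the scene fits in 8.1 GB of VRAM

# Fraction of the working set that lives in system RAM instead of VRAM.
ooc_fraction = out_of_core_gb / (vram_used_gb + out_of_core_gb)
slowdown = speed_fit / speed_ooc

print(f"{ooc_fraction:.0%} of the working set is out of core")  # 54%
print(f"rendering is {slowdown:.0f}x slower")                   # 6x slower
```

So over half of the scene data has to come across the PCIe bus on demand, which is at least consistent with a large slowdown.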
Out of Core
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
Interesting. I think it depends highly on RAM speed, and a little on the CPU. Can you check how much higher the CPU usage is while it goes Out of Core? What is your RAM speed?
NVLink has only about a 10% render-time disadvantage while in use (NVLink test: https://www.redshift3d.com/forums/viewthread/28320 )
Architectural Visualizations http://www.archviz-4d.studio
Thank you for the response. I reworked the scene, decreased the geometry size, and separated it into several smaller chunks (OBJ files), so I can use only those that are actually visible in the rendered picture... so I would need to redo it to test again. Maybe I will try over the weekend and post then.
My RAM speed is 3000 MHz - not much these days, I admit, when you can get 4600+... but I have 64 GB and I've had it since 2016/2017... back then 3600 was about the max, so it was decent at the time.
Regarding NVLink, I actually have the bridge, but I've never used it. I usually run with multi-GPU/SLI off, as it's more often than not problematic in games (and with a 2080 Ti not needed for the majority of them anyway), and this is the first time I've actually had a scene big enough for OoC to kick in. Does NVLink take precedence over OoC? I mean, if I enabled SLI, would NVLink memory pooling activate instead of the data being dropped into system RAM? Does NVLink work with 4.05, or do I need the newer (2019) versions, which I would need to pay for? I have not purchased them yet, as I am waiting for full RTX support.
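As a rough comparison of those RAM speeds, here is the theoretical peak bandwidth calculation. Note the dual-channel, 64-bit-per-channel DDR configuration is my assumption for a typical desktop board, not something stated in the thread:

```python
def ddr_bandwidth_gb_s(mt_per_s, channels=2, bus_bytes=8):
    """Theoretical peak DDR bandwidth in GB/s.

    mt_per_s:  memory transfer rate (the '3000 MHz' marketing figure is MT/s)
    channels:  memory channels (2 assumed for a desktop board)
    bus_bytes: bytes per transfer per channel (8 for a 64-bit channel)
    """
    return mt_per_s * bus_bytes * channels / 1000

print(ddr_bandwidth_gb_s(3000))  # 48.0 GB/s
print(ddr_bandwidth_gb_s(4600))  # 73.6 GB/s
```

So even a jump to 4600 MT/s raises the theoretical ceiling by about 50%, which would only partially explain a 6x slowdown - the PCIe link to each GPU is narrower than either figure anyway.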
Thanks!
R9 7950x, 64GB DDR5 6000 MHz, 2x RTX 4090, Samsung 990 Pro 2TB, Kingston KC3000 2TB, Kingston KC3000 1TB, WD Caviar Gold 6TB, Win11 Pro 64bit
From the little non-scientific testing I did a while ago, my impression was that you can go out-of-core without a significant speed hit, but it depends a lot on your scene - basically whether it's textures or geometry that spill over. One thing I also noticed is that the further out-of-core you go, the slower things get.
My suspicion is also that PCIe lanes matter here. I'm running a dual-Xeon workstation with 80 lanes (so each GPU gets x16), which is probably less of a bottleneck than running everything on x8 or x4. The GPUs need to communicate with system RAM, and I can only imagine the lanes are a major factor there.
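To put some rough numbers on the lane argument: assuming PCIe 3.0 (the generation these GPUs use), each lane carries about 985 MB/s per direction after 128b/130b encoding overhead. The 5.5 GB figure below is the out-of-core amount from the first post; everything else here is generic, not a measurement:

```python
PCIE3_GB_S_PER_LANE = 0.985  # ~985 MB/s per PCIe 3.0 lane, per direction

def link_bandwidth_gb_s(lanes, per_lane=PCIE3_GB_S_PER_LANE):
    """Theoretical per-direction PCIe throughput for a given lane count."""
    return lanes * per_lane

out_of_core_gb = 5.5  # amount spilled to system RAM in the original scene
for lanes in (16, 8, 4):
    bw = link_bandwidth_gb_s(lanes)
    # Time to stream the spilled data over the link once:
    print(f"x{lanes}: ~{bw:.1f} GB/s, ~{out_of_core_gb / bw:.2f} s per 5.5 GB pass")
```

Halving the lanes doubles the transfer time for the same out-of-core traffic, so a renderer that touches spilled data repeatedly per frame would feel the difference directly.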
Generally speaking though, I never noticed a huge drop-off in speed - maybe 20% or so, but that was still perfectly acceptable to me. Then again, my tests were just tests, not real production scenes.
Hope that helps, lol
