Hi there,
I'm having a strange issue while rendering my scenes over the internal network. I have a scene configured with path tracing, just 100k polygons, with a small camera animation running at 25 fps for 100 frames. My local machine has one RTX 3090; the render node has two RTX 3090s. Both are running Windows 10. When using the Live Viewer with network rendering enabled, the remote GPUs are used for computing. GPU-Z also confirms this by showing the GPU utilization.
But when rendering the scene via the Picture Manager in C4D R20 (PT, 1000 samples), I can see that the scene is being transferred over the network to the render node, but the GPU utilization stays at 0. The Picture Manager also doesn't indicate that the remote node is rendering (the Octane info just shows: updating).
I know that it could be a timing issue in theory: the remote node is unable to send its update to the host because the host has finished rendering faster. But even if I crank the samples up to 4000, it doesn't make any difference. After some frames it just freezes and doesn't render any further. If I then quit the render process, I get the following error message on the render node:
Launching net render node (10021700) with master xxx.xxx.xxx.xxx:1025
CUDA error 201 on device 0: invalid device context
-> could not get memory info
CUDA error 201 on device 0: invalid device context
-> could not get memory info
CUDA error 201 on device 0: invalid device context
-> could not get memory info
device 0: CUDA module wasn't unloaded!
device 0: CUDA context wasn't destroyed!
CUDA error 201 on device 1: invalid device context
-> could not get memory info
CUDA error 201 on device 1: invalid device context
-> could not get memory info
CUDA error 201 on device 1: invalid device context
-> could not get memory info
device 1: CUDA module wasn't unloaded!
device 1: CUDA context wasn't destroyed!
Local rendering does work without any issues.
I'm using:
- C4D R20
- Octane 2020.2.5 - R3
- Nvidia Studio Driver 472.12
Kind Regards
René