OctaneRender® for 3ds max® v2025.2.1 - 16.07

Sub forum for plugin releases
Forum rules
Any bugs related to this plugin should be reported to the Bug Reports sub-forum.
Bugs related to Octane itself should be posted into the Standalone Support sub-forum.
ofirgardy
Licensed Customer
Posts: 18
Joined: Wed Feb 05, 2020 12:17 pm

Hi Paride,

Thank you for all your help.

I was wondering: was there any change to the system requirements for the render nodes (not RNDR nodes, just your average network distribution nodes) in the latest release?

I used to have two systems serving mainly as my studio's network slaves, and up until recently all worked well. Since the latest Octane release they show:
"render node crashed or was stopped via CTRL+C
started logging DD.MM.YY XX:XX:XX"

Now, this only happens on the older systems (one with 6x GTX 1060 and one with 2x 2070 Super).
I brought in my laptop to check; it has a newer board and CPU plus a 4070 mobile GPU, and it DOES work as expected.

So, has anything changed in the recent 2025.2.1 release?
paride4331
Octane Guru
Posts: 3813
Joined: Fri Sep 18, 2015 7:19 am

Hi ofirgardy,
Could you let me know which versions of Octane Render and the Daemon you’re currently using, and which versions you last used that worked for you without any issues? Also, what version of the Nvidia drivers are you running?
Regards
Paride
2 x Evga Titan X Hybrid / 3 x Evga RTX 2070 super Hybrid
ofirgardy
Licensed Customer
Posts: 18
Joined: Wed Feb 05, 2020 12:17 pm

The version I use is the most recent:
OctaneRender® for 3ds max® v2025.2.1 - 16.07
and its corresponding network node.

And which driver do you mean, the one on the main system or on the slaves?

On the main system, the one sending the jobs, which has 3x A5000 + 1x 3080, I'm using Studio driver 576.80; it generally works great.
On the slaves that crash, I've updated the drivers and tried all sorts of combinations, mostly the newest Studio and Game Ready drivers, switching and checking to no avail:
1 system with 2x2070Super, 32GB RAM, windows 11
1 system with 6x GTX1060 6GB, 64GB RAM, windows 10

Both of these systems no longer work as render nodes, though they did right up until the last major node update.

The only setup that works as a node is my laptop, a Core i9 with a 4070 Laptop GPU, 64GB RAM, and Nvidia Studio driver 576.80. This system DOES work, although earlier today, for the first time, I saw the error message I mentioned before, saying it crashed or CTRL+C was pressed.

If I had to guess, I'd say the out-of-core (OOC) memory management is doing something funny, either in the driver or in the node.
When I tested a large scene containing instances of teapots, it DOES render. But my regular type of scene, with textures and all, forces it to bail out (and those used to render perfectly).
paride4331
Octane Guru
Posts: 3813
Joined: Fri Sep 18, 2015 7:19 am

Hi ofirgardy,
Based on what you're referring to, it makes me think the issue could be related to how Octane is handling VRAM and Out of Core (OOC) memory, especially after the recent Node update. Since each render node has to load the entire scene into GPU memory locally, if the scene is too large, particularly in terms of texture data, and doesn't fully fit into available VRAM, Octane might try to push part of that data into system RAM using OOC.
How well that works can really depend on a few factors, like the GPU architecture, the amount of free system RAM, and the current OOC settings. Even with 32GB or 64GB of RAM, if the OOC limits aren’t set high enough, or if something about the paging behavior has changed, it could lead to instability or failed renders, especially on older GPUs like the GTX 1060s and RTX 2070 Supers. Cards like those may have a harder time under memory pressure, especially with high-res or uncompressed textures.
Another thing that stands out is the master setup — you're running 3×A5000s alongside a 3080. Since Octane needs to keep data synced across all active GPUs in a single system, the 3080's smaller VRAM becomes the ceiling. So even though the A5000s have 24GB each, they won't be able to use all of that if the 3080 is active, because the effective limit drops to match the 10GB of the 3080. That could introduce memory constraints even on the master, depending on the scene. I’d recommend checking the OOC memory limits on both the master and the slave machines. If they’re too low, increasing them might help. Also, try testing with a lighter version of the scene, fewer or lower-res textures, just to see if it renders correctly. That could help confirm whether it’s really a memory bottleneck causing the crash.
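The mixed-GPU ceiling described above can be sketched in a few lines: when GPUs with different memory sizes render the same scene, the device with the least VRAM sets the effective per-GPU limit. The GPU names and sizes below are illustrative values matching the setup discussed, not queried from real hardware.

```python
# Sketch: effective per-GPU VRAM ceiling in a mixed-GPU system.
# Octane keeps scene data synced across all active GPUs, so the
# smallest card caps the usable pool. Values are illustrative.

def effective_vram_ceiling_gb(gpus):
    """Return the per-GPU VRAM ceiling: the minimum across active devices."""
    if not gpus:
        raise ValueError("no active GPUs")
    return min(vram_gb for _name, vram_gb in gpus)

master = [("A5000", 24), ("A5000", 24), ("A5000", 24), ("RTX 3080", 10)]
print(effective_vram_ceiling_gb(master))      # 10: the 3080 caps the pool
print(effective_vram_ceiling_gb(master[:3]))  # 24: A5000s only
```

Disabling the 3080 for rendering would raise the ceiling back to 24GB on that machine, at the cost of one fewer device.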
Regards
Paride
2 x Evga Titan X Hybrid / 3 x Evga RTX 2070 super Hybrid
ofirgardy
Licensed Customer
Posts: 18
Joined: Wed Feb 05, 2020 12:17 pm

Thank you @Paride for all this info!

Questions.

1) Can I see in the Octane viewport (while I work) how much VRAM is being used for the specific shot? Is that a reliable benchmark to base decisions on?
2) Does setting the OOC "window" into system RAM to a higher number (on the main system I have 128GB of RAM) affect the render nodes?
3) If a render node has, say, 64GB of RAM but plenty of SSD space, would it be wise to set the render node's "use maximum memory" setting during installation to, say, 128GB? (Not that I expect to reach such huge amounts of memory usage, but who knows, it might help?)

4) On the RNDR Slack, in the node operators' discussion, I read that recent Octane releases, when confronted with an older card, try to emulate new technologies built into newer cards. If so, I guess this also "eats" memory off the VRAM?

once again,
Thank you for this prompt support.
paride4331
Octane Guru
Posts: 3813
Joined: Fri Sep 18, 2015 7:19 am

Hi ofirgardy,
1) yes, https://docs.otoy.com/3dsmax/ViewportInfo.html
2) Setting a higher Out-of-Core (OOC) memory limit on your main system (like with 128GB RAM) mainly affects that local machine. Render nodes have their own OOC settings, so bumping the value on your workstation won’t impact the nodes unless you change it on them too. Each machine manages its own memory independently.
3) On a node with 64GB of RAM, setting the “maximum memory” slider to 128GB might not really help — Octane can't use what the system physically doesn't have. It doesn’t magically access disk space like RAM, and while OOC can spill over to SSD, it’s way slower. So unless that node actually has 128GB of real RAM, setting it that high won’t give you much benefit and might even be misleading. It's better to keep it realistic and let OOC handle spillover properly.
4) yes, it's pretty likely that recent versions of Octane, when running on older GPUs, try to emulate some of the hardware features introduced with newer cards (like RT Cores or Tensor Cores). Even if you're not explicitly using those features, the engine might still attempt to replicate their behavior in software to keep things consistent. That kind of emulation usually means extra memory usage, since it needs to allocate additional buffers or structures to make up for the lack of native hardware support. So yeah, it’s definitely possible that some of your VRAM is being used up by that process.
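The reasoning in point 3 can be sketched as a tiny clamp: a limit set above physical RAM is pointless because the usable out-of-core budget is bounded by what the machine really has, minus headroom for the OS and the render process. The 8GB headroom value below is an illustrative assumption, not an Octane default.

```python
# Sketch: clamp a requested out-of-core (OOC) limit to realistically
# available system RAM. A 128 GB setting on a 64 GB node buys nothing.

def usable_ooc_limit_gb(requested_gb, physical_ram_gb, os_headroom_gb=8):
    """Clamp a requested OOC limit to RAM actually available on the node."""
    available = max(physical_ram_gb - os_headroom_gb, 0)
    return min(requested_gb, available)

print(usable_ooc_limit_gb(128, 64))  # 56: the 128 GB request is clamped
print(usable_ooc_limit_gb(40, 64))   # 40: already realistic, left alone
```

In other words, on the 64GB slaves a realistic limit is well under 64GB; raising the slider past that only misrepresents what the node can actually page.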
Regards
Paride
2 x Evga Titan X Hybrid / 3 x Evga RTX 2070 super Hybrid