OctaneRender® 2026.2 for 3ds max® v26.5
Forum rules
Any bugs related to this plugin should be reported to the Bug Reports sub-forum.
Bugs related to Octane itself should be posted into the Standalone Support sub-forum.
- Elvissuperstar007

- Posts: 2576
- Joined: Thu May 20, 2010 8:20 am
- Location: Ukraine
- Contact:
Wow, Octane eats up a lot of video memory, and it's just two light sources: one sun, a cube, and a plane.
Win11/msi x79a-gd45 (8d)/ Intel Xeon e5 2690v2/ 64gb DDR3 1866/ Nvidia 4090 Asus TUF/ be quiet! Straight Power 11 1000W 80 Plus Platinum
Octane Render page on VKontakte: http://vkontakte.ru/club17913093
- paride4331

- Posts: 3911
- Joined: Fri Sep 18, 2015 7:19 am
Hi Elvis,
Neural Radiance Cache is a feature that caches and predicts indirect lighting and radiance values, slashing noise and speeding renders up to near-real-time without losing quality.
Use it when rendering complex scenes with heavy global illumination, many indirect bounces, or interiors; it is great for faster previews, or for finals where traditional sampling takes forever.
Set a low max samples value (under 30) and render; it auto-trains on your GPU (Turing or newer, single GPU only).
Why it eats VRAM: it stores neural data and training samples in a cache, adding overhead on top of standard rendering; the overhead grows with resolution, samples, and tiles.
To reduce the VRAM spike: drop max/tile samples to 16-32, monitor usage in Preferences > Device memory, disable it for ultra-high-res/final exports, or use a beefier GPU (8 GB+).
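The "overhead grows with resolution and samples" point above can be sketched with a back-of-envelope budget. This is only an illustrative linear model; the byte coefficients and the fixed cache overhead are hypothetical placeholders, not Octane's real numbers (measure yours in Preferences > Device memory):

```python
# Rough, illustrative VRAM budget: framebuffer + per-sample working set
# + a fixed cache overhead. All coefficients are HYPOTHETICAL placeholders.

def estimate_vram_gb(width, height, parallel_samples,
                     bytes_per_pixel=64, bytes_per_sample_per_pixel=16,
                     cache_overhead_gb=1.5):
    """Linear model: the sample term scales with parallel_samples."""
    pixels = width * height
    framebuffer = pixels * bytes_per_pixel
    sample_buffers = pixels * bytes_per_sample_per_pixel * parallel_samples
    return (framebuffer + sample_buffers) / 1e9 + cache_overhead_gb

# Halving the sample count roughly halves the sample-buffer term:
hi = estimate_vram_gb(1920, 1080, 32)
lo = estimate_vram_gb(1920, 1080, 16)
print(f"32 samples: {hi:.2f} GB, 16 samples: {lo:.2f} GB")
```

The takeaway is only the shape of the curve: the cache is a fixed cost, while the sample buffers scale linearly, which is why dropping max/tile samples is the first knob to turn.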
Regards
Paride
2 x Evga Titan X Hybrid / 3 x Evga RTX 2070 super Hybrid
- paride4331

- Posts: 3911
- Joined: Fri Sep 18, 2015 7:19 am
Hi coilbook,
coilbook wrote: Thu Mar 05, 2026 5:09 am
Hi Paride,
paride4331 wrote: Tue Mar 03, 2026 12:27 pm
Hi coilbook,
coilbook wrote: Fri Feb 27, 2026 5:59 pm
Thank you!
Is there any way to get better VRAM usage? Raising parallel samples from 16 to 32 uses about 2 GB of VRAM, and AI Light uses about 3-4 GB. That's a lot of wasted VRAM: half of my 12 GB of precious VRAM gets used by Octane for technical purposes only.
Could you provide some information about the type and number of light sources in the scene?
Regards
Paride
AO kernel, an Octane sun, and three Octane lights, plus some mesh lights with diffuse light disabled. Not much, but VRAM usage is huge when AI Light is on.
It sounds like VRAM overhead from AI Light. AI Light is an AI-driven sampler that dynamically weights lights to cut noise in complex lighting setups, but it adds 3-4 GB extra because it stores runtime data for light importance and updates. Your scene doesn't seem especially light-heavy, so AI Light might be overkill if noise isn't a big issue; try disabling it first and see if render quality/speed holds up without the VRAM hit. Parallel samples jumping from 16 to 32 and eating 2 GB is normal: it speeds up tile processing but scales VRAM roughly linearly, so drop it to 8-16 to reclaim space without much slowdown. Overall, that "wasted" half of your 12 GB isn't totally idle (the OS, 3ds Max, and drivers reserve 25-30%, plus engine data), but you can optimize: compress textures (8:1 compression in Octane, or grayscale nodes where possible), instance duplicates to minimize geometry/texture load, or enable Out-of-Core to overflow into system RAM (slower, but it avoids crashes).
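Putting the approximate figures from this thread together gives a quick budget for the 12 GB card. All numbers below are the rough estimates quoted in the discussion, not measured values:

```python
# Back-of-envelope budget for a 12 GB card, using this thread's figures.
total_gb = 12.0
os_driver_reserve = 0.275 * total_gb   # ~25-30% for OS / 3ds Max / drivers
ai_light = 3.5                          # ~3-4 GB when AI Light is enabled
parallel_samples_32 = 2.0               # ~2 GB at 32 parallel samples

left_for_scene = total_gb - os_driver_reserve - ai_light - parallel_samples_32
print(f"Left for geometry/textures: {left_for_scene:.1f} GB")

# Disabling AI Light and halving parallel samples (linear scaling):
reclaimed = ai_light + parallel_samples_32 / 2
print(f"Reclaimed: {reclaimed:.1f} GB")
```

Even this crude arithmetic shows why disabling AI Light is the single biggest win here: it frees more than the parallel-samples reduction does.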
Regards
Paride
2 x Evga Titan X Hybrid / 3 x Evga RTX 2070 super Hybrid
- neonZorglub

- Posts: 1068
- Joined: Sun Jul 31, 2016 10:08 pm
Hi coilbook,
coilbook wrote: Wed Feb 25, 2026 5:12 pm
Could you please render 150 frames instead of just one frame? You'll see the evaluation time go up to 1-2 minutes, and the RAM gets clogged when using tyFlow particles.
During our render with 20 GPUs, the time per frame increased from 2 minutes to 7 minutes (During 150 frame rendering), and about 60% of that was just evaluation time per frame. VRAM usage also increases significantly.
With 10,000 particles, VRAM usage was 8 GB. Without tyFlow, VRAM usage was only 1 GB. This proves that tyFlow renders them as COPIES, not INSTANCES. I also think MB slows down rendering a lot.
UPDATE: For some reason in some scenes tyflow renders as instances but sometimes as copies clogging vram and increasing eval times.
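The copies-versus-instances gap described above can be sketched with a toy memory model. The sizes below are hypothetical round numbers chosen for illustration, not tyFlow's or Octane's real figures:

```python
# Toy model of why per-particle COPIES blow up memory versus INSTANCES.
# Sizes are HYPOTHETICAL round numbers, not tyFlow's or Octane's real figures.
MESH_BYTES = 800_000        # one particle mesh's geometry data
TRANSFORM_BYTES = 64        # one 4x4 transform per instance

def copies_bytes(n):
    # every particle carries its own full copy of the mesh
    return n * MESH_BYTES

def instances_bytes(n):
    # one shared mesh plus a small transform per particle
    return MESH_BYTES + n * TRANSFORM_BYTES

n = 10_000
print(f"copies:    {copies_bytes(n) / 1e9:.2f} GB")
print(f"instances: {instances_bytes(n) / 1e6:.2f} MB")
```

With copies, memory grows by a full mesh per particle; with instances it grows by only a transform, which is why the same particle count can land gigabytes apart.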
I've investigated your scene for memory leaks and performance, rendering frames 0 to 250, and got these results:
-The main reason for the memory usage and performance reduction is the use of the 3ds Max Boolean object!
It has a memory leak and should not be used, especially for animation!
I rendered your scene with Arnold and Scanline, with Octane uninstalled, and got similar results:
around 10 GB are lost after the render, and even after a File/Reset, ~7 GB are still lost.
-Another issue comes from tyFlow, but it's not exactly a memory leak; it seems more like memory fragmentation.
When we render all frames right after loading the scene, no frames are in the particle cache yet.
For each frame to be rendered, tyFlow calculates the simulation and stores the particle data in a cache, keeping it for further use when editing the scene.
But this cache memory is not contiguous, because other things are allocated each frame; that may cause performance issues.
To avoid this, you could disable the tyFlow cache when rendering, or make tyFlow calculate the full simulation and build the particle cache efficiently by setting the timeline to the last frame before rendering
(e.g. with MaxScript: slidertime = 250).
-There was a small memory leak in some Octane objects. Even a small leak is bad, as it makes memory fragmentation worse. This will be fixed in the next release.
So the main issue is the Boolean objects (Bar to cut001 .. Bar to cut010).
There are several ways to replace these Boolean objects.
The simplest I can think of is to use an Octane proxy:
-Move 'Bar to cut001' to x=0 and y=0 (that helps when placing the resulting proxy object).
-Select 'Bar to cut001'.
-Use the context menu to open "Export to Octane Proxy".
-Enable Animation export, set frames from 40 to 300, and Export.
You'll get a 'Bar to cut001.octprx' file (~200 MB).
Create an Octane / Proxy object and point its file to this .octprx file.
Move it and create instances to match all your 'Bar to cutxxx' objects; then you can delete the Boolean objects.
Another way is to use Octane Vectron objects and operators,
but I could see some issues when trying to combine several objects (like two Torus objects) to link them to a Vectron Subtract, or when creating a custom subtract OSL.
I'll fix those issues soon, as Vectron might be the best alternative to the Boolean object.
Thanks
Thank you!
Thank you!