Sounds like you have other programs running that eat up your VRAM. Octane does NOT say how much VRAM you have left on your graphics card; it only tells you how much Octane is using for the scene you are loading! Octane does not check which other programs are running or how much VRAM they use. You have to figure that out yourself.
GPU-Z is a good program to use, as it will tell you how much VRAM your graphics card is using.
http://www.techpowerup.com/downloads/19 ... 0.5.3.html
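If you prefer to check this from a script rather than with GPU-Z, here is a minimal sketch using NVIDIA's NVML through the third-party pynvml package (assumed to be installed); it reports total/used/free VRAM per GPU, counting what every running program has allocated, not just Octane:
[code]
# Per-GPU VRAM report via NVML (Python, pip install pynvml).
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetName,
                    nvmlDeviceGetMemoryInfo)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        name = nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older pynvml versions return bytes
            name = name.decode()
        mem = nvmlDeviceGetMemoryInfo(handle)
        mb = 1024 * 1024
        print(f"GPU {i} ({name}): {mem.used / mb:.0f} MB used / "
              f"{mem.total / mb:.0f} MB total ({mem.free / mb:.0f} MB free)")
finally:
    nvmlShutdown()
[/code]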
Memory management. How much RAM is needed to manage VRAM?
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
Thanks for the info, but the only programs open at the same time are the Resource Monitor, EVGA Precision for monitoring the GPU temperatures, Bit Defender antivirus and the network manager for the internet connection... And I have an internal GPU for the two screens, and two external GPUs used only for rendering. So no application is eating the VRAM used by Octane!
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
@Marcus and Radiance:
Hello! I have done more experiments, and switched to Octane 1.0 beta 2.47 (PMC TEST).
Good surprise, even if multi-GPU is not supported (or only weirdly) by this version:
Despite my maximum of 8GB of RAM on the motherboard, this time I can increase the resolution up to the maximum with the scene that gave me a black screen under the CUDA 3.0 version!
I started with my maximum usable resolution in the 3.0 version, 2560x1440, increased it in steps of 1280x1440 up to the maximum at that ratio, and then tried the maximum resolution in full square format, 8192x8192, with success!
I get 2334.3/3002 MB used with 1GB of Geometry for 8447086 triangles.
Next time I will try to increase the geometry weight, as I still have no roof on the second building of my current scene.
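As a side note on why the resolution drives the VRAM usage so strongly, here is a back-of-the-envelope estimate. The buffer layout is purely my assumption (one RGBA float32 accumulation buffer plus one RGBA 8-bit display buffer; the real Octane layout may differ), but it lands in the same ballpark as the numbers above:
[code]
# Rough film-buffer estimate (Python). The buffer layout is an assumption:
# one RGBA float32 accumulation buffer + one RGBA 8-bit display buffer.
def film_buffers_mb(width, height):
    hdr = width * height * 4 * 4   # RGBA, 4 bytes per channel
    ldr = width * height * 4 * 1   # RGBA, 1 byte per channel
    return (hdr + ldr) / 2**20

for w, h in [(2560, 1440), (4096, 4096), (8192, 8192)]:
    print(f"{w}x{h}: ~{film_buffers_mb(w, h):.0f} MB")
# 8192x8192 -> ~1280 MB; added to the ~1GB of geometry this is close to the
# 2334.3 MB reported above.
[/code]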
The weird thing about the GPU management is that the EVGA Precision monitor shows the two GPUs being used alternately: one is at 99% while the other is at 0%, and they switch over time. I would have expected only one to be used constantly at maximum...
I have not looked precisely at the amount of RAM used during loading and voxelizing... I will have to do more tests on that and also time the process.
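To check both points without staring at EVGA Precision, here is a small logging sketch (assuming the third-party pynvml and psutil packages are installed) that samples per-GPU utilization and VRAM plus host RAM once per second:
[code]
# Log per-GPU utilization / VRAM and host RAM once per second (Python).
import time
import psutil
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo,
                    nvmlDeviceGetUtilizationRates)

nvmlInit()
handles = [nvmlDeviceGetHandleByIndex(i) for i in range(nvmlDeviceGetCount())]
try:
    while True:
        parts = []
        for i, h in enumerate(handles):
            util = nvmlDeviceGetUtilizationRates(h).gpu        # % GPU load
            used = nvmlDeviceGetMemoryInfo(h).used / 2**20     # MB of VRAM
            parts.append(f"GPU{i} {util:3d}% {used:6.0f}MB")
        ram = psutil.virtual_memory().used / 2**20             # MB of host RAM
        print(" | ".join(parts), f"| RAM {ram:.0f}MB")
        time.sleep(1)
finally:
    nvmlShutdown()
[/code]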
So it gives some hope and shows that 8GB of RAM may be enough for 3GB of VRAM, unless the RAM usage is doubled once the two GPUs can be used together... This would be important to know!
What is your opinion on that point?
EDIT: Well, new experiments with an amazing result:
As my previous test showed that Octane 1.0 beta 2.47 uses the two active GPUs alternately, I disabled one GPU, and the other one was then used at 99%. Then, while Octane was rendering on one GPU, I opened a second instance of Octane and selected the second GPU. I loaded the same scene and set the lens shift to -0.5 and 0.5 in the two Octane instances.
I adjusted the resolution to 4096 on each instance, and the two GPUs are currently rendering fine!
Each GPU uses 2014.3/3002 MB.
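For anyone wanting to recombine the result afterwards: assuming the -0.5 / +0.5 lens shift makes each instance render the left and right half of the full frame, and that each half is saved as a PNG (the file names below are hypothetical), here is a small stitching sketch with Pillow:
[code]
# Stitch the two half-frames rendered by the two Octane instances (Python + Pillow).
from PIL import Image

left = Image.open("left.png")     # rendered with lens shift -0.5
right = Image.open("right.png")   # rendered with lens shift +0.5

lw, lh = left.size
rw, rh = right.size
full = Image.new("RGB", (lw + rw, lh))
full.paste(left, (0, 0))
full.paste(right, (lw, 0))
full.save("full_frame.png")
[/code]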
I then tried to render in parallel in two parts of 8192x8192...
But the instances crashed one after the other when I increased the resolution.
I tried again after some time to allow the memory to purge, but it failed to load! I tried again with a lower vertical resolution (8192x6000) and it showed that I was at around 2600 of the 3002 MB. Increasing to 8192x6500 led to a crash.
I think that a certain (variable) amount of VRAM is used by the system itself, because the EVGA Precision monitor shows a variable 164 to 260 MB even when the GPUs are idle. As it is variable, the crash doesn't always occur at the same resolution.
Anyway, rendering simultaneously on two instances in two parts of 4096x4096 should match my needs in most cases.
See the screenshot below, showing the two GPUs working alternately:
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.