Hello World: OctaneRender 4 is here
Forum rules
NOTE: The software in this forum is not 100% reliable; these are development builds and are meant for testing by experienced Octane users. If you are a new Octane user, we recommend using the current stable release from the 'Commercial Product News & Releases' forum.
I get a failed GPU on ORR4 (shows red) right away after clicking a render target, for example. I'm using a single GTX 580 on the latest drivers on Win 7 x64, and I have also installed the latest CUDA toolkit.
Octane 2022.1.1 nv535.98
x201t - gtx580 - egpu ec
Dell G5 - 16GB - dgpu GTX1060 - TB3 egpu @ 1060 / RTX 4090
Octane Render experiments - ♩ ♪ ♫ ♬
whersmy wrote: I get a failed GPU on ORR4 (shows red) right away after clicking a render target, for example. I'm using a single GTX 580 on the latest drivers on Win 7 x64, and I have also installed the latest CUDA toolkit.
Does the same scene work on 3.08? If not, is it possible to send me the scene so I can take a look? It is entirely possible that we have already fixed this issue for the next release, but I'd like to take a look anyway.
senthaze wrote: Does the same scene work on 3.08? If not, is it possible to send me the scene so I can take a look? It is entirely possible that we have already fixed this issue for the next release, but I'd like to take a look anyway.
PM.
Octane 2022.1.1 nv535.98
x201t - gtx580 - egpu ec
Dell G5 - 16GB - dgpu GTX1060 - TB3 egpu @ 1060 / RTX 4090
Octane Render experiments - ♩ ♪ ♫ ♬
I have noticed a pattern: in its current state the denoiser is super snappy and fast at lower resolutions, and conversely gets slower the higher the resolution.
I am finding it's not worth using on a 1080p or 2160p render, because the data-collection time plus the denoiser processing takes longer than simply rendering with more samples.
And yes, you can lower the samples, but then fidelity is lost too. I can get better results with more samples almost universally, so a lot of the time it just doesn't pay off.
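To make that trade-off concrete, here is a minimal back-of-the-envelope sketch. Every number in it (seconds per sample, sample counts, denoiser overhead) is an assumption for illustration only, not a measured Octane timing:

```python
# Back-of-the-envelope sketch of the trade-off described above. All figures are
# assumptions for illustration, not measured Octane timings.

def plain_time(samples, sec_per_sample):
    """Brute-force render: time grows linearly with the sample count."""
    return samples * sec_per_sample

def denoised_time(samples, sec_per_sample, overhead_s):
    """Low-sample render plus the denoiser's data-collection + processing overhead."""
    return samples * sec_per_sample + overhead_s

# Hypothetical single-GPU numbers: (label, seconds per sample, denoiser overhead in s).
cases = [
    ("720p",  0.05,  10.0),   # small frame: tiny denoiser overhead
    ("2160p", 0.45, 700.0),   # 4K frame: overhead assumed to grow faster than sampling cost
]

for label, sps, overhead in cases:
    brute = plain_time(2000, sps)              # assumed samples needed for a clean result
    quick = denoised_time(500, sps, overhead)  # a quarter of the samples plus the denoiser
    winner = "denoiser" if quick < brute else "brute force"
    print(f"{label}: brute force {brute:6.0f} s vs denoised {quick:6.0f} s -> {winner} wins")
```

Under these assumed numbers the denoiser wins easily at low resolution but loses at 2160p once its fixed overhead outgrows the sampling time it saves, which is exactly the pattern reported above.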
So I know it was mentioned that work will be done later on making the denoiser faster.
I would say this feature has such game-changing potential that it should be Priority 1/Top Priority/Priority Numero Uno/King of Priorities/Prioritus Primius.
Any way you say it... you have a potential monster here, please don't let it sleep for too long...
Release the Kraken!
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
yes.
YOKO Studio | win 10 64 | i7 5930K GTX 3090 | 3dsmax 2022.3 |
- FrankPooleFloating
- Posts: 1669
- Joined: Thu Nov 29, 2012 3:48 pm
+1 for kraken
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles
- PolderAnimation
- Posts: 375
- Joined: Mon Oct 10, 2011 10:23 am
- Location: Netherlands
- Contact:
Notiusweb wrote: I have noticed a pattern: in its current state the denoiser is super snappy and fast at lower resolutions, and conversely gets slower the higher the resolution.
I am finding it's not worth using on a 1080p or 2160p render, because the data-collection time plus the denoiser processing takes longer than simply rendering with more samples.
And yes, you can lower the samples, but then fidelity is lost too. I can get better results with more samples almost universally, so a lot of the time it just doesn't pay off.
So I know it was mentioned that work will be done later on making the denoiser faster.
I would say this feature has such game-changing potential that it should be Priority 1/Top Priority/Priority Numero Uno/King of Priorities/Prioritus Primius.
Any way you say it... you have a potential monster here, please don't let it sleep for too long...
Release the Kraken!
Personally, I think we need more of a good basis before this fancy stuff. I think proper/better displacement is needed more than things like this. I know it is fun and very exciting, but we are missing some basic stuff for a production-ready renderer. But that is my opinion.
Win 10 64bit | RTX 3090 | i9 7960X | 64GB
PolderAnimation wrote: Personally, I think we need more of a good basis before this fancy stuff. I think proper/better displacement is needed more than things like this. I know it is fun and very exciting, but we are missing some basic stuff for a production-ready renderer. But that is my opinion.
Right.
Kraken - Kraken - Kraken - Kraken - Kraken!
Come on Otoy....
You can showcase this. This will boost advertising and be a win for current users...
Win-win.
KRAKIFY THIS MONSTER!
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
Hello Octanight Keepers,
Since the next full Octane release (V4) is designed to showcase the dominance of our favorite renderer and attract an even greater following, please take into account that the 20-GPU limit ought to be a relic of the V3 past. Even your Benchmark page shows that, at least for Linux users, 11 GPUs in a single system is a present achievement, and with V4's added union of two render nodes the GPU cap surely ought to be raised now to at least 33 (3 x 11) GPUs. The GPU count limit ought to evolve rather than be written in stone; it ought to match a reality that is constantly changing. The greatest 3D renderer on the planet deserves to show the whole evolution of its full power.
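For what it's worth, the arithmetic behind that 33 is just GPUs per box times the number of machines in a master-plus-two-render-node setup. A tiny sketch, where the 11-GPU figure and the two render nodes are taken from the post above and nothing here reflects an official cap:

```python
# Tiny sketch of the arithmetic in the post above. The 11-GPUs-per-box figure and
# the two extra render nodes come from that post; nothing here is an official limit.
gpus_per_system = 11          # Benchmark-page Linux single-system result cited above
systems = 1 + 2               # one master plus the two render nodes mentioned for V4
proposed_cap = gpus_per_system * systems
current_cap = 20              # the V3-era limit being questioned

print(f"proposed cap: {proposed_cap} GPUs (current limit: {current_cap})")
# -> proposed cap: 33 GPUs (current limit: 20)
```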
Because I have 180+ GPU processors in 16 tweaked/multi-OS systems - character limit prevents detailed stats.