Well, then that's a limitation of the Lightwave plugin.
BorisGoreta wrote: No, this is not possible; whenever the total geometry poly count changes you have to recompile the entire scene again.
I don't know how Standalone or Max work, but in Lightwave it is like that. "Merge" and "proxy" are not Lightwave terms, so I don't know what those mean.
Basically, if you want to add 1 poly to a 10 million poly scene with IPR running, you have to recompile the entire 10 million and one polygons just to see the added polygon.
I am not talking about adding this poly to an existing object, in which case that object is changed and needs to be reloaded and recompiled. I am talking about adding a second object with just 1 poly; even then, all objects need to be recompiled.
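For what it's worth, here is a minimal Lua sketch of the behaviour Boris describes, using made-up stub functions rather than any real Octane or LightWave plugin API: editing an existing mesh only requires recompiling that one mesh, but adding any new object changes the total poly count and (as reported) sends every object through recompilation again.

Code:
-- Illustration only: recompileMesh is a stub, not part of the Octane or LightWave API.
local function recompileMesh(mesh)
  print(("recompiling %s (%d polys)"):format(mesh.name, mesh.polyCount))
end

local scene = { meshes = { { name = "bigMesh", polyCount = 10000000 } } }

-- Case 1: editing an existing mesh -> only that mesh is reloaded and recompiled.
scene.meshes[1].polyCount = scene.meshes[1].polyCount + 1
recompileMesh(scene.meshes[1])

-- Case 2: adding a second object with a single polygon -> the total poly count
-- changes, so (as reported above) every object in the scene is recompiled again.
table.insert(scene.meshes, { name = "onePolyObject", polyCount = 1 })
for _, mesh in ipairs(scene.meshes) do
  recompileMesh(mesh)
end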
OctaneRender™ Standalone 3.00 alpha 1
Forum rules
NOTE: The software in this forum is not 100% reliable; these are development builds and are meant for testing by experienced Octane users. If you are a new Octane user, we recommend using the current stable release from the 'Commercial Product News & Releases' forum.
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Marcus,
Why, in Standalone, do I have to wait about 30-50 seconds for all the randomly transformed textures over instances to be calculated (using the plugin you helped with coding: viewtopic.php?f=73&t=41388)? When I have a single node with 256 random transforms (the maximum for one node), everything works smoothly. But when there are 10 or more such nodes (thousands of random transforms), the node editor freezes unless the entire scene is recalculated, even if I only want to unpin a node.
Could you please add this Lua code to 3.xx in the form of a ready-made node, e.g. among the texture nodes or transform nodes?
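For context, here is a rough Lua sketch of the kind of setup described above; it is not the actual plugin code from the linked thread, just an illustration of the scale involved: random transforms are generated for the instances and split into groups of 256, since one node only holds 256 random transforms, so thousands of instances need well over ten such nodes.

Code:
-- Illustration only, not the plugin from viewtopic.php?f=73&t=41388.
math.randomseed(42)

local MAX_TRANSFORMS_PER_NODE = 256   -- the per-node limit mentioned above
local totalInstances = 4000           -- "thousands of random transforms"

local nodes = {}
local current = {}
for _ = 1, totalInstances do
  -- a very simple random transform: a rotation around Y plus a uniform scale
  current[#current + 1] = {
    rotationY = math.random() * 360,
    scale     = 0.8 + math.random() * 0.4,
  }
  if #current == MAX_TRANSFORMS_PER_NODE then
    nodes[#nodes + 1] = current
    current = {}
  end
end
if #current > 0 then nodes[#nodes + 1] = current end

print(("instances: %d, transform nodes needed: %d"):format(totalInstances, #nodes))
-- With 10+ nodes like this in the node editor, any edit currently seems to trigger
-- a full scene recalculation, which is the 30-50 second wait described above.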
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
I don't fully understand. Could you please explain step-by-step what you are doing and what is slow?
smicha wrote: Marcus,
Why, in Standalone, do I have to wait about 30-50 seconds for all the randomly transformed textures over instances to be calculated (using the plugin you helped with coding: viewtopic.php?f=73&t=41388)? When I have a single node with 256 random transforms (the maximum for one node), everything works smoothly. But when there are 10 or more such nodes (thousands of random transforms), the node editor freezes unless the entire scene is recalculated, even if I only want to unpin a node.
Could you please add this Lua code to 3.xx in the form of a ready-made node, e.g. among the texture nodes or transform nodes?
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Sure. I'll write a step-by-step explanation.
abstrax wrote: I don't fully understand. Could you please explain step-by-step what you are doing and what is slow?
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
abstrax wrote:I don't fully understand. Could you please explain step-by-step what you are doing and what is slow?
I PMed you a simple orbx, Marcus.
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
It's me again, asking about displacement.
Is it hard to implement a softening parameter for displacement maps? Right now the displacement node takes every pixel into account, which makes it hard for me to use displacement all the time.
Btw, I don't mean blurring the image, but rather normalising or interpolating the height differences overall, to give us a smoother surface.
Best,
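To make the request a bit more concrete, here is one possible reading of that softening idea as a small Lua sketch. This is an assumption about what is meant, not an existing Octane parameter: the displacement image itself is left untouched, but the height differences between neighbouring samples are scaled down before displacing, so hard steps become gentler slopes.

Code:
-- Hypothetical "softening": scale down neighbouring height differences.
-- softness = 0 leaves the heights unchanged, softness = 1 flattens them completely.
local function soften(heights, softness)
  local out = { heights[1] }
  for i = 2, #heights do
    local diff = heights[i] - heights[i - 1]
    out[i] = out[i - 1] + diff * (1 - softness)
  end
  return out
end

local heights  = { 0.0, 0.0, 1.0, 1.0, 0.2, 0.9 }   -- hard steps, as from a displacement map
local softened = soften(heights, 0.5)                -- halve every height difference
for i, h in ipairs(softened) do
  print(i, h)
end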

My Portfolio
windows 10 Pro. |1070 + 1070 + 1070 + 1070 | i7 @4.5Ghz
I think I've found an issue/bug (sorry if it was mentioned somewhere already, but I haven't found anything on it yet =)
The idea is pretty simple: as you bump the resolution up and up, you see multiple GPUs utilised worse and worse (I can provide multiple screenshots if anyone is interested).
Say you are rendering 1k x 1k output: the three local GPUs sit at 99% load 99% of the time. However, if you push the resolution higher, one GPU stays fully loaded all the time while the others keep dropping, and you see those 0% GPU loads for longer and longer periods =)
I'm not sure what causes that. I first thought that this setting might help, but it seems that's not the case:
"Minimize net traffic", if enabled, distributes only the same tile to the net render slaves, until the max samples/pixel has been reached for that tile and only then the next tile is distributed to slaves. Work done by local GPUs is not affected by this option.
- Silverwing
Hi there. I just wanted to let you know that I am running into the same problem.
manalokos wrote: Volumes cause light to enter inside closed spaces.
I attached an image illustrating what happens inside a closed box when a volume intercepts the surface.
I know it's being looked after. I just wanted to confirm that there is a problem with shadows!
WIN 10 PRO 64 | ASUS X99-E WS | 3 X GTX 1080Ti | i7 5960X 8X 3,33GHz | 64GB RAM