I've been doing some "sheen" tests in modo, Blender Cycles and RenderMan for Blender (using pxrDisney), since they all have a principled shader. The sheen doesn't change based on roughness in any of their shaders.
In Octane, roughness affects sheen. Is this intentional?
OctaneRender™ Standalone 3.08 RC 1
Forum rules
NOTE: The software in this forum is not 100% reliable; these are development builds and are meant for testing by experienced Octane users. If you are a new Octane user, we recommend using the current stable release from the 'Commercial Product News & Releases' forum.
Can anyone at OTOY at least give me some advice on why slaves just disappear from the network list after rendering for 2-3 hours, especially when rendering lots of foam and Phoenix? Does my network card fail? Can it be the router? The slaves just disappear and Octane continues to render with the master's GPUs. All slaves have to be restarted manually. It's been like this for 3-4 months now; I never had these problems before. Thanks
Please try binding your machines' MAC addresses to fixed IPs in your router settings (the router is usually at 192.168.1.1 or .100). It may be that your IPs are being assigned randomly. Let me know if it helps.
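Not an Octane feature, just a hypothetical helper: if you suspect the DHCP server is handing out new addresses, a small script along these lines (the slave hostnames are made-up placeholders) will log whenever a slave's resolved IP changes:

[code]
import socket
import time

# Hypothetical slave hostnames; replace with your own machine names.
SLAVES = ["render-slave-01", "render-slave-02"]

def current_ips(hosts):
    """Resolve each hostname to its current IPv4 address (or None)."""
    ips = {}
    for host in hosts:
        try:
            ips[host] = socket.gethostbyname(host)
        except socket.gaierror:
            ips[host] = None
    return ips

last = current_ips(SLAVES)
while True:
    time.sleep(60)
    now = current_ips(SLAVES)
    for host in SLAVES:
        if now[host] != last[host]:
            print(f"{host}: IP changed from {last[host]} to {now[host]}")
    last = now
[/code]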
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
Phantom107 wrote: Not sure if this is 3.08 RC 1 specific but reporting on it anyway: apparently it's possible for a render slave to crash. It apparently gets a new IP from the network (maybe this causes the crash in the first place), and then, after killing off the daemon and relaunching the slave, the master PC recognises it as a totally new machine (please see attachment).
Sorry, I don't know what happened here. The only thing I can think of is that the slave either crashed or disconnected from the network and then reconnected, the DHCP server assigned it a new IP address, and neither the DHCP server nor Octane detected the disconnect, so it gets a new IP address and a new entry on the net render master.
I have noticed before that sometimes a slave completely disconnects from the network and then reconnects, the master doesn't recognize the disconnect and reconnect, and the socket connection becomes useless, but the master still thinks everything is fine. Unfortunately, this happens only rarely and never when I try to debug the problem, which is why I don't know yet what exactly is happening in that case.
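For illustration of that "dead but apparently fine" socket: a TCP connection that carries no traffic can stay half-open indefinitely unless something actively probes it. A minimal, generic sketch of an application-level heartbeat (this is not Octane's networking code, just the general technique):

[code]
import socket

def connection_alive(sock, timeout=5.0):
    """Probe the peer with a tiny application-level ping.

    On a half-open connection (peer crashed, rebooted or came back with
    a new IP) the send may still appear to succeed, but the reply never
    arrives, so the recv times out or raises and we know the link is dead.
    """
    try:
        sock.settimeout(timeout)
        sock.sendall(b"PING\n")
        return sock.recv(16) == b"PONG\n"
    except (socket.timeout, OSError):
        return False

# OS-level TCP keepalive is the lower-level alternative; the probe
# intervals are controlled by the operating system:
#   sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
[/code]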
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Thank you.
It seems like all the problems began after this new Windows 10 update (1709) we were forced to install.
- BorisGoreta
I have noticed that after the frame gets rendered (progress bar reaches 80% or 90%) the GPUs stop rendering (according to the GPU usage utility), but I still have to wait 5-8 seconds for the frame to actually finish. I'm rendering on 6 GPUs; with only 1 GPU active there is a negligible wait, and as I increase the GPU count the wait gets longer. Why is that? I guess after the GPUs stop rendering the CPU is doing something, but why does it take so long? I suppose this bit is single-threaded, but why? Isn't it just adding results and tone mapping, and isn't that by nature multi-thread friendly, since each pixel doesn't depend on any other pixel? My CPU usage is at 3% during this wait on a 32-logical-processor workstation. What a waste of CPU power, and of time after all.
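To illustrate the "each pixel is independent" point, here is a generic sketch (not Octane's actual imager) of a toy Reinhard-style tone map that parallelizes trivially across image rows:

[code]
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def tonemap_rows(hdr_rows, exposure=1.0):
    """Simple Reinhard-style tone map; every pixel is independent."""
    scaled = hdr_rows * exposure
    return scaled / (1.0 + scaled)

def tonemap_parallel(hdr, workers=8):
    """Split the image into row bands and tone map them concurrently."""
    bands = np.array_split(hdr, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = list(pool.map(tonemap_rows, bands))
    return np.vstack(mapped)

# Example: a random full-HD HDR buffer (RGB floats)
hdr = np.random.rand(1080, 1920, 3).astype(np.float32) * 8.0
ldr = tonemap_parallel(hdr)
[/code]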
19 x NVIDIA GTX http://www.borisgoreta.com
At the moment that is intentional, yes, since the sheen roughness is driven by the glossy roughness. But you are right that practically it doesn't make a lot of sense, since the sheen is basically an additional material layer (i.e. some "fluff") sitting on top of the glossy material and thus shouldn't really be driven by the underlying glossy properties.
-> We will add a sheen roughness channel.
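Purely for illustration, a minimal sketch of a Disney-style sheen term: it depends on the light/half-vector angle and a tint, but not on the glossy roughness of the base layer (the sheen roughness channel mentioned above would be an additional, independent control). This is not Octane's actual shading code.

[code]
import numpy as np

def schlick_weight(cos_theta):
    """Schlick Fresnel weight: (1 - cos(theta))^5."""
    return (1.0 - np.clip(cos_theta, 0.0, 1.0)) ** 5

def sheen_term(l, v, sheen=1.0, sheen_tint=np.array([1.0, 1.0, 1.0])):
    """Disney-style sheen lobe. It depends only on the angle between the
    light direction and the half vector, not on the glossy roughness of
    the layer underneath."""
    h = (l + v) / np.linalg.norm(l + v)   # half vector
    cos_d = np.dot(l, h)                  # cos of the light/half-vector angle
    return sheen * sheen_tint * schlick_weight(cos_d)

# Grazing light and view directions (on opposite sides of the normal)
# give a strong sheen contribution; the base roughness never enters.
theta = np.radians(85.0)
l = np.array([ np.sin(theta), 0.0, np.cos(theta)])
v = np.array([-np.sin(theta), 0.0, np.cos(theta)])
print(sheen_term(l, v))   # roughly 0.63 per channel
[/code]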
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Are these GPUs all on the same computer or do you use network rendering?
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
- BorisGoreta
All on the same computer; the nodes were disabled during the test.
I have tested the same scene locally on a node which also has 6 GPUs, and there is no wait at all (at least not a noticeable one).
The CPU there is an i7-7700 with 8 logical processors (10792 CPU benchmark score, 2351 single-core), with a constant CPU usage of approx. 20%-45% during rendering.
The workstation CPU is a dual Xeon E5-2687W (28808 CPU benchmark score, 1863 single-core), with approx. 8%-10% CPU usage during rendering, dropping to a constant 3% at the end of the frame, during that 5-8 second wait.
Judging by the numbers, the single-core difference isn't that big (2351 vs. 1863, roughly a factor of 1.26), so the difference in the wait shouldn't be that big either: no noticeable wait versus a 5-8 second wait.
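As a rough back-of-envelope check of that argument (just arithmetic on the scores quoted above, not a claim about what Octane actually does in that phase):

[code]
# Single-core benchmark scores quoted above
i7_single, xeon_single = 2351, 1863

# If the end-of-frame step were purely single-threaded, the workstation
# should only be about 1.26x slower at it than the i7 node.
slowdown = i7_single / xeon_single
print(f"expected single-threaded slowdown: {slowdown:.2f}x")

# The i7 node shows no noticeable wait (call it well under a second),
# so a 1.26x slowdown alone cannot explain a 5-8 second wait.
print(f"e.g. 0.5 s on the i7 would become ~{0.5 * slowdown:.2f} s, not 5-8 s")
[/code]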
19 x NVIDIA GTX http://www.borisgoreta.com
- BorisGoreta
I have just realized that during this 5-8 second wait on the workstation the buttons of the IPR interface are not responding; maybe I should have reported this to Juanjo.
19 x NVIDIA GTX http://www.borisgoreta.com