OctaneRender™ 2020.1 RC2 [superseded by 2020.1 RC3]

A forum where development builds are posted for testing by the community.
Forum rules
NOTE: The software in this forum is not 100% reliable. These are development builds and are meant for testing by experienced Octane users. If you are a new Octane user, we recommend using the current stable release from the 'Commercial Product News & Releases' forum.
jimho
Licensed Customer
Posts: 271
Joined: Tue Aug 21, 2018 10:58 am

Further to the above:
If OOC and NVLink are considered in parallel they will conflict, since, as mentioned, they start functioning at the same point.
Could we give NVLink higher priority (since it is faster) and OOC lower priority? Then the NVLinked pair could be treated as one big GPU...

Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
karl
OctaneRender Team
Posts: 396
Joined: Sun Oct 13, 2019 11:26 pm

jimho wrote: Hi Karu,
Thanks for your response and the explanation. Generally it makes sense to me; I only have a small question.
According to your description of how the multiple GPUs work, when OOC is on they should keep working, slowly but without failing; that is my understanding.
However, the current situation is:
Even when OOC is on, the non-NVLinked GPUs do not always work. As mentioned, they may only work when I untick the "use priority" box for them, and even then they still seem unstable.

My question is: can OOC and NVLink work together stably?

In theory the NVLinked pair is just like one big GPU (with more memory), so the situation might be similar to mixing GPUs of different sizes.
Consider mixing GPUs with different VRAM sizes; the situation may be as follows:
1) Without OOC, when the scene exceeds the smaller GPU's VRAM it will fail (obviously, which is why we need OOC).
2) With OOC:
2.1) The small GPU may still fail while the big one keeps running (just like before), with OOC only starting to work once the big GPU exceeds its VRAM.
2.2) Alternatively, the small GPU could start using OOC while the big one keeps running from its internal VRAM.
2.2.1) Further to 2.2, when the big GPU also runs out of VRAM, a second mark point could be set so the big GPU knows its OOC starts there; only after that point would the big GPU go out of core.
That means a different mark point for each differently sized GPU, letting them all utilise system memory; they could share the data without conflicting with each other.
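The per-GPU "mark point" idea above could be sketched roughly like this. This is a hypothetical toy illustration of the proposal, not Octane code; all names and numbers are made up:

```python
# Toy sketch of scenario 2.2/2.2.1: each GPU spills to out-of-core
# (system) memory only once it passes its OWN capacity mark point,
# independently of the other GPUs. Hypothetical, not Octane code.

def place_data(gpus, chunks):
    """Assign each data chunk to a GPU's VRAM until that GPU hits its
    own capacity; past that mark point the chunk goes to OOC."""
    placement = {gpu["name"]: {"vram": [], "ooc": []} for gpu in gpus}
    for gpu in gpus:
        used = 0
        for chunk in chunks:
            if used + chunk["size"] <= gpu["vram_gb"]:
                placement[gpu["name"]]["vram"].append(chunk["id"])
                used += chunk["size"]
            else:
                # past this GPU's mark point: spill to OOC
                placement[gpu["name"]]["ooc"].append(chunk["id"])
    return placement

gpus = [{"name": "small", "vram_gb": 8}, {"name": "big", "vram_gb": 24}]
chunks = [{"id": i, "size": 5} for i in range(4)]  # 20 GB of scene data
result = place_data(gpus, chunks)
# The small GPU starts spilling long before the big one; each GPU
# has its own independent mark point, so neither blocks the other.
```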

Currently what we see is probably cases 1 and 2.1.
Ideally, could we go with scenario 2.2? If scenario 2.2 could work for mixed-size GPUs, the NVLinked pair could act just like the big GPU, and then OOC and NVLink could both work...

Or is there a better possibility that simply keeps the GPUs from failing...

I am not a programmer; the thoughts above are just for your reference.
Many thanks,
Jim
jimho wrote: Further to the above:
If OOC and NVLink are considered in parallel they will conflict, since, as mentioned, they start functioning at the same point.
Could we give NVLink higher priority (since it is faster) and OOC lower priority? Then the NVLinked pair could be treated as one big GPU...
OOC and NVLink should work together just fine, from Octane's perspective. However, the GPU driver may place a different limit on the amount of usable OOC memory when NVLink is in use (or various other interactions we don't know about). There's no guarantee that enabling OOC will prevent render failure; we are really at the mercy of the driver there. I am not yet sure how the "use priority" box could be affecting the outcome here.

All your numbered scenarios, including 2.2 and 2.2.1, are how Octane currently works. The only issue is that we don't know how much OOC memory the driver will let us use. In any case, we always try to use the minimum possible amount of OOC memory, so it's unclear what we could do differently if rendering fails.

We do consider NVLink as higher priority than OOC - there's no conflict between them. When allocating a piece of data we try to put it on the GPU that will use it, and if that fails we try to use a copy from a peered GPU, and only if that fails do we fall back to OOC.
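The fallback order described above (local VRAM, then a copy on a peered GPU over NVLink, then out-of-core memory) can be sketched roughly like this. It is a simplified illustration of the stated order only, not Octane's actual allocator:

```python
# Simplified sketch of the allocation priority described above:
# local VRAM first, then a peered (NVLinked) GPU, then out-of-core
# system memory as the last resort. Not actual Octane code.

def allocate(chunk_gb, local_free, peer_free, ooc_free):
    """Place a chunk in the first pool with room; return where it landed."""
    if chunk_gb <= local_free["gb"]:
        local_free["gb"] -= chunk_gb
        return "local"
    if chunk_gb <= peer_free["gb"]:       # reachable over NVLink (P2P)
        peer_free["gb"] -= chunk_gb
        return "peer"
    if chunk_gb <= ooc_free["gb"]:        # driver-limited system memory
        ooc_free["gb"] -= chunk_gb
        return "ooc"
    raise MemoryError("render fails: no pool can hold the chunk")

local, peer, ooc = {"gb": 2}, {"gb": 6}, {"gb": 32}
print(allocate(3, local, peer, ooc))  # too big for local, fits on the peer
```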

Are you running with RTX acceleration enabled? You could try disabling that, and you could also try increasing the GPU headroom setting. (Increasing the GPU headroom should not be necessary, but if it does improve things, please let me know.) Those are the only things I can think of that could be contributing to a lack of stability from the Octane side.
mojave
OctaneRender Team
Posts: 1336
Joined: Sun Feb 08, 2015 10:35 pm

galleon27 wrote: I have an issue with the way Surface brightness works in Lighting (Blackbody/Texture).
When it is ticked off, the light should have the same intensity no matter the size of the object. That works fine with size, but not with scale: if you scale the object, the intensity changes, and that really shouldn't happen.
This is an issue because in the Houdini plugin the Octane light object is a plane with a Blackbody texture, and the only way to determine its size is by scale, not size. It could be fixed on the plugin side by letting me set the size of the plane, but I think the current behaviour is wrong: as far as surface brightness is concerned, scale and size should be the same thing.
Hi galleon27,

I have given it a try and could not find any obvious issue like the one you describe, could you provide an ORBX file so we can reproduce this?

Thank you.
jimho
Licensed Customer
Posts: 271
Joined: Tue Aug 21, 2018 10:58 am

karu wrote: ...We do consider NVLink as higher priority than OOC...
My observation is a bit different...
Nvlink&OOC.jpg
In the image above there should be more than 4 GB of spare space on GPU 4, yet the GPU 3 / GPU 4 pair still uses 3.4 GB of OOC memory.
This may not be a good example; in many cases I can see only a very small amount of P2P being used alongside a large amount of OOC, while plenty of spare space is still left on the NVLinked pair.
At the beginning of this render there was around 3 GB of OOC and 0 P2P.
That is the reason why I proposed scenarios 2.2 and 2.2.1.
If NVLink really had the higher priority, P2P should appear first and then OOC; but in this instance it does not...

Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
karl
OctaneRender Team
Posts: 396
Joined: Sun Oct 13, 2019 11:26 pm

jimho wrote:
karu wrote: ...We do consider NVLink as higher priority than OOC...
My observation is a bit different...
Nvlink&OOC.jpg
In the image above there should be more than 4 GB of spare space on GPU 4, yet the GPU 3 / GPU 4 pair still uses 3.4 GB of OOC memory.
This may not be a good example; in many cases I can see only a very small amount of P2P being used alongside a large amount of OOC, while plenty of spare space is still left on the NVLinked pair.
At the beginning of this render there was around 3 GB of OOC and 0 P2P.
That is the reason why I proposed scenarios 2.2 and 2.2.1.
If NVLink really had the higher priority, P2P should appear first and then OOC; but in this instance it does not...
Is the scene you are using very heavy on textures? Currently, only meshes can make use of NVLink (we are planning to support textures in the future). When I said we prioritise NVLink over OOC, that's only for the cases where we have a choice. It might be the case that all the meshes fit onto each GPU without needing to use NVLink or OOC, but then the textures overrun the GPU memory, requiring the use of OOC.
mojave
OctaneRender Team
Posts: 1336
Joined: Sun Feb 08, 2015 10:35 pm

mojave wrote:
ptunstall wrote: Network rendering from a Windows system to a Linux system does not work for this version or RC1; I get error 700 (illegal access). I tried from Houdini as well as standalone with an ORBX. The ORBX file I'm rendering is linked below.

It works fine on both systems in both standalone and Houdini, just not via network rendering.


https://www.dropbox.com/s/dx3xjnwayymqa ... .orbx?dl=0
Thank you for the report, we will look into this.
Hi ptunstall,

This issue should be fixed in the next release, please let us know if you still encounter any problems.

Thank you.
ptunstall
Licensed Customer
Posts: 153
Joined: Thu Jun 04, 2015 1:36 am

You guys are awesome as always! Thank you!
jimho
Licensed Customer
Posts: 271
Joined: Tue Aug 21, 2018 10:58 am

karu wrote:
Is the scene you are using very heavy on textures? Currently, only meshes can make use of NVLink (we are planning to support textures in the future). When I said we prioritise NVLink over OOC, that's only for the cases where we have a choice. It might be the case that all the meshes fit onto each GPU without needing to use NVLink or OOC, but then the textures overrun the GPU memory, requiring the use of OOC.
Hi Karu,
Thanks for the response.
It is not a case of heavy textures, but as you mentioned previously, there is a single mesh containing 20 million triangles. Yes, this is exactly the same scene I sent you last time,
so you may already know the issue. This situation is probably a bit similar to the heavy-textures case; I guess it just needs some more time for you to resolve.

The scene is not a real project, just some testing; in the real project I have much more optimised geometry, so take your time, no problem.

Regards,
Jim

Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
jimho
Licensed Customer
Posts: 271
Joined: Tue Aug 21, 2018 10:58 am

Amazingly, a pair of 2080 Tis + a pair of Quadros works!
SLI_quadro2.JPG
Though the Quadro pair does not appear in the NVIDIA Control Panel.
SLI_quadro.JPG

Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
Silverwing
Licensed Customer
Posts: 287
Joined: Wed Jun 15, 2011 8:36 pm
Location: Ludwigsburg Germany
Contact:

Hi there.

@aether_nox has encountered a bug and gave me the scene for inspection.
This is not directly related to the latest version of Octane, as it also occurs in other versions.

If you have an Octane spot light and shine it through a refractive object, all works as expected: caustics form nicely.
But if you change the material BSDF to anything other than Octane, the caustics no longer resolve nicely.

Please find the ORBX attached.
Octane_Spot_Light_Caustics_Bug_01.png
Attachments
Octane_Spotlight_Caustic_Bug.orbx
(1.82 MiB) Downloaded 195 times
WIN 10 PRO 64 | ASUS X99-E WS | 3 X GTX 1080Ti | i7 5960X 8X 3,33GHz | 64GB RAM