OctaneRender™ Standalone 3.00 alpha 3

A forum where development builds are posted for testing by the community.
Forum rules
NOTE: The software in this forum is not 100% reliable; these are development builds meant for testing by experienced Octane users. If you are a new Octane user, we recommend using the current stable release from the 'Commercial Product News & Releases' forum.
grimm
Licensed Customer
Posts: 1332
Joined: Wed Jan 27, 2010 8:11 pm
Location: Spokane, Washington, USA

stratified wrote: Unless I'm missing something, loading this MTL file works as expected...

There are no image textures described in this MTL file (via the directives map_ka, map_kd, map_ks, map_ns, map_bump, bump, map_d). The 2 materials are identical and Octane correctly creates a glossy material for them using:
  • Kd 0.640000 0.640000 0.640000 -> diffuse RGB texture with value (0.64, 0.64, 0.64).
  • Ks 0.500000 0.500000 0.500000 -> specular RGB texture with value (0.5, 0.5, 0.5).
cheers,
Thomas
Thanks for looking at this too, Thomas. It looks like a problem on Blender's side then. Both materials should have come in as diffuse with no textures, just color. Could this be an issue with the plugin, as the materials were Octane nodes and I was using it?

Jason
Linux Mint 21.3 x64 | Nvidia GTX 980 4GB (displays) RTX 2070 8GB| Intel I7 5820K 3.8 Ghz | 32Gb Memory | Nvidia Driver 535.171
itou31
Licensed Customer
Posts: 377
Joined: Tue Jan 22, 2013 8:43 am

Hi,
I'm back on V3 alpha 3 to test with my system, but I still have a freeze-and-reboot issue when using the USB risers and also with the Amfeltec adapter.
1) I have no stability issues on v2; it is rock solid even overclocked.
2) It works well with the 3 Titans connected directly to the motherboard.
3) It happens only when I enable an "external" GPU (on the Amfeltec adapter, PCIe 4x to four PCIe 1x, and/or a USB3 riser adapter, PCIe 1x).
4) It happens with PT (not yet tested with PMC), but seems fine with direct lighting (not tested in depth).
5) I have also tested with the "external" GPUs underclocked.
6) I tested with only one "external" GPU, so it is not a power supply issue.
7) If I load the scene and start rendering, it finishes without problems (only tested up to 2000 samples/pixel), but the moment I start playing with the interactive UI, it freezes and reboots. It seems to wait 10 s and then reboot (TdrDelay = 10 s by default).
8) I also tried TdrDelay at 30 s (longer freeze, then reboot); see the registry sketch below.
9) I'm on Win 8.1 and use the 361.43 drivers.
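For reference, the TdrDelay value mentioned in 7) and 8) lives in the Windows registry under HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers. Here is a minimal Python sketch to read and raise it; nothing in it is Octane-specific, it needs an elevated prompt, and a reboot is required before the new timeout takes effect:

```python
import winreg  # Windows-only, standard library

TDR_KEY = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

def get_tdr_delay():
    """Return TdrDelay in seconds, or None if the value isn't set
    (Windows then falls back to its built-in default)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TDR_KEY) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "TdrDelay")
            return value
        except FileNotFoundError:
            return None

def set_tdr_delay(seconds):
    """Write TdrDelay (REG_DWORD, seconds). Needs admin rights and a reboot."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TDR_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "TdrDelay", 0, winreg.REG_DWORD, seconds)

print("current TdrDelay:", get_tdr_delay())
# set_tdr_delay(30)  # uncomment to raise the timeout to 30 seconds
```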

Notiusweb also has a Titan Z on a USB riser and does not have this issue.

Do you have any idea how I can fix this freeze?
I hope to get it fixed before the V3 release so I can upgrade.
Thanks
I7-3930K 64Go RAM Win8.1pro , main 3 titans + 780Ti
Xeon 2696V3 64Go RAM Win8.1/win10/win7, 2x 1080Ti + 3x 980Ti + 2x Titan Black
abstrax
OctaneRender Team
Posts: 5510
Joined: Tue May 18, 2010 11:01 am
Location: Auckland, New Zealand

itou31 wrote:Hi,
I'm back on V3 alpha 3 to test with my system, but I still have a freeze-and-reboot issue when using the USB risers and also with the Amfeltec adapter.
1) I have no stability issues on v2; it is rock solid even overclocked.
2) It works well with the 3 Titans connected directly to the motherboard.
3) It happens only when I enable an "external" GPU (on the Amfeltec adapter, PCIe 4x to four PCIe 1x, and/or a USB3 riser adapter, PCIe 1x).
4) It happens with PT (not yet tested with PMC), but seems fine with direct lighting (not tested in depth).
5) I have also tested with the "external" GPUs underclocked.
6) I tested with only one "external" GPU, so it is not a power supply issue.
7) If I load the scene and start rendering, it finishes without problems (only tested up to 2000 samples/pixel), but the moment I start playing with the interactive UI, it freezes and reboots. It seems to wait 10 s and then reboot (TdrDelay = 10 s by default).
8) I also tried TdrDelay at 30 s (longer freeze, then reboot).
9) I'm on Win 8.1 and use the 361.43 drivers.

Notiusweb also has a Titan Z on a USB riser and does not have this issue.

Do you have any idea how I can fix this freeze?
I hope to get it fixed before the V3 release so I can upgrade.
Thanks
Thanks for the report. I fear that the restructuring of the kernels (which results in many, many more kernel calls) and the render film changes (the film is now kept on the host and therefore requires more data transfers) may have broken some systems with slow PCIe connections. Maybe the workaround in the area of page-locking (see viewtopic.php?p=262013#p262013) will help here. I will probably also add an option where you can select which GPUs can be used for tonemapping. Maybe that helps, too.

-> Please wait for the next alpha release and check again.
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
abstrax
OctaneRender Team
Posts: 5510
Joined: Tue May 18, 2010 11:01 am
Location: Auckland, New Zealand

smicha wrote:
abstrax wrote:
smicha wrote: What could be a reason for inverted normals on any geometry with a displacement map applied (from ZBrush, if it helps)?
Could you show a screenshot? Normals shouldn't be inverted by displacement mapping.
Here is the screenshot. Are the vertex normals as they should be with displacement on?
Thanks for the screenshot. The problem occurs when the polygon normals point in the opposite direction to the vertex normals. It is caused by the fact that the vertex/shading normals of displacement triangles are calculated during rendering from the polygon normals and therefore always end up aligned with them. The calculated vertex/shading normals are actually incorrect in that case.

In other words: this problem should only occur when face and vertex normals are not aligned, i.e. don't match, which indicates incorrect import settings and shouldn't happen in the first place. You can control the polygon normal import via the winding order option in the mesh node; it should be set so that the polygon normals align with the vertex normals.
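To see what "not aligned" means in practice, here is a small Python sketch (not Octane code, just the geometric check): compute the winding-dependent face normal and compare it with the average of the vertex normals; a negative dot product means the two disagree and the winding order should be flipped.

```python
import numpy as np

def winding_matches_shading(v0, v1, v2, n0, n1, n2):
    """Return True if the triangle's geometric (face) normal points the same
    way as its vertex (shading) normals. False is the mismatch case that
    breaks the shading normals of displaced triangles."""
    face_normal = np.cross(v1 - v0, v2 - v0)      # depends on winding order
    avg_vertex_normal = (n0 + n1 + n2) / 3.0
    return float(np.dot(face_normal, avg_vertex_normal)) > 0.0

# Example: a triangle in the XY plane whose vertex normals all point up (+Z).
v0, v1, v2 = (np.array([0.0, 0.0, 0.0]),
              np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0]))
n_up = np.array([0.0, 0.0, 1.0])
print(winding_matches_shading(v0, v1, v2, n_up, n_up, n_up))  # True
print(winding_matches_shading(v0, v2, v1, n_up, n_up, n_up))  # False: flipped winding
```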
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
stratified
OctaneRender Team
Posts: 945
Joined: Wed Aug 15, 2012 6:32 am
Location: Auckland, New Zealand

grimm wrote:
stratified wrote: Unless I'm missing something, loading this MTL file works as expected...

There are no image textures described in this MTL file (via the directives map_ka, map_kd, map_ks, map_ns, map_bump, bump, map_d). The 2 materials are identical and Octane correctly creates a glossy material for them using:
  • Kd 0.640000 0.640000 0.640000 -> diffuse RGB texture with value (0.64, 0.64, 0.64).
  • Ks 0.500000 0.500000 0.500000 -> specular RGB texture with value (0.5, 0.5, 0.5).
cheers,
Thomas
Thanks for looking at this too, Thomas. It looks like a problem on Blender's side then. Both materials should have come in as diffuse with no textures, just color. Could this be an issue with the plugin, as the materials were Octane nodes and I was using it?

Jason
I don't understand that last sentence. I think the OBJ exporting is done by Blender itself and has nothing to do with the plugin.
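To make the Kd/Ks mapping above concrete, here is a rough Python sketch (not Octane's actual importer, just the rule described earlier): a non-zero Ks is what turns a material into a glossy one, while a Kd-only material would come in as a plain diffuse color. The material name in the sample is made up.

```python
def classify_mtl_materials(mtl_text):
    """Tiny MTL reader: returns {material_name: (kind, kd, ks)} where kind is
    'glossy' if the material has a non-zero Ks, otherwise 'diffuse'. Image
    texture directives (map_Kd, map_Ks, ...) are ignored, as none were
    present in the file discussed above."""
    materials, current = {}, None
    for line in mtl_text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue
        if parts[0] == "newmtl":
            current = parts[1]
            materials[current] = {"Kd": (0.0, 0.0, 0.0), "Ks": (0.0, 0.0, 0.0)}
        elif parts[0] in ("Kd", "Ks") and current is not None:
            materials[current][parts[0]] = tuple(float(x) for x in parts[1:4])
    return {name: ("glossy" if any(v["Ks"]) else "diffuse", v["Kd"], v["Ks"])
            for name, v in materials.items()}

sample = """newmtl Material
Kd 0.640000 0.640000 0.640000
Ks 0.500000 0.500000 0.500000"""
print(classify_mtl_materials(sample))
# {'Material': ('glossy', (0.64, 0.64, 0.64), (0.5, 0.5, 0.5))}
```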
FrankPooleFloating
Licensed Customer
Posts: 1669
Joined: Thu Nov 29, 2012 3:48 pm

So Marcus, with many, many more kernel calls, should we then assume that the VRMs & VRAM are seeing much, much more action, and thus getting piping hot like with high-end AAA games? I ask because some of us with custom and/or bastardized cooling scenarios might be very interested in this.
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles
prehabitat
Licensed Customer
Posts: 495
Joined: Fri Aug 16, 2013 10:30 am
Location: Victoria, Australia

abstrax wrote:... Sometimes I really wonder what you guys are smoking...
... just saw this :lol: :lol: :lol: ...

:lol: :lol:
Win10/3770/16gb/K600(display)/GTX780(Octane)/GTX590/372.70
Octane 3.x: GH Lands VARQ Rhino5 -Rhino.io- C4D R16 / Revit17
abstrax
OctaneRender Team
Posts: 5510
Joined: Tue May 18, 2010 11:01 am
Location: Auckland, New Zealand

FrankPooleFloating wrote:So Marcus, with many, many more kernel calls, should we then assume that the VRMs & VRAM are seeing much, much more action, and thus getting piping hot like with high-end AAA games? I ask because some of us with custom and/or bastardized cooling scenarios might be very interested in this.
I don't know the answer but wouldn't expect that to be the case. I did a quick race with the ATV scene and the GPU heats up more or less at the same rate as in v2. Maybe a bit slower.

The main difference is that the stress on the system/CPU/RAM/PCIe bus is higher than in v2, i.e. those parts may be affected, but the GPUs shouldn't be. It's more likely to behave the other way around: the GPU may be throttled by the system (which has to do more work now) and therefore produce less heat.
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Notiusweb
Licensed Customer
Posts: 1285
Joined: Mon Nov 10, 2014 4:51 am

Marcus,

Looking more generally, let's say with 1 GPU, on either an OctaneRender Standalone V2 or V3 build...
When a scene is being rendered at high resolution and cannot be navigated smoothly in the viewport (pan, zoom, rotate), what exactly is getting taxed more on the hardware end? Is it the GPU clock, GPU VRAM, CPU, system RAM, etc.?
I'm wondering which piece of hardware, if strengthened, would handle high-res scenes smoothly in the viewport (not rendering). Is it a CPU/software thing, or does the GPU affect it? (Strictly viewport, not rendering.)
Thx!
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
abstrax
OctaneRender Team
Posts: 5510
Joined: Tue May 18, 2010 11:01 am
Location: Auckland, New Zealand

Notiusweb wrote:Marcus,

Looking more generally, let's say with 1 GPU, on either an OctaneRender Standalone V2 or V3 build...
When a scene is being rendered at high resolution and cannot be navigated smoothly in the viewport (pan, zoom, rotate), what exactly is getting taxed more on the hardware end? Is it the GPU clock, GPU VRAM, CPU, system RAM, etc.?
I'm wondering which piece of hardware, if strengthened, would handle high-res scenes smoothly in the viewport (not rendering). Is it a CPU/software thing, or does the GPU affect it? (Strictly viewport, not rendering.)
Thx!
The GPU performance is the important part in this case. Everything that speeds up rendering speeds up navigation in the viewport. The amount of RAM or VRAM doesn't matter as long as the system doesn't swap and the whole scene fits into GPU memory.

You can decrease latency by enabling sub-sampling, which allows you to reduce the amount of work by a factor of 4 (2x2 sub-sampling) or 16 (4x4 sub-sampling).

It's also important not to render at too high a resolution if you want fast feedback, because the number of pixels that needs to be calculated grows with the square of the resolution, i.e. doubling the resolution quadruples the amount of work. Most of the time a small resolution (say 800x600) is good enough for setting things up.
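As a back-of-the-envelope illustration of that scaling (plain Python, the numbers are only examples):

```python
def relative_work(width, height, subsample=1, base=(800, 600)):
    """Per-pass work relative to an 800x600 render with no sub-sampling.
    Work scales with the number of pixels actually traced, so doubling the
    resolution quadruples it, while 2x2 / 4x4 sub-sampling divides it by 4 / 16."""
    base_pixels = base[0] * base[1]
    traced_pixels = (width * height) / (subsample * subsample)
    return traced_pixels / base_pixels

print(relative_work(800, 600))         # 1.0
print(relative_work(1600, 1200))       # 4.0  -> double the resolution, 4x the work
print(relative_work(1600, 1200, 2))    # 1.0  -> 2x2 sub-sampling wins it back
print(relative_work(3840, 2160, 4))    # ~1.08 -> even 4K is cheap with 4x4 sub-sampling
```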
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra