New features of OctaneRender™ 3.00 (updated 10.8.)


Postby abstrax » Wed Dec 09, 2015 3:49 am

This post describes the major changes in 3.00 compared to the last version 2 release. We split this out of the main release post so that the information doesn't get lost or buried by subsequent releases. Now that OctaneRender v3 has entered the alpha development phase, things are bound to change fairly quickly, so we will update this post as we go until we release the first stable 3.00 version.


New single sign-on and licensing system

We have simplified the way users activate both Octane Standalone and plugins. From 3.00 alpha 10 on, Octane will ask you to enter your OTOY credentials and will attempt to retrieve an available license that matches your plugin or Standalone from the OctaneLive server.

This means that there's no need to deal with the actual license IDs and passwords anymore.

Octane Standalone requires one available (deactivated) standalone license on OctaneLive, while plugins require one available standalone license plus one available license for that specific plugin. Standalone licenses are bound to one machine, which means they can be shared across multiple plugins running on that machine. You may also run multiple instances of Standalone or a plugin on a single machine using the same license.

Licenses are released (deactivated) when Standalone or the plugin is closed, similar to a floating license scheme. In the case of Octane Standalone just the standalone license is released, while plugins release both the standalone and their own license. In either case, licenses are only released if no other instance of Octane 3.x Standalone or a plugin is making use of that specific license. Note that if an older version of Octane or a plugin is running when this happens, the license(s) will be released anyway, which will effectively deactivate your plugin or standalone instance.

Deactivation via the OctaneLive license administration page is not necessary anymore, as this is now done automatically by the application, so it has been disabled. This allows you to use Octane somewhere else without explicitly releasing (deactivating) any licenses.

SSO sign-in

When you open Octane 3 for the first time you will be prompted with a sign in screen like this:

signin.png


After entering your credentials and successfully signing in, Octane will keep a session alive as long as there is continuous usage of Octane or an Octane plugin, so in most cases there should be no need to sign in again. This session will also allow you to link your local installation to other OTOY services such as ORC.

Note that Octane 2.x and 3.x instances can still co-exist on the same machine.

SSO sign-out

To close an SSO session, go to the Account tab under File > Account... and click the Sign out button. This will close the current session and release all licenses bound to the current machine. If any plugin or another Standalone instance is running at that time, it should be closed before continuing with the sign-out process.

Offline licensing mode

Support for offline machines works slightly differently from how it has worked until now: upon sign-in to Standalone, an Octane plugin or a net render slave, the user will be asked whether to enable "offline licensing" on the current machine. In that case, licenses will not be released when the application or plugin exits, but will instead stay locked to the current machine until released explicitly, so we advise against using this mode unless you really know what you are doing.

This means that licenses will be grabbed on request by Octane Standalone or the various plugins used on the machine, just like in the standard mode, as long as there is an Internet connection. Those licenses can then be used on this machine even if there is no network connection at all. However, in order to release licenses activated in this mode, the machine has to be brought online temporarily. All individual plugins should then be deactivated independently by their own means first, and Standalone last, as explained above (we are planning to provide a way to do this in one go). The latter will also close the SSO session on the current computer.

Note: When using offline licensing, after a long period with no Internet connection or no usage, you may find that Octane asks for your credentials again when you try to activate a new plugin on your machine. This is because Octane needs an active SSO session in order to retrieve the new license, and yours might have expired, so your credentials are required to create a new session and retrieve your license from our servers.

Proxy support

Since version 3.02 OctaneRender can run behind a proxy. The details are explained in a separate post.

Overhaul of the integration kernels

Since the beginning of Octane, the integration kernels had one CUDA thread calculate one complete sample. We changed this for various reasons, the main one being the fact that the integration kernels had become really huge and impossible to optimize. OSL and OpenCL are also pretty much impossible to implement this way. To solve the problem, we split the big task of calculating a sample into smaller steps which are then processed one by one by the CUDA threads, i.e. a lot more kernel calls happen than in the past.

There are two major consequences of this new approach: Octane needs to keep information for every sample that is calculated in parallel between kernel calls, which requires additional GPU memory, and the CPU is stressed a bit more since it has to do many more kernel launches. To give you some control over the kernel execution we added two options to the direct lighting / path tracing / info channel kernel nodes (a rough sketch of the memory trade-off follows the list):

  • "Parallel samples" controls how many samples we calculate in parallel. If you set it to a small value, Octane requires less memory to store the samples state, but most likely renders a bit slower. If you set it to a high value, more graphics memory is needed rendering becomes faster. The change in performance depends on the scene, the GPU architecture and the number of shader processors the GPU has.
  • "Max. tile samples" controls the number of samples per pixel Octane renders until it takes the result and stores it in the film buffer. A higher number means that results arrive less often at the film buffer, but reduce the CPU overhead during rendering and as a consequence can improve performance, too.

Speed
It's hard to quantify the performance impact, but what we have seen during testing is that in simple scenes (like the chess set or Cornell boxes) the old system was hard to beat. That is because in this type of scene, samples of neighbouring pixels are very coherent (similar), which is what GPUs like and can process very fast, since the CUDA threads do almost the same task and don't have to wait for each other. In these cases you usually have plenty of VRAM left, which means you can bump up the "parallel samples" to the maximum, making the new system as fast or almost as fast as the old system.

The problem is that in real production scenes the execution of CUDA threads diverges very quickly, causing CUDA threads to wait a long time for other CUDA threads to finish some work, i.e. twiddling thumbs. For these more complex scenes the new system usually works better, since the coherency is increased by the way each step is processed, and we can optimize the kernels more because the scope of their task is much narrower. So you usually see a speed-up for complex scenes, even with the default parallel samples setting or a lower value (in case you are struggling with memory).

TL;DR version
In simple scenes where you've got plenty of VRAM left: increase "parallel samples" to the maximum.
In complex scenes where VRAM is scarce: set it to the highest value that doesn't run out of memory. It should usually still be faster than before, or at least render at roughly the same speed.


Moved film buffers to the host and tiled rendering

The second major refactoring in the render core was the way we store render results. Until v3, each GPU had its own film buffer where part of the calculated samples were aggregated. This has various drawbacks: for example, a CUDA error usually means that you lose the samples calculated by that GPU, and a crashing/disconnected slave means you lose its samples. Another problem was that large images mean a large film buffer, especially if you enable render passes. And deep image rendering would have been pretty much impossible, since it's very, very memory hungry. Implementing save and resume would have been a pain, too...

To solve these issues we moved the film buffer into host memory. That doesn't sound exciting, but it has some major consequences. The biggest one is that Octane now has to deal with the huge amount of data the GPUs produce, especially in multi-GPU setups or when network rendering is used. As a solution, we introduced tiled rendering for all integration kernels except PMC (where tiled rendering is not possible). The tiles are relatively large (compared to most other renderers), and we tried to hide tiled rendering as much as possible.

Of course, the film buffer in system memory means more memory usage, so make sure that you have enough RAM installed before you crank up the resolution (which is now straightforward to do). Another consequence is that the CPU has to merge render results from the various sources, like local GPUs or net render slaves, into the film buffers, which requires some computational power. We tried to optimize that area, but there is obviously an impact on CPU usage. Let us know if you run into issues here. Again, increasing the "max. tile samples" option in the kernels allows you to reduce the overhead accordingly (see above).

Info passes are now rendered in parallel, too, since we can now just reuse the same tile buffer on the GPU that is used for rendering beauty passes.


Overhauled work distribution in network rendering

We also had to modify how render work is distributed to net render slaves and how their results are sent back, to make it work with the new film buffer. The biggest problem to solve was the fact that transmitting samples to the master is one to two orders of magnitude slower than generating them on the slave. The only way to solve this is to aggregate samples on the slaves and decouple the work distribution from the result transmission, which has the nice side effect that rendering large resolutions (like stereo GearVR cube maps) doesn't throttle slaves anymore.

Of course, caching results on the slaves means that they require more system memory than in the past, and if the tiles rendered by a slave are distributed uniformly, the slave will produce a big pile of cached tiles that needs to be transmitted to the master eventually. I.e. after all samples have been rendered, the master still needs to receive all those cached results from the slaves, which can take quite some time. To solve this problem we introduced an additional option to the kernel nodes that support tiled rendering:

    "Minimize net traffic", if enabled, distributes only the same tile to the net render slaves, until the max samples/pixel has been reached for that tile and only then the next tile is distributed to slaves. Work done by local GPUs is not affected by this option. This way a slave can merge all its results into the same cached tile until the master switches to a different tile. Of course, you should set the maximum samples/pixel to something reasonable or the network rendering will focus on the first tile for a very long time...


Volume rendering

Probably the most beautiful new feature is the rendering of volume grids. These can either be provided directly by a plugin/script or via OpenVDB files. Creating a VDB file is typically done in a package such as Houdini or with a third-party plugin available for many other packages, such as TurbulenceFD. Once created, it can be loaded into Octane for rendering.

volume-emission.png


Getting Started
  1. Right-click in the node graph and select "geometry" then "volume".
  2. Select a VDB file to import (sample).
  3. Create and connect a volume medium to the volume (right-click the node graph and select "Medium" then "Volume medium").
  4. Click on the volume node to start rendering it, or right-click and select "render". You may need to zoom out to see the volume, and you should have some non-black environment set up, since the default volume settings use absorption only.

Volume step length
step-setting.png

The "volume step length" parameter on the volume node may need to be adjusted depending on your volume. The default value for the step length is 4m. Should your volume be smaller than this, you will likely need to decrease the step length. Please note that decreasing this will reduce the render speed. Increasing this value will cause the ray marching algorithm to take longer steps. Should the step length far exceed the volume's dimensions, then the ray marching algorithm will take a single step through the whole volume. Most accurate results are obtained when the step length is as small as possible. For simplifying workflow, the volume step length should be set first to an acceptable value.

Scattering, absorption, emission, phase and scale
scattering-setting.png

As with the medium node, volumes may also have scattering, emission and absorption. These colours influence the appearance of a volume significantly. The phase function affects a volume just as it would affect a medium node, and modifying the scale value of the volume scales the density values of the volume linearly. This can also increase emission, since absorption values are also used as particle density.
depth-setting.png

Volumes are rendered in an unbiased way and are therefore able to scatter multiple times and cause self-shadowing effects. Should you wish to reduce the maximum number of scatter events in a volume, reduce the Diffuse Depth in your kernel node.

Multiple channels
VDB files contain one or more volume datasets (or "grids"). For VDBs saved from a fire simulation, these would include temperature and density grids, which roughly correspond to the emission and absorption characteristics of the volume in terms of rendering. You can edit the import preferences of your volume node (similar to the mesh node) to change which grid is applied to scattering, absorption and emission. When you export the VDB from your simulation software, you can choose what name to give each grid; enter these names into the import preferences in Octane as you desire.

Level sets
Some volume datasets are known as "level sets", which are essentially an encoding of a thin "egg-shell" surface. Octane supports loading these volumes too. A setting known as the "isovalue" is provided, which allows you to set the thickness of this surface.

Instancing
As with meshes, data is not duplicated for multiple instances of the same volume, so you are free to duplicate the same volume as much as you like.

Animation
To animate a fire or a plume of smoke, you will need to generate a number of VDB files and set up a volume animation on the volume node. This is done in a similar way to texture animation. As an example, if you render at 24 frames per second and assign 24 VDBs to a volume node using the texture animation settings, Octane will advance to the next volume each frame, playing the whole sequence back in one second.

Volumetric Emission Modes
For emission, the Medium node can have either a blackbody emission node, or a texture emission node.

When using the blackbody emission node, it is important to ensure that the data used for the emission grid (see import preferences) contains temperatures in Kelvin. It is common to find VDBs that have unitless "temperatures" with arbitrary ranges such as 0 to 1, or even 0 to 45, as is the case with some sample VDBs from openvdb.org. Typical temperature values range from 0 to 6500, where lower values tend towards longer wavelengths (red colours) and higher values tend towards blue/white. In order to get realistic results from blackbody emission for volumes, you must disable Normalize in the emission node: lower temperatures give off less light than higher temperatures, but when normalized, the radiance emitted at all temperatures is equal.
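
To see why normalization flattens the emission, consider the Stefan-Boltzmann law, which says radiated power scales with T^4 (textbook physics; Octane's exact spectral weighting may differ):

```python
# Relative blackbody power at different temperatures, compared to 6500 K.
def relative_radiance(kelvin, reference=6500.0):
    return (kelvin / reference) ** 4    # Stefan-Boltzmann: power ~ T^4

for t in (1000.0, 3000.0, 6500.0):
    print(f"{t:6.0f} K -> {relative_radiance(t):.4f} x reference power")
# With Normalize enabled these would all be weighted equally, so cool and
# hot regions of the volume would emit equally brightly.
```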

When using the texture emission node, the input temperature grid is interpreted as emission power, not emission temperature. This is more linear, in that the higher the "temperature" value, the more light is given off at that point. Using volume gradients, you can control the colour (e.g. of a flame) more precisely.

Volume colour ramps
colourful-smoke.png

The volume node accepts a volume medium, scattering medium or absorption medium. If you attach a volume medium, you can apply colour ramps independently to each of absorption, scattering and emission, and adjust the step length. In order to make use of a ramp, you must have a colour specified for the corresponding channel. For example, to use the absorption ramp, you must select a colour texture for absorption.

new-volume-setup.png


Please note that volume ramps are restricted to static colours for performance reasons (i.e. it is not possible to attach a series of other texture mappings/generators to the colours in the ramps).

There is an important consequence of volume animations specifically related to volume ramps: there is a "Max value" on the ramps, which you must set to a reasonable value. This value is used to scale grid values to between 0 and 1, so that the ramp can map them back to colours in the colour gradient. This is needed because the maximum values in the grids sometimes differ greatly throughout VDB sequences. If you set the Max value too high or too low, things will still work, but you will only see a subset of the colours in the gradient you specified. The maximum grid values of the currently selected VDB are shown in the volume node's inspector pane. A good rule of thumb is to choose a value near those, ideally the maximum value of the channel across all the volumes in a sequence, but you are free to customise it as you like.

ramp.png

The absorption ramp takes the grid value as input. In the colour gradient, the colours near "0" on the left are used for mapping low grid values to some custom colour (in this case, the lowest values are mapped to white). Higher grid values are mapped to colours on the right of the colour gradient. Bear in mind that less saturated colours will produce less pronounced absorption. Emission and scattering ramps operate in the same way.
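
As a rough sketch of what such a ramp does with the "Max value" (names and interpolation details here are illustrative, not Octane's API):

```python
# Scale a grid value into [0, 1] using the ramp's max value, then look the
# result up in an evenly spaced colour gradient (linear interpolation).
def ramp_lookup(grid_value, max_value, gradient):
    t = min(max(grid_value / max_value, 0.0), 1.0)
    i = t * (len(gradient) - 1)
    lo = int(i)
    hi = min(lo + 1, len(gradient) - 1)
    f = i - lo
    return tuple(a + (b - a) * f for a, b in zip(gradient[lo], gradient[hi]))

white_to_red = [(1.0, 1.0, 1.0), (1.0, 0.0, 0.0)]
print(ramp_lookup(0.2, max_value=2.0, gradient=white_to_red))  # low: near white
print(ramp_lookup(4.0, max_value=2.0, gradient=white_to_red))  # clamps to red
```

This also shows why a badly chosen max value exposes only part of the gradient: everything above it clamps to the last colour.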

Also bear in mind that it is important to ensure the volume is not too dense. A high volume density combined with a high volume step length may trick you into thinking a volume is less dense than it really is. It is recommended that you first reduce the volume step length to an acceptable performance and accuracy level, and then reduce the volume density; otherwise you risk rendering what is effectively a solid object at a high step length, which can give deceiving results.

Volume motion blur
Volume motion blur is supported for VDBs that have a velocity grid of type Vec3, i.e. velocity grids must be provided as a single vec3s-type grid. In Houdini, you must use a Vector Merge SOP node to merge the velocity channels into a single vec3s-type grid. The shutter alignment and shutter time settings in the new animation settings node affect the look of a volume with motion blur.
motionblur-disabled.png

motionblur-enabled.png


If you want to ignore a velocity grid, you can uncheck Motion blur in the volume node's import preferences. You can also rescale the velocities there if you need to.

Limitations
Multiple volumes are fully supported, as are overlapping volumes. However, a maximum of 4 volumes can overlap in any one position. This decision was a trade-off to allow any number of instanced volumes while still keeping performance high.


Environment medium

Setting up participating media has been quite painful so far: you had to create a volume geometry which encloses the whole scene, then create a "bubble" around the camera with the normals pointing inwards, so that camera rays properly enter the medium. And this bubble had to be animated with the camera...

environment_medium.png

Now you can just specify a medium node in the daylight and texture environment nodes. If specified, the medium is applied to a "virtual" sphere around the camera with the radius "medium radius".


Visible environment

It is possible to connect an additional environment node to the render target. This additional environment will be used as the visible environment. The visible environment overrides the normal environment in some specific use cases, giving more control over the final look of the render. If a medium is configured in the environment, the medium will be ignored when the environment is used as a visible environment.

Environment nodes (both daylight and texture environment) have extra options controlling the behaviour of the environment when used as the visible environment. When the node is used as a normal environment, these options are ignored.

visible_environment_node.png

  • Backplate: The visible environment will be used as a backplate image.
  • Reflections: The visible environment will override the normal environment when calculating reflections for specular and glossy materials.
  • Refractions: The visible environment will override the normal environment when calculating refractions for specular materials.

In the example renders, the same daylight environment is used for both environments except that the normal environment is at noon while the visible environment is at sunset.
without_visible_environment.png

visible_environment_backplate.png

visible_environment_reflections.png

visible_environment_refractions.png


Here is the example project so you can try it out for yourself:
visible_environment_example.orbx


Deep image rendering

The goal of deep image rendering is to improve the compositing workflow by storing Z-depths with samples. It works well in scenarios where traditional compositing fails, like masking out overlapping objects, working with images that have depth of field or motion blur, or compositing footage into rendered volumes. As far as we know, the only application that supports deep image compositing is Nuke. The standard output format is OpenEXR. The disadvantage of deep image rendering is the large amount of memory and disk space required to render and store deep images.

What is a deep image?
Instead of having a single RGBA value per pixel, a deep image stores multiple RGBA channel values per pixel, together with a front and back Z-depth (Z and ZBack channels, respectively). This tuple (R, G, B, A, Z, ZBack) is called a deep sample. Deep samples come in 2 flavours: point samples, which have only a front depth specified (more formally Z >= ZBack), and volume samples, which have a front and a back depth (Z < ZBack). Hard surfaces visible through a pixel are point samples, and visible volumes are (you guessed it) volume samples. From these samples, two functions can be calculated: A(Z) and C(Z), representing the alpha and colour of the pixel not further away than Z. These 2 functions are the basis of deep compositing and allow compositing footage together at any distance Z instead of just composing image A over image B. (Please note that these functions are calculated by the compositing application; Octane only calculates the samples.) Of course, this is a very rough overview with a lot of hand waving. Scratchapixel has a very accessible explanation here. The ultimate references are "Interpreting OpenEXR Deep Pixels" and "Theory of OpenEXR Deep Samples".
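
As a toy illustration of point samples and the A(Z) function (the full definitions, including volume samples, are in the OpenEXR documents referenced above):

```python
# Composite the point samples of one pixel front to back, but only those
# not further away than Z, yielding the accumulated alpha A(Z).
def alpha_up_to(samples, z):
    a = 0.0
    for alpha, depth in sorted(samples, key=lambda s: s[1]):
        if depth > z:
            break
        a = a + (1.0 - a) * alpha     # standard "over" accumulation
    return a

pixel = [(0.5, 1.0), (0.5, 3.0)]      # two half-opaque surfaces at Z=1 and Z=3
print(alpha_up_to(pixel, 2.0))        # 0.5  -> only the front surface counts
print(alpha_up_to(pixel, 4.0))        # 0.75 -> both surfaces composited
```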

How to enable deep image rendering?
Deep image rendering is enabled via the kernel node and is only supported for the path tracing and direct lighting kernels. It can be enabled by checking the "Deep image" checkbox:
deep_node.png

For a typical scene, we render thousands of samples per pixel, but we only have a limited amount of VRAM, so we need to keep the number of stored samples manageable. For this you can configure these parameters:
  • "Max. depth samples" specifies an upper limit for the number of deep samples we can store per pixel.
  • "Depth tolerance" specifies a merge tolerance, i.e. when 2 samples have a (relative) depth difference within the depth tolerance, they are merged together.

Calculation of the deep bin distribution
The maximum number of samples per deep pixel is 32, but don't worry, we don't throw away all the other samples. When we start rendering, we collect a number of seed samples, which is a multiple of "max. depth samples". With these seed samples we calculate a deep bin distribution, i.e. a good set of bins characterizing the various depths of the samples of a pixel. There is an upper limit of 32 bins and the bins are non-overlapping. When we then render thousands of samples, each sample that overlaps with a bin is accumulated into that bin (see the sketch below). Until this distribution has been created, you can't save the render result, and the "deep image" option in the save image drop-down is disabled.
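
The accumulation step itself is simple; here is a rough sketch (the bin-building heuristic is Octane-internal and not shown):

```python
# Once the non-overlapping depth bins exist, every further sample that
# overlaps a bin is merged into it instead of being stored individually.
def accumulate(bins, sample_z, sample_rgba):
    for b in bins:
        if b["z_front"] <= sample_z < b["z_back"]:
            b["sum_rgba"] = [a + s for a, s in zip(b["sum_rgba"], sample_rgba)]
            b["count"] += 1
            return True
    return False   # sample overlaps no bin and is not stored individually

bins = [{"z_front": 0.0, "z_back": 2.0, "sum_rgba": [0.0] * 4, "count": 0},
        {"z_front": 2.0, "z_back": 5.0, "sum_rgba": [0.0] * 4, "count": 0}]
accumulate(bins, 1.5, [0.2, 0.1, 0.0, 1.0])
accumulate(bins, 3.0, [0.0, 0.3, 0.1, 1.0])
print([b["count"] for b in bins])   # [1, 1]
```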

Limitations
Using deep bins is only an approximation, and there are limitations to this approach. When rendering deep volumes (deep meaning a large Z extent), it might be that there aren't enough bins to represent the volume all the way to the end. What happens is that the volume gets cut off at the back. You can clearly see this if you display the deep pixels as a point cloud in Nuke. You can still use this volume for compositing, but only up to where the first pixel is cut off. If there aren't enough bins for all visible surfaces, some surfaces can be invisible in some pixels. This situation is more problematic, and the best option is to re-render the scene with a bigger upper limit for the deep samples.

A word of warning: after the deep bin distribution has been created, it needs to be uploaded onto the devices for the whole render film, i.e. even with tiled rendering, deep image rendering can use a lot of VRAM, so don't be surprised if the devices fail when starting the render. The amount of buffer space required on a device can simply be too big for the configuration (check the log to make sure). The only thing you can do here is reduce the maximum deep samples or the resolution.

Here you can find an example project and a deep OpenEXR file rendered with it:
deep-image-example.zip



Photoshop compositing extension

The Octane Photoshop Compositing Extension provides tools for importing and compositing Octane render passes in Photoshop and adds support for loading multi-layer OpenEXR files (16 and 32 bits).

It can be obtained via the Adobe Add-ons website, but it's usually also included in every new Octane Standalone release post.

Installation

BundleConfirmation.png

Once the extension is installed, you will find new entries for each plugin within the Help > About Plug-In menu on Windows or the Photoshop CC > About Plug-In menu on OS X:
PluginsMenu.PNG

Also, the File > Automate menu has been updated with two new entries.
AutomateMenu.PNG


OTOY EXR Plug-in
This plugin provides support for loading multi-layer OpenEXR files (16- and 32-bit) into Photoshop. When installed, it overrides the default Photoshop EXR loader, which supports just single-layer EXR files, and allows loading them through the standard File > Open... dialog. Upon file load, the plugin allows you to un-premultiply your data if it has been exported using premultiplied alpha, as well as to adjust the gamma level in case the data is not in a linear colour space:
EXRImport.png

Note: The plugin does not support saving OpenEXR files.

Load OctaneRender™ Compositing Project Plug-in
This is the central part of the extension and allows you to load an OctaneRender Compositing Project (*.ocprj) file into Photoshop. To create such a project file, you have to enable the appropriate option in the multi-pass saving dialog:
GenCompositingProject.png


Whether you've exported a multi-layer EXR or discrete files, you can browse to your compositing project file by clicking File > Automate > Load OctaneRender Compositing Project... The plug-in will load all your project files into a single document, un-premultiplying the data if necessary and setting up all layer blending and grouping as needed. Once loaded, you may start compositing your image, save the document as a PSD file or export it in any other format you wish.

Setup OctaneRender™ Render Layers Plug-in
This plug-in arranges render passes exported from Octane so they are correctly displayed as layers in Photoshop, using the right layer grouping and blending and achieving exactly the same image composition as would be displayed by Octane. This can be used independently of whether you've loaded your document from a compositing project or created it by other means. Once the render passes are loaded as layers into a Photoshop document, just go to File > Automate > Setup OctaneRender Render Layers...

The plugin will go through all your document layers, set the proper layer order and blending and create the required layer groups. Layers recognized as render passes will be highlighted in GREEN. Layers that are not render passes will be disabled and marked in YELLOW as a warning to the user.

Material render passes
Once you've loaded your material render passes they may look something like this:
MaterialPassesLoaded.png

Note that the beauty pass is shown first, hiding the rest of the layers. After running the plugin, the layers are separated into foreground and environment. The transparency is removed from the foreground layers and applied to the foreground group as an alpha mask. Blending is also applied according to each render pass's settings in Octane:
MaterialPassesSetup.png


Lighting render passes
Once you've loaded your lighting render passes they may look something like this:
LightPassesLoaded.png

After running the plugin, the layers are grouped and their blending is set to "Linear Dodge (Add)", resulting in the right blending.
LightPassesSetup.png


Render layers render passes
Once you've loaded your render layers they may look something like this:
RenderPassesLoaded.png

After running the plugin, the render pass layers are grouped and the right blending is set. Unlike with the previous render pass types, the beauty layer remains enabled. An additional background placeholder layer is also created for convenience, so that a background image can easily be placed:
RenderPassesSetup.png

Note: The shadows pass layer is only enabled if neither the 'black shadows' nor the 'colored shadows' pass is present. If either of them is present, the shadows pass layer will be disabled.

Take into account that if you are using an environment, you should enable 'Alpha channel' in your kernel settings.

Known issues:
  • When exporting beauty passes, make sure not to use the 'Raw' flag, as the extension's blending does not take it into account.
  • In Photoshop CS6, if there are any render layer passes present, the layer arrangement will fail.

Animated image textures

Animated image textures are implemented by animating the file name attribute of the image texture nodes. To set it up, we have added a user interface that can be opened by clicking the animation button in the node UI:
animted_texture_button.png

This will open the following dialog, where you can add the files you want via the "Add files" button (you can pick multiple files in the file chooser):
animted_texture_ui.png


To specify the way the animation runs through the file sequence, you can set the "mode", which currently supports these options:
  • "Once" iterates through the sequence exactly once.
  • "Loop" iterates through the sequence indefinitely.
  • "Ping-pong" iterates from start to end to start to end ... indefinitely.

To control how long and how quickly the animation runs, you can specify:
  • Frames per file sets the number of frames each image of the sequence is displayed for (see the sketch below). The frames per second itself is defined in the time slider in the render viewport, i.e. it comes from the project; it's displayed in the dialog just for convenience.
  • Total frames sets the length of the animation.
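
As a sketch of how a frame index could map to a file under these settings (illustrative logic; Octane's exact indexing may differ):

```python
def file_for_frame(frame, num_files, frames_per_file=1, mode="loop"):
    i = frame // frames_per_file          # hold each file for N frames
    if mode == "once":
        return min(i, num_files - 1)
    if mode == "loop":
        return i % num_files
    if mode == "ping-pong":               # 0 1 2 3 2 1 0 1 2 ...
        period = 2 * (num_files - 1)
        i %= period
        return i if i < num_files else period - i
    raise ValueError(mode)

print([file_for_frame(f, 4, mode="ping-pong") for f in range(10)])
# [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
```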

When you save a project as a package, all images that were specified in the sequence will be stored in the package, too. After opening this package you can still remove images from the sequence and change their order, but you can't add new files from the file system anymore. That's a limitation of how the animated file name attribute works (i.e. we can't have files coming from multiple packages or from the file system in the same sequence).


Raw and filter render passes

Filter passes and raw rendering were introduced to allow more control over the final look in post-production. Filter passes capture the BxDF colour of the material at the first bounce and are available for diffuse reflection, diffuse transmission, specular reflection and specular refraction. Raw passes are passes where the filter colour is divided out of the matching render pass. Dividing out the colour is done during tonemapping, so toggling raw passes doesn't restart the render. Because we're doing a division, it doesn't work well for saturated or almost saturated colours (i.e. where one of the RGB components is zero or near zero). All this functionality is controlled via the render passes node:
render_passes_node.png


These passes can be composed together by multiplying each raw pass with its matching filter pass (a NumPy sketch follows the list):
  • diffuse = diffuse_filter * diffuse_raw
  • reflection = reflection_filter * reflection_raw
  • refraction = refraction_filter * refraction_raw
  • transmission = transmission_filter * transmission_raw
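
For example, a minimal NumPy sketch of this multiply-and-add reconstruction on linear HDR pass images (ignoring emission and any other passes for brevity):

```python
import numpy as np

def compose(passes):
    """passes: dict of float32 images of shape (H, W, 3), linear colour."""
    out = np.zeros_like(passes["diffuse_raw"])
    for name in ("diffuse", "reflection", "refraction", "transmission"):
        out += passes[f"{name}_filter"] * passes[f"{name}_raw"]
    return out

h, w = 2, 2   # tiny dummy images standing in for real EXR layers
passes = {f"{n}_{k}": np.full((h, w, 3), 0.25, np.float32)
          for n in ("diffuse", "reflection", "refraction", "transmission")
          for k in ("filter", "raw")}
print(compose(passes)[0, 0])   # 4 * (0.25 * 0.25) = [0.25 0.25 0.25]
```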

This is what a sample composition looks like in Nuke:
nuke_composite.png


To get good results, make sure that all images are saved in HDR and in linear colour space (i.e. gamma is 1 and a linear camera response). This is an example Octane project with the rendered multi-pass EXR and an example Nuke project:
raw_example.zip



Texture baking

The texture baking system allows extracting lighting information from a mesh's surface by using its UV map, generating a texture that can be mapped back onto the mesh later on.

In Octane, texture baking is implemented as a special type of camera which, in contrast to the thin lens and panoramic cameras, has one position and direction per sample. The way these are calculated depends on the input UV geometry and the actual geometry being baked.

For each sample, the camera calculates the geometry position and normal and generates a ray that points towards the surface, along the inverted normal, from a distance of the kernel's configured ray epsilon. Once calculated, the ray is traced in the same way as with any other type of camera.
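
A schematic of that per-sample ray setup (toy vectors, not Octane's code):

```python
# Start the ray a ray-epsilon away from the surface point that the UV texel
# maps to, aimed back along the inverted surface normal.
def baking_ray(surface_point, surface_normal, ray_epsilon):
    origin = tuple(p + n * ray_epsilon
                   for p, n in zip(surface_point, surface_normal))
    direction = tuple(-n for n in surface_normal)  # points back at the surface
    return origin, direction

print(baking_ray((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1e-4))
# ((1.0, 0.0001, 0.0), (0.0, -1.0, 0.0))
```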

mars_to_baked.png


Mesh pre-requirements

In order for a mesh to be used for texture baking, it should be set up to fulfil the following requirements:

  1. The mesh should contain at least one UV set. In the case of Alembic, up to 3 sets can be used. For example, one of them could be used for texture/normal mapping and a second one for baking.
  2. There should NOT be different geometry primitives mapped to the same UV region, otherwise you may find artifacts due to overlapping geometry.

Getting started

Assuming you've already created a scene containing some geometry whose lighting, material information, etc. you want to bake, the easiest way to start is to create a copy of your render target and replace its camera with a baking camera.

baking_camera.png


Baking group ID
Specifies which baking group should be baked. By default, all objects belong to the default baking group number 1. New baking groups can be arranged by making use of object layers or object layer maps, similar to the way render layers work.

UV set
Determines which UV set to use for baking.

Revert baking
If checked, the camera directions are flipped. This can be used to turn the mesh into a camera that renders the rest of the scene.

Padding
Due to interpolation when mapping a texture to a mesh, a black edge may sometimes appear. This is because the texture is black (there's no data) beyond the UV mesh. In order to avoid this, padding can be added around the edges of the baked data.
The padding size is specified in pixels. The default padding size is 4 pixels, with 0 being the minimum and 16 the maximum.
Optionally, an edge noise tolerance can be specified, which allows removing hot pixels appearing near the edges of the UV geometry. Values close to 1 do not remove any hot pixels, while values close to 0 will try to remove them all.

padding.png


UV region
Specifies the area that the baking camera takes into account. This can be used to pan and zoom the camera in case your UV geometry is not within the [0,0]->[1,1] region.

Baking position
If a baking position is used, camera rays will be traced from the specified coordinates in world space instead of using the mesh surface as the reference, but they will still point towards the same surface point.
This is useful when baking position-dependent artifacts, such as the ones produced by glossy or specular materials. If backface culling is not enabled, backfaces will be rendered using the mirror vector at the surface point.

Baking groups
In order to tell the baking camera which geometry to bake, the geometry should be connected to the baking render target and, in case you have multiple objects and baking groups, the right baking group ID should be selected in the baking camera.

As an example, here is a minimal baking configuration node graph:

baking_graph.png


Note that render layers, passes, imager settings, etc. can be used the same way as with other types of cameras, allowing you to extract lighting and material information.

layers.png


Baking tips

  • The kernel's filter size is ignored in baking mode and is always treated as 1.0 (no filtering).
  • Set the imager's response to "Linear/Off" to disable specific camera response curves.

baking_example.orbx



Other changes

Motion blur for panoramic cameras
Camera motion blur wasn't available for panoramic cameras in v2, but is now implemented for them, too.

Rotation around camera
We added rotation around the camera to the Standalone camera navigation. The defaults are ALT+LMB to control pitch and yaw and ALT+RMB to control roll around the viewing axis. These modifiers can be changed in the application preferences.

Static noise
There have been some complaints about the fact that even with the "static noise" option enabled in the path tracing / direct lighting kernel nodes, there were still some differences in the noise pattern every time rendering of the frame restarted. With v3 we made the noise fully static, as long as you use only the same GPU architecture. Unfortunately, different architectures produce slightly different numerical errors, which become visible as small differences in noise; there is nothing we can do about that.

Neutral response
If enabled, the camera response curve doesn't tint the render result anymore. Consider the following example: the left image is the material ball rendered with no response curve and gamma set to 2.2. The centre image uses the Agfacolor HDC 200 curve and a gamma of 1. The right image shows the same curve with "neutral response" enabled.
neutral_response_curve.png


New render passes
We added two new info render passes:
  • Tangent normal renders the normal in the shading coordinate space. Since geometric normals are always (0,0,1) in that space, this pass usually just renders a light blue image (0.5, 0.5, 1); only if the normal is perturbed by a bump or normal map do you get different colours. This is mainly meant for baking normal maps once texture baking becomes available.
  • Opacity renders the opacity channel, i.e. the alpha channel of the material. There is one thing to consider, though: we added a new option "opacity threshold" to the info passes node, which allows you to define from which alpha value on the material is considered opaque. Any alpha above this threshold will be rendered as opaque. So if you want to render the opacity channel, you want to do it on the first bounce and therefore need to make the material fully opaque, i.e. you have to set the "opacity threshold" to 0.

Displacement
The UV map in the displacement texture is now taken into account. The "shift" parameter in the displacement node has been replaced by a new option "mid-level". The old parameter defined the shift of the displacement in meters, while the new parameter defines it in the texture value range. So if, for example, you have a ZBrush export that defines zero displacement at 0.5, you set the "mid-level" to 0.5 and can then scale the displacement height independently. This should be more convenient than the old system.
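
As a sketch of the difference (this formula is our reading of the description above, not Octane's source):

```python
# "mid-level" re-centres the texture's value range; the displacement height
# then scales independently of where zero displacement sits in the texture.
def displaced_height(texture_value, mid_level, height):
    return (texture_value - mid_level) * height

# A ZBrush-style map with zero displacement encoded at 0.5:
for v in (0.0, 0.5, 1.0):
    print(v, "->", displaced_height(v, mid_level=0.5, height=2.0))
# 0.0 -> -1.0 (pushed in), 0.5 -> 0.0 (unchanged), 1.0 -> 1.0 (pushed out)
```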

Animation settings
A new animation settings node was added to control the shutter interval. The shutter interval is specified relative to the frame time that is set in the time slider via the FPS option.

Subframe interval
We added two pins, subframe start and subframe end, to the new animation settings node, which allow you to reduce the rendered shutter interval to a sub-interval. This is useful if you want to double or triple the frame rate without having to re-compile the scene 2 or 3 times as often. In the Standalone you can do that very easily by increasing the subframes value in the batch render script.
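
For instance, splitting one frame's shutter range [0, 1] into three equal sub-intervals would triple the effective frame rate (illustrative arithmetic based on the description above; in Octane you set this via the subframe start/end pins or the batch render script):

```python
def subframe_interval(subframes, index):
    """Split the frame-relative shutter interval [0, 1] into equal parts."""
    return index / subframes, (index + 1) / subframes

for i in range(3):
    start, end = subframe_interval(3, i)
    print(f"subframe {i}: start={start:.3f}, end={end:.3f}")
```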

Interactive region rendering
Interactive region rendering via the region render tool has changed slightly: any samples calculated for the interactive render region are now counted separately (shown in square brackets) and are not added to the sample statistics of the film anymore, which caused a lot of confusion in the past:
interactive_region_rendering.png

As a consequence, you can now start interactive region rendering at any time, even after the maximum samples/pixel has been reached. It will continue rendering up to 256000 region samples/pixel or until stopped.
Another tweak to the region render tool is that when it's disabled, the region rectangle is no longer drawn on top of the rendering while region rendering continues. To disable interactive region rendering, just click somewhere in the viewport without drawing a new region rectangle while the region render tool is enabled.

Non-interactive region rendering
Sometimes people just want to (re-)render a subsection of the whole frame, and only up to the max. samples setting of the kernel node. For this, we moved the resolution pin from the render target node into a new film settings node and added two additional pins, region start and region size. This way you can define a render region where everything else is rendered black and which will stop at the max. samples setting:
non_interactive_region_rendering.png


Increased triangle limit
The maximum number of triangles that can be rendered is now 76 million. The limit when rounded edges and motion blur are enabled is 48 million. Rendering that many triangles requires about 8.5 GB of VRAM.

Selecting devices for tonemapping
Via the preferences, it is possible to selectively enable/disable devices used for tonemapping. These devices will still be used for rendering. When no devices are selected for tonemapping, rendering continues but the viewport is not updated. We introduced this feature to work around issues with devices connected via a slow bus (e.g. in GPU extenders).
use_for_tonemapping.png

Postby smicha » Wed Dec 09, 2015 8:49 am

Marcus, OTOY,

I am so excited that I don't know what to start with. A huge thank you to all of you guys!

Postby Refracty » Wed Dec 09, 2015 10:23 am

Hi Marcus & OTOY team,

Congratulations on the release.
It took some time and sweat, and now we will taste the new features bit by bit.

V3 - Wow!

Postby prehabitat » Wed Dec 09, 2015 10:52 am

Hi Marcus,

Firstly, stoked with the Octane updates, really looking forward to the techy stuff & trying some volume & medium stuff!

Not sure if I'm being daft; can I install the Adobe plugins you guys have made on my CS6? Or do I have to wait for the Adobe Exchange approval? (Or perhaps not even then?)

EDIT: :cry:

EDIT2: if the buffer is broken into tiles based on the new parameters, the GPU VRAM load is obviously reduced in a general sense (where VRAM only holds one tile now rather than samples of the whole buffer). Are we closer to being able to voxelize the scene and divide the paths into chunks that fit into the VRAM of each GPU (i.e. no longer constrained by the smallest GPU's VRAM pool)? ....


p.s. :cry: CS6 :cry:

Postby gabrielefx » Wed Dec 09, 2015 11:31 am

I read in the Rhino plugin topic "compiled with Octane 3.0".
What does that mean exactly?

Postby prehabitat » Wed Dec 09, 2015 11:55 am

Paul (face_off) usually says in the development notes which kernel it was compiled to use...

(Meaning he's released a plugin that uses the 3.0 features/kernel already.)

Postby glimpse » Wed Dec 09, 2015 12:13 pm

Thanks for taking the time to explain all this, guys =) going to have some tests now! =)

Postby Jolbertoquini » Wed Dec 09, 2015 2:23 pm

Excellent, guys!

For the Photoshop plugin, what about the ID? Any help on that? But it's already a big help. Thanks again. :D

Postby funk » Wed Dec 09, 2015 4:03 pm

I'm still running Photoshop CS5.5 and the extension manager wasn't installing the extension. I unpacked the files manually, and the EXR loader seems to be loading layered EXR files OK.

Will there be official support for pre-CC Photoshop?

Postby oguzbir » Wed Dec 09, 2015 4:51 pm

Well done for all the efforts. You rock!! :)

A few quick questions here.
Will there be displacement improvements or features in the alpha or beta stage?
- I would really love to see managing of displacement textures with colour correction and such. Maybe procedural textures might drive the displacement, no?
- And are there any speed and convergence improvements when using IES lights or standard Octane lights? Will there be?
- Regarding the 3ds Max plugin, I would really appreciate a roadmap for introducing more features, like a scene light lister or a global material override.
- What about anisotropic reflections, or a wire texture (acting like a polygon-side texture) to render wireframes? Should we wait longer for those?
- Rebuilding the core from the ground up must have been a real challenge. But I keep waiting for more and more artistic features, if you know what I mean.

- Lastly, I was amazed to see the volumetric object you posted with the announcement of v3!
Link:
https://home.otoy.com/otoy-unveils-octa ... rimitives/
Here are two of them.
How is it possible to do this? Is this a VDB object?
03-OctaneRender-3-Volumetric-Primitives-600x351.png

Any help on how to create this chap below is greatly appreciated.
05-OctaneRender-3-Volumetric-Primitives.jpg


Cheers,


