Material pipeline: manipulating textures

matej
Licensed Customer
Posts: 2083
Joined: Fri Jun 25, 2010 7:54 pm
Location: Slovenia

Hi!

I would like to propose some ideas / changes to the material pipeline that would allow more flexibility while using less VRAM. I would like to hear from the devs about the possibility / feasibility of such changes, and from other users their opinions / suggestions about the workflow.

So, here I'm using the Blender material nodes to transform an input bitmap into 3 textures that are most typically applied to materials. Note that I use Blender to show the graph just because it's the only compositing package I own, not because I would like to enforce its way of doing things; I'm sure other programs are equally good or better at these things. The logic behind creating these textures is from the "oxidized copper" material presented here.

The basic idea is to be able to load one image into Octane, which then becomes a 'texture resource'. We would then apply some (quite basic) transformations to create new texture resources from it. These 'texture resources' would then be available throughout the pipeline to use multiple times on one or more materials or nodes. When we hit render, Octane would calculate pixel data on the fly from the original image, without loading additional data into VRAM. The engine is of course already doing this to some extent, but I think it could be even more flexible and powerful.
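To make the idea concrete, here is a minimal sketch (in Python/numpy, purely illustrative - these are not Octane node types or its API) of what a 'texture resource' could mean internally: only the source image holds pixel data, while derived textures just store a transformation procedure and evaluate it on demand.

[code]
# Minimal sketch of the "texture resource" idea (hypothetical names, not Octane's API):
# derived textures store only a transformation chain and compute pixels on demand
# from the single source image, instead of keeping baked copies in memory.
import numpy as np

class TextureResource:
    """Base class: anything that can be sampled as an RGB image."""
    def evaluate(self) -> np.ndarray:
        raise NotImplementedError

class ImageTexture(TextureResource):
    """The only node that actually holds pixel data (the one image in VRAM)."""
    def __init__(self, pixels: np.ndarray):
        self.pixels = pixels              # float32 array, shape (H, W, 3)
    def evaluate(self) -> np.ndarray:
        return self.pixels

class DerivedTexture(TextureResource):
    """A transformation procedure applied to another texture resource."""
    def __init__(self, source: TextureResource, transform):
        self.source = source
        self.transform = transform        # function: ndarray -> ndarray
    def evaluate(self) -> np.ndarray:
        return self.transform(self.source.evaluate())

# One image loaded, a second texture resource derived from it on the fly:
diff_1 = ImageTexture(np.random.rand(256, 256, 3).astype(np.float32))
desat  = DerivedTexture(diff_1, lambda p: p.mean(axis=-1, keepdims=True).repeat(3, -1))
[/code]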

The graph below represents this workflow and should be obvious to anyone familiar with nodes. I'll explain some things for those who have never used nodes before.
[Attachment: texture.nodes.jpg - the node graph described below]
An image is loaded into Octane; that is our input texture "diff_1" (note: an image is not the same thing as a texture). This texture can be mapped to the diffuse channel of one or multiple materials, so it is itself a 'texture resource'. But we could also use it to create new textures (or more precisely: transformation procedures).

The same texture, with a few modifications, could also be mapped to specularity. So we first mix it with a pinkish color to produce the intermediate result "spec_1". We also take the same input, completely desaturate it with a HSV node, then change the contrast with a RGB Curves node to produce the intermediate result "spec_2". These two intermediate results are then mixed together to produce the final specular texture "spec_3". Note that "spec_2" is also used to modulate the amount of mixing (the 'factor' input of the last Mix node). The motivation behind this setup is that copper needs tinted reflections, but not where the oxidation flecks are; the strength of the reflection should also be lower there (darker areas).

Since we desaturated the input with a HSV node, we can use that for bump information as well, so we create a texture resource from it too ("bump_1"). In the end we have three textures to use in our environment, but only one image needs to be loaded into VRAM. Basically these are the same operations that you do in your image manipulation software; here you just achieve them with nodes instead of layers. That's all the difference, so this should be easy to adopt even for people who have never used nodes before.
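For readers who prefer code to node graphs, the whole chain boils down to a few lines of pixel math. The sketch below is only an approximation of the graph above - the pinkish colour, the contrast curve and the random stand-in for the loaded image are placeholder values, not the exact ones from the "oxidized copper" material.

[code]
# Rough numpy illustration of the node graph (placeholder values throughout).
import numpy as np

def mix(a, b, factor):
    """Standard Mix node: linear blend between two inputs by a factor (0..1)."""
    return a * (1.0 - factor) + b * factor

# Stand-in for the loaded input image "diff_1": (H, W, 3) floats in 0..1.
diff_1 = np.random.rand(256, 256, 3).astype(np.float32)

# spec_1: input tinted towards a pinkish colour for the copper reflections.
pinkish = np.array([0.85, 0.55, 0.55], dtype=np.float32)
spec_1 = mix(diff_1, pinkish, 0.5)

# spec_2: desaturated input (HSV "value" channel) with a simple contrast curve.
spec_2 = diff_1.max(axis=-1, keepdims=True)               # saturation set to 0
spec_2 = np.clip((spec_2 - 0.5) * 1.5 + 0.5, 0.0, 1.0)    # crude RGB Curves stand-in

# spec_3: mix of the two, with spec_2 itself driving the mix factor, so the
# dark (oxidized) areas get weaker, less tinted reflections.
spec_3 = mix(spec_1, np.repeat(spec_2, 3, axis=-1), spec_2)

# bump_1: the same desaturated data reused as bump information.
bump_1 = spec_2
[/code]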

Some observations:

* To do this we would need a separate workspace, say a "texture creation mode" in the Graph Editor.
* Some sort of non-lit 2D preview mechanism would be needed, as it's hard to picture what exactly the transformations do without 'viewer nodes' that show you the intermediate steps. The rendering loop should be disabled during this. This could be implemented entirely on the CPU, so as not to complicate things. When you are done, you exit the "texture creation mode", the rendering loop restarts, and those textures become available throughout the system to use in nodes.
* Additional nodes for pixel math that are currently missing: HSV values, RGB curves, blending modes (Add, Subtract, Multiply, Overlay... - all the standard pixel blending modes), RGBA channel (de)composition... These operations are fairly standard and can be found in every compositing / image manipulation program (see the sketch right after this list).
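Just to show how small these building blocks are, here are the usual definitions of a few of them in numpy (inputs are float images in the 0..1 range; these follow the common compositing conventions, not any particular Octane implementation):

[code]
# Standard pixel-math building blocks mentioned above, sketched in numpy.
import numpy as np

def blend_add(a, b):       return np.clip(a + b, 0.0, 1.0)
def blend_subtract(a, b):  return np.clip(a - b, 0.0, 1.0)
def blend_multiply(a, b):  return a * b

def blend_overlay(a, b):
    # Overlay: multiply in the dark half, screen in the bright half.
    return np.where(a < 0.5, 2.0 * a * b, 1.0 - 2.0 * (1.0 - a) * (1.0 - b))

def decompose_rgba(img):
    """RGBA channel decomposition: split an (H, W, 4) image into four planes."""
    return img[..., 0], img[..., 1], img[..., 2], img[..., 3]
[/code]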

Of course this would require some additional hard work, but the basis for it is already there. Since GPU programs will always lag behind CPU programs in terms of available RAM, optimizations in this area should be worthwhile. Also, most users can't afford overpriced high-end cards with lots of RAM, like Teslas, etc.

Additional thoughts on the mapping mechanism:

The mapping controls (power, scale, invert...) that are now tightly linked (global) to textures should be de-coupled from textures and applied per-mapping. Texture reusability would be easier this way - you wouldn't need to load the same image multiple times or create identical procedurals just to be able to use a different power value in different materials / nodes. Everything that deals with how a texture is applied somewhere should not affect the original texture.

Support for multiple UV coordinate sets per material would also come in handy. UV sets should also be controlled at the mapping level, not at the texture level (see the sketch below).
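Sketched in code, the decoupling could look something like this: one shared texture resource and several lightweight mappings, each carrying its own power / scale / invert and UV set index. All the names here are hypothetical (they reuse diff_1 and TextureResource from the earlier sketches), just to illustrate that the per-mapping data is tiny compared to duplicating pixel data.

[code]
# Hypothetical sketch of mapping parameters decoupled from the texture itself.
from dataclasses import dataclass

@dataclass
class TextureMapping:
    texture: "TextureResource"   # shared resource, loaded once
    power: float = 1.0           # per-mapping intensity
    scale: tuple = (1.0, 1.0)    # per-mapping UV tiling
    invert: bool = False
    uv_set: int = 0              # which UV coordinate set of the mesh to use

# The same image reused with different mapping parameters on two materials,
# without duplicating any pixel data:
diffuse_map = TextureMapping(diff_1, power=1.0, uv_set=0)
bump_map    = TextureMapping(diff_1, power=0.3, invert=True, uv_set=1)
[/code]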


So, I hope that all I wrote makes sense; if not, please ask. And I would of course like to hear everybody's opinion.


Cheers! :D
SW: Octane 3.05 | Linux Mint 18.1 64bit | Blender 2.78 HW: EVGA GTX 1070 | i5 2500K | 16GB RAM Drivers: 375.26
cgmo.net