Which feature do you need the most?
Forum rules
Please add your OS and hardware configuration to your signature; it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
e-s wrote: PRG Clear Sky Daylight model would be so nice. That's what Vray and Corona use, and it looks great.

It's seemingly not much different from Nishita / Planetary. Any paper available? The documentation from those renderers isn't informative.
Octane resources
OCTANE POSTS (URLs have changed, which will break some links but all content remains available).
skientia wrote: It's seemingly not much different from Nishita / Planetary. Any paper available? The documentation from those renderers isn't informative.

Admittedly, I don't understand all of the technical nitty-gritty, but here is a link to the paper and presentation from SIGGRAPH:
https://cgg.mff.cuni.cz/publications/skymodel-2021/
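To give a rough sense of the difference being discussed: Nishita/Planetary-style models numerically integrate scattering along the view ray every time the sky is sampled, whereas a fitted model like the one in the linked paper replaces that integral with an evaluation of precomputed coefficients. The sketch below is only a minimal, Rayleigh-only single-scattering ray march in Python; all constants and function names are illustrative and it is not Octane's or the paper's implementation.

import numpy as np

# Minimal, Rayleigh-only single-scattering sky estimate (Nishita-style ray march).
# Constants are illustrative; a production model adds Mie scattering, ozone,
# multiple scattering and spectral data. A fitted model bakes this whole integral
# into precomputed coefficients that are simply looked up per direction.

EARTH_R = 6360e3          # planet radius (m)
ATMOS_R = 6420e3          # top of atmosphere (m)
H_R     = 8000.0          # Rayleigh scale height (m)
BETA_R  = np.array([5.8e-6, 13.5e-6, 33.1e-6])  # Rayleigh scattering coeffs (RGB, 1/m)

def ray_sphere_exit(origin, direction, radius):
    """Distance along the ray to where it exits a sphere centred at the origin."""
    b = np.dot(origin, direction)
    c = np.dot(origin, origin) - radius * radius
    disc = b * b - c
    return -b + np.sqrt(max(disc, 0.0))

def optical_depth(p, d, steps=16):
    """Integrate Rayleigh density from p along d up to the top of the atmosphere."""
    t_max = ray_sphere_exit(p, d, ATMOS_R)
    ds = t_max / steps
    depth = 0.0
    for i in range(steps):
        h = np.linalg.norm(p + d * (i + 0.5) * ds) - EARTH_R
        depth += np.exp(-h / H_R) * ds
    return depth

def sky_radiance(view_dir, sun_dir, steps=16):
    """Single-scattered sky radiance (relative units) for a viewer at sea level."""
    origin = np.array([0.0, EARTH_R + 1.0, 0.0])
    t_max = ray_sphere_exit(origin, view_dir, ATMOS_R)
    ds = t_max / steps
    mu = np.dot(view_dir, sun_dir)
    phase = 3.0 / (16.0 * np.pi) * (1.0 + mu * mu)   # Rayleigh phase function
    radiance = np.zeros(3)
    depth_view = 0.0
    for i in range(steps):
        p = origin + view_dir * (i + 0.5) * ds
        h = np.linalg.norm(p) - EARTH_R
        density = np.exp(-h / H_R)
        depth_view += density * ds
        depth_sun = optical_depth(p, sun_dir, steps)
        transmittance = np.exp(-BETA_R * (depth_view + depth_sun))
        radiance += density * transmittance * ds
    return radiance * BETA_R * phase

# Example: radiance toward the zenith with the sun 30 degrees above the horizon.
sun = np.array([np.cos(np.radians(30.0)), np.sin(np.radians(30.0)), 0.0])
print(sky_radiance(np.array([0.0, 1.0, 0.0]), sun))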
I had posted this request in the C4D section but was informed that it's a limitation of OctaneCore, so I'm reposting here.
It would be great to have a streamlined workflow / UI for light linking. I think Redshift is a good reference here: each light has a tab where specific objects can be included or excluded. It's fast and straightforward to use with a couple of clicks, and it's easy to understand which lights are affecting each object.
Attachment: screenshot_250130_2092.png
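To make the request concrete, here is a tiny sketch of the kind of data model such a UI would drive: each light carries an include or exclude list and the renderer consults it per object at shading time. All names here are hypothetical; this is not Octane's or Redshift's actual API.

from dataclasses import dataclass, field

# Hypothetical light-linking data model, purely to illustrate the requested workflow.

@dataclass
class Light:
    name: str
    mode: str = "exclude"                         # "include" -> only listed objects are lit
    objects: set[str] = field(default_factory=set)

def light_affects(light: Light, object_name: str) -> bool:
    """Return True if this light should illuminate the given object."""
    if light.mode == "include":
        return object_name in light.objects
    return object_name not in light.objects       # "exclude" mode

def lights_for_object(lights: list[Light], object_name: str) -> list[str]:
    """Which lights contribute to an object's shading (evaluated at render time)."""
    return [l.name for l in lights if light_affects(l, object_name)]

# Example: a key light that skips the background, and a rim light restricted to the hero.
lights = [
    Light("key", mode="exclude", objects={"Background"}),
    Light("rim", mode="include", objects={"HeroCharacter"}),
]
print(lights_for_object(lights, "HeroCharacter"))  # ['key', 'rim']
print(lights_for_object(lights, "Background"))     # []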
It would be very interesting to implement a function that generates a Gaussian splat cloud from a 3D model.
The advantage would be real-time navigation, including reflections and transparencies, which would be great especially in VR environments.
I think Gaussian splatting works by attaching view-dependent colour information to each point in the cloud, so a point can look different depending on the angle it is viewed from.
This information, which current software derives from photos or videos captured from different angles, could instead be generated directly by the rendering engine.
In essence, the engine would have to distribute many small, evenly spaced points across the model's faces and compute, for each of them, the radiance arriving from various directions.
This would produce a Gaussian splat environment far more precise than anything that can be generated from photos, and above all one that can be observed from any point of view.
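A rough sketch of the proposed baking pass, under the assumption that view-dependent colour is stored as low-order spherical harmonics (as common Gaussian-splat formats do): sample points on the mesh triangles and, for each point, fit SH coefficients to radiance rendered along a set of view directions. A real splat also needs a covariance/scale and opacity per point; `render_radiance` is a stand-in for whatever shading call the engine would use, and everything here is hypothetical, not existing Octane functionality.

import numpy as np

def sample_triangle(v0, v1, v2, n):
    """Uniformly sample n points on a triangle via barycentric coordinates."""
    r1 = np.sqrt(np.random.rand(n, 1))
    r2 = np.random.rand(n, 1)
    return (1.0 - r1) * v0 + r1 * (1.0 - r2) * v1 + r1 * r2 * v2

def sh_basis(dirs):
    """First-order (4-coefficient) real spherical-harmonic basis for unit directions."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),   # l=0
        0.488603 * y,                # l=1, m=-1
        0.488603 * z,                # l=1, m=0
        0.488603 * x,                # l=1, m=1
    ], axis=1)

def fit_view_dependent_color(point, view_dirs, render_radiance):
    """Least-squares fit of SH coefficients (4 x RGB) to rendered radiance samples."""
    basis = sh_basis(view_dirs)                                          # (n_views, 4)
    radiance = np.array([render_radiance(point, d) for d in view_dirs])  # (n_views, 3)
    coeffs, *_ = np.linalg.lstsq(basis, radiance, rcond=None)
    return coeffs                               # evaluate later as sh_basis(d) @ coeffs

# Example with a dummy shader: reddish tint that brightens when viewed from above.
def dummy_shader(point, d):
    return np.array([0.8, 0.3, 0.2]) * (0.5 + 0.5 * max(d[2], 0.0))

pts = sample_triangle(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), 4)
dirs = np.random.randn(64, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
splat_coeffs = [fit_view_dependent_color(p, dirs, dummy_shader) for p in pts]
print(splat_coeffs[0].shape)   # (4, 3): 4 SH coefficients per colour channel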