SSmolak wrote:What we see on a digital monitor, even the best calibrated one, is not the same as what our eye sees in the real world.
Exactly, and unsurprisingly so! A monitor **emits light**; what we see in the real world is **reflected and transmitted light**.
SSmolak wrote:
I spent two months making photos of vegetation and buildings in different light conditions. Comparing that afterwards with my own eye and a color picker, and simulating the same in Octane, I can say that ACES can do that - but only with the proper camera exposure.
You are missing the point.
1. ACES is, from the ground up, from its core and design, a broken color encoding system (C.E.S.).
Nobody can deny these facts; it is not even up for debate or argumentation. Anyone is free to use it, but cannot claim it is pragmatically efficacious.
2. To sum it up: two months of digital photography, taken with a specific sensor, whose raw data was formed into "viewable imagery" using the camera manufacturer's basis and some software (likely PS or LR). Do you see where I am heading?
SSmolak wrote:
The human eye has very wide exposure. A camera's exposure system works differently. It has a point. Octane's exposure is yet another different thing. To use ACES, a different camera exposure should be set for every shot.
"Dynamic range", not exposure. The rest of the sentence is equivocal.
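To make the distinction concrete: dynamic range is the ratio between the brightest and darkest values a system can register, usually expressed in stops (doublings of light). A minimal Python sketch; the luminance ratios below are illustrative ballpark assumptions, not measured figures:

```python
import math

def dynamic_range_stops(max_luminance: float, min_luminance: float) -> float:
    """Dynamic range expressed in photographic stops (doublings of light)."""
    return math.log2(max_luminance / min_luminance)

# Illustrative, assumed ratios: a mid-range sensor might span ~4096:1 of
# usable signal, while a sunlit scene with deep shadow might span ~1,000,000:1.
sensor_stops = dynamic_range_stops(4096, 1)       # 12.0 stops
scene_stops = dynamic_range_stops(1_000_000, 1)   # ~19.9 stops
print(sensor_stops, scene_stops)
```

Exposure, by contrast, only chooses *which* slice of the scene's range the sensor captures; it does not widen the sensor's range.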
• The human visual system is paired with the brain. Think of it as lens + sensor + the processing that produces a viewable result of what is seen - all of it automatically and constantly adjusting (including the many visual illusions!). Chromatic adaptation, for instance, is something anyone can experience with a couple of lights ranging from a "warm" to a "cold" kelvin temperature (likely correlated-color-temperature LEDs these days), and even more so with strong "RGB" light sources. The same goes for the iris, which automatically adjusts itself depending on the situation, and so on.
• A camera (film or digital) is a naïve tool controlled by a user, featuring a light-sensitive surface. Its exposure to light is always user-controlled.
• Octane, like any other renderer, is computer-simulated-imagery software with one particularly major distinction from scanned film and digital cameras: it works in floating point, both in-renderer and at the encoding level, as opposed to integer-encoded digital imagery.
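That distinction matters in practice: an integer display encoding must clip and quantize scene values, while a float pipeline keeps them intact. A minimal Python sketch - the 8-bit encode/decode helpers below are hypothetical illustrations, not any renderer's actual code:

```python
def encode_8bit(value: float) -> int:
    """Integer display encoding: clip to [0, 1], then quantize to 8 bits."""
    clipped = min(max(value, 0.0), 1.0)
    return round(clipped * 255)

def decode_8bit(code: int) -> float:
    """Map an 8-bit code value back to a nominal [0, 1] float."""
    return code / 255

# Scene-referred float values, as a renderer produces them:
radiances = [0.001, 0.5, 1.0, 4.0, 16.0]

for r in radiances:
    back = decode_8bit(encode_8bit(r))
    print(f"{r:>8} -> {back:.4f}")
# Everything above 1.0 collapses to 1.0: the highlight information is gone,
# whereas a float EXR stores 4.0 and 16.0 exactly as they are.
```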
It's not up to the developers to build a specific subjective look for every user's taste. That's an infinite amount of work and not feasible. Not to mention that it would require a dedicated department (R&D plus financial and human resources) to develop a viable solution, which only a handful of companies are close to these days. I will let you imagine what it takes.
The benefit of EXR-encoded files is that they offer the knowledgeable and skilled user the choice of color pipeline, putting aside any in-renderer bugs, issues and whatnot.
Even if developers implement some built-in "looks" (apart from the already deprecated ones present), they will never be as solid and customizable as what can be done in post or through OCIO (one of whose goals, in such a "CGI" context, is to bring the post stage into the render preview) - not without what was previously mentioned (i.e. the necessary resources).
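The principle can be sketched in a few lines: keep the scene-referred float data untouched and apply the "look" only as a display-side view transform. The Reinhard curve below is a stand-in chosen purely for illustration; real OCIO configs chain matrices, LUTs and curves defined by the chosen pipeline:

```python
def view_transform(scene_linear: float) -> float:
    """Stand-in "look": the simple Reinhard operator x / (1 + x).
    Illustrative only - not what any actual OCIO config does."""
    return scene_linear / (1.0 + scene_linear)

# Scene-referred values straight out of the renderer...
render_output = [0.18, 1.0, 8.0]
# ...stay untouched in the EXR; the look is applied only for display,
# so it can be swapped in post without re-rendering.
display = [view_transform(x) for x in render_output]
print(display)
```

Because the transform is applied at view time, swapping looks is just swapping this one function, which is exactly why a post/OCIO stage beats baking a look into the render.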
That's the tradeoff of producing "images" via computers instead of photographing on film or with a digital camera (where, for the latter, the camera manufacturer does part of the digital imaging development for you). With a renderer, everything is done from scratch: from the bits and lines of code to the viewable result of a simulated 3D scene.