Please vote for Resume Render

gueoct
Licensed Customer
Posts: 402
Joined: Mon Jul 11, 2011 3:10 pm

Is this the final resolution?

Have you checked this one?
http://render.otoy.com/forum/viewtopic. ... =despeckle
Intel i7-970 @3,20 GHz / 24 GB RAM / 3 x EVGA GTX 580 - 3GB
matej
Licensed Customer
Posts: 2083
Joined: Fri Jun 25, 2010 7:54 pm
Location: Slovenia

Geometrically these scenes are very simple (most of them are flat surfaces). There is no way this needs 2 GB of memory, even with lots of textures. You are probably doing something wrong, like needlessly using excessive geometry (which would explain such absurd render times).
SW: Octane 3.05 | Linux Mint 18.1 64bit | Blender 2.78 HW: EVGA GTX 1070 | i5 2500K | 16GB RAM Drivers: 375.26
cgmo.net
gueoct
Licensed Customer
Posts: 402
Joined: Mon Jul 11, 2011 3:10 pm

This is exactly what I think, too.
Absurd is the right word for it. :-)
What you need is not more render speed, but optimization of the whole setup.
Don't get me wrong, but it's a completely unprofessional approach to let one frame render for up to 90 days....
Intel i7-970 @3,20 GHz / 24 GB RAM / 3 x EVGA GTX 580 - 3GB
treddie
Licensed Customer
Posts: 739
Joined: Fri Mar 23, 2012 5:44 am

Geometrically these scenes are very simple (most of them are flat surfaces).
Not when you realize that everything in those renders has rounded edges, and nature is that way too... As I mentioned above, a six-sided cube (12 triangles) can easily bloat out to hundreds of polys when the edges are rounded. If you restrict yourself to chamfers, you also restrict any highlights you might get from your light sources, or they may not show up at all. If I have a big flat disk composed of, say, a thousand triangles to eliminate any tessellation along the edges, and that disk has circular holes punched in it, then with rounded edges those thousand polys easily balloon out to tens of thousands. And then the flat surfaces of the disk have to be broken up into even more polys so that the dreaded "sliver triangles" are kept to a minimum.
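Just to put a rough number on that bloat, here is a quick back-of-the-envelope sketch. The tessellation scheme is an assumption on my part (one quad strip per rounded edge, roughly an n x n quad patch per corner); whatever your modeler actually produces will differ, but the growth is the same:

```python
# Rough triangle-count estimate for a cube whose 12 edges are rounded with
# n bevel segments. Assumed tessellation: 6 flat quads, one n-quad strip per
# edge, ~n*n quads per rounded corner. Real modelers will tessellate differently.
def beveled_cube_tris(n):
    flat_faces = 6 * 2            # 6 quads -> 12 triangles
    edge_strips = 12 * n * 2      # 12 edges, n quads each
    corner_caps = 8 * n * n * 2   # 8 corners, ~n*n quads each
    return flat_faces + edge_strips + corner_caps

for n in (1, 2, 4, 8):
    print(n, beveled_cube_tris(n))  # 1 -> 52, 2 -> 124, 4 -> 364, 8 -> 1228
```

So even a modest 4-segment bevel takes the humble 12-triangle cube into the hundreds, and every rounded hole on that disk repeats the same multiplication.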
Don't get me wrong, but it's a completely unprofessional approach to let one frame render for up to 90 days....
gueoct > I agree with you when it comes to jobs I do for CLIENTS, and there I do drop my rounding down to only those areas where it is really necessary, but I respectfully disagree with you where what I am doing is really an experiment in "complete" reality. Don't get me wrong, there are a zillion great renders by a lot of people, and we all have businesses to run, but as a trained illustrator, whenever I see two flat surfaces butt up against each other in a perfect corner, my eye goes to it right away, and it is not realistic. There will ALWAYS be a highlight along any edge that catches the light, or curves away from the light source. Obviously, it is not realistic to tell a client he has to wait 12 days for a render if you have a 3-day deadline to complete everything. But a lot of what I do in my business ends up being printed large format, on average maybe 15 feet by 7 feet, and many times up to 60 feet by 20 feet or larger, like race car transporters. Although I have never done much 3D rendering for projects that large, it has always been in my mind that whatever I put up there will be scrutinized at nose distance, and everyone wants to be blown away when they walk up to a racing trailer. They actually have transporter graphics competitions at the big races.

But back to those two images I uploaded. You will notice that wherever two edges meet, like a wall meeting a floor, the edges are rounded, and you can see and feel that two "real world" objects have been assembled together. Also, the decals on the hatch HAD to be hi-res in order to read the instructions, even though each decal is very small compared to the UV map that needs to hold its detail (that particular render was a tad too low in resolution to read them completely). What I am doing IS over the top (call me crazy if you like!), but I am also very excited, since I have watched 3D graphics technology improve over the last 30 years, and we are not that far now from when we really CAN be absurdly over-the-top in how we want to render. Even now, I can let a render go and go and go while I do money-making work in parallel on the same machine or on another, and with a little Zen patience the results are well worth it for high-res imagery. Would I try animation that way? Not in a million years. I don't think I will ever see an affordable way to do it in my time.
gueoct > Thank you for the link! I will use that method on my next renders.
Win7 | Geforce TitanX w/ 12Gb | Geforce GTX-560 w/ 2Gb | 6-Core 3.5GHz | 32Gb | Cinema4D w RipTide Importer and OctaneExporter Plugs.
matej
Licensed Customer
Posts: 2083
Joined: Fri Jun 25, 2010 7:54 pm
Location: Slovenia

@treddie, can you show the wireframe of, for example, that panel / cabinet inside the wall? What is the scene polycount, btw?

If you have problems fitting this scene into memory, then you are most likely overdoing the subdivisions on your objects (small edges that will be 1 pixel wide in the final render really don't need that much). Since most objects are embedded into other objects, you end up having to subdivide everything further back as well. Some of the details you could easily do with normal mapping (which works best on flat surfaces, so your scene is perfect for it).

Using this "all-geometry" approach won't get you anywhere (a one-week render time is not acceptable, not even for a hobbyist). You'll have to optimize your scene:

* where possible, detach objects from parent objects, so you can control their geometry separately and child objects don't affect the geometry of the parent
* use normal mapping or bump mapping
* don't go to extreme subdivision levels on parts that are barely visible in your final render (a rough Blender sketch of this follows below)
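For that last point, if you were doing this in Blender, even a quick script pass can knock the polycount down on the barely-visible stuff. This is only a rough sketch - the "bg_" naming convention and the ratio are made up, adjust them to your scene:

```python
import bpy

# Rough sketch: add a Decimate modifier to every mesh whose name marks it as a
# background / barely-visible object, keeping ~25% of its faces.
# The "bg_" prefix and the 0.25 ratio are examples only.
for obj in bpy.data.objects:
    if obj.type == 'MESH' and obj.name.startswith("bg_"):
        mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
        mod.ratio = 0.25
```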
scene.jpg
SW: Octane 3.05 | Linux Mint 18.1 64bit | Blender 2.78 HW: EVGA GTX 1070 | i5 2500K | 16GB RAM Drivers: 375.26
cgmo.net
treddie
Licensed Customer
Posts: 739
Joined: Fri Mar 23, 2012 5:44 am

To start off... I just want to say that I truly respect everyone's comments here. In the attached image with your markups and mine, I respectfully disagree, but only on "purist" grounds. I am a very open (and maybe abrupt) type of person, so if I don't seem to be accepting your suggestions, always feel free to debate and call me crazy. After all... every site needs at least one crazy poster! :) And a crazy poster who can be taken to task for his approach.

In my commented markups, I stress again the concept, or at least the ideal, of letting the mesh speak for itself, just as reality does, without kludge methods that simulate reality. For me, 3D rendering is about pushing the technology envelope to the point where kludges are not necessary. Normal and bump maps are just that... commercial kludges we use and accept because the technology is not yet there to put all of the responsibility on the mesh to represent reality, as far as the 3D professional business is concerned. From a solid modeling point of view (programs like Creo and SolidWorks), you simply cannot beat the speed of setting up rounds and such, AND being able to edit those features simply by editing their parameter values. Normal and bump mapping can never exceed that simplicity; displacement is the only exception in certain cases. But obviously, the downside of this approach with today's technology is high poly counts that push the envelope. That is my intent, though! Again, from an earlier post, we are sooooo close to that ideal that within perhaps another 5-10 years it won't be an issue anymore for static frames. I believe that doing animation this way will be possible, affordably and quickly, in, who knows... 10-20 years?

So for our marked-up image, matej, here are my comments in response to yours:
Jup-2 Interior Markup.jpg
To further my point, please see the next three images. The first is most of the entire model. The second is a closeup showing how the panels interleave with one another if you zoom in on one of the joints. The third image is a closeup of the panels themselves. This is something not possible with bump, normal, or displacement mapping. I found that at this level of zoom, applying displacement for the rivets and their indents was close to impossible to do without rendering artifacts up the kazoo, or even to get the correct rivet shape, which is a no-brainer in a solid modeler:
Port Gemini (Centered CSsys) 2c (Composite) w CRite.jpg
And the closeup:
Port Gemini (Centered CSsys) 5 (Composite) w CRite.jpg
Finally, a detail of the interleaving of panels:
Port Gemini Panels Closeup.jpg
Win7 | Geforce TitanX w/ 12Gb | Geforce GTX-560 w/ 2Gb | 6-Core 3.5GHz | 32Gb | Cinema4D w RipTide Importer and OctaneExporter Plugs.
matej
Licensed Customer
Posts: 2083
Joined: Fri Jun 25, 2010 7:54 pm
Location: Slovenia

@treddie; I understand what you are saying. Of course everything depends on the level of detail - the question is, do you really need such close-ups? If the answer is yes, then of course you need real geometry, but for the distance in your first scene images, an N-map (baked at the end, when everything is set up) will do the same job.
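If you go the baking route in Blender/Cycles, it boils down to a few lines once the low-poly object has an active Image Texture node to receive the bake. Rough sketch only - the object names and the cage distance are made up:

```python
import bpy

# Bake the high-poly detail onto the low-poly panel as a tangent-space normal
# map (Blender 2.7x API). Assumes the low-poly material already has an active
# Image Texture node pointing at the target image.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'

hi = bpy.data.objects['panel_hipoly']   # example names
lo = bpy.data.objects['panel_lowpoly']

bpy.ops.object.select_all(action='DESELECT')
hi.select = True
lo.select = True
scene.objects.active = lo

bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True, cage_extrusion=0.05)
```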

It seems to me that you don't have much maneuvering room here, so you'll have to optimize the scene or buy a card with 3 or 4 GB. I don't see your feature request as a real solution to your problems (especially considering it's not going to be implemented for a long time, if ever).
SW: Octane 3.05 | Linux Mint 18.1 64bit | Blender 2.78 HW: EVGA GTX 1070 | i5 2500K | 16GB RAM Drivers: 375.26
cgmo.net
treddie
Licensed Customer
Posts: 739
Joined: Fri Mar 23, 2012 5:44 am

You're probably right that we won't be seeing Resume Render anytime soon. It certainly would have helped in my case. But even if a render takes just 4 hours to complete, if your system crashes at 3 hrs 45 min, that's a total drag. Combining two images with different seed values is a kludge that is not unbiased. And combining a 3 hr 45 min image with a 15 min image does not work in Octane. So really, you have to start from scratch and take your next render to somewhere around 2 hrs to combine with the first (assuming you saved on a regular basis and had an earlier save around the 2 hr mark). If you didn't, you would have to render again out to about 3 hrs 45 min, so that's 7.5 hrs total, as opposed to 4 hours total in all cases with Resume Render.
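Putting my own example numbers into a quick sanity check (hours; nothing Octane-specific, just the scenario above):

```python
# Wall-clock cost of a crash at 3 h 45 min into a 4 h render, for the three
# options described above. All numbers are from my example, not measurements.
target    = 4.0    # hours of sampling I actually wanted
crash_at  = 3.75   # render dies at 3 h 45 min
last_save = 2.0    # most recent saved image (if you saved regularly)

start_over    = crash_at + crash_at              # no usable save: 7.5 h total
combine_seeds = crash_at + (target - last_save)  # ~2 h second pass with a new seed, then combine: 5.75 h
with_resume   = target                           # Resume Render: 4 h total in every case

print(start_over, combine_seeds, with_resume)    # 7.5 5.75 4.0
```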

But for what we have been talking about, Resume Render would only be an "answer" if my system crashed somewhere along the way. It would not, as you point out, solve the VRAM issue. Since I hate having to optimize, even though I am often forced to, I can't wait to get a few 3GB 580s. Who knows, by the time I'm ready to buy, maybe there will be something as affordable as a 580 but with up to 4GB.
Win7 | Geforce TitanX w/ 12Gb | Geforce GTX-560 w/ 2Gb | 6-Core 3.5GHz | 32Gb | Cinema4D w RipTide Importer and OctaneExporter Plugs.
Proupin
Licensed Customer
Posts: 735
Joined: Wed Mar 03, 2010 12:01 am
Location: Barcelona

Hey, you're the helmet guy! I respect what you're trying to do, but no wonder you hate to optimize. The editability of this approach is close to none. 95% of this scene could be done with subdivision modeling, where you can choose the LOD you want. You need to do a close-up of a hole? Well, subdivide it to your will. You need to edit it? Much easier than your approach. With that said, raise your hand for Subdivision support! +1 +1 +1! (and Resume Render).

It's the project that dictates what is needed; trying to cover all possible points of view is pretty pointless. After all, if you're spending days rendering, that time is better spent redoing detail when a particular shot calls for more of it. And dismissing normal maps and bump maps as "kludges" when you probably use smooth shading anyway (the oldest of kludges) is not helping you at all. Sometimes normal maps are enough; they look exactly the same as, or even better than, modeled detail, because you can add texture to an otherwise synthetic-looking model. You could start considering normal maps on your super-detailed models to add a new layer of realism. It's not one or the other, I think.
Win 7 64bits / Intel i5 750 @ 2.67Ghz / Geforce GTX 470 / 8GB Ram / 3DS Max 2012 64bits
http://proupinworks.blogspot.com/
treddie
Licensed Customer
Posts: 739
Joined: Fri Mar 23, 2012 5:44 am

Heheh. Yah. Though I like to refer to it as the helmet from hell. :)

I think I was too hard on N and B maps. I do use them, but I like to use them as you suggested at the end there... to add that subtle texture for realism. For instance, in the interior shot, one of my plans was to give the floor a rather uneven reflectivity along with a slight, fine ridged fractal. After all, even in a new interior with a hard floor, nothing is perfect, and on top of that, people actually walk on it. Imagine that! So of all the surfaces in there, the floor is going to see the most traffic and abuse right off the bat.

The Apollo helmet has a head pad in it that, for Maxwell, had two maps: a displacement map for the wrinkles in the over-fabric, and a tight-weave normal map for the Beta glass fabric texture (the real helmet actually used Teflon fabric, but that weave is too fine to really show up, and the Beta fabric had such a unique texture). The bubble had hairline scratches mapped as bump, as I recall. The reason I used displacement and normal maps in this case is that modeling such a thing would be incredibly difficult. That, to me, is where mapping really has its place: when modeling is not up to the task. Same for the scratches. But the scratches bitmap needed to be really hi-res to hold the fineness of each scratch.
Incidentally, for Octane, I'm having difficulty getting any program to turn the headpad displacement into a decent-quality, actual physical part of the mesh... not because I want to, but because Octane does not yet support displacement. Bummer.
Apollo Helmet.jpg
Here was a test of the headpad alone, but with an aluminized Beta fabric just for the heck of it:
Pad Test (Alum Beta Fabric).jpg
And if I had to do a stucco wall, I would never try to model that, no way.

But one reason I model almost anything that can be easily modeled is that when I do a model for myself, I know I am going to want just as many closeups as long or medium shots. I just don't know what I will want to show closeup until the model is finished and I go in and find angles that look really cool. Along the way I find other cool angles, and I just hate having to go back in and do more mapping when at that point I just want to point and click the camera. I'm in a different mind space then, and kind of glad the modeling is over. There's no way to avoid noticing things that need to be fixed, but other than that, I don't want the model to limit what and where I can shoot. LOD would be really nice, but I don't have that capability. What I really wish is that someone would build a plugin for ProE/Creo that allowed caging on export to whatever common format supports it. Then, with your suggestion of SubD in Octane, I would be lying if I said that wouldn't be totally cool.

But Rome was not built in a day, and Octane is still in beta. I think there is a very bright future for the standalone Octane.
Win7 | Geforce TitanX w/ 12Gb | Geforce GTX-560 w/ 2Gb | 6-Core 3.5GHz | 32Gb | Cinema4D w RipTide Importer and OctaneExporter Plugs.