promity wrote: We have a very thin deadline and we were hoping for Otoy's Cloud Render capabilities. But when work on the scene was completed and we started exporting the scene (animation of about 4000 frames, a scene with many video sequences and animated characters), we found that only 300 frames were packed in 4 hours.
Having recently done quite a bit of R&D and testing with ORC for a ~22,000-frame project with a tight deadline: this is exactly why you do a full pipeline test, with data as close to real-world as possible, well before the due date.
The Otoy folks were really great about answering my questions and troubleshooting my test files and renders (particularly @Daniel, thanks guys!), but ORC turned out not to be a good fit for this render job.
So, to add to this thread and hopefully be helpful to future potential ORC users, here are my findings.
In my testing, as of Summer 2019, the ORC estimating algorithm is inaccurate and/or ORC render efficiency is very low for large numbers of relatively fast-rendering frames, i.e. anything under about four minutes per frame on a single 1080 Ti, or about one minute per frame at an OctaneBench score of 800-1000. Below that threshold, ORC does not appear to be optimized for such quick frames, and it may vastly underestimate render time and, consequently, cost.
For this job we had 22,000 frames, but each frame was relatively quick: between one and two minutes on a single 1080 Ti. That's still several weeks of rendering 24/7 on one of our average workstations. On ORC, the estimates for my test scenes came out at anywhere from 1/5 to 1/20 of the actual render time, producing final costs 5 to 20+ times higher than quoted. Only by cranking the resolution and/or samples far beyond what we needed could I get ORC's estimates and its actual performance to begin agreeing. Ouch! At the estimated prices, our tests were within budget, but at actual prices the job would have wiped out a hefty chunk of the project's profit.
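To put numbers on the thresholds above, here's a quick back-of-the-envelope sketch. The 1080 Ti's OctaneBench score (~200) is my assumption, and the model assumes render time scales inversely with OB score, i.e. ideal scaling with no per-frame overhead, which is precisely what ORC seems to violate on quick frames:

```python
# Rough scaling of per-frame render time by OctaneBench score.
# Assumes time is inversely proportional to OB score (ideal, overhead-free).

OB_1080TI = 200  # approximate OctaneBench score of one 1080 Ti (my assumption)

def scaled_frame_minutes(minutes_on_1080ti: float, target_ob: float) -> float:
    """Ideal per-frame time on hardware with the given OctaneBench score."""
    return minutes_on_1080ti * OB_1080TI / target_ob

# The threshold from the text: ~4 min/frame on one 1080 Ti
# is about 1 min/frame at OB 800.
print(scaled_frame_minutes(4.0, 800))  # prints 1.0

# Our job: 22,000 frames at ~1.5 min/frame on one 1080 Ti
frames, per_frame = 22_000, 1.5
total_days = frames * per_frame / 60 / 24
print(f"{total_days:.0f} days of 24/7 rendering on one workstation")  # ~23 days
```

That's where the "several weeks on one workstation" figure comes from, and why an off-by-5x (let alone 20x) cost estimate hurts so much at this frame count.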
So, after a few hundred dollars in test renders and a lot of back and forth with Otoy support, my conclusion is that the current incarnation of ORC is optimized for jobs with relatively beefy render demands per frame. If you are looking at OctaneBench numbers of 1000+ per frame-minute, ORC gives reasonably good estimates and is a good value. I have a few years of experience managing and rendering with other engines on AWS, using Thinkbox Deadline and on-demand render node instances, and my guess is that ORC distributes frames (or contributes GPUs to frames) in a way that incurs quite a bit of back-end file-management overhead, and that overhead is what's killing estimate accuracy. In my email exchanges, the Otoy guys agreed with my general assessment, without getting into technical particulars.

(For this particular job, and based on our own in-house Octane network rendering as well as AWS/Deadline jobs, I suspect it would be far more efficient to have two or three GPUs working on each frame, with many frames rendering in parallel, than to have many GPUs working on each frame. In Deadline terms, that's smaller Groups with more Tasks instead of larger Groups with few, or even one, Task. I'm guessing ORC uses the second approach, or is heavily weighted in that direction. But that's just a guess.)
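To illustrate why group size matters, here's a toy throughput model. The overhead and work figures, and the assumption that per-frame overhead (scene load, file management) doesn't parallelize across GPUs, are purely my own guesses, not how ORC or Deadline actually schedule work:

```python
# Toy model: each frame carries fixed overhead that does not parallelize,
# plus GPU work that splits evenly across the GPUs assigned to the frame.

def frames_per_minute(total_gpus: int, gpus_per_frame: int,
                      work_min: float = 1.5, overhead_min: float = 1.0) -> float:
    """Throughput when total_gpus are split into groups of gpus_per_frame."""
    frame_time = overhead_min + work_min / gpus_per_frame   # minutes per frame
    parallel_frames = total_gpus // gpus_per_frame          # frames in flight
    return parallel_frames / frame_time

# 60 GPUs total: small groups with many parallel frames sustain far higher
# throughput than one big group grinding through frames one at a time.
for g in (2, 3, 20, 60):
    print(f"{g:2d} GPUs/frame -> {frames_per_minute(60, g):.1f} frames/min")
```

Under these (assumed) numbers, throughput drops steadily as the group size grows, because more and more of the fleet sits behind the non-parallelizable per-frame overhead. That's the intuition behind preferring smaller Groups with more Tasks for fast frames.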
In our shop, if a job's average frames take more than two or three minutes on a single 1080 Ti, the scene usually ends up on my workbench for optimization. That being the case, ORC (as currently configured, mid-2019) is not a good fit for us. I'll be really interested to see if/how Otoy's distributed, token-based render system works once it's publicly available; it could be a huge game-changer: https://rendertoken.com/
To be clear, I'm not knocking ORC. If it fits your render job's profile, it's fast, easy to use, and reasonably priced for the convenience. Just don't assume it's a good fit: do some careful real-world testing before depending on it as a resource.