I tested ORC for the first time - really nice and simple interface, though I wish more stats/feedback were available during or after rendering. Is there any way to determine how many GPUs were used during a job, or what the Ms/sec count was?
I tested a local standalone animation render with the very same file uploaded to ORC, and the cloud was only about 1.5 hrs faster. I was expecting much more of an improvement.
My local machine uses two GTX 780s (6 GB each) on air (non-reference coolers).
I know that when ORC is released as a final commercial service there will be some way of paying for faster speeds, but how do we calculate that cost against the render time we would get using our own local hardware?
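As a rough way of framing the question, a break-even comparison like the one below might work: if the cloud fee is less than the value of the local machine hours it frees up, the job is worth sending out. All figures and the pricing model here are placeholders I made up for illustration, not actual ORC pricing.

```python
# Hypothetical break-even sketch: is a cloud render worth the fee,
# given how much local machine time it frees up?
# All numbers are placeholders, not real ORC prices.

def cloud_worth_it(local_hours, cloud_hours, cloud_cost, local_hour_value):
    """Return True if the cloud fee costs less than the value
    of the local machine hours the cloud render saves."""
    hours_saved = local_hours - cloud_hours
    return cloud_cost <= hours_saved * local_hour_value

# Example: a 6 h local render that finishes in 4.5 h on the cloud
# (the 1.5 h gap mentioned above), with local time valued at $10/h.
print(cloud_worth_it(6.0, 4.5, 10.0, 10.0))  # True: $10 fee vs $15 of time saved
```

Of course this ignores the other benefit raised below (keeping the local machine free and stable while working), which is harder to put a number on.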
Regardless of speed, though, I am very excited by the prospect of sending jobs completely off my computer, freeing it for other things without the worry of a render crashing while I do other processor- or graphics-intensive work.