What if I use more than 20 GPUs?

Generic forum to discuss Octane Render, post ideas and suggest improvements.
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
mbutler2
Licensed Customer
Posts: 77
Joined: Thu Nov 05, 2015 7:58 pm

Follow-up note: I had trouble getting two masters, each with one slave, working concurrently. So for overnight renders it's just one master controlling two slaves, while the other machine renders on its own.
mbutler2
Licensed Customer
Posts: 77
Joined: Thu Nov 05, 2015 7:58 pm

I'm also noticing that it's not worth it to run so many GPUs at the same time. Still testing, but it seems that lots of time is lost coordinating so many GPUs over the network. At some point you're better off running multiple separate renders in parallel, rather than a master with too many slaves.
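A toy model makes that trade-off concrete: if the master pays a roughly fixed coordination cost per networked GPU (sync and film-buffer merges), effective speedup flattens and eventually reverses. The constants below are illustrative assumptions, not measured Octane values:

Code: Select all

# Toy model of net-render speedup when the master pays a fixed
# per-GPU coordination cost. Constants are illustrative assumptions,
# not Octane measurements.
RENDER_WORK = 100.0      # seconds of work for a single GPU
OVERHEAD_PER_GPU = 0.4   # assumed master-side sync cost per GPU (seconds)

def wall_time(n_gpus: int) -> float:
    """Ideal parallel time plus linear coordination overhead."""
    return RENDER_WORK / n_gpus + OVERHEAD_PER_GPU * n_gpus

for n in (1, 4, 8, 16, 24, 32):
    t = wall_time(n)
    print(f"{n:2d} GPUs: {t:6.2f} s, speedup x{RENDER_WORK / t:.1f}")

With these made-up numbers, wall time bottoms out around 16 GPUs and gets worse beyond that - the same shape mbutler2 is describing, and the reason two independent jobs can beat one giant master/slave job.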
User avatar
Goldorak
OctaneRender Team
Posts: 2321
Joined: Sun Apr 22, 2012 8:09 pm
Contact:

mbutler2 wrote:I'm also noticing that it's not worth it to run so many GPUs at the same time. Still testing, but it seems that lots of time is lost coordinating so many GPUs over the network. At some point you're better off running multiple separate renders in parallel, rather than a master with too many slaves.
Yes, and it will also save you time for multi-frame/segment rendering, as we do on ORC.
User avatar
Seekerfinder
Licensed Customer
Posts: 1600
Joined: Tue Jan 04, 2011 11:34 am

Goldorak wrote:Just a data point: we are in a Pac Bell building with nearly unlimited power and decent cooling. We still struggle to get more than 20x 1080 Tis working reliably...
Hi Goldorak,
I'm curious what the power & cooling limitation is that you're referring to here. I understand these requirements for multiple GPUs in a single box, but surely we're talking about a master and networked slaves in this case, so both the power and heat loads are distributed. Could you perhaps clarify?

Thanks,
Seeker
Win 8(64) | P9X79-E WS | i7-3930K | 32GB | GTX Titan & GTX 780Ti | SketchUP | Revit | Beta tester for Revit & Sketchup plugins for Octane
User avatar
Goldorak
OctaneRender Team
Posts: 2321
Joined: Sun Apr 22, 2012 8:09 pm
Contact:

Seekerfinder wrote:
Goldorak wrote:Just a data point: we are in a Pac Bell building with nearly unlimited power and decent cooling. We still struggle to get more than 20x 1080 Tis working reliably...
Hi Goldorak,
I'm curious what the power & cooling limitation is that you're referring to here. I understand these requirements for multiple GPUs in a single box, but surely we're talking about a master and networked slaves in this case, so both the power and heat loads are distributed. Could you perhaps clarify?

Thanks,
Seeker
We have a DC mini-room for internal rendering in one of our office units, running 8x Linux GPU boxes - and it's given us a lot of data points to consider for real-world use outside of a full DC.

In any office setup, while it may in theory be possible to keep going higher with more 4U boxes and more racks, at some point the power for our entire floor isn't enough to take us to even basic DC levels. And even before we hit that threshold, we see reliability start to drop really fast when the A/C can't keep up (e.g. when a heat wave hits LA, even a telecom building's power and cooling can be affected).

That doesn't mean it isn't possible to create DC-like power and cooling if you have a purpose-built (rather than ad-hoc) render farm, which is why we have an Enterprise licensing team that can work with customers who really need to go past 20 GPUs per rendered frame.
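To put rough numbers on that floor-power ceiling: 250 W is the published GTX 1080 Ti board power, but everything else below is an assumed configuration, so treat it as a sketch rather than a measurement.

Code: Select all

# Back-of-envelope power budget for ~20 GPUs in an office.
# GPU board power is the published 1080 Ti spec; box count, host
# power, PSU efficiency and PUE are all rough assumptions.
GPUS = 20
GPU_W = 250       # GTX 1080 Ti board power (published spec)
BOXES = 4         # assumed 5-GPU 4U boxes
HOST_W = 400      # assumed CPU/RAM/drives/fans per box
PSU_EFF = 0.90    # assumed PSU efficiency

it_load_w = (GPUS * GPU_W + BOXES * HOST_W) / PSU_EFF
total_w = it_load_w * 1.5   # assumed PUE ~1.5 for office-grade A/C

print(f"IT load:      {it_load_w / 1000:.1f} kW")   # ~7.3 kW
print(f"With cooling: {total_w / 1000:.1f} kW")     # ~11 kW

Roughly 11 kW of continuous draw is more than most office floors can dedicate to a single room, which is consistent with reliability falling apart well before the rack math says it should.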
User avatar
Seekerfinder
Licensed Customer
Posts: 1600
Joined: Tue Jan 04, 2011 11:34 am

Goldorak wrote:
Seekerfinder wrote:
Goldorak wrote:Just a data point: we are in a Pac Bell building with nearly unlimited power and decent cooling. We still struggle to get more than 20x 1080 Tis working reliably...
Hi Goldorak,
I'm curious what the power & cooling limitation is that you're referring to here. I understand these requirements for multiple GPUs in a single box, but surely we're talking about a master and networked slaves in this case, so both the power and heat loads are distributed. Could you perhaps clarify?

Thanks,
Seeker
We have a DC mini-room for internal rendering in one of our office units, running 8x Linux GPU boxes - and it's given us a lot of data points to consider for real-world use outside of a full DC.

In any office setup, while it may in theory be possible to keep going higher with more 4U boxes and more racks, at some point the power for our entire floor isn't enough to take us to even basic DC levels. And even before we hit that threshold, we see reliability start to drop really fast when the A/C can't keep up (e.g. when a heat wave hits LA, even a telecom building's power and cooling can be affected).

That doesn't mean it isn't possible to create DC-like power and cooling if you have a purpose-built (rather than ad-hoc) render farm, which is why we have an Enterprise licensing team that can work with customers who really need to go past 20 GPUs per rendered frame.
Makes sense, thanks Goldorak.
Best,
Seeker
Win 8(64) | P9X79-E WS | i7-3930K | 32GB | GTX Titan & GTX 780Ti | SketchUP | Revit | Beta tester for Revit & Sketchup plugins for Octane
User avatar
Lewis
Licensed Customer
Posts: 1100
Joined: Tue Feb 05, 2013 6:30 pm
Location: Croatia
Contact:

Hi Goldorak, very nice info, thank you.

I have a question.

Is it possible (feature request) for you guys to enable multi-frame rendering in Octane? I.e., the master sends scene data to all available GPUs over the network at the start, then assigns one frame per GPU instead of having all GPUs work on a single frame - basically the same thing ORC does, but locally across master + slave machines. I have a feeling that could make network transfers less intensive: since not all GPUs are the same type/speed, each one finishes its first frame at a different time, so each new frame would only send data to whichever GPU(s) are free, and the whole job would finish faster. The CPU sits idle most of the time while rendering anyway, so this would also keep it busier.

Also, with that system, if any GPU fails we would lose only that frame instead of the whole render stopping, like it currently does when a network slave fails and waits for user interaction to continue. Then, when the queue of frames is done, the controller just checks for missing/failed frames and reassigns them to any available/idle GPU in the chain. That's how most network render controllers work nowadays.
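For what it's worth, here is a minimal sketch of the pull-based scheduling Lewis describes: idle GPUs pull the next frame, and a failed frame goes back into the queue instead of halting the job. All names are hypothetical - this is not Octane's actual network-render API:

Code: Select all

# Minimal frame-per-GPU scheduler sketch (hypothetical - not Octane's API).
# Each idle GPU pulls the next frame; failed frames are requeued, so one
# bad slave costs a frame, not the whole job.
import queue
import threading

def render_frame(gpu_id: int, frame: int) -> None:
    """Stand-in for a real per-GPU render call; may raise on failure."""
    print(f"GPU {gpu_id}: rendered frame {frame}")

def worker(gpu_id: int, frames: "queue.Queue[int]") -> None:
    while True:
        try:
            frame = frames.get_nowait()
        except queue.Empty:
            return                  # nothing left: this GPU goes idle
        try:
            render_frame(gpu_id, frame)
        except Exception:
            frames.put(frame)       # requeue; a real controller would cap retries
        finally:
            frames.task_done()

frames: "queue.Queue[int]" = queue.Queue()
for f in range(1, 101):             # frames 1..100
    frames.put(f)

threads = [threading.Thread(target=worker, args=(g, frames)) for g in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Because the queue is pull-based, faster GPUs naturally take more frames, which gives the mixed-speed load balancing Lewis expects without any explicit per-GPU scheduling.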

cheers
--
Lewis
http://www.ram-studio.hr
Skype - lewis3d
ICQ - 7128177

WS AMD TRPro 3955WX, 256GB RAM, Win10, 2 * RTX 4090, 1 * RTX 3090
RS1 i7 9800X, 64GB RAM, Win10, 3 * RTX 3090
RS2 i7 6850K, 64GB RAM, Win10, 2 * RTX 4090