This thread seems so theoretical and useless.
1. If you have a scene that renders in 2 seconds on a single GPU, you don't need multiple GPUs / render slaves in real life.
(If you are rendering a movie of 2 hours = 120 min = 7,200 seconds, that's 180,000 frames at 25 fps.
At 2 seconds / frame you spend 25 x 2 = 50 seconds rendering for 1 second of movie,
so 50 x 7,200 seconds = 50 x 2 hours = 100 hours, about 4 days of rendering. Let it render, mate, no worries.)
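The arithmetic above can be checked with a quick script (the 25 fps frame rate and 2 s/frame render time are taken from the example):

```python
# Back-of-envelope render-time estimate for a 2-hour movie.
FPS = 25                      # assumed frame rate
SECONDS_PER_FRAME = 2         # single-GPU render time per frame
MOVIE_SECONDS = 2 * 60 * 60   # 2-hour movie = 7,200 seconds

frames = FPS * MOVIE_SECONDS               # total frames to render
render_seconds = frames * SECONDS_PER_FRAME

print(frames)                  # 180000 frames
print(render_seconds / 3600)   # 100.0 hours
print(render_seconds / 86400)  # ~4.17 days
```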
2. If you use Octane, the output that you'll get from a single GPU in 2 seconds will not unleash the power of Octane. Use some other software / rendering algorithm.
3. It is so ridiculous to talk about distributing a 2-second frame across a network or a multi-GPU system in this forum.
Learn TCP/IP, try to understand the SYN/ACK handshake... Read about bus speeds, think of disk access timings...
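A rough cost model makes the point. All the constants below are illustrative assumptions (scene size, link speed, per-node dispatch latency), not measurements, but they show how the fixed costs of distribution swamp a 2-second render:

```python
# Sketch: cost of splitting one 2-second frame across N render nodes.
# Every constant here is an assumption for illustration only.
RENDER_S = 2.0         # single-GPU render time for the frame
SCENE_MB = 500         # scene data to ship out before rendering starts
LINK_MB_S = 1000 / 8   # 1 Gbit/s link expressed in MB/s
LATENCY_S = 0.05       # per-node setup: TCP handshake, job dispatch, etc.

def distributed_time(nodes):
    # Scene upload is serialized on the sender's single link.
    transfer = SCENE_MB / LINK_MB_S
    return nodes * LATENCY_S + transfer + RENDER_S / nodes

for n in (1, 2, 4, 8):
    print(n, round(distributed_time(n), 2))
```

Under these assumptions the 4-second scene upload alone already costs more than rendering the frame locally, no matter how many nodes you add.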
But what if 200 GPUs can render it in one second too? The other 300 GPUs bring nothing...
They are just waiting for the next job.
Win7 64 & Slackware 14 64 | 3x Zotac 580 amp & 1x MSI 680 | i7 3930K @4.8 | 32 GB | Asus rampage extreme IV