prodviz wrote:Great to see the network rendering working, cheers for posting.
On the render speed, would it be fair to say that the rig with 4 x 580's (which each have 512 cores) could be compared to just under 1 x Titan Black (which has 2880 cores)?
And that the rig of 4 x 680's (which each have 1536 cores) would be similar to just over 2 x Titans?
So, coupled with the 4 x Titan rig, the render power is similar to 7 Titan GPU's?
Still a chunk of render power

prodviz,
I've coined a measure that I call "TE," which stands for "Titan Equivalency." Since I use my Titans mostly for 3D rendering, I've defined TE in relation to the render time (in seconds) of OctaneRender's then-current Benchmark scene, using the Bare Feats test found at [
http://www.barefeats.com/gputitan.html ], with the original reference design (oRD) Titan as the baseline. Here's how TE works:
Since the Titan that Bare Feats used took 95 sec. to render that benchmark scene: (1) one GTX 680 that takes 190 sec. to render the scene gets a TE of .5; (2) two GTX 680s that together render it in 95 sec. get a combined TE of 1; (3) eight Titans that each render it in 95 sec. each get a TE of 1, but combined they have a TE of 8; and (4) my GTX 690 (tweaked), which renders the scene in 79 sec., gets a TE of 1.20 [95 / 79 = 1.20253164556962].
Keep the following in mind as the number of GPUs rendering together exceeds two. In OctaneRender, performance scales linearly when you use the same model of GPU with the same settings. With only two GPUs this is easy: if one renders the scene in 200 sec., the two of them will render it in 100 sec. But here's where it gets a little tricky: to render the scene in 50 sec., you need twice the number of GPUs it took to render it in 100 sec., i.e., 4. To render it in 25 sec., you need twice the number it took for 50 sec., i.e., 8.
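The TE arithmetic and the doubling rule above can be sketched in a few lines of Python. This is just my own illustration of the measure described in this post (the 95 sec. baseline is the Bare Feats oRD Titan time; the function names are made up for the example):

```python
import math

BASELINE_SEC = 95.0  # oRD Titan render time on the Octane benchmark scene

def te(render_time_sec: float, gpu_count: int = 1) -> float:
    """TE for a group of identical GPUs. Octane scales linearly,
    so n identical GPUs render the scene n times faster."""
    return gpu_count * (BASELINE_SEC / render_time_sec)

# (1) one GTX 680 at 190 sec. -> TE 0.5; (2) two of them -> TE 1.0
print(te(190))              # 0.5
print(te(190, 2))           # 1.0

# (4) tweaked GTX 690 at 79 sec. -> TE ~1.20
print(round(te(79), 2))     # 1.2

def gpus_needed(single_gpu_time_sec: float, target_time_sec: float) -> int:
    """How many identical GPUs it takes to hit a target render time.
    Halving the target time always doubles the GPU count."""
    return math.ceil(single_gpu_time_sec / target_time_sec)

# One GPU takes 200 sec.: 2 GPUs -> 100 sec., 4 -> 50 sec., 8 -> 25 sec.
print(gpus_needed(200, 100))  # 2
print(gpus_needed(200, 50))   # 4
print(gpus_needed(200, 25))   # 8
```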
Here are some examples using most of the GPUs that I run:
GPU Performance Review
I. My CUDA GPUs’ Titan Equivalency*/ (TE) from highest to lowest (fastest OctaneRender**/ Benchmark V1.20 score to lowest):
1) EVGA GTX 780 Ti Superclock (SC) ACX / 3gig (G) = TE of 1.319 (The current Titan Black should perform a little better since it has slightly higher base, boost and memory speeds, but unfortunately for me I don’t own one, so I can’t test it);
2) EVGA GTX 690 / 4G = TE of 1.202;
3) EVGA GTX Titan SC / 6G = TE of 1.185;
4) EVGA GTX 590C = TE of 1.13;
The original Reference Design (oRD) Titan that Bare Feats tested = TE of 1.0 (95 secs);
5) EVGA GTX 480 SC / 1.5G = TE of .613;
6) EVGA GTX 580 Classified (C) / 3G = TE of .594; and
7) Galaxy 680 / 4G = TE of .593.
The TE values were derived by dividing 95 secs. by the time it took each card to render the V1.20 OctaneRender Benchmark scene. That's why and how the slower cards got a fractional TE value of less than one (their render time in secs. - the denominator - was greater than 95). Conversely, a faster GPU gets a value greater than one. So my GTX 680 is only about 59% as fast as an oRD Titan, and my GTX 590C is 113% as fast as (i.e., 13% faster than) an oRD Titan. You can also add the values for multiple slower cards to see numerically how doubling their number compares to the original Titan on that benchmark metric. If, like me, you love math, go for it.
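Adding TE values for multiple slower cards works like this (a quick sketch using the figures from my list above; the implied pair render time is just 95 / TE back-solved, not a measured number):

```python
BASELINE_SEC = 95.0  # oRD Titan baseline from the Bare Feats test

te_values = {            # TE figures from the list above
    "GTX 780 Ti SC": 1.319,
    "GTX 580 C": 0.594,
    "GTX 680 4G": 0.593,
}

# Two Galaxy 680s together: 2 x 0.593 = 1.186 TE,
# i.e. a bit faster than one oRD Titan on this benchmark
pair_te = 2 * te_values["GTX 680 4G"]
print(pair_te)                             # 1.186

# Implied render time for the pair: 95 / 1.186 ~ 80.1 sec.
print(round(BASELINE_SEC / pair_te, 1))    # 80.1
```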
I have 180+ GPU processors across 16 tweaked/multi-OS systems, but the character limit prevents me from posting detailed stats.