Where I Drove My Stake - Into The 1st Titan
Nvidia's release of the original ("o") reference design ("RD") GTX Titan set a high-water mark for low-cost, fast GPU computing, so I coined a new measure: "TE," which stands for "Titan Equivalency." Since I use my Titans (and GTX 480s, 580, 590s, 680s, 690 and 780 Tis) mostly for 3D rendering, I defined TE in relation to OctaneRender's then-current Benchmark Scene (version 1.20), measured in seconds, using the Barefeats test results found at [
http://www.barefeats.com/gputitan.html ] as a baseline.
How I Drove My Stake - With Mathematical Ratios
At base, I'm comparing render-time ratios: the oRD Titan's render time of 95 sec. is divided by the render time of the GPU card in question. Here's my TE equation:
TE of focus GPU = 95 (the oRD Titan's render time in sec.) divided by the render time (in sec.) of the focus GPU;
where GPUs faster than the oRD Titan get values greater than 1, and slower cards get values less than 1.
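The ratio above can be sketched as a small helper (the function name is my own; the 95 sec. baseline is the Barefeats oRD Titan result):

```python
# Titan Equivalency (TE): the oRD Titan's baseline render time divided by
# the focus GPU's render time on the same Octane 1.20 benchmark scene.
ORD_TITAN_SECS = 95  # Barefeats' oRD Titan result, in seconds

def titan_equivalency(render_secs: float) -> float:
    """Return the TE of a GPU that renders the benchmark in render_secs."""
    return ORD_TITAN_SECS / render_secs

print(round(titan_equivalency(190), 3))  # GTX 680 at 190 sec. -> 0.5
print(round(titan_equivalency(79), 3))   # tweaked GTX 690 at 79 sec. -> 1.203
```

A card matching the oRD Titan's 95 sec. gets exactly 1.0, which is what anchors the whole scale.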
Why I Drove My Stake - To Make More Informed Purchase and Resource Allocation Decisions, As Well As More Accurate Job Completion Estimates
Since the oRD Titan that Barefeats tested took 95 sec. to render that Octane version 1.20 benchmark scene: (1) one GTX 680 that takes 190 sec. to render that scene gets a TE of .5; (2) two GTX 680s that together render that scene in 95 sec. get a combined TE of 1; (3) eight Titans that each render that scene in 95 sec. each get a TE of 1, but combined they have a TE of 8; and (4) my GTX 690 (tweaked), which renders that scene in 79 sec., gets a TE of 1.20 [95 / 79 = 1.2025].
Keep the following in mind as the number of GPUs rendering together exceeds two. In OctaneRender, performance scales linearly when using the same model GPUs with the same settings. With only two GPUs this is easy: if one renders the scene in 200 sec., then the two of them will render it in 100 sec. But here's where it gets a little tricky: to render that scene in 50 sec. takes twice the number of GPUs it took to render it in 100 sec., which means four. To render it in 25 sec. takes twice the number it took at 50 sec., which means eight. Keep this math in mind, mull it over a little, and it'll begin making perfect sense.
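That doubling pattern can be checked with a quick sketch (assuming OctaneRender's linear scaling, so N identical GPUs render in 1/N the time of one):

```python
import math

SINGLE_GPU_SECS = 200  # one GPU renders the scene in 200 sec.

def gpus_needed(target_secs: float) -> int:
    """GPUs required to hit target_secs, assuming perfect linear scaling."""
    return math.ceil(SINGLE_GPU_SECS / target_secs)

for target in (100, 50, 25):
    # each halving of the target time doubles the GPU count: 2, 4, 8
    print(target, gpus_needed(target))
```

Halving the target render time always doubles the hardware bill, which is why going from "fast" to "twice as fast" is the expensive step.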
With this information, I can determine precisely, for example, how many GTX 480 SCs it'll take in one of my systems to equal the performance of one oRD Titan or one GTX 690, etc.
Here are more Titan Equivalencies:
My GPU Performance Review
My CUDA GPUs’ Titan Equivalencies
*/ (TE), from highest to lowest (fastest OctaneRender Benchmark V1.20 score to slowest):
1) EVGA GTX 780 Ti Superclock (SC) ACX / 3gig (G) = TE of 1.319
2) EVGA GTX 690 / 4G = TE of 1.202
3) EVGA GTX Titan SC / 6G = TE of 1.185
4) EVGA GTX 590 Classified (C) = TE of 1.13
Titan that Barefeats tested = TE of 1.0
5) EVGA GTX 480 SC / 1.5G = TE of .613
6) EVGA GTX 580C / 3G = TE of .594
7) Galaxy 680 / 4G = TE of .593
*/ For example, my EVGA GTX 780 Ti Superclock (SC) ACX / 3gig (G), with a TE of 1.319, is 1.319 times as fast as the Titan that Barefeats tested, but my EVGA GTX 480 SC / 1.5G, with a TE of .613, is only about 61% as fast as that Titan. Because of OctaneRender's perfect linear scaling, two EVGA GTX 480 SC / 1.5G cards would have a combined TE of 1.226 (2 x .613), making them 1.226 times as fast as the Titan that Barefeats tested and a tad faster than my EVGA GTX 690 / 4G, which has a TE of 1.202.
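Under that same linearity assumption, the TEs of a mixed rig simply add, and the sum converts straight back into an estimated benchmark time. A minimal sketch (helper names are my own):

```python
# Combined TE of a multi-GPU rig is the sum of the cards' TEs,
# assuming OctaneRender's linear scaling holds across the cards.
ORD_TITAN_SECS = 95  # Barefeats' oRD Titan baseline, in seconds

def combined_te(tes):
    """Total TE of a rig, given each card's individual TE."""
    return sum(tes)

def rig_render_secs(tes):
    """Estimated benchmark render time for the whole rig."""
    return ORD_TITAN_SECS / combined_te(tes)

two_480s = [0.613, 0.613]  # two GTX 480 SCs from the list above
print(round(combined_te(two_480s), 3))      # 1.226 -- edges out the GTX 690's 1.202
print(round(rig_render_secs(two_480s), 1))  # about 77.5 sec. for the pair
```

This is the same comparison as the footnote: the pair of 480s lands just ahead of the single 690.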