This is probably going to be a lengthy post. I'm no professional reviewer or benchmarker, just a guy looking for some answers.
I am currently building a workstation, primarily for VFX simulations and secondarily for high-resolution renders/output. I've been doing a lot of research on the $2,500 difference between the Tesla K20 and the GTX Titan, as this has caused quite the stir--for me, anyway. I'm sure I'm not the only one torn over which to go with. It's easy to do the simple math and see that you can get 8,064 CUDA cores (three Titans) for the price of one Tesla K20(x) with its 2,688 CUDA cores. So there must be a reason why Nvidia released this card (the Titan) and is still holding its enterprise cards at the same price--and it turns out there is: more cards/cores does not necessarily mean faster in every environment.
For more info on Kepler architecture and technology, see the following:
http://www.nvidia.com/content/PDF/keple ... 012_LR.pdf
http://www.nvidia.com/content/PDF/keple ... epaper.pdf
http://developer.download.nvidia.com/as ... UDA_v2.pdf
http://www.nvidia.com/object/nvidia-kepler.html
GeForce GTX Titan & Tesla K20(x):
- The following is exclusive to the Tesla K20(x) GPUs:
Professional Drivers & 24/7 Support
Stability due to ECC RAM (Important for simulations)
Hyper-Q (Crucial for simulations)
GPU Virtualisation
- The following is found on both the Tesla K20 and GTX Titan:
Dynamic Parallelism
FP32 and FP64 (single precision and double precision)
2,688 CUDA cores on the GTX Titan and Tesla K20x -- 2,496 on the Tesla K20
6GB RAM on the GTX Titan and Tesla K20x -- 5GB on the Tesla K20
Ref: I am not including a reference for these, as this is common data that can be found on the back of any box or review site. (If you want to verify the numbers on your own card, see the sketch below.)
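For anyone who wants to double-check those specs on whatever card actually ends up in their box, a few lines of CUDA will print them at runtime. This is only a minimal sketch (the file name and layout are my own); it uses the standard cudaGetDeviceProperties call, and the 192-cores-per-SMX figure is specific to Kepler parts like the GK110.

```
// check_specs.cu -- minimal sketch: query a Kepler card's specs at runtime.
// Compile with: nvcc check_specs.cu -o check_specs
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // Kepler (compute capability 3.x) packs 192 CUDA cores into each SMX.
        const int coresPerSMX = 192;

        printf("Device %d: %s\n", dev, prop.name);
        printf("  Compute capability : %d.%d\n", prop.major, prop.minor);
        printf("  SMX count          : %d\n", prop.multiProcessorCount);
        printf("  CUDA cores (approx): %d\n", prop.multiProcessorCount * coresPerSMX);
        printf("  Global memory      : %.1f GB\n",
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        printf("  ECC enabled        : %s\n", prop.ECCEnabled ? "yes" : "no");
    }
    return 0;
}
```

On paper, a Titan or K20x should report 14 SMX units (one of the GK110's 15 is fused off) and a K20 should report 13, which is where the 2,688 / 2,496 core counts come from.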
- Depending on the code/application, one GK110 without Hyper-Q (Titan) can be up to 2.5x slower than one with Hyper-Q (Tesla) -- on paper! This seems to be the main thousand-dollar selling point of this generation's Tesla cards.
Ref:
http://www.nvidia.com/docs/IO/122874/K2 ... -brief.pdf (See page 2 & 3)
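To make the Hyper-Q point concrete: Hyper-Q gives the GK110 32 hardware work queues instead of the single queue older parts had, so kernels launched from many streams (or many MPI processes) can actually run side by side instead of piling up behind false dependencies. Below is a rough sketch in the spirit of Nvidia's simpleHyperQ sample -- the file name, the 8-stream count and the ~10 ms spin time are arbitrary choices of mine. Where Hyper-Q is fully exposed, the total time stays close to the runtime of one pair of kernels; where the launch pattern falls back to a single queue (as the references above suggest happens without Hyper-Q), it creeps toward the fully serialized time.

```
// hyperq_sketch.cu -- illustrative sketch of the launch pattern Hyper-Q helps with.
// Compile with: nvcc -arch=sm_35 hyperq_sketch.cu -o hyperq_sketch
#include <cstdio>
#include <cuda_runtime.h>

// Busy-wait on the GPU for roughly `cycles` clock ticks.
__global__ void spin(long long cycles) {
    long long start = clock64();
    while (clock64() - start < cycles) { }
}

int main() {
    const int nStreams = 8;                       // arbitrary, for illustration
    cudaStream_t streams[nStreams];
    for (int i = 0; i < nStreams; ++i)
        cudaStreamCreate(&streams[i]);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    long long cycles = (long long)prop.clockRate * 10;   // ~10 ms (clockRate is in kHz)

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);

    // Two back-to-back kernels per stream, interleaved across streams.
    // With Hyper-Q's 32 hardware queues the streams overlap; with a single
    // hardware queue this launch order creates false dependencies and the
    // kernels largely serialize.
    for (int i = 0; i < nStreams; ++i) {
        spin<<<1, 1, 0, streams[i]>>>(cycles);
        spin<<<1, 1, 0, streams[i]>>>(cycles);
    }

    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("%d streams x 2 kernels took %.1f ms\n", nStreams, ms);
    // Near ~20 ms => the streams ran concurrently; near nStreams x 20 ms => serialized.

    for (int i = 0; i < nStreams; ++i)
        cudaStreamDestroy(streams[i]);
    return 0;
}
```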
Additionally, developers and CUDA users (including GPU-accelerated renderers) may not get full speed without Hyper-Q. The following is an excerpt from a PCMag article:
"Though based on the same silicon as the Tesla K20 and Tesla K20X, this is a gaming part through and through. Don't think that you're going to build a supercomputer in your basement to unlock the secrets of the universe. Nvidia's drivers will recognize that this is a gaming card, and only give you the features you need for gaming. GPU-computing features like Hyper-Q aren't available to Titan users, but those hooks are really for developers and CUDA computing users anyway. One CUDA feature, Double Precision, is selectable in the Nvidia control panel, but it's off by default. If these terms don't mean anything to you, don't worry, your games will still look pretty at HD resolution or better."
Ref:
http://www.pcmag.com/article2/0,2817,2415528,00.asp
- The GTX Titan is primarily a high-end gamer card and secondarily an entry-level compute card (see the first paragraph of the reference). I reached out to AnandTech to ask about a compute-only benchmark comparison (like OctaneRender), and they responded on Twitter stating,
"we didn't have access to a K20, but performance should be fairly similar." My interpretation is that this further confirms that if all you are doing with the Titan is GPU rendering, this is the card for you.
Ref:
http://www.anandtech.com/show/6760/nvid ... n-part-1/2
- The full FP64 compute functionality needs to be unlocked manually within the driver control panel if you are going to use this card for compute performance--it is not fully enabled by default (a rough way to check the toggle is sketched after the references below). One SMX unit is also disabled on the Titan (see the second-to-last paragraph of the reference).
Ref:
http://techreport.com/review/24381/nvid ... n-reviewed
Ref 2: Please see the PCMag reference above that reinforces this.
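Since that double-precision switch lives in the driver control panel rather than in any API call I know of, the simplest way to see whether it actually took effect is to time a double-precision kernel before and after toggling it. This is only a rough sketch (the file name, kernel, grid size and iteration count are arbitrary choices of mine); with the Titan's full-rate FP64 enabled, the timed run should drop noticeably.

```
// fp64_check.cu -- rough sketch: time a double-precision kernel to see
// whether the driver's double-precision toggle changed anything.
// Compile with: nvcc -arch=sm_35 fp64_check.cu -o fp64_check
#include <cstdio>
#include <cuda_runtime.h>

// Each thread runs a long chain of dependent double-precision FMAs.
__global__ void fp64_burn(double *out, int iters) {
    double a = 1.0 + threadIdx.x * 1e-7;
    double b = 1.000001;
    for (int i = 0; i < iters; ++i)
        a = fma(a, b, 1e-9);                              // double-precision FMA
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;       // keep the result live
}

int main() {
    const int blocks = 1024, threads = 256, iters = 20000;  // arbitrary sizes
    double *d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(double));

    // Warm up once so the timed run excludes one-time setup cost.
    fp64_burn<<<blocks, threads>>>(d_out, iters);
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    fp64_burn<<<blocks, threads>>>(d_out, iters);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("FP64 kernel: %.1f ms\n", ms);
    // Run once with the control-panel toggle off and once with it on;
    // a large drop in time means the full-rate FP64 mode is active.

    cudaFree(d_out);
    return 0;
}
```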
- Nvidia refers to this as an entry-level compute card because these are GK110 chips that are perfectly usable but were not up to the stringent standards of the GK110s found in the Tesla line. This allows professionals to still enter the compute market--as opposed to being locked out completely--and it still makes Nvidia money. Smart move.
Ref:
https://forums.geforce.com/default/topi ... nt=3745201 (See moderator comment).
My Conclusion:
It is my understanding that if you are only going to be using the CUDA cores for rendering/GPU acceleration in the realm of 3-D (like OctaneRender), then the Titan is what you want to go with--although, in theory and on paper, it may or may not take more than one Titan to surpass the compute power of one Tesla K20(x).
If, however, you are in the realm of simulations in VFX, science, or manufacturing, buying a Titan will not help you out, mainly due to the lack of Hyper-Q. Your simulations will be slower, even with the 3:1 Titan-to-Tesla card ratio. One Tesla is, in theory, worth 2.5 Titans for simulation (thanks to Hyper-Q)--and offers more stability for 24/7 operation. This ratio explains the $2,500 price difference and why Nvidia is still selling the Tesla line at its MSRP.
Once again, I'm no professional reviewer--I just know that information comparing these two cards is scarce out there, and it's nice to have it all in one place in case anyone was curious. Because I will be doing simulations and more VFX-based work, the Tesla looks to be where I'm heading. For others, hopefully this can save you some money.
Thanks to tomGlimps for the discussion and for pushing me to do some research.

Like he said on Twitter, lay out your goals first and everything else will fall into place. Thanks again.
Mikel