GeForce GTX Titan
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
I think the timeline had already slipped from 2013 to 2014 for Maxwell, so we might view the Titan as a stopgap measure, something that creates a bit of marketing buzz. The 7xx series will most likely be little more than overclocked versions of the existing 6xx cards, largely, I suspect, just to have something in the market until the next generation is ready. I would be surprised if Maxwell slips again to 2015.
i7-3820 @4.3Ghz | 24gb | Win7pro-64
GTS 250 display + 2 x GTX 780 cuda| driver 331.65
Octane v1.55
This is probably going to be a lengthy post. I'm no professional reviewer or benchmarker; just a guy looking for some answers.
I am currently building a workstation, primarily for VFX simulations and secondarily for high-resolution renders/output. I've been doing a lot of research on the $2,500 price difference between the Tesla K20 and the GTX Titan, as this has caused quite a stir, for me anyway. I'm sure I'm not the only one torn over which to go with. It's easy to do the simple math and see that you can get 8,064 CUDA cores (3x Titan) for the price of one Tesla K20(x) with 2,688 CUDA cores. So there must be a reason why Nvidia released this card (the Titan) while still holding their enterprise cards at the same price. It turns out there is: more cards/cores does not necessarily mean faster in every environment.
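To make that "simple math" concrete, here is a quick sketch. The prices are assumptions on my part: the Titan's widely reported $999 launch MSRP plus the $2,500 difference mentioned above for the Tesla.

```python
# Back-of-the-envelope cores-per-price comparison: three Titans vs one
# Tesla K20(x). Prices are assumed launch figures, not quotes.
TITAN_PRICE = 999
TESLA_PRICE = TITAN_PRICE + 2500      # ~$3,499, per the $2,500 gap
TITAN_CORES = 2688                    # same core count as the K20X
TESLA_CORES = 2688

titans_per_tesla = TESLA_PRICE // TITAN_PRICE   # whole Titans per Tesla
total_titan_cores = titans_per_tesla * TITAN_CORES

print(titans_per_tesla, total_titan_cores)      # 3 8064
```

Raw core count per dollar is not the whole story, though, which is the point of the rest of this post.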
For more info on Kepler architecture and technology, see the following:
http://www.nvidia.com/content/PDF/keple ... 012_LR.pdf
http://www.nvidia.com/content/PDF/keple ... epaper.pdf
http://developer.download.nvidia.com/as ... UDA_v2.pdf
http://www.nvidia.com/object/nvidia-kepler.html
GeForce GTX Titan & Tesla K20(x):
- The following is exclusive to the Tesla K20(x) GPUs:
Professional Drivers & 24/7 Support
Stability due to ECC RAM (Important for simulations)
Hyper-Q (Crucial for simulations)
GPU Virtualisation
- The following is found on both the Tesla K20 and GTX Titan:
Dynamic Parallelism
FP32 and FP64 (single precision and double precision)
2,496 CUDA Cores on the K20 -- 2,688 on both the K20x and the Titan
5GB RAM on the K20 -- 6GB on both the GTX Titan and the Tesla K20x
Ref: I am not including a reference for these, as this is common data that can be found on the back of any box or on review sites.
- Depending on the code/application, one GK110 without Hyper-Q (Titan) is up to 2.5x slower than one with Hyper-Q (Tesla) -- on paper! This seems to be the main thousand-dollar selling point of this generation's Tesla cards.
Ref: http://www.nvidia.com/docs/IO/122874/K2 ... -brief.pdf (See pages 2 & 3)
Additionally, developers and CUDA users (including GPU-accelerated renderers) may not get full speed without Hyper-Q. The following is an excerpt from a PCMag article:
Though based on the same silicon as the Tesla K20 and Tesla K20X, this is a gaming part through and through. Don't think that you're going to build a supercomputer in your basement to unlock the secrets of the universe. Nvidia's drivers will recognize that this is a gaming card, and only give you the features you need for gaming. GPU-computing features like Hyper-Q aren't available to Titan users, but those hooks are really for developers and CUDA computing users anyway. One CUDA feature, Double Precision, is selectable in the Nvidia control panel, but it's off by default. If these terms don't mean anything to you, don't worry, your games will still look pretty at HD resolution or better.
Ref: http://www.pcmag.com/article2/0,2817,2415528,00.asp
- The GTX Titan is primarily a high-end gamer card, and secondarily an entry-level compute card (see the first paragraph of the reference). I reached out to AnandTech to ask about a compute-only benchmark comparison (like OctaneRender); they responded on Twitter stating, "we didn't have access to a K20, but performance should be fairly similar." My interpretation is that this further confirms that if all you are doing with the Titan is GPU rendering, this is the card for you.
Ref: http://www.anandtech.com/show/6760/nvid ... n-part-1/2
- The compute and FP64 functionality needs to be unlocked manually within the driver control panel if you are going to use this card for compute work -- it is not fully unlocked by default. One SMX unit is also disabled on the Titan (see the second-to-last paragraph of the reference).
Ref: http://techreport.com/review/24381/nvid ... n-reviewed
Ref 2: Please see the PCMag reference above that reinforces this.
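For a feel of what that FP64 toggle is worth, here is some rough peak-throughput arithmetic. The clock and rate figures below are the commonly quoted spec-sheet numbers, not measurements, and in practice enabling full FP64 also lowers the Titan's clocks somewhat.

```python
# Approximate theoretical peak FLOPS for the GTX Titan (assumed spec-sheet
# values: 2,688 cores, 837 MHz base clock, FMA counted as 2 FLOPs/cycle).
cores = 2688
base_clock_ghz = 0.837
fma_ops_per_cycle = 2

fp32_tflops = cores * fma_ops_per_cycle * base_clock_ghz / 1000  # ~4.5
fp64_default = fp32_tflops / 24    # GeForce default: 1/24 of FP32 rate
fp64_enabled = fp32_tflops / 3     # with the control-panel toggle: 1/3 rate

print(round(fp32_tflops, 1), round(fp64_enabled, 1))  # 4.5 1.5
```

So the control-panel switch is the difference between roughly 0.2 and 1.5 TFLOPS of double precision, on paper.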
- Nvidia refers to this as an entry-level card because these are GK110 chips that are perfectly usable but were not up to the stringent standards of the GK110s found in the Tesla line. This allows professionals to still enter the compute market -- as opposed to being locked out completely -- and it still makes Nvidia money. Smart move.
Ref: https://forums.geforce.com/default/topi ... nt=3745201 (See moderator comment).
My Conclusion:
It is my understanding that if you are only going to be using the CUDA cores for rendering/GPU acceleration in the realm of 3D (like OctaneRender), then the Titan is what you want to go with -- although, in theory and on paper, it may or may not take more than one Titan to surpass the compute power of one Tesla K20(x).
If, however, you are in the realm of simulations in VFX, science, or manufacturing, buying a Titan will not help you out, especially due to the lack of Hyper-Q. Your simulations will be slower, even with the 3:1 Titan-to-Tesla card ratio. One Tesla in theory contains 2.5 Titans' worth of simulation performance (thanks to Hyper-Q), plus more stability for 24/7 operations. This ratio explains the $2,500 price difference and why Nvidia is still selling the Tesla line at its MSRP.
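Here is the ratio argument as bare arithmetic, with the big caveat that Nvidia's 2.5x Hyper-Q figure is an "up to" number and that splitting a simulation across multiple cards is an assumption, not a guarantee.

```python
# Normalized per-card simulation throughput, per Nvidia's "up to 2.5x"
# Hyper-Q claim (an assumed best case, not a measurement).
titan = 1.0
tesla = 2.5 * titan

# Single-GPU job: the Tesla wins outright.
assert tesla > titan

# A job that scales perfectly across 3 Titans edges ahead (3.0 vs 2.5),
# so the Tesla's advantage holds when the work can't be split cleanly
# across cards, which is common for tightly coupled simulations.
assert 3 * titan > tesla
```

In other words, the per-card gap is what matters for the simulation workloads Hyper-Q targets.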
Once again, I'm no professional reviewer -- I just know that information comparing these two cards is scarce, and it's nice to have it all in one place in case anyone was curious. Because I will be doing simulations and more VFX-based work, the Tesla looks to be where I'm heading. For others, hopefully you can save some money.
Thanks to tomGlimps for discussing this and pushing me to do some research.
Like he said on Twitter, lay out your goals first and everything else will fall into place. Thanks again.
Mikel
Windows 8.1 | OctaneRender® v2.0 | Intel Core i7 4770K | 32GB RAM | x2 GeForce GTX 780 Ti SC (3GB/ea.)
Twitter: MikelMNJ
- gabrielefx
- Posts: 1701
- Joined: Wed Sep 28, 2011 2:00 pm
I see you work for Nvidia....
We will see if the Titan outperforms the Tesla K20x, which I have already tested.
For €4,000 (4 cards) I think the Titan is a good deal for us.
With the Tesla K20x we can't build a system with 4 GPUs, because we have to add a K5000.
And because the K6000 is not available yet, our professional GPU rig would be crippled.
With 4 Titans we can build a workstation that stays under our desks.

quad Titan Kepler 6GB + quad Titan X Pascal 12GB + quad GTX1080 8GB + dual GTX1080Ti 11GB
I ordered two of them today, from Gigabyte. I don't think the vendor matters; they are all the same. I get the tax back, so it comes to about €1,600 for the two.
The shop called me and asked if I was sure I wanted to order two, because of the price, lol.
If they are that fast and I get some good-paying jobs to earn back my investment, I will maybe get another one or two. But that would need a new power supply, and I'm not in the mood to disassemble my workstation while I have jobs running here. I will see next week, maybe even Saturday.
PURE3D Visualisierungen
Sys: Intel Core i9-12900K, 128GB RAM, 2x 4090 RTX, Windows 11 Pro x64, 3ds Max 2024.2
I don't work for Nvidia, haha. Just wanted to bring to light that the Tesla is more for simulation and the Titan is more for rendering. I would love to see a benchmark in both environments with both of these cards; no one out there seems to want to do this :/ The Quadro K5000 is really good at pushing 100 million polygons, by the way, if you deal with engineering assemblies and CAD data a lot.
Mikel
Windows 8.1 | OctaneRender® v2.0 | Intel Core i7 4770K | 32GB RAM | x2 GeForce GTX 780 Ti SC (3GB/ea.)
Twitter: MikelMNJ
- FrankPooleFloating
- Posts: 1669
- Joined: Thu Nov 29, 2012 3:48 pm
Since the GPU rendering market is heating up so much, it would kick ass if they could somehow come up with a proprietary PCI-E card with something like 4 connections (proprietary, I guess, since USB3 and eSATA etc. could never do the job) that let you hook up 4 external, self-water-cooled, stackable GPUs (er, mini GPU boxes) that work like external USB drives... and make them in Titan and 690 versions etc., and ultimately be able to hook 4 external GPUs to each of 4 PCI-E cards: 16 total!! Not that I would ever be able to afford this, but some of you dudes obviously could.
I am not talking about something like Cubix et al., but something more flexible and cheap. Plus, they would not need to run off of the power supply. Who would not be on this like a bum on a baloney sammich? I would love this, and I'd be one of the first blokes in line... one can dream.
I am planning on building a pretty insane, all-watercooled 4-GPU rig this summer. nVidia and partners, if you are reading this, get busy and make this happen before I build this bad boy... if you come out with something like this a month after the beast is built, I will be quite sad.
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles
The Cubix rack-mount 16-card solution sounds exactly like what you are describing: proprietary interface, 16 GPUs (Titans, 690s, Teslas), all in one unit, without needing four separate enclosures. The only things missing from your description are the power-supply independence (you still need power for the extra units :O) and the water cooling.
http://www.cubixgpu.com/Products/Rackmount
Mikel
Windows 8.1 | OctaneRender® v2.0 | Intel Core i7 4770K | 32GB RAM | x2 GeForce GTX 780 Ti SC (3GB/ea.)
Twitter: MikelMNJ
- FrankPooleFloating
- Posts: 1669
- Joined: Thu Nov 29, 2012 3:48 pm
No thanks. I am not interested in buying their $564,251 (or whatever they are charging) boxes. I want to be able to add more GPUs on my terms, as needed, and not have to build a system around only ever being able to have 4 GPUs... but also not spend tens of thousands of dollars for option B. I want option C.
What I have described could (and should) be done, seeing where things are going. Buy an expansion card for a couple hundred. Buy a self-water-cooled, enclosed GPU (for whatever that would cost for card + enclosure and WC shite)... plug it in. Go.
Sorry if you work for Cubix, but their boutique prices are ridiculous.
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles
3x GTX Titan - Win7 Pro x64 - Intel i7-3930K - ASUS Rampage IV Formula-X79