Leiurus wrote:The thing is that the product performs very poorly on all platforms; it hasn't been a cold shower for the Octane community only, but for all users of GPU-accelerated render engines.
The thread I'm referring to is this one, from a Blender users' forum (I don't know if it's allowed to post links redirecting to other similar websites; however, the thread deals with the GTX 680 specifically and with GPU rendering in general, so I guess it should be OK. If not, just let me know and I'll edit my post):
http://blenderartists.org/forum/showthread.php?249681-Nvidia-GeForce-GTX680-released...
A particular post caught my attention, written by Stargeizer:
Ok, let's talk about the last line of this posting: "It's all about business".
That's correct, but what kind of business are we talking about? If Nvidia thinks that people will start buying their 2000-4000 € Quadro and Tesla cards instead of 300-500 € gamer cards just because of a misleading marketing strategy, they will fail miserably. Why? Because Quadro cards are mostly bought by big studios who can afford them. But here in Germany we have many small companies struggling to survive in this sick industry. In this area it's not so much about quality as about being cheaper than the competition, so prices for CGI jobs have been ruined over the last few years. Some companies have to work on 2-3 projects at the same time, working their asses off through the night and over the weekends just to pay their bills.
There is only a small financial headroom to spend money on software licenses and hardware. Quadros are simply not affordable. What we need are efficient solutions at fair price levels. This is why Octane is so attractive: it is available at a fair price, runs on affordable hardware and delivers stunning results in a short amount of time. But if I have to spend 2000+ € just for a CUDA card in the future, it might be better to buy 2-3 CPU-based render clients instead and continue to use traditional biased renderers.
As Stargeizer mentioned, AMD could turn things around. While CUDA seems to become slower with every newer 4.x release, OpenCL is making nice progress. The LuxGPU benchmark is an indicator of the performance that can be achieved with OpenCL on AMD's current line of gamer cards.
So, if Nvidia continues to focus on "big business" with their CUDA products, the situation could change quickly, making OpenCL the favourable solution for both software developers and customers.
Brecht van Lommel, the developer of Cycles, the new render engine in Blender, which can use either OpenCL or CUDA, mentioned in an interview that OpenCL is very similar to CUDA, and as long as you restrict your code to the feature set of OpenCL, it's easy to convert it to CUDA and to maintain both code branches. It would be interesting to see a speed comparison between Cycles OpenCL on a Radeon 7970 and Cycles CUDA on a GTX680. It wouldn't surprise me if the Radeon were faster.
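To give an idea of what "easy to convert" means in practice, here is a minimal sketch of the same vector-add kernel in both dialects. This is an illustrative toy example, not code from Cycles; the kernel name and parameters are made up, and it assumes the common feature subset Brecht refers to (no textures, no shared-memory tricks):

```cuda
// CUDA version: launched over a grid of thread blocks.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    // Each thread computes one element of the output.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

/* OpenCL version of the same kernel. The mapping is almost one-to-one:
 *   __global__           -> __kernel
 *   buffer pointers      -> need the __global address-space qualifier
 *   blockIdx/threadIdx   -> get_global_id(0)
 */
__kernel void vec_add(__global const float *a, __global const float *b,
                      __global float *c, int n)
{
    int i = get_global_id(0);
    if (i < n)
        c[i] = a[i] + b[i];
}
```

For kernels like this, a few preprocessor macros (or a small translation script) can cover the renamed qualifiers and index functions, which is why keeping both branches in sync is feasible for a single developer.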
But time will tell. Maybe all the drama will settle down in a few weeks/months, when drivers and software are optimized for the new architecture. It's too early to abandon all hope.
Using Octane 1.11 on Intel Core i7 3770K @ 4.4 GHz / 16GB RAM / EVGA GTX670 SC+ 4GB driver 306.97 / Win7 x64 SP1