I am in the same boat. I have a GTX 560 Ti, which has 384 CUDA cores on the old Fermi architecture, and was thinking of going to a GTX 670, which has 1,300-odd Kepler cores. But apparently that card only *just* matches CUDA scores with a GTX 580 in some CUDA-based software, and in others it comes in at under 50% of the speed!
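(Part of the confusion, as far as I understand it, is that a "CUDA core" isn't the same unit of work on Fermi as on Kepler: Fermi has 32 or 48 cores per SM and Kepler has 192, running at a lower clock, so the raw totals aren't directly comparable. Purely as a rough sketch of my own, and nothing from the dev team, here's the sort of thing I mean: a small CUDA runtime program that prints what each card actually reports, so benchmark posts could at least include comparable hardware info. The cores-per-SM figures are hard-coded from Nvidia's published architecture specs.)

```cpp
// Rough sketch only (mine, not the dev team's): list each CUDA device with its
// SM count and an estimated CUDA core count, so benchmark posts can include
// comparable hardware details. Cores per SM per Nvidia's published specs:
// Fermi CC 2.0 = 32, Fermi CC 2.1 = 48, Kepler CC 3.x = 192.
#include <cstdio>
#include <cuda_runtime.h>

static int coresPerSM(int major, int minor)
{
    if (major == 2) return (minor >= 1) ? 48 : 32;  // Fermi (GF11x vs GF100/110)
    if (major == 3) return 192;                     // Kepler (GK10x)
    return 0;                                       // unknown / future architecture
}

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        int perSM = coresPerSM(p.major, p.minor);
        printf("GPU %d: %s | CC %d.%d | %d SMs | ~%d CUDA cores | %.0f MHz\n",
               i, p.name, p.major, p.minor, p.multiProcessorCount,
               perSM * p.multiProcessorCount, p.clockRate / 1000.0);
    }
    return 0;
}
```

Run this way, a GTX 560 Ti (CC 2.1, 8 SMs) reports 384 cores and a GTX 670 (CC 3.0, 7 SMs) reports 1344, which is why the raw numbers look so lopsided compared with actual render times.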
So I assume the dev team has some rough benchmarks comparing the older and newer cards? It all seems very vague at the moment.
Can we get a list of benchmarks, even if rough, or at the very least a defined test, so that those with different cards can render and post results?
I am leaning towards just getting another GTX 560 Ti at the moment - $200 in Australia, and a combined 768 Fermi CUDA cores might be better value (if my power supply can handle it!)
Thanks!
(Looking forward to the Lightwave plugin!)
Edit: Since posting this message I found another page with user-submitted benchmarks for the 500 series and below; however, it needs updating to cover the 600 series, as it's quite out of date.
1. I am very interested in the spec differences between the 500 and 600 series. From what I've read, the 700 series is speculated for mid-2013, but it's still not expected to use the GK110 chip that's needed to bring CUDA performance back to 500-series levels.
2. Tied in with the CUDA issue is the question: is Octane Render fully optimised for Kepler cards yet? In other words, is v1.0 as good as it gets, and if not, what are the anticipated performance gains on the 600-series cards in the near future?
Thanks folks. (Did I mention I'm very excited about a working Lightwave plugin?)
Also, the CUDA 5 SDK (new Nvidia beta driver) with Blender is showing slowdowns of up to 30% on 500-series and older cards. I don't know about the 600 series, though.