New Build: GTX 580 3GB or GTX 680 4GB using Octane Render

A public forum for discussing and asking questions about the demo version of Octane Render.
Refracty
Licensed Customer
Posts: 1599
Joined: Wed Dec 01, 2010 6:42 pm
Location: 3D-Visualisierung Köln

I would get 2 580s instead of 1 680, unless you need a few more textures and a gig more VRAM.
Two 580s will give you about 215% of the render speed of a single 680: a single 580 is slightly faster than a 680 in Octane's kernels, and Octane scales almost linearly across GPUs.
alessandro boncio
Posts: 3
Joined: Fri Nov 16, 2012 10:46 pm

Thanks Refracty,
just another question: is the reason you say that a driver problem? Maybe a 680 will get faster in 3 or 6 months with new drivers?
Anyway, thanks for your kindness. I want to ask you one more thing: this is the PC I'm going to buy; can you tell me whether it will be OK?
Intel i7-3930K 3.20 GHz
Asus P9X79 motherboard
GTX 680 or 2x GTX 580
16 GB HyperX RAM, 1600 MHz

thanks again,
I can't wait to buy Octane, so please help me choose a PC first ;)
Alessandro
bwise1701
Posts: 3
Joined: Thu Nov 29, 2012 11:54 pm

So I've been trying to figure out which cards currently give the best Octane performance in Mac configurations. I only care about rendering, not game performance. If I understand what I've read correctly, it would be the GTX 570 if I don't want to deal with an external expansion chassis, so my question is this:

With a Jan 2012 Mac Pro upgraded to 10.8, can I run two GTX 570 cards, and will Octane take advantage of them both during single-frame renders?

And would I need to remove my ATI card?
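
(A quick way to check, I assume, would be a minimal CUDA device-query program. This is just a sketch, assuming the CUDA toolkit is installed; devquery.cu is a made-up file name. Compile with: nvcc devquery.cu -o devquery)

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            printf("CUDA error: %s\n", cudaGetErrorString(err));
            return 1;
        }
        printf("CUDA sees %d device(s)\n", count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* Both GTX 570s should be listed here; the ATI card will
               not appear, because it is not a CUDA device. */
            printf("Device %d: %s, compute %d.%d, %zu MB VRAM\n",
                   i, prop.name, prop.major, prop.minor,
                   prop.totalGlobalMem / (1024 * 1024));
        }
        return 0;
    }

If both 570s show up there, my understanding is that Octane can be pointed at any CUDA device it detects.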

thanks
PeterN
Licensed Customer
Posts: 5
Joined: Wed Apr 04, 2012 12:17 pm

I am in the same boat. I have a GTX 560 Ti, which has 384 CUDA cores on the old Fermi architecture, and was thinking of going to a GTX 670, which has 1300-odd Kepler cores. But apparently that card only *just* matches CUDA scores with a GTX 580 on some CUDA-based software, and on other software it comes in at under 50% as fast!
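
(For anyone comparing raw core counts across generations: the cores-per-multiprocessor figure differs by architecture, which is roughly why Kepler numbers don't line up one-to-one with Fermi's. A sketch of the arithmetic, using per-architecture values from NVIDIA's compute-capability specs; this is just an illustrative device query, nothing Octane-specific:)

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* CUDA cores per multiprocessor, by compute capability. */
    static int coresPerSM(int major, int minor)
    {
        if (major == 2 && minor == 0) return 32;   /* Fermi GF100/GF110 (570/580)  */
        if (major == 2 && minor == 1) return 48;   /* Fermi GF104/GF114 (560 Ti)   */
        if (major == 3)               return 192;  /* Kepler GK104/GK107 (670/680) */
        return -1;                                 /* unknown architecture */
    }

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            int cps = coresPerSM(p.major, p.minor);
            if (cps < 0) { printf("%s: unknown architecture\n", p.name); continue; }
            printf("%s: %d SMs x %d cores = %d CUDA cores @ %.0f MHz\n",
                   p.name, p.multiProcessorCount, cps,
                   p.multiProcessorCount * cps, p.clockRate / 1000.0);
        }
        return 0;
    }

A 560 Ti comes out as 8 x 48 = 384, a 580 as 16 x 32 = 512, and a 670 as 7 x 192 = 1344; Kepler also dropped Fermi's doubled shader clock, which is part of why 1344 Kepler cores can land near (or under) a 580's 512 Fermi cores in compute workloads.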

So I assume the dev team has some rough benchmarks comparing the older and newer cards? It all seems very vague at the moment.

Can we get a list of benchmarks, even if rough, or at the very least a defined test, so that those of us with different cards can render and post benchmarks?
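
(In the meantime, even something as crude as a fixed CUDA kernel timed with events could serve as a shared yardstick. A hypothetical sketch, not an Octane benchmark; the kernel, grid size, and iteration count are arbitrary:)

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Dependent floating-point chain to keep the cores busy. */
    __global__ void burn(float *out, int iters)
    {
        float v = 1.0f + threadIdx.x;
        for (int i = 0; i < iters; ++i)
            v = v * 1.000001f + 0.5f;
        out[blockIdx.x * blockDim.x + threadIdx.x] = v;
    }

    int main(void)
    {
        const int blocks = 1024, threads = 256, iters = 100000;
        float *d_out;
        cudaMalloc(&d_out, blocks * threads * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        burn<<<blocks, threads>>>(d_out, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("kernel time: %.1f ms\n", ms);

        cudaFree(d_out);
        return 0;
    }

It wouldn't say anything about Octane's actual kernels, but numbers from the same little program across a 560 Ti, 570, 580 and 680 would at least be comparable with each other.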

I am leaning towards just getting another GTX 560 Ti at the moment: $200 in Australia, and 768 Fermi CUDA cores in total might be better value (if my power supply can handle it!)

Thanks!

(looking forward to the LightWave plugin :)

Edit: Since posting this message, I found another page with user-submitted benchmarks for the 500 series and below; however, it needs an update for the 600 series, as it is very out of date.

1. I am very interested in the spec differences between the 500 and 600 series, since from what I've read the 700 series is speculated for mid-2013, but will still not be the GK110 chipset that is needed to get CUDA performance back to 500-series levels.

2. Tied to the CUDA issue is this question: is Octane Render fully optimised for Kepler cards yet? In other words, is v1.0 as good as it gets, and if not, what are the anticipated performance gains on the 600-series cards in the very near future?

Thanks folks. (Did I say I'm very excited about a working LightWave plugin? :)

Also, the CUDA 5 SDK (new NVIDIA beta driver) with Blender is showing slowdowns of up to 30% on 500-series and older cards. I don't know about the 600 series though.