590 GTX + two 580 GTX - scalability/performance test

acc24ex
Licensed Customer
Posts: 1481
Joined: Fri Mar 19, 2010 10:58 pm
Location: Croatia

No way, everything at stock voltages, as I had no control over the voltage with the Gainward overclock utility, and it was running stable like nothing special happened.

I'll keep the 720 MHz limit in mind for the next session.

I was just sliding those sliders like a chimp would :).. It works just fine. I figured the software was safe to play with, did a couple of test runs, and then just pushed those sliders all the way up. I couldn't control the voltage, so that's it; the Gainward overclocking utility was all it took, and it changed the clock speeds on all of the cards simultaneously.
I changed values while Octane was working; it crashed quietly two times, then I just kept the values in the butter zone and it worked OK. Now I am seeing that a water cooling rig would come in quite handy (any suggestions?), and a couple of GTX 590s could easily be overclocked to 1500 MHz no problem; you can get gains like 15-20%.
So now I can really inspect the bottlenecks in Octane: in some heavy scenes I lowered the clock and the Octane speed was the same?! I assume there was some unoptimized processing going on, either in my scene or in the internal programming.

And could you have a go at the benchmark scene please, so we can get some information circulating here.

Now I really hope this GPGPU programming thing catches on :)
Timmaigh
Licensed Customer
Posts: 168
Joined: Mon Nov 01, 2010 9:52 pm

acc24ex wrote:No way, everything at stock voltages, as I had no control over the voltage with the Gainward overclock utility, and it was running stable like nothing special happened... So now I can really inspect the bottlenecks in Octane: in some heavy scenes I lowered the clock and the Octane speed was the same?!

Well, if it works, it works... I reckon there is really no danger anymore with the latest drivers: if the overclock is too big and the power draw too high, OCP (over-current protection) will shut the card down or downclock it to prevent any damage. BTW, if you read the link I posted about the power consumption of CUDA apps, maybe Octane is really not as power hungry as benchmarks/FurMark/games, and can therefore be run at higher clocks without OCP kicking in.
But as for the bottlenecks you mention when lowering the clocks... this sounds very much like OCP in action. GPU-Z will probably still read the higher clocks, but the performance does not scale up as it should, because OCP is already limiting the power draw. It certainly works this way in 3DMark 11 (at least from what I read): past certain clocks (the 720-730 MHz range, according to a guy testing the card on the Overclock.net forum) he starts to get lower scores than at stock, most probably because of OCP.

I reckon you need to bump the frequency up very slowly and check whether the megasamples keep increasing. I can confirm that at 700 MHz I get 6.03 Ms, which is 0.2 more than at 670 and 0.7 more than at the default 607, so the performance certainly scales as expected up to this frequency. I have not gone any higher so far, as my temps are already 88°C under load :mrgreen: BTW, I use the Gainward tool as well; no way I am touching the voltage via Afterburner or the like...
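One way to check this systematically is to compare the megasamples gain against the clock gain: a flattening ratio hints that OCP (or some other bottleneck) is kicking in. A minimal Python sketch using the numbers from this post; the 0.95 "clean scaling" cutoff is just an assumption:

Code:
# Check whether Octane's render speed scales with the core clock.
# Data points: (core clock in MHz, megasamples/s) from this thread.
samples = [(607, 5.33), (670, 5.83), (700, 6.03)]

base_clock, base_ms = samples[0]
for clock, ms in samples[1:]:
    clock_gain = clock / base_clock
    perf_gain = ms / base_ms
    efficiency = perf_gain / clock_gain   # ~1.0 means clean scaling
    flag = "" if efficiency > 0.95 else "  <- possible OCP/bottleneck"
    print(f"{clock} MHz: clock x{clock_gain:.3f}, perf x{perf_gain:.3f}, "
          f"efficiency {efficiency:.2f}{flag}")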

One last interesting thing: I get fewer megasamples on the Trench benchmark scene at 1200x800 than at 1920x1200... I suppose the GPU is not fully loaded at the smaller resolution or something.
Intel Core i7 980x @ 3.78GHz - Gigabyte X58A UD7 rev 1.0 - 24GB DDR3 RAM - Gainward GTX590 3GB @ 700/1400/3900 MHz - 2x Intel X25-M G2 80GB SSD - WD Caviar Black 2TB - WD Caviar Green 2TB - Fractal Design Define R2 - Win7 64bit - Octane 2.57
Timmaigh
Licensed Customer
Posts: 168
Joined: Mon Nov 01, 2010 9:52 pm

Checking those GPU-Z screenshots again, you have some weird numbers in the stock clocks area, and it says DDR4 instead of DDR5... you will probably need the latest version of GPU-Z; I reckon the one you have is an older version.

I have to say, I am very tempted to try it at 775 MHz myself :mrgreen: ...need...to...resist :lol:
Intel Core i7 980x @ 3.78GHz - Gigabyte X58A UD7 rev 1.0 - 24GB DDR3 RAM - Gainward GTX590 3GB @ 700/1400/3900 MHz - 2x Intel X25-M G2 80GB SSD - WD Caviar Black 2TB - WD Caviar Green 2TB - Fractal Design Define R2 - Win7 64bit - Octane 2.57
Timmaigh
Licensed Customer
Posts: 168
Joined: Mon Nov 01, 2010 9:52 pm

OK, I could not resist:

[GPU-Z screenshot of the overclocked card]

It ran stable for 30 min at 772/1545/4000 MHz, i.e. GTX 580 default clocks, at stock voltage. The speed increased to 6.65 megasamples on Trench/1920x1200 pathtracing, compared to 5.34/5.39 at stock clocks (607 MHz). 6.65/5.39 = 1.23, so that is a 23 percent performance improvement, which is quite nice. I still need to render something for 2-3 hours to be 100 percent sure it is stable.
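The same arithmetic as a quick check (Python, numbers from this post): the 23% speedup comes from a 27% core overclock, so the scaling is only slightly below linear.

Code:
# 772 vs. 607 MHz overclock compared against the measured megasamples gain
stock_clk, oc_clk = 607, 772
stock_ms, oc_ms = 5.39, 6.65
print(f"clock gain: {oc_clk / stock_clk:.3f}x")  # ~1.272
print(f"perf gain:  {oc_ms / stock_ms:.3f}x")    # ~1.234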

However, I also played StarCraft 2, 1920x1200 full details, with multi-GPU OFF, i.e. with one core only, and eventually it crashed and I needed to reboot the computer. I was Skyping with my friend while playing, and Skype kept working, so it was probably down to the OC.

Now this leads me to the conclusion that Octane is really not as "heavy" on the GPU as games; I reckon 3DMark would crash at 772 MHz and stock voltage within 2 minutes. It would be really nice if the devs could shed a bit of light on this topic: what parts of the core does Octane use, is it expected to be less demanding than games, etc...
Intel Core i7 980x @ 3.78GHz - Gigabyte X58A UD7 rev 1.0 - 24GB DDR3 RAM - Gainward GTX590 3GB @ 700/1400/3900 MHz - 2x Intel X25-M G2 80GB SSD - WD Caviar Black 2TB - WD Caviar Green 2TB - Fractal Design Define R2 - Win7 64bit - Octane 2.57
GeoPappas
Licensed Customer
Posts: 429
Joined: Fri Mar 26, 2010 5:31 pm

Timmaigh wrote:Now this leads me to the conclusion that Octane is really not as "heavy" on the GPU as games
I'm not sure how you got to this conclusion, but Octane is about as "heavy" on a GPU as you can get. If you look at the GPU-Z screenshot that you have, it is showing GPU Load at 94%.

At this point, my understanding is that multiple GPUs are not being used to their full effect, but that is currently being worked on and should be fixed in the next release.
Timmaigh
Licensed Customer
Posts: 168
Joined: Mon Nov 01, 2010 9:52 pm

GeoPappas wrote:I'm not sure how you got to this conclusion, but Octane is about as "heavy" on a GPU as you can get. If you look at the GPU-Z screenshot that you have, it is showing GPU Load at 94%.

Well, it crashed in SC2 when I played with one core only, but it can render with both cores in Octane no problem? This is definitely NOT what I expected.
My conclusion is partially based on this test and on the link I posted on the previous page of this topic, the one to the Nvidia forums regarding the power consumption of CUDA apps. There might be some parts of the chip, such as the rasterizers or PolyMorph engines, and god knows what else, which are used by games but not by Octane. I do not know; that is why I would like to see some basic explanation from the Octane devs of how things work with regard to the Nvidia hardware...

Just FYI, in case somebody could somehow accidentally take these tests and my conclusions as some kind of attack on Octane, as if it did not work as it should because it does not crash :D, it is definitely not that... I am doing this out of pure curiosity: I want to know how far I can actually go with the overclock, why I can go so far compared to games, etc. It is actually great that I can run Octane at the same speed as two GTX 580s at stock; a very satisfying feeling, as those cards together cost 400 euros more than my 590...
Intel Core i7 980x @ 3.78GHz - Gigabyte X58A UD7 rev 1.0 - 24GB DDR3 RAM - Gainward GTX590 3GB @ 700/1400/3900 MHz - 2x Intel X25-M G2 80GB SSD - WD Caviar Black 2TB - WD Caviar Green 2TB - Fractal Design Define R2 - Win7 64bit - Octane 2.57
GeoPappas
Licensed Customer
Posts: 429
Joined: Fri Mar 26, 2010 5:31 pm

Timmaigh wrote:Well, it crashed in SC2 when I played with one core only, but it can render with both cores in Octane no problem? This is definitely NOT what I expected... I do not know; that is why I would like to see some basic explanation from the Octane devs of how things work with regard to the Nvidia hardware...
Thanks for the response.

The link that you posted is pretty interesting, but I am still skeptical of the claims. The OP of the article states that "CUDA apps use significantly lower wattage than graphics apps" and that "a fully loaded CUDA app would use about 65% of the wattage of a fully loaded graphics apps", but never really backs up those claims. He shows power usage for a homegrown CUDA app but never compares it to anything else (such as a game), so the statements he is making are pure speculation at this point.
Timmaigh
Licensed Customer
Posts: 168
Joined: Mon Nov 01, 2010 9:52 pm

GeoPappas wrote:The link that you posted is pretty interesting, but I am still skeptical of the claims... So the statements he is making are pure speculation at this point.
Yup, it is not good enough proof for me either. You can, however, find out what the power consumption in games is: every single hardware site measures power consumption when reviewing GPUs, and they test GPUs with games only. That is, BTW, part of the problem; if the reviewers finally realized it is not 2007 anymore and GPUs are no longer used only for playing games, we would not need to speculate now.
Anyway, he was comparing the TDP of the GPU (which, in the case of an overclocked GTX 590, can easily be exceeded with stress apps and perhaps games too) with the power consumption of his CUDA app; I do not think there is anything wrong with that. The actual question is whether this lower power consumption is the rule or the exception when it comes to CUDA apps. Basically, we would need to measure it with Octane ourselves to draw any definitive conclusions; otherwise it is really all assumptions and speculation.
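For actually measuring it, one rough approach (a sketch only, not tested here; nvidia-smi must support these query fields, and power.draw is only reported on GPUs whose driver exposes power monitoring through NVML, which many GeForce cards do not) would be to poll nvidia-smi once per second while Octane renders:

Code:
import subprocess
import time

# Log power draw, SM clock and GPU utilization once per second.
# On unsupported GPUs, power.draw prints as "[Not Supported]".
QUERY = ["nvidia-smi",
         "--query-gpu=power.draw,clocks.sm,utilization.gpu",
         "--format=csv,noheader"]

for _ in range(60):                      # log for one minute
    reading = subprocess.check_output(QUERY, text=True).strip()
    print(time.strftime("%H:%M:%S"), reading)
    time.sleep(1)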
Intel Core i7 980x @ 3.78GHz - Gigabyte X58A UD7 rev 1.0 - 24GB DDR3 RAM - Gainward GTX590 3GB @ 700/1400/3900 MHz - 2x Intel X25-M G2 80GB SSD - WD Caviar Black 2TB - WD Caviar Green 2TB - Fractal Design Define R2 - Win7 64bit - Octane 2.57