Re: GPU performance

Posted: Thu Jul 29, 2010 2:54 pm
by mib2berlin
Hi, the LightWave plugin is in an early state, but it works:
http://www.refractivesoftware.com/forum ... f=5&t=2925
2/2a.
http://www.refractivesoftware.com/forum ... f=5&t=2754
2b.
No, you could have a small Quadro/CUDA card for your display and two CUDA monsters (e.g. GTX 480) for rendering with Octane.
Or you could use 3 similar cards, use one for the display while you work, and switch to all 3 cards for the final render.
If you have different cards, the one with the smallest memory determines your usable memory. All scene data (textures) must fit into the memory of every card.
Different cards also don't scale so well.
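A minimal sketch of that rule (the card names and memory sizes below are hypothetical, just to illustrate): the usable limit is the minimum across the cards used for rendering, not the sum.

[code]
# Sketch: usable VRAM in a multi-GPU setup (hypothetical figures).
# The whole scene (geometry + textures) must fit on every card, so the
# smallest card sets the limit; memory does not pool across GPUs.

card_vram_mb = {"GTX 480": 1536, "GTX 480 #2": 1536, "Quadro (display)": 512}

# Only the cards actually used for rendering count.
render_cards = ["GTX 480", "GTX 480 #2"]

usable_mb = min(card_vram_mb[c] for c in render_cards)
total_mb = sum(card_vram_mb[c] for c in render_cards)

print(f"Pooled total (not usable): {total_mb} MB")
print(f"Usable per-scene limit:    {usable_mb} MB")
[/code]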
3.
IIRC, instances come in the next or the following beta, but displacement/SSS later on.

We have an expert here for Cubix, so maybe he can explain it more precisely.

Cheers mib

Re: GPU performance

Posted: Thu Jul 29, 2010 3:22 pm
by radiance
The next 2.3 version supports up to 8096x8096 render resolution, and it takes approximately 1500MB to get an 8096x8096 film into your GPU.
So you'd need a GPU with at least 2GB of memory to still be able to add some geometry to it.

I think if you render at 4000x4000, or 6000x6000 at most, you should have ample memory left to add a decent scene on a GPU sporting 2GB.
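As a rough back-of-the-envelope check of those figures (taking the ~1500MB for an 8096x8096 film as given, which works out to roughly 24 bytes per pixel), the film buffer at lower resolutions would be approximately:

[code]
# Rough film-buffer memory estimate, derived from the figure above:
# ~1500 MB for an 8096x8096 film implies roughly 24 bytes per pixel.

bytes_per_pixel = 1500 * 1024**2 / 8096**2   # ~24 bytes/pixel (derived assumption)

for w, h in [(4000, 4000), (6000, 6000), (8096, 8096)]:
    film_mb = w * h * bytes_per_pixel / 1024**2
    print(f"{w}x{h}: ~{film_mb:.0f} MB for the film buffer alone")

# 4000x4000 -> ~366 MB, 6000x6000 -> ~824 MB, 8096x8096 -> ~1500 MB
[/code]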

Radiance

Re: GPU performance

Posted: Fri Jul 30, 2010 9:20 am
by kacperspala
Thanks for the very quick reply, it cleared up a lot.

Another question arose concerning "If you have different cards, the one with the smallest memory determines your usable memory. All scene data (textures) must fit into the memory of every card."

Does that mean that VRAM doesn't scale with the number of units? I mean, having 3 x 1.5 GB does not equal having 4.5 GB of VRAM total, as the whole scene/geometry/textures must fit into every GTX unit's VRAM?
Meaning I would need to get 2x Tesla with 4GB to be able to render high-res scenes.

Aside from VRAM, what is the estimated rendering/viewport speed difference (in %) between a Tesla 1060 and a GTX 480?

Thanks again :)

Re: GPU performance

Posted: Fri Jul 30, 2010 9:22 am
by radiance
Yes, that's what it means.
I have not yet tested the speed difference between those 2 cards.

Radiance

Re: GPU performance

Posted: Fri Jul 30, 2010 2:20 pm
by IndyBlueprints
Hi Guys, new here.

I am running an XFX Nvidia 9800 GX2, a dual-GPU card. I just downloaded the demo yesterday and tried a render of the spaceships scene. Running one GPU I got a total render time of 43 minutes; running both, I got a render time of 58 minutes. Isn't this backwards??

I am not running in SLI.

I just opened the benchmark OBJ file, didn't change the camera view at all, and running one GPU I am getting 4.3 megasamples and around 8 frames per second, with a total render time of 37 minutes.

Running 2 GPUs, I'm getting 2.25 megasamples and 4.25 frames per second. I don't know the total time yet, but it will certainly be more.

Any experience with this card? Last year when I bought it, it was one of the better cards (not the best, but good reviews).

Re: GPU performance

Posted: Fri Jul 30, 2010 3:07 pm
by radiance
That's definitely not normal.
Even if SLI is enabled or there is some other issue, you should at least get the same speed with 2 GPUs as with one, not slower.

Can you try with a complex scene?
For example an interior with pathtracing, with the pathtracing kernel set to maxdepth = 1024 and the rrprob parameter to 0.65 or so.
That will make both cards work hard, and if that doesn't offer more than just one GPU, there's a configuration/driver issue on your system.
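A minimal sketch of how you might compare the two runs (the sample counts below are placeholders, not Octane output):

[code]
# Sketch: quick check of multi-GPU scaling on a heavy scene.
# Render the same scene for the same wall-clock time with 1 GPU enabled,
# then with 2, and compare the samples accumulated. Placeholder numbers.

samples_1gpu = 120_000_000   # hypothetical samples after N seconds, 1 GPU
samples_2gpu = 205_000_000   # hypothetical samples after the same N seconds, 2 GPUs

scaling = samples_2gpu / samples_1gpu
print(f"2-GPU scaling: {scaling:.2f}x")
if scaling <= 1.0:
    print("No gain (or a loss) -> check that SLI is disabled and drivers are current.")
[/code]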

Radiance

Re: GPU performance

Posted: Fri Jul 30, 2010 4:48 pm
by IndyBlueprints
OK, most of that is Greek to me, sorry.

At this point, I don't think I am interested in exploring the use of 2 GPUs at once, because when I am rendering with Octane on both GPUs, Octane appears to completely tie up the video cards; I tried creating a 3D view in my program, and the computer locked up.

Can you tell me how the speed and frames per second I listed above compare to the 200 or 400 series Nvidia cards? Is my card super slow, average, or pretty good?

Thanks,

Re: GPU performance

Posted: Fri Jul 30, 2010 7:03 pm
by radiance
Hi,

You should run the benchmark scene, and before you start it, change the kernel type from directlighting to pathtracing.
Then look in the resources forum; there are a lot of posts in there with benchmarks of that scene from all kinds of setups.

Just open the scene, double-click on the 'preview configuration' macro node, then click on the 'directlighting' render kernel and swap it with 'pathtracing' in the node inspector on the right,
then click on the mesh node to start the render.

I think you still have SLI enabled somehow; otherwise you would get better performance with more than one GPU, not the reverse.
The GTX 400 series does offer a more linear speedup though: GTX 200 series cards scale at ~1.7x for 2 GPUs, while the GTX 400 series scales at ~1.99x for 2 GPUs.
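Applying those scaling figures to the numbers reported earlier in the thread gives a rough idea of what a healthy 2-GPU result would look like (assuming the 9800 GX2 scales at least as well as the GTX 200 series figure, which is an assumption):

[code]
# Rough expectation check: with healthy scaling, two GPUs should land
# somewhere between ~1.7x and ~2x the single-GPU rate, never below it.

single_gpu_msamples = 4.3   # single-GPU rate reported on the benchmark scene
observed_two_gpu = 2.25     # rate reported with both GPUs enabled

expected_low = single_gpu_msamples * 1.7
expected_high = single_gpu_msamples * 1.99

print(f"Expected 2-GPU rate: {expected_low:.1f}-{expected_high:.1f} megasamples")
print(f"Observed 2-GPU rate: {observed_two_gpu} megasamples -> points to a config issue")
[/code]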

Radiance