Using Fermi and non-Fermi cards simultaneously?

Generic forum to discuss Octane Render, post ideas and suggest improvements.
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
ROUBAL
Licensed Customer
Posts: 2199
Joined: Mon Jan 25, 2010 5:25 pm
Location: FRANCE
Contact:

Hi, I'm still trying to save money with the goal of purchasing better graphics cards at the end of the year.

As it is really difficult at an affordable cost (I mean without purchasing a Tesla C2050 or C2070)
to get both high rendering speed (for animations, for example) and a large amount of memory (for high resolution billboards and very detailed scenes), I was thinking of purchasing a GTX 480 (Fermi) and a Tesla C1060 (not Fermi, but the only card with that much VRAM I could afford).

The GTX 480 has 480 cores and 1.5 GB of VRAM.
The Tesla C1060 has 240 cores and 4 GB of VRAM.

In theory, using the GTX 480 and the C1060 simultaneously in my Cubix box, I would get 720 cores with 1.5 GB of VRAM when I need speed, and 240 cores with 4 GB of VRAM (Tesla C1060 alone) when I need more memory.
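The arithmetic above can be sketched as follows. This is a toy illustration, not anything Octane-specific: when several GPUs render the same scene together, their cores add up, but the scene must fit in every card's memory, so usable VRAM is the smallest card's. The card specs come from the post; the field names are made up for the example.

```python
def combined_capacity(cards):
    """Return (total cores, usable VRAM in GB) for a set of active cards.

    Cores add up across cards, but since every card must hold the whole
    scene, usable VRAM is limited by the smallest card.
    """
    return sum(c["cores"] for c in cards), min(c["vram_gb"] for c in cards)

gtx480 = {"name": "GTX 480", "cores": 480, "vram_gb": 1.5}
c1060 = {"name": "Tesla C1060", "cores": 240, "vram_gb": 4.0}

print(combined_capacity([gtx480, c1060]))  # speed mode: both cards
print(combined_capacity([c1060]))          # memory mode: Tesla alone
```

This reproduces the two configurations described: 720 cores with 1.5 GB, or 240 cores with 4 GB.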

This is the theory, but in fact I don't know if Fermi and non-Fermi cards can operate together, and if so, whether there is a slowdown caused by the slower card.

It is important, because not using the 240 cores of the Tesla card when working on a simple scene would be a waste of resources.

If someone has some experience mixing the two architectures, it would be helpful!

Thanks in advance.

Philippe.
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
timbarnes
Licensed Customer
Posts: 125
Joined: Thu May 20, 2010 1:56 am
Location: California

I have tried to use my GTX 470 at the same time as the Quadro FX 580. It was not a success, because the 580 is too slow, and holds the 470 back. The 470 alone is faster (and I get a usable display running on the 580).

I don't know the relative performance of the two cards you're looking at, however. In general, I gather it is best practice to use similar cards if using two or more for Octane simultaneously.
Mac Pro 3,1 / Lion / 14G RAM / ATI HD 2600 / nVidia GTX 470
i5-750 / Windows 7 Pro 64bit / 8G RAM / Quadro FX 580
Revit 2011, SketchUp 8, Rhino
Nostromo
Posts: 261
Joined: Thu Feb 18, 2010 5:13 pm
Location: Brussels
Contact:

Yeah. Also, when using two cards, there are other factors in play, like switching the OpenGL context from one card to the other (to compose the full image), and there are no numbers for that, unfortunately. Fermi cards seem to be very good at switching contexts (that's why they scale well), while Quadros seem to be a little slower at it.

I think using two cards of the same architecture is indeed the best way to avoid disappointment.

/M
ROUBAL
Licensed Customer
Posts: 2199
Joined: Mon Jan 25, 2010 5:25 pm
Location: FRANCE
Contact:

Thanks for the answer.

Well, as I can't afford two Teslas, even older C1060s, I think the most reasonable solution remains two GTX 480s. Currently, I am reaching the limit of my GTX 260 (877 MB).

A single Tesla C1060 would be convenient in terms of memory, and not much more expensive than two GTX 480s, but it would be almost as slow as my GTX 260.

1.5 GB is not much, but for still pictures I can remove some hidden parts... and I will get much more speed for animation... (once we have a faster OBJ exporter!)
Last edited by ROUBAL on Tue Sep 07, 2010 5:37 pm, edited 1 time in total.
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
n1k
Posts: 401
Joined: Mon Jan 11, 2010 7:55 pm
Contact:

For animation, it shouldn't be much of a problem to have Fermi and non-Fermi cards rendering the same animation. Let's say you have 600 frames to render. You can run two instances of Octane and choose to render, for example, 400 frames on the faster card and 200 frames on the slower one. That way you get near-perfect scalability :)

Cheers,
n1k
[email protected], 8gb RAM, Gainward GF 460 GTX 2048mb,Win7 64bit.

http://continuum3d.blogspot.com/
ROUBAL
Licensed Customer
Posts: 2199
Joined: Mon Jan 25, 2010 5:25 pm
Location: FRANCE
Contact:

That's a weird idea! But if it works, it is very interesting! ;)

I wouldn't have thought it was possible to open two instances of Octane...
mainly because of license management with Octane Live.

Have you tried it? As I currently have only one card in my Cubix, I can't run card-switching tests myself, except just opening a second instance.
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
Nostromo
Posts: 261
Joined: Thu Feb 18, 2010 5:13 pm
Location: Brussels
Contact:

In theory it should work since the host is going to be the same.

/M
havensole
Licensed Customer
Posts: 463
Joined: Fri Jan 15, 2010 9:31 pm
Location: Rialto, CA USA
Contact:

I haven't tried it on the new pre-2.3 version, but I've had it work with older versions of Octane. When I had 3 GPUs in my system at one point (2x GTX 470 and a GTS 250), I would run 3 instances of Octane, all rendering different things. I did this just to test how much I could abuse my system with Octane. Never had an issue with it. I'll probably try it with the pre-2.3 later today.
System 1: EVGA gtx470 1280Mb and MSI gtx470 1280 in Cubix Xpander for Octane, AMD 945, 4Gb Ram
All systems are at stock speeds and settings.
kubo
Posts: 1377
Joined: Wed Apr 21, 2010 4:11 am
Location: Madrizzzz

I didn't know that, hey that's a great tip, I thought my 260 was getting lazy, lol. Time to burn that sucker!
windows 7 x64 | 2xGTX570 (warming up the planet 1ºC at a time) | i7 920 | 12GB
radiance
Posts: 7633
Joined: Wed Oct 21, 2009 2:33 pm

Hi guys,

an important fact: you cannot combine GTX 200 and GTX 400 series cards during a render.
They are two different architectures, much like the difference between an Intel CPU and a PowerPC RISC CPU.

CUDA decides which architecture to use based on the first GPU in your active GPU list, so if you have a GTX 200 and a GTX 400 type GPU in there, the GTX 200 kernel will be loaded onto the GTX 400 and the application will crash.

There isn't much we can do about this; it's a CUDA limitation.

If it does, please report it, but it shouldn't.
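The rule radiance describes can be modelled as a toy check: the kernel is built for the architecture of the first device in the active list, and a binary built for one architecture will not run on a different one. The compute capability names are real (GT200-era cards are sm_13, Fermi cards are sm_20), but the checking function itself is only an illustration, not Octane code.

```python
def mixed_list_ok(device_archs):
    """Return True if a kernel built for the first device in the active
    GPU list can run on every device in that list."""
    kernel_arch = device_archs[0]  # CUDA builds for the first GPU's arch
    return all(arch == kernel_arch for arch in device_archs)

print(mixed_list_ok(["sm_13", "sm_13"]))  # two GTX 200 cards: OK
print(mixed_list_ok(["sm_13", "sm_20"]))  # GTX 200 + GTX 400: crash case
```

This is why a GTX 480 + Tesla C1060 pair cannot render a single image together, while two cards of the same generation can.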

Radiance
Win 7 x64 & ubuntu | 2x GTX480 | Quad 2.66GHz | 8GB