Still No Multi-GPU GTX 460?

Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB

I purchased Octane a while back after having built a system especially for it.
My system has two GTX 460s for rendering, and a built-in Hybrid-SLI 980a motherboard GPU that is used exclusively for display redraw and the like. Basically, built from the ground up for Octane.
But then I found out that I can't use the full power of these two graphics cards because of a limitation in CUDA 3.0. I thought, ah well, just forget about Octane and leave it until all this CUDA stuff is sorted out. But now, months later, a new CUDA release and a new Octane release that makes use of it are here, and I'm still not getting the full power of these cards. My question, therefore, is: what's going on, and is there any idea yet of when multiple GTX 460s will be supported properly?
Is this something that isn't going to work properly until it's out of beta?
Hi Pumeco,

pumeco wrote: My question, therefore, is: what's going on, and is there any idea yet of when multiple GTX 460s will be supported properly? Is this something that isn't going to work properly until it's out of beta?
It's working properly; just use the CUDA 3.0 build, as explained in the release notes. The rewrite of our CUDA framework is on its way, but code doesn't drop from the sky, you know.

Cheers,
Marcus
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Cheers Marcus, I didn't realise I could do that.
I thought I had to use the new build for the new CUDA.
Anyway, keep up the good work on Octane because even running on the current build, I can hardly keep the grin off my face.
The speed is incredible!
Windows 7 64-bit | 2X GeForce GTX 460 | nForce 980a Hybrid-SLI | AMD Phenom II X4 3.40GHz | 16GB
Just to avoid more confusion:

pumeco wrote: Cheers Marcus, I didn't realise I could do that. I thought I had to use the new build for the new CUDA.
- The latest build is beta 2.44 and can be found in the commercial release forum.
- There are two "flavours" of it: One linked against the CUDA 3.0 libraries and one against the CUDA 3.2 libraries.
- Both work with one or more GTX 460s, but only the CUDA 3.0 build gives you an actual performance boost when you add a second card.
- To solve this problem, we are currently rewriting our CUDA framework. It will hopefully have another positive side-effect, which I will explain if it actually comes true.
- I also had a look at CUDA 4.0 and there don't seem to be many major changes regarding multi-GPU development, at least not many that are of interest to us, but there is a good chance that the next version will be linked against CUDA 4.0, depending on when it becomes available as a final version. This will mainly be to support future cards that potentially won't get CUDA 3.2-compatible drivers anymore.
Cheers,
Marcus
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
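Some background on why multi-GPU support is tangled up with the CUDA version: before CUDA 4.0, a host thread could only hold a context on one device at a time, so driving two GTX 460s generally meant spawning one worker thread per GPU, each with its own copy of the data and its own slice of the work. Octane's internals aren't public, so the sketch below is not its actual code; it is just a minimal illustration of that pre-4.0 pattern using the CUDA runtime API, with made-up kernel and buffer names.

Code:

// One host thread per GPU -- the standard multi-GPU pattern before CUDA 4.0.
// Kernel and buffer names are made up for illustration; this is not Octane code.
#include <cuda_runtime.h>
#include <cstdio>
#include <thread>
#include <vector>

__global__ void renderChunk(float *pixels, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        pixels[i] = 0.0f;   // stand-in for the real per-pixel work
}

static void workerThread(int device, int pixelCount)
{
    cudaSetDevice(device);   // this thread now owns this GPU's context

    float *d_pixels = 0;
    cudaMalloc(&d_pixels, pixelCount * sizeof(float));   // each GPU holds its own copy of its data

    int threads = 256;
    int blocks  = (pixelCount + threads - 1) / threads;
    renderChunk<<<blocks, threads>>>(d_pixels, pixelCount);
    cudaThreadSynchronize();   // CUDA 3.x name; later renamed cudaDeviceSynchronize

    cudaFree(d_pixels);
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("CUDA devices found: %d\n", deviceCount);
    if (deviceCount == 0)
        return 1;

    // Split the frame between the cards, one worker thread per device.
    const int totalPixels = 1920 * 1080;
    std::vector<std::thread> workers;
    for (int d = 0; d < deviceCount; ++d)
        workers.emplace_back(workerThread, d, totalPixels / deviceCount);
    for (size_t i = 0; i < workers.size(); ++i)
        workers[i].join();
    return 0;
}

CUDA 4.0 relaxed this model so that a single host thread can switch between devices with cudaSetDevice, which is presumably part of what a framework rewrite would take advantage of.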
Thanks again, Marcus, much appreciated.
Here's how my CUDA device setup looks now, after removing the CUDA 3.2 build of Octane and installing the 3.0 build.
It's noticeably faster now, but what I'm curious about is what it says at the top. It says the CUDA driver version is 3.20, but the CUDA runtime version is 3.00. So does this mean I'm getting the current maximum performance for now, or should I downgrade the driver to match the runtime?
As for the possibly big secret positive side-effect with CUDA 4 ...
All I can say is that if it turns out to be the ability to share the graphics RAM between cards, that would be fantastic. This CUDA stuff is all pretty new to me, but even from my beginner point of view, I can think of no better icing on the cake as being able to pool the RAM from multiple cards and make the total available to Octane.
Yeah I know, I live in a fantasy land
Windows 7 64-bit | 2X GeForce GTX 460 | nForce 980a Hybrid-SLI | AMD Phenom II X4 3.40GHz | 16GB
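On the driver 3.20 / runtime 3.00 question: that combination is normal. The driver only has to be at least as new as the runtime the application was built against, so a 3.20 driver running the CUDA 3.0 build is fine and there is nothing to downgrade. If you want to check the numbers outside Octane, a small standalone program against the CUDA runtime API will print them; this is just a generic sketch, nothing Octane-specific.

Code:

// Print the CUDA driver/runtime versions and the devices they can see.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // e.g. 3020 means CUDA 3.2
    cudaRuntimeGetVersion(&runtimeVersion);  // e.g. 3000 means CUDA 3.0
    printf("Driver:  %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s, %lu MB, compute %d.%d\n",
               d, prop.name, (unsigned long)(prop.totalGlobalMem / (1024 * 1024)),
               prop.major, prop.minor);
    }
    return 0;
}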
- Jaberwocky
Pumeco,
This question has already been answered by Abstrax.
Please see the CUDA 4.0 thread in General Discussion.
Abstrax replied to my query:
"Unfortunately, the devil lies in the detail.
Each GPU needs access to everything. If you would distribute the scene data over several GPUs or even the CPU, you would then have to fetch the data from the other GPUs or the CPU. And everything via PCI ... That's superslow and not practical for our uses."
I had the same thought once I saw the NVIDIA CUDA 4.0 specs. Unfortunately it does not look possible to link multiple cards' memories. It's a latency issue and would slow Octane down if they were to implement it.
See that thread for how the conversation went.
CPU:-AMD 1055T 6 core, Motherboard:-Gigabyte 990FXA-UD3 AM3+, Gigabyte GTX 460-1GB, RAM:-8GB Kingston hyper X Genesis DDR3 1600Mhz D/Ch, Hard Disk:-500GB samsung F3 , OS:-Win7 64bit
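To put a bit of code behind what Abstrax is describing: CUDA 4.0 does let one GPU map memory that physically lives on another GPU (peer-to-peer access), but every such read or write travels over the PCIe bus instead of the card's own memory bus, so it is fine for occasional transfers and terrible for a renderer that touches the whole scene for every ray. A rough sketch of that API is below; the buffer name is made up, and whether peer access is even reported as available depends on the cards, driver and chipset.

Code:

// CUDA 4.0 peer-to-peer access sketch (hypothetical buffer name).
// Remote accesses still cross PCIe, which is far slower than local VRAM.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can GPU 0 map GPU 1's memory?
    if (!canAccess) {
        printf("Peer access between GPU 0 and GPU 1 is not available here.\n");
        return 1;
    }

    // Allocate a buffer on GPU 1...
    cudaSetDevice(1);
    float *bufOnGpu1 = 0;
    cudaMalloc(&bufOnGpu1, 64 * 1024 * 1024);

    // ...then let GPU 0 map it. Kernels running on GPU 0 could now
    // dereference bufOnGpu1 directly, but every access goes over PCIe
    // (a few GB/s) rather than GPU 0's own memory bus (~100 GB/s on a GTX 460).
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);

    cudaDeviceDisablePeerAccess(1);
    cudaSetDevice(1);
    cudaFree(bufOnGpu1);
    return 0;
}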
Cheers Jaberwocky!
Yup, I already saw that thread when I did a search before posting this one. I suppose there's some hope though. I mean, think about it: we all know Octane's big selling point is speed, but there's so much of it that maybe an option to eat into the speed a little in order to enable the pooling of RAM could still be worthwhile?
Like maybe have Octane set up the way it is now, but give it a 'Resource Control' tab in the options which allows the user to optionally lose a few render cores in order to allow RAM pooling. I'm not saying it's possible of course, because I don't understand this stuff, but if it were possible, I reckon such an option would help out more than it would hinder.
Octane is so fast that even if it took a tiny speed hit with the RAM-pooling option switched on, would users really care if it gave them all that newly-found RAM to play with?
On the other hand, I'm probably talking absolute crap, but it's a thought anyway.
I'm totally behind you in wanting this no matter how unlikely it is to happen.
Windows 7 64-bit | 2X GeForce GTX 460 | nForce 980a Hybrid-SLI | AMD Phenom II X4 3.40GHz | 16GB
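As a rough sense of scale for that "tiny" speed hit: a GTX 460 reads its own GDDR5 at on the order of 100 GB/s, while a PCIe 2.0 x16 slot moves at most about 8 GB/s each way in theory, and less in practice. Any scene data that had to be fetched from the other card's memory would therefore arrive something like ten to fifteen times slower than local data, before latency is even counted, which is why pooling memory over the bus costs far more than giving up a few render cores. (Those numbers are ballpark figures, not measurements from Octane.)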
- Jaberwocky
Yep, I see what you mean. A switch on the CUDA devices tab to turn on multi-card memory spanning, followed by a warning splash screen saying something like...
**** Warning: using this function will totally destroy your fast rendering speed. Use at your own risk ****

CPU:-AMD 1055T 6 core, Motherboard:-Gigabyte 990FXA-UD3 AM3+, Gigabyte GTX 460-1GB, RAM:-8GB Kingston hyper X Genesis DDR3 1600Mhz D/Ch, Hard Disk:-500GB samsung F3 , OS:-Win7 64bit
You know what I mean: there are hundreds of cores in use with Octane, many hundreds with multiple cards. Even if I had to sacrifice a hundred of them, Octane would still be insanely fast. If a user is running out of RAM, they're stuffed. At least with such an option, all it takes is ticking a checkbox and the problem disappears.
If it came down to not being able to complete a project because of a RAM limitation, I'd rather take a speed hit and get the project done than not complete it at all. As long as it's an option, it should please everyone, because they wouldn't have to use it if they don't need it.
Anyway, that's what I'm hoping; maybe that side-effect will turn out to allow such an option.
To me it all seems a bit lax on NVIDIA's part, though: designing those graphics cards around the idea of using multiple cards (SLI) and then not allowing the RAM to be used in such a manner. Seems totally idiotic to me.
Windows 7 64-bit | 2X GeForce GTX 460 | nForce 980a Hybrid-SLI | AMD Phenom II X4 3.40GHz | 16GB
pumeco wrote: If it came down to not being able to complete a project because of a RAM limitation, I'd rather take a speed hit and get the project done than not complete it at all.

You could always purchase video cards with more memory, such as a GTX 560 with 2GB or a GTX 580 with 3GB.