
CUDA Compute 6
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
This could be a game changer if it's implemented in Octane, I reckon.
http://www.theregister.co.uk/2013/11/16 ... ory_party/

RedSpec TGX for Poser / OctaneRender - OUT NOW visit http://redspec-sss.com/
I hope you are wrong
High hopes I guess.

This dream is old...
In the '90s, the Matrox Millennium had a 6 MB add-on module to expand the VRAM.
Win10 Pro, Driver 378.78, Softimage 2015SP2 & Octane 3.05 RC1,
64GB Ram, i7-6950X, GTX1080TI 11GB
http://vimeo.com/user2509578
Actually, I think witpapier is right.
If you look at nVidia's own example here:
http://devblogs.nvidia.com/parallelfora ... in-cuda-6/
Perhaps someone with a deeper understanding of the process mentioned here can chime in?
Win 7 pro 64 | 3 x Asus GTX Titan | i7 3930 OC 4.2 | 32GB | Octane Cinema4D plugin | R14
CUDA toolkit 6 (it's not released yet) just seems to hide the complexity of shuffling data to and from the GPU. It doesn't magically solve the problem that you have to move data between the CPU and the GPU. Data is sent to and received from the GPU over the PCI bus, which is orders of magnitude slower than transfers directly from VRAM to the GPU.
-> So even if it looks like you are using one big memory blob spanning host and GPU RAM, you are not, and I expect the performance loss to be the same as with previous toolkits. The only difference is that it's easier to program.
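To make the point above concrete, here is a minimal, untested sketch of the same vector-add written both ways, based on NVIDIA's published CUDA 6 managed-memory API (`cudaMallocManaged`). The function and variable names are illustrative only. Note that the managed version merely removes the `cudaMemcpy` calls; the pages still migrate over the PCI bus behind the scenes.

```cuda
#include <cuda_runtime.h>

__global__ void add(float *a, const float *b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += b[i];
}

// Pre-CUDA-6 style: separate host/device buffers with explicit copies.
void add_explicit(float *h_a, float *h_b, int n) {
    float *d_a, *d_b;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);  // over the PCI bus
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);  // over the PCI bus
    add<<<(n + 255) / 256, 256>>>(d_a, d_b, n);
    cudaMemcpy(h_a, d_a, bytes, cudaMemcpyDeviceToHost);  // back over the PCI bus
    cudaFree(d_a);
    cudaFree(d_b);
}

// CUDA 6 style: one managed allocation visible to both CPU and GPU.
void add_managed(int n) {
    float *a, *b;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    add<<<(n + 255) / 256, 256>>>(a, b, n);
    cudaDeviceSynchronize();  // data still crosses PCIe; only the copy code is gone
    cudaFree(a);
    cudaFree(b);
}
```

The second version is clearly less code, which is exactly the "easier to program" part; the transfers themselves are the same.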
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Thanks for the insight Abstrax!
abstrax wrote: CUDA toolkit 6 (it's not released yet) just seems to hide the complexity of shuffling data to and from the GPU. It doesn't magically solve the problem that you have to move data between the CPU and the GPU. Data is sent to and received from the GPU over the PCI bus, which is orders of magnitude slower than transfers directly from VRAM to the GPU.
-> So even if it looks like you are using one big memory blob spanning host and GPU RAM, you are not, and I expect the performance loss to be the same as with previous toolkits. The only difference is that it's easier to program.
Seems that unified memory is not the silver bullet that we were perhaps hoping for.
Win 7 pro 64 | 3 x Asus GTX Titan | i7 3930 OC 4.2 | 32GB | Octane Cinema4D plugin | R14
- maya-heyes
- Posts: 14
- Joined: Wed Jul 10, 2013 9:14 am
But I think this feature would be very helpful for implementing hybrid (CPU/GPU) CG algorithms, or algorithms that are difficult to implement purely on the GPU.
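The hybrid pattern hinted at here could be sketched as follows (untested, hypothetical example; the phase functions are invented for illustration): a data-parallel GPU step and a serially-dependent CPU step alternate on the same managed buffer, with no copy code between the phases.

```cuda
#include <cuda_runtime.h>

// Data-parallel step: a good fit for the GPU.
__global__ void gpu_phase(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// Serial prefix-sum-like step: awkward to do purely on the GPU.
void cpu_phase(float *data, int n) {
    for (int i = 1; i < n; ++i)
        data[i] += data[i - 1];
}

int main() {
    int n = 1 << 20;
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));  // one pointer for both sides
    for (int i = 0; i < n; ++i) data[i] = 1.0f;

    gpu_phase<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();   // GPU done; pages migrate back when the CPU touches them
    cpu_phase(data, n);        // same pointer, no cudaMemcpy between phases

    cudaFree(data);
    return 0;
}
```

The convenience is real for this kind of ping-pong workload, but as noted above, each hand-off still pays the PCI-bus transfer cost.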