Cuda Compute 6
Posted: Sun Jan 26, 2014 10:10 am
by Witpapier
This could be a game changer if it's implemented in Octane, I reckon.
http://www.theregister.co.uk/2013/11/16 ... ory_party/
Re: Cuda Compute 6
Posted: Sun Jan 26, 2014 7:19 pm
by MOSFET
From what I understand, CUDA's "Unified Memory" has been misreported and does not mean that the GPU will treat system memory as a slower form of VRAM. Instead, it streamlines the process of memory management for developers. I could be wrong though.
Re: Cuda Compute 6
Posted: Sun Jan 26, 2014 9:47 pm
by Witpapier
I hope you are wrong

High hopes I guess.
Re: Cuda Compute 6
Posted: Tue Feb 04, 2014 1:01 pm
by gueoct
I have a dream......
one day, we will be able to pop additional VRAM into our graphics cards, just like RAM into our mainboards....
Re: Cuda Compute 6
Posted: Tue Feb 04, 2014 1:17 pm
by face
This dream is old...
In the '90s, the Matrox Millennium had a 6MB add-on module to expand the VRAM.
face
Re: Cuda Compute 6
Posted: Wed Feb 12, 2014 11:59 am
by iFloris
Actually, I think Witpapier is right.
If you look at nVidia's own example here:
http://devblogs.nvidia.com/parallelfora ... in-cuda-6/
Perhaps someone with a deeper understanding of the process mentioned here can chime in?
Re: Cuda Compute 6
Posted: Wed Feb 12, 2014 6:12 pm
by abstrax
CUDA toolkit 6 just seems (it's not released yet) to hide the complexity of shuffling data to and from the GPU. It doesn't magically solve the problem that you have to shuffle data between CPU and GPU. Data is sent/received to/from the GPU via the PCI bus, which is orders of magnitude slower than memory transfers directly from VRAM to the GPU.
-> So even though it may look like you are using one big memory blob spanning the host and GPU RAM, you are not, and I expect the performance loss to be the same as with previous toolkits. The only difference is that it's easier to program.
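To make that concrete, here is a minimal sketch of what the simplification looks like in practice, assuming the `cudaMallocManaged` API described in NVIDIA's announcement. Note the PCIe transfers abstrax describes still happen behind the scenes; the managed pointer only removes the explicit `cudaMemcpy` bookkeeping:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data;

    // One allocation visible to both CPU and GPU.
    // Previously this required a host buffer, a device buffer
    // (cudaMalloc), and explicit cudaMemcpy calls in both directions.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // written on the host

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // used on the GPU
    cudaDeviceSynchronize();  // must finish before the host touches data again

    printf("data[0] = %f\n", data[0]);  // host reads the result directly
    cudaFree(data);
    return 0;
}
```

The point abstrax makes holds here: the runtime still migrates the pages over PCIe when ownership changes between CPU and GPU, so the code above is simpler to write but not faster than the explicit-copy version.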
Re: Cuda Compute 6
Posted: Thu Feb 13, 2014 9:03 pm
by iFloris
abstrax wrote:CUDA toolkit 6 just seems (it's not released yet) to hide the complexity of shuffling data to and from the GPU. It doesn't magically solve the problem that you have to shuffle data between CPU and GPU. Data is sent/received to/from the GPU via the PCI bus, which is orders of magnitude slower than memory transfers directly from VRAM to the GPU.
-> So even though it may look like you are using one big memory blob spanning the host and GPU RAM, you are not, and I expect the performance loss to be the same as with previous toolkits. The only difference is that it's easier to program.
Thanks for the insight, abstrax!
Seems that unified memory is not the silver bullet that we were perhaps hoping for.
Re: Cuda Compute 6
Posted: Fri Feb 14, 2014 10:07 am
by maya-heyes
But I think this feature would be very helpful for implementing hybrid (CPU/GPU) CG algorithms, or algorithms that are difficult to implement purely on the GPU.