RAM instead of VRAM?
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
I wonder whether a graphics card with 3GB of VRAM would be usable on Windows XP Pro 32-bit (I have enabled the /3GB switch in boot.ini)?
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
- joelegecko
- Posts: 57
- Joined: Mon Jan 18, 2010 3:57 am
I may be wrong, but a 32-bit OS will address a maximum of 4GB of memory, and this includes video memory. Even after lifting the 2GB-per-process cap, you'd still be limiting yourself. I can't say which type of memory has priority, but when I was using 4GB of RAM with a 512MB video card on a 32-bit system, I only had 3.5GB of RAM available (the other 0.5GB was taken by the graphics card's video memory).
The only way to use a 3GB card AND more than 1GB of RAM would be to move to the 64-bit version of your OS (or upgrade to a new one).
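For the curious, the arithmetic behind that 3.5GB figure can be sketched roughly like this (the 512MB aperture size is an assumption for illustration; real MMIO reservations vary by chipset and card):

```python
# A 32-bit OS has 2**32 bytes (4 GiB) of physical address space.
# PCI devices (including the graphics card's VRAM aperture) are
# mapped into that same space, so they displace usable RAM.
ADDRESS_SPACE = 2**32            # 4 GiB total addressable

installed_ram = 4 * 2**30        # 4 GiB of RAM sticks
vram_aperture = 512 * 2**20      # 512 MiB mapped for the video card (assumed)

# RAM the OS can actually use is what's left after device apertures:
usable_ram = min(installed_ram, ADDRESS_SPACE - vram_aperture)
print(usable_ram / 2**30)        # → 3.5 (GiB), matching the observation above
```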
Raaaaah! This may explain why I have difficulty using more than half of my card's 512MB of VRAM with Octane. I have 4GB of RAM on my system and /3GB enabled.
I thought that RAM and VRAM had the same capacity limit due to the 32-bit address space (64-bit hardware limited to 32 bits by the OS), but not that their amounts were cumulative.
I thought the VRAM and RAM amounts were counted separately, since they are not on the same bus (am I wrong?).
This is very worrying, because if it's true, it means I will not be able to upgrade the graphics card on this machine!
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
Great, I just switched to 64-bit a few weeks ago. Windows will never be the same under 4GB.
And I didn't even know the limitation counted RAM PLUS VRAM...
Happy customer here

Athlon X2 @ 3200MHz, 8GB RAM, Win7 x64, Sparkle GeForce GTX 285,
3D connexion SpaceNavigator, Blender x64, 2xEizo 24" TFT
I've just read on another forum that this is false, and the VRAM amount is not cumulative with RAM.
So, it seems that with my 4GB of RAM, 3.25GB are seen by Windows XP Pro 32-bit (thanks to the /3GB switch added in the boot.ini file), and I should be able to use at least 2GB of VRAM on one graphics card... I don't know how it is managed with two graphics cards...
Who and what should I believe?

Some more accurate explanations would be welcome for people like me who can't move to a 64-bit OS.
Edit: From other sources (in French), it appears that even though the VRAM amount is not added to the RAM amount, a certain amount of RAM is allocated to managing the VRAM (between 256 and 512MB, depending on the graphics cards installed). So if you have several graphics cards installed, the usable RAM will decrease drastically, which can be critical on a 32-bit system. A GPU used only for rendering still reserves that RAM even when you are rendering on the CPU or working in another application (fluid simulations, for example, could be slowed down), a very annoying thing that could lead me to replace my graphics card instead of adding one more.
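If that per-card reservation is right, the cost of stacking GPUs on a 32-bit system can be estimated like this (the 256–512MB range comes from the sources above; treat the numbers as rough, not measured):

```python
def usable_ram_32bit(installed_gib, cards, reserve_per_card_mib=512):
    """Rough estimate of RAM left to applications on a 32-bit OS
    once each installed GPU's driver reservation is subtracted.
    The per-card figure is an assumption for illustration."""
    address_space = 4 * 2**30                  # 32-bit physical limit
    installed = installed_gib * 2**30
    reserved = cards * reserve_per_card_mib * 2**20
    return max(0, min(installed, address_space) - reserved) / 2**30

# One card vs. three on a 4 GiB machine:
print(usable_ram_32bit(4, 1))   # 3.5 GiB left
print(usable_ram_32bit(4, 3))   # 2.5 GiB left — a drastic drop, as noted above
```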
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
Unfortunately there is no documentation on this behaviour (that I can easily find; not much time currently).
It's up to the video driver's internal implementation.
I'll have to do some research in this area soon.
A GPU is a separate device with its own memory.
It's not shared with, or added to, the OS's memory.
So imagine you have a Fermi GPU with 6GB of memory (it must be Fermi because the Fermi chip has a larger-than-32-bit memory space; GTX 200 and lower are 32-bit). This 'should' be fully usable on a 32-bit OS.
The memory is not part of the OS; it's like having a second computer in your PC that you can send data to.
However, I'm not 100% sure of this; there isn't any documentation about it in the CUDA manual.
It could be that the driver itself manages the video memory, and if this is a 32-bit OS, it might use 32-bit addresses to manage it, or it might not. It's not in the CUDA docs.
If anyone knows, please let me know
Radiance
Win 7 x64 & ubuntu | 2x GTX480 | Quad 2.66GHz | 8GB
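One way to picture Radiance's point that the card is "like a second computer you send data to": the host never needs to map all of the device memory at once; it can stream data through a small window while the driver addresses the full VRAM on the device side. A toy model in plain Python (standing in for driver behaviour, purely illustrative):

```python
# Toy model: 'device memory' larger than what the host touches at once.
# The host copies through a small staging window, chunk by chunk — the
# way a driver could feed a 6 GB card from a 32-bit process.
CHUNK = 4  # tiny window for the demo

def copy_to_device(device, src):
    """Stream src into the (bytearray-backed) 'device' CHUNK bytes at a time."""
    for offset in range(0, len(src), CHUNK):
        window = src[offset:offset + CHUNK]   # only CHUNK bytes live host-side
        device[offset:offset + len(window)] = window

device_mem = bytearray(16)                    # pretend this is VRAM
copy_to_device(device_mem, bytes(range(16)))
print(device_mem == bytearray(range(16)))     # → True
```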
I hope that the following article answers the question of how the video memory of the GT200 generation of cards works:
NVIDIA's GT200: Inside a Parallel Processor - CUDA Memory Model
PS: That is only for GT200!
PS2: Radiance, you can purchase the following book if you find it interesting:
Programming Massively Parallel Processors: A Hands-On Approach
One of the negative reviewers (:D) mentions that there is something interesting about memory allocation in the Nvidia architecture, e.g. "there are some interesting insights there about memory allocation and thread assignment..."
Win XP 32 | Geforce GT240 + Onboard 8200 (Driver ver. 197.13 )| Phenom 9550 | 2GB
Would it be possible to have multiple graphics cards and access the memory beyond the GPU Octane is currently using? That would be SWEET! That way we could get some extra VRAM at low cost. A Quadro 5800 is just too expensive... even for an FX4800 owner like me.
Dual Lindenhurst single core Xeon 3.6Ghz with Hyperthreads, 8GB DDR-2 ECC, Geforce GTX 260
Dual Xeon 5680 3.2Ghz with Hyperthreads, 24GB DDR-3 ECC, Quadro 4800 & GTX 480