let me get this straight
Posted: Mon Aug 19, 2013 3:26 am
I'm a step closer to a solution on my headless, VNC-controlled remote media server / render slave idea: about to test the proof of concept on some unused hardware I've got lying around (haha).
I potentially have a couple of GPU rendering hardware limitations to overcome:
A render 'scene' (everything within the visible FOV of the virtual render camera) has a size in VRAM that's a function of the geometry being calculated plus the uncompressed size of all the visible textures. I'm very new to creative use of textures, but apparently you can use many 4k textures within one scene (4096 x 4096 pixels? This seems massive for a 'typical surface image', i.e. a texture??)
Anyway, these 4k textures come out at around 100mb each uncompressed (2k about 30mb?), and a large-ish external architectural scene can be 700-900mb just of geometry. My current video card only has 1.5gb...
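For what it's worth, here's the back-of-envelope maths behind those texture figures (assuming plain 8-bit RGBA at 4 bytes/pixel plus roughly a third extra for mipmaps; both are assumptions on my part, not measured from the renderer):

```python
# Rough uncompressed texture footprint in VRAM.
# Assumptions (not measured): 8-bit RGBA (4 bytes/pixel), and a full
# mipmap chain adding about 1/3 on top of the base level.
def texture_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    size = width * height * bytes_per_pixel
    if mipmaps:
        size = size * 4 / 3  # full mip chain ~= 4/3 of the base image
    return size / (1024 * 1024)

print(round(texture_mb(4096, 4096)))  # 4k texture: ~85 MB
print(round(texture_mb(2048, 2048)))  # 2k texture: ~21 MB
```

So ~100mb per 4k texture is the right ballpark (and it only goes up if the renderer stores textures at 16 bits per channel).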
The rendering software's 'voxelization' stage is very CPU- and, more importantly, memory-intensive for about 1-5 minutes (I haven't actually measured the CPU or memory load yet). HOWEVER, rearranging the geometry & texture data so it can be processed by the GPU costs about 4x the scene's VRAM size in system RAM: i.e. a 1.5gb video card = a 6gb voxelization cost. (That can be swapped out to disk, but it costs A LOT of time, and time saving is the whole reason to use GPU rendering.) Also, it's a parallel computing model, so that one 6gb cost should feed as many 1.5gb GPUs as you have connected?
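A quick sanity check on what that 4x rule implies for my RAM budget (the 4x multiplier is my own observation, not a documented figure, so treat the numbers accordingly):

```python
# System RAM needed to voxelize a full card's worth of scene,
# assuming (my observation, not documented) a ~4x multiplier on
# the scene's VRAM footprint.
def voxel_ram_gb(vram_gb, multiplier=4):
    return vram_gb * multiplier

print(voxel_ram_gb(1.5))  # 6.0  -> a full 1.5gb card fits in my 8gb
print(voxel_ram_gb(3.0))  # 12.0 -> a full 3gb card blows past 8gb
```

Which is exactly why the 3gb-card option below drags a motherboard/CPU/RAM upgrade along with it.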
Current test system is a [email protected], 8gb RAM (1200mhz), SSD, with two x16 PCIe slots, 1x GTX 580 1.5gb (single GPU, for rendering) and 1x monitor GPU (for display only, not rendering).
Hardware/upgrade considerations:
**If 1.5gb is always enough for my typical GPU rendering jobs, then I can sell some surplus stuff and buy two GTX 590s (2x GTX 580s on the one PCB, with 1.5gb each).
Upgrade path: later I can buy a cheap-ish (haha) quad-SLI capable motherboard + CPU & RAM to suit, which would allow me 4x GTX 590s for an 8-GPU setup.
**If 1.5gb isn't enough, I need to sell my 1.5gb GTX 580 and buy two 3gb GTX 580s - but then I run into the system RAM limitation (3gb VRAM x 4 = 12gb voxelization cost vs my 8gb), which means I probably need to upgrade the M/B, CPU & RAM anyway. I'd still buy a 4x PCI-E x16 capable motherboard and could add 2 more GTX 580s later for a 4-GPU setup.
I suppose firstly I'm looking for corrections on the statements above;
and secondly, a resource of arch-vis renders (internal & external) which have been rendered with a lot of massive textures (i.e. 1.5gb-3gb VRAM cost) and annotated as such, so I can compare them to my current outputs and decide whether I'm willing to suffer the limitation of 1.5gb VRAM for the sake of speed...
Or even if someone here could supply two test renders (arch preferred): one at a 2.5gb VRAM cost and one at a 1.25gb VRAM cost, for comparison?
Thanks for your help!