maximum polygon count and texture size (10+ Gigabyte models)

Hi,
What are the maximum texture size and polygon count supported by the Octane renderer?
I have digital terrain models (DTMs) with about 2 billion triangles (around 10 GB in-memory model size).
Currently I render these models with my own self-written ray-caster, which makes heavy use of LOD techniques and a large amount of CPU memory (12 GB RAM) in order to handle such large models.
Now it would be very interesting to use a GPU-based renderer like Octane instead of the CPU-based one.
However, my question is: am I limited with Octane to the GPU's memory (i.e. "only" 1-2 GB), or does Octane implement some kind of streaming/GPU-caching/LOD mechanism in order to support such very large models?
Thanks & Regards,
ZX81
P.S.: What about the new "Fermi" cards with 6 GB+ of GPGPU RAM? Would Octane support those cards and make full use of the 6 GB?
Hi,
As Octane is an unbiased ray tracer, it needs to be able to access every triangle at any time while rendering the scene, so the whole model needs to be loaded into GPU memory.
The PCI bus's bandwidth and latency are too slow to make use of main system memory.
The latest 6 GB Fermi-based Tesla boards will allow a larger polygon count, but billions is definitely not possible.
I think, depending on the type of your models, that 10 to 30 million is the maximum with 4 to 6 GB of video memory at this time.
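To give a rough sense of that bandwidth gap, here is a back-of-the-envelope sketch (ballpark spec-sheet numbers for current hardware, not measurements of our engine):

    // Why streaming geometry over PCIe per frame is impractical.
    // Bandwidth figures are approximate theoretical peaks, not measured.
    #include <cstdio>

    int main() {
        const double model_gb        = 10.0;  // a 10 GB terrain model
        const double pcie2_x16_gbs   = 8.0;   // PCIe 2.0 x16, theoretical peak
        const double gtx480_vram_gbs = 177.0; // GTX 480 GDDR5, theoretical peak

        // Cost of just one full pass over the model per frame:
        std::printf("over PCIe: %.2f s\n", model_gb / pcie2_x16_gbs);   // 1.25 s
        std::printf("from VRAM: %.3f s\n", model_gb / gtx480_vram_gbs); // 0.056 s
        // An unbiased path tracer traverses the scene many times per frame
        // with incoherent rays, so the real gap is far worse than this.
        return 0;
    }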
Radiance
Win 7 x64 & ubuntu | 2x GTX480 | Quad 2.66GHz | 8GB
radiance wrote: so the whole model needs to be loaded into GPU memory.

Ok, I see. Thanks for the clarifying answer!

radiance wrote: I think, depending on the type of your models, that 10 to 30 million is the maximum with 4 to 6 GB of video memory at this time.

I think I don't quite understand those numbers. I would have thought that at least 100 million should be possible with 4 GB of GPU memory.
Does Octane actually use around 300 bytes per model point?
I thought something like 30 bytes (i.e. a dozen or so floats for the vertex coordinates and a surface color descriptor) would suffice...
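Here is the back-of-the-envelope sketch behind my numbers (the layouts are of course pure guesses on my part, not Octane's actual data structures):

    // Rough per-triangle memory estimate for a generic GPU ray tracer.
    // These structs are illustrative guesses, NOT Octane's internals.
    #include <cstdio>

    struct Vertex   { float px, py, pz;        // position, 12 B
                      float nx, ny, nz; };     // smoothing normal, 12 B
    struct Triangle { unsigned v0, v1, v2;     // vertex indices, 12 B
                      unsigned material; };    // material id, 4 B
    struct BvhNode  { float bounds[6];         // AABB, 24 B
                      unsigned left, right; }; // child links, 8 B

    int main() {
        // In a closed mesh there are roughly half as many vertices as
        // triangles, and a BVH needs on the order of one node per triangle.
        const double per_tri = 0.5 * sizeof(Vertex)
                             + sizeof(Triangle)
                             + 1.0 * sizeof(BvhNode);
        std::printf("~%.0f bytes per triangle (before textures)\n", per_tri);

        const double vram_bytes = 4e9; // a 4 GB card
        std::printf("~%.0f million triangles in 4 GB\n",
                    vram_bytes / per_tri / 1e6);
        return 0;
    }

Even that naive estimate leaves room for well over 50 million triangles; I assume textures, the framebuffer, and the renderer's working memory eat into the rest.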
Thanks & Regards,
ZX-81
P.S.: I have invested much time in writing optimized CPU-based renderers, but I have now reached a "critical point" at which I think that the future definitely belongs to the GPGPU approach ...
My current "2-gigapolygon engine" already utilizes all available CPU cores (it easily loads all 24 hardware threads of a dual-Xeon hyper-threaded hexacore workstation), but the render times for *really* large models (i.e. gigapolygon models), while acceptable for still images, are still unsatisfactory for animations.
A rough calculation of the raw rendering power shows that modern GPGPU hardware should outperform the fastest available CPU-based hardware (like a 2x6-core Mac Pro) by about a factor of ten ... and at about a tenth of the cost.
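For what it's worth, the rough calculation I mean (theoretical spec-sheet peaks; real ray-tracing throughput will of course differ):

    // Peak single-precision throughput from spec sheets (approximate).
    #include <cstdio>

    int main() {
        // Dual hexacore Xeon (e.g. 2x X5680 @ 3.33 GHz):
        // 12 cores * 4-wide SSE * (1 mul + 1 add) per cycle
        const double cpu_gflops = 12 * 3.33 * 8;  // ~320 GFLOPS
        // GTX 480: 480 CUDA cores @ ~1.4 GHz shader clock, FMA = 2 flops
        const double gpu_gflops = 480 * 1.4 * 2;  // ~1344 GFLOPS
        std::printf("GPU/CPU peak ratio: ~%.1fx per card\n",
                    gpu_gflops / cpu_gflops);     // ~4.2x
        // Two such cards per box, at a fraction of a dual-Xeon
        // workstation's price, gets you to roughly the 10x figure.
        return 0;
    }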
Hi, I was talking about practical scenes, including triangles, acceleration structures, vertex normals for smoothing, material data, textures, etc...
Unfortunately we cannot disclose technical information about the engine's internals beyond that, for commercial reasons (we have to pay our rent too!).
E.g., I can see customers in various industries working comfortably with datasets of up to 20M polys (depending on the type of model) on a 4-6 GB Tesla card, but not billions.
Yours,
Radiance
Win 7 x64 & ubuntu | 2x GTX480 | Quad 2.66GHz | 8GB
Hey,
In relation to this topic, is some sort of efficient instancing planned for Octane Render? I often work with scenes where I want a high-poly environment like a forest in the background, especially for large-scale arch-viz scenes like a tourist settlement or some such (you get the picture), so the poly and object count can get pretty high.
Do you believe such scenes will eventually be possible with Octane, or with any other unbiased GPU renderer, anyway?
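For illustration, here is a rough sketch of why instancing would make such scenes feasible (my own made-up numbers, not how Octane would necessarily implement it):

    // Instancing: store one copy of the mesh plus a small transform per
    // instance, instead of duplicating the geometry. Illustration only.
    #include <cstdio>

    struct Instance { float transform[12]; }; // 3x4 object-to-world matrix, 48 B

    int main() {
        const double tree_mesh_mb = 50.0;  // one detailed tree model
        const long   num_trees    = 10000; // a forest backdrop

        const double naive_mb     = tree_mesh_mb * num_trees;
        const double instanced_mb = tree_mesh_mb
                                  + num_trees * sizeof(Instance) / 1e6;
        std::printf("duplicated geometry: %.0f MB\n", naive_mb);     // 500000 MB
        std::printf("instanced:           %.1f MB\n", instanced_mb); // ~50.5 MB
        return 0;
    }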
Asus V Extreme motherboard, i7-5930K CPU, 32 GB DDR4 Quad Channel RAM, 3x Nvidia Geforce 1080ti, Windows 7 Ultimate 64, 3DS Max 2019 64
radiance wrote: Instancing is planned soon

Good to hear that!
DELL Precision M4500 Laptop (win7 -64bit, Intel core i5 M520 2.4Ghz, 4Gb, Quadro FX880 1Gb, PCI express slot)
with GTX 460 -2GB (running on home-made GPU-expander)
radiance wrote: Hey,
Instancing is planned soon, it's not hard at all to implement in Octane, it's on our short-term list...
Radiance

Hearing this is a relief... I mean, it being on a short-term list...
Vista 64 , 2x Xeon 5440 - 24GB RAM, 1x GTX 260 & I7 3930 water cooled - 32GB RAM, 1 x GTX 480+ 1x8800 GTS 512
CGsociety gallery
My portfolio
My portfolio2 - under construction
Web site
Making of : pool scene - part1