Is there a way to know the maximum mesh size, in polys, that Octane can load?
It crashes with 33 million polys on a Quadro K6000, while 28 million runs on a Quadro 6000!
Thanks.
Maximum mesh size in poly ?
Forum rules
Please post only in English in this subforum. For alternate language discussion please go here http://render.otoy.com/forum/viewforum.php?f=18
Thanks Karba,
I am completely frustrated. I ran tests with ~26 million polygons (not 28) on a Quadro 6000 and it works. So I acquired a workstation with two Quadro K6000s to load more polygons, but if the limit is ~26 million, I can't load two meshes of ~25 million polys each => 50 million.
Is there a way with the 2.0 version to load more than ~26 million polygons? I'm really frustrated. Karba, is there a solution?
I can't promise anything.
Are those 2 models the same, or different?
If one card gets overloaded with 28 million polys (i.e. 56 million triangles), then two cards will also be overloaded, because the same content is loaded into every card's VRAM.
Does your system RAM also max out during voxelisation/scene preparation? A large mesh consumes a lot of RAM.
Maybe try splitting the large mesh into smaller objects?
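The point above about replicated scene data can be sketched as a quick check. This is an illustration of the general multi-GPU behaviour described in the thread, with made-up numbers, not a measurement of Octane itself:

```python
# Sketch: with multiple GPUs the full scene is copied into each card's VRAM,
# so capacity is bounded by the smallest single card, not the sum of all cards.
# The 6 GB scene size below is illustrative, not measured.

def required_vram_per_card(scene_bytes, num_cards):
    """Scene data is replicated, not split, across cards."""
    return scene_bytes  # independent of num_cards

scene = 6 * 10**9  # hypothetical 6 GB scene
# A second card adds render speed, not polygon capacity:
print(required_vram_per_card(scene, 1) == required_vram_per_card(scene, 2))  # True
```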
PURE3D Visualisierungen
Sys: Intel Core i9-12900K, 128GB RAM, 2x 5090 RTX, Windows 11 Pro x64, 3ds Max 2024.2
What about optimizing the mesh using the ProOptimizer modifier?

Sometimes mesh resolution is overrated.
Of course this is only a shot in the dark.

P6T7 WS SuperComputer / i7 980 @ 3.33GHz / 24 GB / 1500W + 1200W PSUs / 6x GTX 680 4 GB + 1x Tesla M2070 6GB

Hello everyone,
KARBA:
- The models are not the same, so no instancing.
- About the 26,843,545-poly limitation: can you explain why? I need more information to manage my workflow with Octane.
- I'm working with large master 3D metric models for inspection and visual scientific analysis, so geometry and accuracy are very important and I can't decimate the models too much. (Master models range from 20 to 160 million polys.)
- My goal is to manage 120 million polys with GPU rendering; I think I can easily approach 60-75 million polys with the Quadro K6000 (without image maps). For information: 26,843,545 polys under 3ds Max and Octane => ~3 GB of 11 GB (K6000).
- At NAB 2014 Otoy announced out-of-core rendering, so what about the poly limitation? Is it just for texture mapping?
Thanks in advance, regards, Oliver.
MBETKE:
- Don't worry about the RAM limitation, the workstation has 192 GB. Splitting is not the solution.
BORIS:
- ProOptimizer is not bad for decimating, but I need to keep the geometry for visual analysis.
Thank you guys, see you, Oliver
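For what it's worth, Oliver's own numbers (~3 GB of VRAM for ~26.8 million polys in 3ds Max + Octane on a K6000) suggest the 120-million-poly goal would be tight even without the per-mesh cap. This back-of-envelope extrapolation assumes linear scaling, which is an assumption rather than anything Octane documents:

```python
# Rough extrapolation from the figures reported in this thread; linear
# scaling of VRAM with polygon count is an assumption.

measured_polys = 26_843_545
measured_bytes = 3e9          # ~3 GB reported for that mesh
usable_vram = 11e9            # ~11 GB usable on the K6000, per the thread

bytes_per_poly = measured_bytes / measured_polys   # roughly 112 bytes/poly
est_120m = 120e6 * bytes_per_poly                  # roughly 13.4 GB

# 120M polys would exceed a single K6000's usable VRAM even without textures:
print(est_120m > usable_vram)  # True
```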
Hi Oliver.
This limitation comes from CUDA.
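One hedged observation on where the exact figure might come from: 26,843,545 is precisely what a 2 GiB (2^31-byte) single CUDA allocation would hold at 80 bytes per polygon. Both the 2 GiB single-allocation limit as the cause and the 80-byte per-polygon record size are assumptions, not anything confirmed in this thread:

```python
# Hypothesis only: the reported cap of 26,843,545 polys equals a 2 GiB
# buffer divided by an assumed 80-byte per-polygon record.

MAX_BUFFER_BYTES = 2**31        # 2 GiB, a classic CUDA single-allocation limit
ASSUMED_BYTES_PER_POLY = 80     # hypothetical record size, not documented

max_polys = MAX_BUFFER_BYTES // ASSUMED_BYTES_PER_POLY
print(max_polys)                # 26843545, matching the limit seen in practice
```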