VRAM usage prediction

vortexvfx
Licensed Customer
Posts: 22
Joined: Mon Sep 07, 2020 7:51 pm

Hey all,
I'm a Houdini plugin user, but figured it might be better to ask about this in the general forum.

I'm trying to squeeze some heavy fluid sim data into specific GPU memory budgets to keep things in-core and quick, so I'd like to get a sense of how to ballpark-estimate VRAM usage for the different geometry types, and maybe cook up some adaptive level-of-detail setups.

Triangle meshes seem to be the heaviest; for those I've got two vectors per point: position and velocity.
What I'd like to know is: do triangle meshes with shared points get converted into discrete triangle data with unshared (duplicated) points on the GPU?
Does this data get compressed in VRAM, or are we talking a straight 3x32x2 bits per triangle vertex with velocity? I've already tested this, and it doesn't seem to matter if I pre-cast the velocity data to 16-bit; it still takes the same VRAM, so I assume the GPU side requires 32-bit data at all times.

(Edit: Okay, I've answered part of my own question there - I uniqued the points and it takes identical VRAM, so there's no triangle point-sharing. Even then, though, the maths don't quite stack up. A 22-million triangle mesh with velocity vectors should come to 22M triangles x 3 vertices x 6 floats x 4 bytes ~= 1510 MiB, but it seems to be clocking in closer to 6 GB. Where am I missing a factor of four here?)
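For what it's worth, here's the quick back-of-the-envelope calc I'm using to get that ~1510 MiB number (plain Python; the 6-floats-per-vertex layout is just my own assumption about the raw attribute data, not anything from the Octane docs):

# Rough VRAM ballpark for a triangle mesh with per-point velocity,
# assuming fully unshared (duplicated) vertices and 32-bit floats.
# Only counts raw attribute data - ignores whatever acceleration
# structures or internal copies Octane builds on top (my assumption).
def mesh_vram_estimate_mib(num_triangles,
                           floats_per_vertex=6,   # 3 position + 3 velocity
                           bytes_per_float=4):    # 32-bit
    vertices = num_triangles * 3                  # unshared points
    return vertices * floats_per_vertex * bytes_per_float / (1024 ** 2)

print(mesh_vram_estimate_mib(22_000_000))  # ~1510 MiB, vs ~6 GB observed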

Particles (rendered as spheres) are a more straightforward deal, I guess: position and velocity vectors plus a scalar radius, so 7 x 32-bit per particle. Again, though, I don't know if this is subject to any GPU-side compression.
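Same sort of ballpark for the particles, with the same caveat that 7 floats per particle is just my guess at the layout (the 50M particle count below is only an example figure):

def particle_vram_estimate_mib(num_particles,
                               floats_per_particle=7,  # pos(3) + vel(3) + radius(1)
                               bytes_per_float=4):     # 32-bit
    return num_particles * floats_per_particle * bytes_per_float / (1024 ** 2)

print(particle_vram_estimate_mib(50_000_000))  # ~1335 MiB of raw attribute data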

I have a fluid-interior bubble-density volume being rendered as a standard volume. That seems surprisingly light on VRAM for quite a significant voxel resolution, so I presume Octane must be using some kind of GPU-side sparse-grid setup... guessing it's converting to and using VDBs, presumably NanoVDB?

Are there any docs/resources around that describe these kinds of technical details?
GeForce RTX 4090 | R9 7950X3D | 192GB
Houdini | Windows 10