Anyone else getting some strangeness when two VDBs are plugged into geo group (and not even necessarily occupying the same space)?
I have a simple scene with a ground, explosion.vdb (scaled down to campfire size) and smoke2.vdb... geo group > placements > vdbs. If I have just the explosion (campfire), it sits in the middle of the frame. If I plug the smoke into the group, the campfire moves back and to the left (not sure which of x/z at the moment) by a couple of feet, and gets squooshed somewhat horizontally... weird.
OctaneRender™ Standalone 3.00 alpha 1
Forum rules
NOTE: The software in this forum is not 100% reliable; these are development builds and are meant for testing by experienced Octane users. If you are a new Octane user, we recommend using the current stable release from the 'Commercial Product News & Releases' forum.
- FrankPooleFloating
- Posts: 1669
- Joined: Thu Nov 29, 2012 3:48 pm
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles
- BorisGoreta
- Posts: 1413
- Joined: Fri Dec 07, 2012 6:45 pm
- Contact:
Would someone share some VDB files to test ?
19 x NVIDIA GTX http://www.borisgoreta.com
BorisGoreta wrote: Would someone share some VDB files to test ?
http://www.openvdb.org/download/
Regarding the kernel performance:
In simple scenes (like the chess set or Cornell boxes etc.), the old system was hard to beat. That is because samples of neighbouring pixels are very coherent (similar), which is what GPUs like and can process very fast. In those cases CUDA threads did almost the same work and didn't have to wait for each other. The problem is that in real production scenes the execution of CUDA threads diverges very quickly, causing CUDA threads to wait a long time for other CUDA threads to finish some work, i.e. twiddling thumbs.
The way the new kernels work is that we chopped the task (generating a sample) into smaller parts and have all threads work on the same part, increasing coherency. This helps a lot in more complex scenes. Of course, the issue now is that you need to store data between kernel calls, which means that memory usage goes up. There is really nothing we can do about that, BUT if you have a simple scene you usually have plenty of memory left, which means you can happily bump up the "parallel samples" without running out of memory.
In a nutshell:
In simple scenes where you've got plenty of VRAM left: Increase "parallel samples" to the maximum.
In complex scenes where VRAM is scarce: Set it to the highest value that doesn't run out of memory. It should usually still be faster than before, or at least render at roughly the same speed.
I will add this to the new features post.
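To make the "set parallel samples as high as your spare VRAM allows" advice concrete, here is a back-of-envelope sketch. The per-sample state size and the number of samples kept in flight per setting step are made-up assumptions for illustration, not Octane's actual internals:

```python
# Rough helper for picking a "parallel samples" value: the new kernels keep
# per-sample state alive between kernel calls, so memory grows with the
# setting. Both constants below are hypothetical illustration values.

STATE_BYTES_PER_SAMPLE = 512       # assumed per-sample state kept between kernels
SAMPLES_PER_STEP = 1_000_000       # assumed samples in flight per setting step

def max_parallel_samples(free_vram_bytes, cap=32):
    """Largest setting whose per-sample state still fits in the free VRAM."""
    per_step = STATE_BYTES_PER_SAMPLE * SAMPLES_PER_STEP
    return min(cap, free_vram_bytes // per_step)

# Simple scene with ~4 GiB of VRAM to spare -> crank it up:
print(max_parallel_samples(4 * 1024**3))   # 8
# Heavy scene with only ~1 GiB left -> stay low:
print(max_parallel_samples(1 * 1024**3))   # 2
```

The point of the sketch is only the shape of the tradeoff: the setting is bounded by free memory in simple scenes and by scene data in complex ones.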
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Jolbertoquini wrote: Hi Guys,
Excellent so far. I just made some tests with the volume, and I realized the light passes don't work; here's the file attached.
Sorry if it's still a WIP, just to let you know...
Best
JO
Hi Jo, I have looked at your scene, thank you for posting this. It seems that the scatter node is overlapping around 30 volumes. Please be aware that there is a limit of 4 overlapping volumes (see the release posts). If you change the scale of the volume you should see correct render passes. If not, please let me know.
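A quick way to sanity-check a scatter setup against the 4-overlapping-volumes limit mentioned above is a brute-force bounding-box test before rendering. This is a minimal sketch with made-up bounds data, not an Octane API:

```python
# Rough pre-flight check: estimate how many scattered volume instances
# overlap, to stay under the 4-overlapping-volumes limit. Boxes are
# (min_xyz, max_xyz) tuples; the instance data below is invented.

def boxes_intersect(a, b):
    """Axis-aligned bounding box intersection test."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def max_overlap_estimate(boxes):
    """Upper bound on overlap depth: for each box, count boxes touching it."""
    return max(sum(boxes_intersect(a, b) for b in boxes) for a in boxes)

# Three instances of the same VDB scattered along x:
boxes = [
    ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)),
    ((0.5, 0.0, 0.0), (1.5, 1.0, 1.0)),  # overlaps the first instance
    ((5.0, 0.0, 0.0), (6.0, 1.0, 1.0)),  # far away, no overlap
]
print(max_overlap_estimate(boxes))  # 2
```

Shrinking the volumes (as suggested in the reply above) or spreading the scatter points reduces this count.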
- aggiechase37
- Posts: 214
- Joined: Tue Jan 13, 2015 6:39 am
Don't know if this has already been asked, but for us "alpha testers", are there going to be any incentives for testing out this software? Perhaps a discount when v3 comes out? I'm a v2 owner with c4d plugin now.
Chase
Win 10 - Intel 4770 - 2x Nvidia 1070 - 32 gigs RAM - C4D r16
http://www.luxemediaproductions.com
- stratified
- Posts: 945
- Joined: Wed Aug 15, 2012 6:32 am
- Location: Auckland, New Zealand
Elvissuperstar007 wrote: Error: the function does not work (archived packages). I also saw these bands with fog, and if you move the scale slider, Octane freezes for 2-3 minutes.
abstrax wrote: What did you do to make it crash?
Elvissuperstar007 wrote: I made a couple of manipulations scattering in the sun.
Thanks for the detailed report! We could reproduce the banding in the fog and fix it (this was only happening with the direct light kernel).
cheers,
Thomas
miko3d wrote: Really liking the shading and playing with the phase... can't wait for those ramp controls.
Are there any plans to support motion blur for volumes (via 3 "vel" grids, for example)? How will this affect memory? Thanks again guys.
Hi Miko, very nice renders. Yes, we will support motion blur for volumes in two different, simultaneous ways: first, the typical motion blur you see with animated mesh geometry, and second, render-time advection with "vel" grids. The latter will most likely come first.
For a typical fire VDB, you've got a density grid and a temperature grid. These are independent floating point grids that share a very similar topology. We merge and load these into a single hierarchical grid to conserve as much space as we can. If you add motion blur from the 3 separate velocity grids, then each voxel goes from storing 2 floats to 5 floats, more than doubling the memory usage. We will nonetheless be implementing this, but if you don't use motion blur, it won't affect your memory usage (i.e. we don't load data onto GPUs that isn't used).
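The 2-floats-to-5-floats figure above is easy to turn into a memory estimate. A minimal sketch, with a made-up active-voxel count and ignoring the hierarchical tree's topology overhead:

```python
# Per-voxel payload for a fire VDB as described above: density + temperature
# are 2 floats; adding 3 velocity grids for motion blur makes 5 floats.
# The voxel count is an invented example figure.

FLOAT_BYTES = 4

def grid_bytes(active_voxels, floats_per_voxel):
    """Raw voxel payload, ignoring tree/topology overhead."""
    return active_voxels * floats_per_voxel * FLOAT_BYTES

voxels = 50_000_000                   # hypothetical mid-size fire sim
no_mb = grid_bytes(voxels, 2)         # density + temperature
with_mb = grid_bytes(voxels, 5)       # + vel.x, vel.y, vel.z

print(f"{no_mb / 2**20:.1f} MiB without motion blur")   # 381.5 MiB
print(f"{with_mb / 2**20:.1f} MiB with motion blur")    # 953.7 MiB
```

The ratio is 5/2 = 2.5x regardless of voxel count, which matches the "more than doubling" above.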
aggiechase37 wrote: Don't know if this has already been asked, but for us "alpha testers", are there going to be any incentives for testing out this software? Perhaps a discount when v3 comes out? I'm a v2 owner with c4d plugin now.
No, there won't be any incentives.