nejck wrote:
abayliss wrote:
Notiusweb wrote:
Hey Divasoft and Jolbertoquini -
You know the scene in the RTX Octane Bench where you could test and get a measure of the boost factor RTX will give you - do you now get that same boost factor for that same scene if you test it in this XB1 test build?
I have tested the Octane Bench 2019 scene in 2019.2 XB1 and I receive the same boost.
If I test one of my own scenes in 2019.2 XB1 I get almost no boost, or none at all, even if my scene has more triangles than the Octane Bench 2019 scene. (Note: I made sure the triangles weren't displacement triangles.) The Octane Bench 2019 scene has ~1.9 million triangles. I tested a scene with 1.4 million triangles and saw no speed increase, while a scene with 3.2 million triangles saw a 10% speed increase.
The only difference I could see is that the Octane Bench 2019 scene has thousands of meshes and a lot of lights, whereas my scene with 3.2 million triangles only had 6 meshes.
From what I understand, triangle count isn't that important per se. What matters more is what the rays are doing.
Example:
A studio scene is fairly easy to calculate because you get a couple of ray bounces and that's it. The rays even hit fairly similar surfaces/materials most of the time. RT cores can't help much here.
A large grass field has a ton of rays bouncing in all kinds of different directions, often along really complex paths. RT cores can help you out a lot here because they do most of the BVH traversal and intersection work, so the CUDA cores are free to do their shader math. It basically lets the CUDA cores do what CUDA cores do best and the RT cores do what RT cores do best.
A trickier scenario is an interior arch-viz shot. If the room is empty, I can't imagine the RT cores have much to do, so you'll see less of a speed gain. Put a ton of objects in there so that light bounces around different complex (high-triangle-count?) objects, and you'll probably see a fair increase.
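As a rough illustration of that division of labor, here is a toy cost model (my own sketch, not Octane's actual scheduler; the traversal fractions and the 10x RT-core factor are made-up numbers purely for illustration):

```python
# Toy cost model: without RT cores, the CUDA cores do BVH traversal
# and shading back to back; with RT cores, traversal runs on dedicated
# hardware concurrently with shading on the CUDA cores.

def rtx_speedup(traversal_frac, rt_core_factor=10.0):
    """Idealized boost from offloading BVH traversal to RT cores.

    traversal_frac: fraction of frame time spent on traversal
    rt_core_factor: assumed RT-core speed vs CUDA at traversal
    (both numbers are assumptions, not measurements)
    """
    without_rtx = 1.0  # normalized frame time, all work on CUDA cores
    # with RT cores, frame time is whichever unit finishes last
    with_rtx = max(1.0 - traversal_frac, traversal_frac / rt_core_factor)
    return without_rtx / with_rtx

# Studio scene: few coherent bounces, little traversal work -> small boost
print(rtx_speedup(0.1))
# Grass field: long incoherent paths, traversal-heavy -> larger boost
print(rtx_speedup(0.6))
```

The point of the sketch is just that the boost grows with how much of the frame was traversal work, not with raw triangle count.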
Mind you, that's just how I understand it. I could be wrong.
@Abayliss & Nejck - Fascinating finds and observations!
So here is my theory, where X = poly count per element.
So, imagine a single cube render scenario, 2 different render scenes:
1) 1 Cube is very high Poly - 1,000X
2) 1 Cube is very low Poly - 1X
But because it is a single cube being rendered, there is probably no RTX boost in either case. So mere poly count does not entail a higher RTX boost.
Then, a 20 cube render scenario:
1) 20 cubes very high poly - 20,000X
2) 20 cubes very low poly - 20X
In this case an RTX boost probably does occur, because there are a lot of cubes for the light rays to contend with. But the boost would be equal in both cases, because the number of ray interactions is the same.
Finally, this comparison:
1) 1 Cube very high poly - 1,000X
2) 20 Cubes very low poly - 20X
Now here, the scene with 1 cube renders faster in absolute terms, because there is less ray interactivity.
But the RTX boost itself, measured independently of render time, would be higher in the case of the 20 cubes, where there were more rays to contend with.
So you can have a situation where the RTX boost, measured independently of render time, is actually stronger in a lower-poly scenario than in a higher-poly one!
In other words, for RTX boost we cannot equate 'higher poly' with 'more complex'.
Rather, 'more complex' means more interactions between light rays and objects.
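To make the comparison concrete, here is a toy model of that idea (all numbers are hypothetical, purely to illustrate "interactions, not polys"; real renderers are obviously more subtle):

```python
# Toy model: by this theory, the boost tracks ray-object interactions,
# not triangle count. hits_per_path is a made-up stand-in.

def ray_interactions(num_objects, hits_per_path=2):
    # crude assumption: each light path hits a few surfaces, and more
    # objects means more distinct surfaces for paths to contend with
    return num_objects * hits_per_path

scenes = {
    "1 cube, very high poly (1,000X)": {"polys": 1000, "objects": 1},
    "20 cubes, very low poly (20X)":   {"polys": 20,   "objects": 20},
}

for name, s in scenes.items():
    print(name, "-> interactions:", ray_interactions(s["objects"]))
```

By this model the 20-cube scene has 50x fewer polys but 20x more ray interactions, so it is the one that would see the bigger RTX boost.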
Thoughts?
PS - POLARIZED LIGHTING!
WHERE THE 'F' ARE YOU!!!?....
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise