2 cards good, 3 cards bad...

Newtek Lightwave 3D (exporter developed by holocube, Integrated Plugin developed by juanjgon)


2 cards good, 3 cards bad...

Postby kopperdrake » Tue Mar 29, 2022 8:40 pm

I have a 64 GB system running Windows 10 with one 3080 Ti in it. I then have an external GPU rig (Amfeltec Corp) with two more 3080 Ti cards on it.

I'm rendering a pretty hefty scene and was finding the PC would stop rendering, either through a crash or by simply stalling. Raising the out-of-core values to a 15 GB RAM usage limit and 1024 MB of GPU headroom seemed to stop that, although I'm not really sure what those settings do. I assume they're reasonably self-explanatory: the RAM usage limit means Octane can take at most 15 GB from system RAM, and GPU headroom leaves 1024 MB free on each card for other things?
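As a sanity check on the headroom side, you can see how much VRAM Windows and the viewport are already using on each card before Octane starts. A minimal sketch that shells out to nvidia-smi (assumes the NVIDIA driver is installed and nvidia-smi is on the PATH; this is not part of Octane itself):

```python
import subprocess

# Query per-GPU memory via nvidia-smi (ships with the NVIDIA driver).
# Fields: index, name, total/used/free memory in MiB.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,memory.total,memory.used,memory.free",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    idx, name, total, used, free = [f.strip() for f in line.split(",")]
    print(f"GPU {idx} ({name}): {free} MiB free of {total} MiB "
          f"({used} MiB already in use by Windows/apps)")
```

On a display GPU the "used" figure is non-zero even before rendering starts, which is what the headroom setting is there to cover.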

Regardless, the scene was taking about 8.5 minutes to render a frame with all three GPUs active. But turning OFF the GPU in the PC actually had a frame rendering in 7 minutes!
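For scale: if the cards scaled linearly, going from two 3080 Tis at 7 minutes per frame to three identical cards should land around 4.7 minutes, so 8.5 minutes means the third card is actively hurting. A quick back-of-the-envelope check, assuming nothing beyond linear scaling across identical cards:

```python
# Assume each 3080 Ti contributes equally (linear scaling).
two_card_minutes = 7.0

# Per-frame work one card does per minute, with 2 cards active:
per_card_rate = (1 / two_card_minutes) / 2     # frames per minute per card

# Expected time with 3 cards, if nothing else bottlenecks:
three_card_minutes = 1 / (per_card_rate * 3)
print(f"Expected 3-card frame time: {three_card_minutes:.2f} min")  # ~4.67

# Observed: 8.5 min, roughly 1.8x slower than the linear estimate.
print(f"Observed/expected ratio: {8.5 / three_card_minutes:.2f}x")
```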

Any pointers would be really useful! I have quite a few more frames to render and a speed increase would be so helpful!

If it makes any difference, the third GPU on the GPU rig replaced a 2080 Ti, and I made no changes to studio drivers.
Attachments: 2cards.jpg (2 cards), 3cards.jpg (3 cards)
- web: http://www.albino-igil.co.uk - 2D/3D Design Studio -
- PC Spec: Intel i9-7940X @3.1GHz | 64Gb | 3 x GeForce RTX 3080 Ti & 1 x GeForce RTX 2080 Ti | Windows 10 Pro 64-bit -

Re: 2 cards good, 3 cards bad...

Postby kopperdrake » Wed Mar 30, 2022 11:01 am

Update:

I turned the monitors off, went to bed, and woke up to find the frame render time had dropped to 4m 40s - pretty much what I'd hoped for!

So what's happening? Why the initially high render times? Does simply 'not using' the control PC cause some sort of rebalancing?
- web: http://www.albino-igil.co.uk - 2D/3D Design Studio -
- PC Spec: Intel i9-7940X @3.1GHz | 64Gb | 3 x GeForce RTX 3080 Ti & 1 x GeForce RTX 2080 Ti | Windows 10 Pro 64-bit -

Re: 2 cards good, 3 cards bad...

Postby NemesisCGI » Thu Jun 02, 2022 10:49 am

The problem is the speed at which the external cards send and receive data. The bottleneck is the cable: it's only a x1 PCIe link.
Also, don't use those cards for tone mapping or denoising; that will slow things down too.
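To put rough numbers on the cable: PCIe 3.0 delivers about 985 MB/s of effective throughput per lane, so a x1 riser is roughly 16x slower than a x16 slot for every scene upload and out-of-core fetch. A back-of-the-envelope sketch, assuming PCIe 3.0 and a hypothetical 2 GB of data moved per frame (the payload size is made up for illustration):

```python
# Effective PCIe 3.0 throughput is ~985 MB/s per lane (8 GT/s with
# 128b/130b encoding, before higher-level protocol overhead).
LANE_MBPS = 985.0

def transfer_seconds(megabytes: float, lanes: int) -> float:
    """Time to move `megabytes` over a PCIe 3.0 link with `lanes` lanes."""
    return megabytes / (LANE_MBPS * lanes)

# Hypothetical 2 GB of out-of-core data paged per frame:
payload_mb = 2048
for lanes in (1, 4, 16):
    print(f"x{lanes:<2}: {transfer_seconds(payload_mb, lanes):6.2f} s "
          f"to move {payload_mb} MB")
# x1: ~2.08 s, x16: ~0.13 s -- and out-of-core data may be re-fetched
# many times per frame, so the x1 penalty multiplies.
```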
Win 7 pro 64bit GTX780

Re: 2 cards good, 3 cards bad...

Postby Lewis » Fri Jun 03, 2022 12:26 pm

Another thing it could be is out-of-core (OOC) RAM usage. Since your PC's GPU is connected to monitors, it has less free VRAM: Windows and a single monitor eat about 350-400 MB, and with higher resolutions, more monitors, and Layout's own OpenGL viewport, VRAM usage can easily reach 2 GB that is then not available to Octane. And the more out-of-core memory a render uses, the slower it gets.

The slowdown from increased OOC usage is very, very big.
I tested this a few years ago, and past 30-40% OOC usage a render can slow down 5x or more...

So in your case the GPUs in the external rig have more free VRAM and use less OOC memory, which is why two cards can end up faster than all three: once the display GPU is included, OOC traffic bottlenecks all the GPUs.


check my results with OOC (all tests on 4x 2080 Ti, denoiser ON; Test 1 uses Parallel samples=16 / Max tile samples=32, Tests 2-6 use Parallel samples=32 / Max tile samples=64):

Test | GPU headroom | Out of Core | Render time     | vs. Test 1
-----+--------------+-------------+-----------------+--------------
  1  |    300 MB    |      0 MB   | 1m 08s  (68 s)  | baseline
  2  |    300 MB    |    768 MB   | 1m 18s  (78 s)  | 15% slower
  3  |    512 MB    |   1024 MB   | 1m 32s  (92 s)  | 35% slower
  4  |   1024 MB    |   1792 MB   | 2m 04s (124 s)  | 82% slower
  5  |   2048 MB    |   2816 MB   | 2m 57s (177 s)  | 160% slower
  6  |   4096 MB    |   4989 MB   | 16m 03s (963 s) | 1315% slower
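The "slower" percentages are just (t / t_baseline - 1) x 100 against Test 1. If you want to reproduce the arithmetic on your own timings, a trivial check (the seconds below are the ones from the table):

```python
# Slowdown relative to the Test 1 baseline, from the table above.
baseline = 68  # seconds, Test 1
tests = {2: 78, 3: 92, 4: 124, 5: 177, 6: 963}

for test, seconds in tests.items():
    slowdown = (seconds / baseline - 1) * 100
    print(f"Test {test}: {seconds:4d} s -> {slowdown:5.0f}% slower")
# 15%, 35%, 82%, 160%, and ~1316% (rounding) -- matching the table.
```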
--
Lewis
http://www.ram-studio.hr
Skype - lewis3d
ICQ - 7128177

WS AMD TRPro 3955WX, 256GB RAM, Win10, 2 * RTX 4090, 1 * RTX 3090
RS1 i7 9800X, 64GB RAM, Win10, 3 * RTX 3090
RS2 i7 6850K, 64GB RAM, Win10, 2 * RTX 4090
