Hi samsue,
to recap:
PC Master: 2 x 2080 Ti + 1 x NVLink bridge
PC Slave: 4 x 2080 Ti + 2 x NVLink bridges
Is it correct?
Regards
Paride
CUDA errors
Forum rules
Please post only in English in this subforum. For alternate language discussion please go here http://render.otoy.com/forum/viewforum.php?f=18
- paride4331
- Posts: 3808
- Joined: Fri Sep 18, 2015 7:19 am
2 x Evga Titan X Hybrid / 3 x Evga RTX 2070 super Hybrid
Hi samsue,
first: when using NVLink, you have to enable Out-of-core.
second: keep in mind Nvidia removed 2 x 2 NVLink support (GeForce and Quadro), so it doesn't work on your slave.
In your case, you would unplug the second NVLink bridge, then enable Out-of-core during slave setup.
You will be able to use NVLink only with 2 GPUs on the master + 2 GPUs on the slave; you cannot use NVLink across all 6 GPUs.
If you really want to use NVLink, instead of out-of-core only, you should have a secondary 2-GPU slave.
I hope it helps.
Regards
Paride
paride4331 wrote:Hi samsue,
first: when using NVLink, you have to enable Out-of-core.
second: keep in mind Nvidia removed 2 x 2 NVLink support (GeForce and Quadro), so it doesn't work on your slave.
In your case, you would unplug the second NVLink bridge, then enable Out-of-core during slave setup.
You will be able to use NVLink only with 2 GPUs on the master + 2 GPUs on the slave; you cannot use NVLink across all 6 GPUs.
If you really want to use NVLink, instead of out-of-core only, you should have a secondary 2-GPU slave.
I hope it helps.
Regards
Paride
First: So if I have NVLink, it is mandatory to activate Out-of-core.
Second: I don't quite understand.
2.1 "In your case...": All right, I'll take out one of the NVLink bridges, but what difference will that make? I already did that during installation; Out-of-core is active on the slave (the 4 x 2080 Ti are two pairs connected with NVLink, but with no link between the pairs (I know, it's just to explain it in detail)).
2.2 "You will be able to...": Why not?
2.3 "If you really want...": I installed NVLink because I was supposed to be able to pool VRAM, adding together the VRAM of the video cards.
3: So what's the benefit of NVLink?
4: Do you recommend taking out all the NVLink bridges?
- paride4331
- Posts: 3808
- Joined: Fri Sep 18, 2015 7:19 am
Hi samsue,
First: So if I have NvLink, it is mandatory to activate Out of Core.
Yes
Second: I don't quite understand.
The NVLink option doesn't work in a PC with 4 x RTX 2080 Ti and 2 x NVLink bridges.
https://www.pugetsystems.com/labs/artic ... s-10-1688/
2.2 "You will be able to...": Why not?
The amount of memory available to OctaneRender is not the total VRAM of all your GPUs, but the amount of VRAM on your smallest GPU.
So when Octane needs more VRAM, it will distribute the geometry across NVLinked VRAM; in your case the slave will not be able to do that, because it will be limited by the smaller amount of VRAM on the non-NVLinked GPUs.
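The smallest-GPU rule above can be sketched in a few lines. This is an illustration of the rule as described in this thread, not Octane's actual allocator (which is more involved); the function name and the pooling model are assumptions for the example:

```python
def usable_vram(gpus_gb, nvlink_pairs=()):
    """Illustrative sketch: usable scene memory under the smallest-GPU rule.

    gpus_gb: per-GPU VRAM in GB.
    nvlink_pairs: index pairs of GPUs joined by an NVLink bridge;
    a bridged pair is treated as pooling its two cards' memory.
    """
    sizes = list(gpus_gb)
    for a, b in nvlink_pairs:
        # Both cards in a bridged pair can address the pair's combined VRAM.
        sizes[a] = sizes[b] = gpus_gb[a] + gpus_gb[b]
    # Every active GPU must hold the scene, so the smallest capacity wins.
    return min(sizes)

# Four 11 GB cards with no working NVLink: capped at 11 GB.
print(usable_vram([11, 11, 11, 11]))                 # 11
# One bridged pair of 11 GB cards: a 22 GB pool.
print(usable_vram([11, 11], nvlink_pairs=[(0, 1)]))  # 22
```

This also shows why mixing linked and unlinked cards in one render does not help: the unlinked 11 GB cards drag the minimum back down.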
3 : So what's the benefit of Nvlink?
https://docs.otoy.com/StandaloneH_STA/S ... NVLink.htm
4: Do you recommend taking out all the NVLink bridges?
To use NVLink with all 6 of your GPUs, you would need two slaves with 2 NVLinked GPUs each.
Otherwise you will use (current situation):
6 GPUs with no NVLink (when NVLink is not needed),
or 4 GPUs with NVLink (when NVLink is needed).
Regards
Paride
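The two options Paride lists can be framed as a simple decision on scene size. This is only a sketch of the trade-off discussed above; the function, thresholds, and wording are illustrative assumptions, not Octane settings:

```python
def choose_setup(scene_gb, gpu_gb=11.0):
    """Illustrative choice between the setups discussed in this thread.

    Option A: all 6 GPUs, no NVLink -> the scene must fit one 11 GB card,
              or spill to out-of-core (slower, backed by system RAM).
    Option B: 4 GPUs as two NVLinked pairs (one pair per machine) -> 22 GB pool.
    """
    if scene_gb <= gpu_gb:
        return "6 GPUs, no NVLink (scene fits in one card's VRAM)"
    if scene_gb <= 2 * gpu_gb:
        return "4 GPUs with NVLink (scene fits in the pooled VRAM)"
    return "enable out-of-core (scene exceeds even pooled VRAM)"

print(choose_setup(8))    # small scene: use all six cards
print(choose_setup(18))   # mid-size scene: the NVLink pairs pay off
print(choose_setup(30))   # huge scene: out-of-core regardless
```

The point of the sketch: NVLink only buys you anything in the band between one card's VRAM and the pair's pooled VRAM; below it you are wasting two GPUs, above it you need out-of-core anyway.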
So what you're basically telling me is that, because Windows/Nvidia doesn't support more than 2 video cards per PC using NVLink, Octane will always use 11 GB in my case, because with 4 video cards connected, only 2 of them are linked and the others stay at 11 GB.
So better deactivate all the nvlink and use Out of core and everything will be fine?
- paride4331
- Posts: 3808
- Joined: Fri Sep 18, 2015 7:19 am
Hi samsue,
yes, just like that.
And yes, in that case it's better to use Out-of-core.
Regards
Paride
CUDA error 700 on device 0: an illegal memory access was encountered
-> failed to bind device to current thread
device 0: failed to initialize context
device 0: failed to initialize render thread 000002536075ACA0
CUDA error 700 on device 1: an illegal memory access was encountered
-> failed to bind device to current thread
device 1: failed to initialize context
device 1: failed to initialize render thread 000002536075C6A0
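The numeric codes in the log above are standard CUDA runtime error values; a tiny lookup makes them readable. The code names and values come from CUDA's `cudaError_t` enum; the plain-English summaries are my own paraphrase, not Octane documentation:

```python
# Standard cudaError_t values seen in the log above.
CUDA_ERRORS = {
    2: ("cudaErrorMemoryAllocation",
        "out of memory: an allocation exceeded the free VRAM"),
    700: ("cudaErrorIllegalAddress",
          "illegal memory access: a kernel touched an invalid device pointer"),
}

def explain(code):
    """Return a readable description of a CUDA runtime error code."""
    name, desc = CUDA_ERRORS.get(code, ("unknown", "code not in this sketch"))
    return f"CUDA error {code} ({name}): {desc}"

print(explain(700))
print(explain(2))
```

Error 700 typically poisons the CUDA context, which is why the log shows the follow-on "failed to bind device" and "failed to initialize context" lines on the same device.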
- paride4331
- Posts: 3808
- Joined: Fri Sep 18, 2015 7:19 am
Hi,
did you unpair Nvlink GPUs in device panel and enable out-of-core in your PC slave?
Regards
Paride
An increase in errors after I installed the new version:
OctaneRender Enterprise 2020.1.3 (8010300)
Can not initialize inference lib
CUDA error 2 on device 1: out of memory
-> failed to allocate device memory
device 1: failed to allocate up-sampling buffer
device 1: invalid memory pointers passed
denoiserThread0 : Exited with a error. Restart required
CUDA error 700 on device 0: an illegal memory access was encountered
-> failed to destroy CUDA event
CUDA error 700 on device 0: an illegal memory access was encountered
-> failed to destroy CUDA event