IIRC I saw a graphic on the NVIDIA website of the NVLink bridge and GPU card placement... the cards were 4 slots apart...
But then again... does NVLink make sense with a 24GB card?
OTOH: I never thought 30 years ago that I could ever fill my first 100 MByte SCSI drive (o;
Dunno how Octane handles GPU memory with multiple GPUs... whether the same scene is loaded onto each GPU with NVLink or not...
But it seems my old 1300W PSU should be enough, based on this online calculator: https://outervision.com/power-supply-calculator
RTX3080 RTX2080TI OctaneBench comparison
There are different types of NVLink bridges: 3-slot and 4-slot (for Turing).
What I am using is the Quadro one, which is 2-slot.
Yes, there is much less reason to use NVLink on 24GB cards; my Quadros' NVLink is always turned off.
Octane's VRAM management for NVLink works like this: with one big scene, part of the scene sits in card A (of the pair) and another part in card B, and I feel some part is actually in both A and B,
so it is not really double the capacity, and it fails very easily. It is still not mature and not really practical for real work for now; out-of-core is a bit better...
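(Not Octane's actual code, just a minimal CUDA sketch of the mechanism this kind of memory pooling relies on: once peer access is enabled, a kernel running on one GPU can directly read a buffer that physically lives on the other GPU.)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: runs on GPU 0 but reads a buffer resident on GPU 1.
__global__ void sumPeerBuffer(const float* peerData, float* out, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; ++i)   // single thread, just to show the cross-GPU read
        s += peerData[i];
    *out = s;
}

int main()
{
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can GPU 0 map GPU 1's memory?
    if (!canAccess) {
        printf("no peer-to-peer path between GPU 0 and GPU 1\n");
        return 1;
    }

    const int n = 1 << 20;

    // "Part B" of the scene lives on GPU 1.
    cudaSetDevice(1);
    float* partB = nullptr;
    cudaMalloc(&partB, n * sizeof(float));
    cudaMemset(partB, 0, n * sizeof(float));

    // GPU 0 enables access to GPU 1's memory and reads it directly from a kernel.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    float* result = nullptr;
    cudaMalloc(&result, sizeof(float));
    sumPeerBuffer<<<1, 1>>>(partB, result, n);
    cudaDeviceSynchronize();
    printf("kernel on GPU 0 summed %d floats resident on GPU 1\n", n);

    cudaFree(result);
    cudaSetDevice(1);
    cudaFree(partB);
    return 0;
}
```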
The PSU will be an issue. Actually, no current PSU can provide enough power (in my case) when all cards are fully utilized at the same time,
because the input power plug has a maximum limit of 13 A, so there is literally a limit of around 2,850 W for one single PSU (on the European standard),
but in the US it will be about 1,600 W, though I have not seen any single PSU above 1,600 W (I mean for PCs; blade servers are a different thing)...
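Rough numbers behind that ceiling (my back-of-the-envelope, assuming roughly 220 V mains on a 13 A plug in Europe, and a 120 V / 15 A household circuit derated to 80% for continuous load in the US):

```latex
P_{\mathrm{EU}} \approx 220\,\mathrm{V} \times 13\,\mathrm{A} \approx 2860\,\mathrm{W}
\qquad
P_{\mathrm{US}} \approx 120\,\mathrm{V} \times 15\,\mathrm{A} \times 0.8 \approx 1440\,\mathrm{W}
```

So the wall outlet, not the PSU itself, becomes the hard limit once you stack enough GPUs in one box.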
Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
Is there a huge impact from the number of available PCIe lanes per GPU? Or only for OOC?
CPUs with enough PCIe lanes are very expensive but come with core counts >= 16... and consumer-grade CPUs come with a maximum of 24 lanes... though PCIe 4.0.
OTOH, in my case running Blender most cores do nothing... even with FlipFluids there is no speed improvement from using 14 more threads...
But for a start I will stick with the current mobo/CPU and use 1 RTX 3090... as it should have about the same performance as 2 × RTX 2080 Ti but comes with slightly more VRAM... and draws less power (o;
Debian 10.2 on AMD 1950X, 64GB RAM, 2 * RTX2080Ti
Octane Blender Studio 2020.1-XB3-21.3
Blender 2.83 E_Cycles
For each card, x8 lane width should be good enough for most cases, and you may not even notice the difference from x16.
The lane width only affects how long it takes to load your scene data into the card, i.e. the time before the card starts working on the scene.
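As a rough illustration (my own numbers, assuming PCIe 3.0 at about 0.985 GB/s per lane and a 5 GB scene upload, ignoring protocol overhead):

```latex
t_{\times 16} \approx \frac{5\,\mathrm{GB}}{16 \times 0.985\,\mathrm{GB/s}} \approx 0.32\,\mathrm{s}
\qquad
t_{\times 8} \approx \frac{5\,\mathrm{GB}}{8 \times 0.985\,\mathrm{GB/s}} \approx 0.63\,\mathrm{s}
```

So the difference is a fraction of a second per upload, and it only matters if the scene data has to be re-sent to the card often.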
Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
So it only has an impact when rendering an animation... which is mostly what I do...
But I'm guessing we're talking milliseconds here between frames...
Debian 10.2 on AMD 1950X, 64GB RAM, 2 * RTX2080Ti
Octane Blender Studio 2020.1-XB3-21.3
Blender 2.83 E_Cycles
It seems x8 and x16 lanes of the same generation have only a 1%-2% difference (maximum) in performance, so nobody will really notice it, I believe.
Puget Systems has tested this:
https://www.pugetsystems.com/labs/artic ... ring-1030/
Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
Ah, found the picture... it was on Gigabyte's site, not on NVIDIA's...
https://www.gigabyte.com/Graphics-Card/ ... BO-24GD#kf
Here they mention only 4-slot spacing for the NVLink...
Debian 10.2 on AMD 1950X, 64GB RAM, 2 * RTX2080Ti
Octane Blender Studio 2020.1-XB3-21.3
Blender 2.83 E_Cycles
davorin wrote: I never thought 30 years ago that I could ever fill my first 100 MByte SCSI drive (o;
That would have been extremely expensive at that time, haha.
Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
davorin wrote: Ah, found the picture... it was on Gigabyte's site, not on NVIDIA's... Here they mention only 4-slot spacing for the NVLink...
I just hope there is a 2-slot type, like the Quadros'...
The blower type is made specifically for GPU workstations and GPU servers, which are always very tight on space...
Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti
Just FYI: in the future NVLink may work without the bridge, since NVIDIA has already stated that going forward it will be up to devs to handle NVLink themselves, without going through the NVIDIA driver.
For now I did not put a bridge on my Quadros, but in Octane they can still be recognized (on the condition that SLI is already activated by the first pair of 2080 Tis).
I guess performance will be different with or without the bridge...
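(A quick, hypothetical way to see that difference would be to time a peer-to-peer copy between the two cards with and without the bridge installed; the sketch below is generic CUDA, not anything Octane-specific. Over NVLink the measured bandwidth should be far higher than over PCIe.)

```cuda
#include <chrono>
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = size_t(1) << 30;      // copy 1 GiB from GPU 0 to GPU 1

    cudaSetDevice(0);
    void* src = nullptr;
    cudaMalloc(&src, bytes);

    cudaSetDevice(1);
    void* dst = nullptr;
    cudaMalloc(&dst, bytes);

    cudaSetDevice(0);
    // Warm-up copy so one-time driver setup doesn't skew the timing.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();

    auto t0 = std::chrono::steady_clock::now();
    cudaMemcpyPeer(dst, 1, src, 0, bytes);     // routed over NVLink if available, else PCIe
    cudaDeviceSynchronize();
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    printf("peer copy: %.1f GB/s\n", (bytes / 1e9) / seconds);

    cudaFree(src);
    cudaSetDevice(1);
    cudaFree(dst);
    return 0;
}
```

On a 2080 Ti pair I'd expect roughly tens of GB/s with the bridge and low double digits over PCIe 3.0 x16, but that's an expectation, not a measured number.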
Supermicro 4028GR TR2|Intel xeon E5 2697 V3| windows 10| revit 2019 |Titan V+ Quadro GV100+RTX 2080 Ti