Hi, I've been looking into a multi-GPU setup... (cheap and small-scale compared to what's actually in this topic, but fine for me).
The aim is to bring some more longevity to the old socket 2011 3930K system... It is also to get the GPUs out of the main case to keep things a bit cooler.
Currently I have 2 GTX 680s on the mainboard and 2 GTX 660 Tis running externally on two PCIe x1-to-x16 (size) adaptors, direct off 2x PCIe x1 slots on the MB.
That's going fine, but I cannot add another GPU, as the system runs into "lack of resources (Code 12)" issues.
(I have an additional PCIe x8 slot on the MB that won't fire up a fifth GPU.)
Short story.
I did look at the Supermicro X9DR-style MBs but felt it would take too much money (new socket 2011 Xeon CPUs, RAM, and MB) to implement for my purposes.
I bought a cheap PCIe x1-to-3x x1 splitter/switch; it did not work, or at least I couldn't get it to recognise attached GPUs. It would show the upstream and downstream ports, but not the GPUs when attached.
I've just taken a gamble and bought a second-hand Amfeltec 4x GPU PCIe splitter http://amfeltec.com/products/flexible-x ... -oriented/ and am hopeful of some level of success.
Has anyone here used this with a Gigabyte X79 UD5 MB similar to mine? Any tips/tricks to be aware of? I'm only aiming at 8 GPUs max.
I believe there is a GPU limit in Windows, but at 8 I think I should be OK; AFAIK it's the MB that I'm expecting to cause issues?
I'm also suspicious that my OS (Win 10 Pro) is a bit bloated and may need a purge. Any tips, short of doing a full clean install?
In the long run it would probably be less stress to simply buy two GTX 1080s and put them in the main case, which would probably be faster than 8 older 600-series GPUs, but at the moment I have the older cards and am keen on having a play with alternatives.
Best Practices For Building A Multiple GPU System
- teknofreek
- Posts: 92
- Joined: Mon Apr 12, 2010 12:27 am
Win 10 Pro 64bit, AMD 1950X 3.6 GHz, Gigabyte AorusG7 X399 MB, 1x GTX 680 1x GTX 770 mainboard, 4x GTX660ti external, 32 GB onboard mem
Nvidia drivers 388.59
Notiusweb wrote:
Oh man, getting even more excited now imagining the mind of Tutor might play a role in this somehow!
smicha wrote:
Tutor, I hope you are fine lately. I need your help. Please drop a note you are available.
Smicha, can you just answer one question... Will it involve water? Or blood!? Wait... Are you going to be connected to the rig, as in, you ARE the rig!!!?
I put all of my heart there


3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
smicha wrote:
Tutor, I hope you are fine lately. I need your help. Please drop a note you are available.
Sorry for the delays, but the workload has been extremely heavy. I have a little breathing room today, but will have more breathing room from March 22nd-26th.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
Tutor wrote:
Sorry for the delays, but the workload has been extremely heavy. I have a little breathing room today, but will have more breathing room from March 22nd-26th.
Could you please do me a favor and measure how high the riser I PMed you raises a GPU? I mean the riser height without the black PCIe socket, if a GPU goes to its very bottom.
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
teknofreek wrote:
Hi, I've been trying to look into a multi-GPU setup... (cheap and small-scale compared to what is actually in this topic, but fine for me).
To bring some more longevity into the old socket 2011 3930K system... It is also to get the GPUs out of the main case to keep things a bit cooler.
Currently I have 2 GTX 680s on the mainboard and 2 GTX 660 Tis running externally on two PCIe x1-to-x16 (size) adaptors, direct off 2x PCIe x1 slots on the MB.
Going fine, but I cannot add another GPU, as the system runs into "lack of resources (Code 12)" issues.
(I have an additional PCIe x8 slot on the MB that won't fire up a fifth GPU.)
Short story: I did look at the Supermicro X9DR-style MBs but felt it would take too much money (new socket 2011 Xeon CPUs, RAM, and MB) to implement for my purposes.
"Cheap" is my middle name. I buy my Supermicro X9DRXs from SuperBiiz [ https://www.superbiiz.com/detail.php?name=MB-X9DRXFB ] for $440 (USD) or less each. Recently I purchased a pair of used E5-4650 V1 ES QBEDs for $90 each ($180 total) from a system repair shop through eBay, but see http://www.ebay.com/bhp/intel-xeon-e5-4650 . The E5-4650s are seen by single- and dual-CPU systems (like the X9DRX) as E5-2680 V1 eight-core CPUs.*/ Further, those E5-4650s are seen by my Gigabyte X79-UP4 systems and my 8-GPU Tyan server as E5-2680 V1 eight-core CPUs.
teknofreek wrote:
I bought a cheap PCIe x1-to-3x x1 splitter/switch; it did not work, or at least I couldn't get it to recognise attached GPUs. It would show the upstream and downstream ports, but not the GPUs when attached.
I've just taken a gamble and bought a second-hand Amfeltec 4x GPU PCIe splitter http://amfeltec.com/products/flexible-x ... -oriented/ and am hopeful of some level of success.
Has anyone here used this with a Gigabyte X79 UD5 MB similar to mine? Any tips/tricks to be aware of? I'm only aiming at 8 GPUs max.
I believe there is a GPU limit in Windows, but at 8 I think I should be OK; AFAIK it's the MB that I'm expecting to cause issues?
I'm also suspicious that my OS (Win 10 Pro) is a bit bloated and may need a purge. Any tips, short of doing a full clean install? ...
When a system's IO space is exhausted, splitters are no cure. Keep in mind, however, that different GPUs have different IO space requirements. There's a tendency for (1) Ti cards (like the 780 Tis and 980 Tis, and maybe even the 1080 Tis) to be IO-space hogs, and (2) newer GPUs with greater functionality to use more IO space than earlier GPUs. I don't have any Gigabyte X79 UD5 systems, but I do have six Gigabyte X79-UP4 systems. Under Windows, my UP4s can run 6-7 GPUs at best. Running Linux tends to increase the number of GPUs supported. Generally, a Windows system will top out at about 13 GPUs on any currently available motherboard.
*/ The only performance difference between my E5-4650 V1 ES QBEDs and the E5-2680 V1 eight-cores is that the E5-4650 V1 ES QBEDs TurboBoost higher (3600 MHz for one core per CPU) than the E5-2680 V1 (3500 MHz for one core per CPU). Moreover, I use E5-4650 V1 ES QBEDs in my two Supermicro SuperServer SYS-8047R-7RFT+ systems [quad-CPU systems, in each of which I'm currently running 6 GPUs using a combination of 2 x16-to-x16 risers and 1 Amfeltec 4-way splitter] - https://www.superbiiz.com/detail.php?name=SY-847R7FT . Those 32 [4x8] real CPU cores and 32 hemi-demi-semi CPU cores [virtual threads] are used for CPU rendering (which can be done simultaneously with GPU rendering).
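The roughly-13-GPU ceiling under Windows lines up with some back-of-the-envelope arithmetic on legacy I/O port space. A minimal sketch, assuming the usual x86 numbers (64 KB of port I/O space in total, a 4 KB minimum granularity for the I/O window a PCI-to-PCI bridge forwards downstream); the count of windows the motherboard's own devices consume is a rough guess, not a measured figure:

```python
# Sketch: why GPU count tops out around a dozen on x86 Windows boxes.
# x86 exposes only 64 KB of legacy I/O port space, and a PCI-to-PCI
# bridge (every PCIe slot and splitter port sits behind one) forwards
# I/O only in 4 KB-aligned windows - so each GPU behind a bridge eats
# a full 4 KB window even though its I/O BAR is typically tiny.

TOTAL_IO_SPACE = 64 * 1024   # bytes of x86 port I/O space
BRIDGE_WINDOW = 4 * 1024     # minimum I/O window per bridge

max_windows = TOTAL_IO_SPACE // BRIDGE_WINDOW
print(max_windows)           # 16 windows in total

# Chipset devices (USB, SATA, serial, ...) claim a few windows first;
# three is a rough, board-dependent guess.
reserved = 3
print(max_windows - reserved)  # 13 - matching the observed ceiling
```

Linux can often bring up a GPU even when its I/O BAR goes unassigned, which is one plausible reason it tends to support more GPUs than Windows on the same board, as noted above.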
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
smicha wrote:
Could you please do me a favor and measure how high the riser I PMed you raises a GPU? I mean the riser height without the black PCIe socket, if a GPU goes to its very bottom.
Bottom line - the riser raises the GPU by about 22/32 of an inch above the top of the motherboard's x8 PCIe slot. It's a tad over 9/32 of an inch from the bottom male x8 part of the riser (the part that isn't sitting within the motherboard's PCIe slot) to the bottom of the black x16 PCIe female slot on the riser into which the GPU sits. It's a tad under 22/32 of an inch from that same point to the top of the riser's x16 PCIe female slot. The riser's x16 PCIe female slot (just the black part) measures about 12/32 of an inch from top to bottom.
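For anyone working from metric drawings, those 32nds of an inch convert as below (a quick sketch; the "tad over/under" qualifiers mean the figures are approximate anyway):

```python
# Convert the riser measurements (given in 32nds of an inch) to mm.
from fractions import Fraction

MM_PER_INCH = 25.4

measurements = {
    "GPU lift above top of x8 slot": Fraction(22, 32),        # ~17.5 mm
    "riser base to bottom of x16 socket": Fraction(9, 32),    # ~7.1 mm
    "riser base to top of x16 socket": Fraction(22, 32),      # ~17.5 mm
    "x16 socket height (black part only)": Fraction(12, 32),  # ~9.5 mm
}

for name, inches in measurements.items():
    print(f"{name}: {inches} in = {float(inches) * MM_PER_INCH:.1f} mm")
```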
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
You are the man, Tutor. I am sorry if I repeat myself 
Is my drawing correct so?

3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
- teknofreek
- Posts: 92
- Joined: Mon Apr 12, 2010 12:27 am
Thanks for the reply, Tutor. I've just received the Amfeltec splitter, so I'll see how that works.
Unfortunately I need to run a Wacom tablet and a 3Dconnexion SpaceNavigator, so maybe they're also taking some IO (I don't really know, though).
I might try removing the MSI 680 Lightnings from the main MB. They are not branded Ti, but maybe they're hogging resources, as most of this problem came about when I replaced the 660 Tis with them.
Anyway, thanks again.
Win 10 Pro 64bit, AMD 1950X 3.6 GHz, Gigabyte AorusG7 X399 MB, 1x GTX 680 1x GTX 770 mainboard, 4x GTX660ti external, 32 GB onboard mem
Nvidia drivers 388.59
Notiusweb wrote:
Milanm, do you go out of C4D, or do you ever export a scene package to the Standalone? Sometimes for me I found the renders were compressed for some reason when exported out to the Standalone (lower memory) and they rendered faster; the only thing is that you then have to wait for the Standalone to CPU-compile each frame (also, for me, I did this with Daz Studio, not C4D). Thinking maybe it could be something to try out.
Whatever gets the job done. Most of the time I render straight from C4D, but I always make a separate scene optimized for rendering. I bake animation to keyframes, bake deformers to point cache, and delete all the unnecessary materials, textures, etc. I also merge thousands of objects (bolts, rivets, etc.) into a single mesh. It really depends on how you organize your project. It IS always faster to render from Standalone, because our 3D apps are designed for CPU rendering, so some things are still single-threaded. When we have deforming geometry, Octane needs to update it in every frame; plus, the actual rig in your 3D app could be very slow. When you export to SA you practically 'bake' everything, and there's no rigging to slow it down - that's why it's faster. For characters, one thing you could try is to work with unsubdivided meshes and then let Octane subdivide them. In C4D you can do that with an ObjectTag, using the Subdivision options. Or maybe export unsubdivided to SA so you 'bake' fewer polygons. I hope that makes sense.
With all that said, here's a case where export to Standalone didn't make sense. It's a simple rig I made for a recent project that generates geometry only in front of the camera, to speed up compilation time. The C4D scene is ~400 KB, but the ORBX export is 11 GB or more. Normally, for the camera move, quality, and animation length I needed, this would require ~100 million polygons animated in every (god damn) frame. 8K displacement was not good enough for 1080p closeups, so I needed real geometry. With this rig I could move the camera infinitely far and as close as I want while updating only about 3-4 million polygons (maximum!) in every frame. Notice how there are more polygons near the camera.
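The idea behind that rig - spend polygons only where the camera can see them - can be sketched generically. This is an illustration only, not the actual C4D setup; the function name and thresholds are made up:

```python
# Illustration of camera-distance-based detail: full subdivision up
# close, one level dropped for each doubling of distance past a
# threshold. (Hypothetical numbers - not the actual rig.)
import math

def subdivision_level(distance: float, max_level: int = 4,
                      full_detail_within: float = 10.0) -> int:
    """Return a subdivision level for an object `distance` units away."""
    if distance <= full_detail_within:
        return max_level
    drop = int(math.log2(distance / full_detail_within))
    return max(0, max_level - drop)

for d in (5.0, 25.0, 80.0, 500.0):
    print(d, subdivision_level(d))  # detail falls off with distance
```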
Compilation and upload times should not be underestimated. Sometimes it's better to have more machines and licenses. Running two instances of Octane on the same machine, with different GPUs assigned to each, can result in 2x faster compilation (depending on the CPU, of course). That also works with C4D for me.
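Octane normally handles device selection in its own preferences, but the same per-instance split can also be done at the OS level with CUDA's standard CUDA_VISIBLE_DEVICES environment variable. A generic sketch - the commented launch lines use a placeholder command name, not a real Octane binary:

```shell
# Each process only sees the GPUs listed in CUDA_VISIBLE_DEVICES, so
# two render instances can split four cards between them, e.g.:
#
#   CUDA_VISIBLE_DEVICES=0,1 ./octane scene_a.orbx &   # placeholder command
#   CUDA_VISIBLE_DEVICES=2,3 ./octane scene_b.orbx &   # placeholder command
#
# The masking mechanism itself, demonstrated with echo:
(export CUDA_VISIBLE_DEVICES=0,1; echo "instance A sees GPUs: $CUDA_VISIBLE_DEVICES")
(export CUDA_VISIBLE_DEVICES=2,3; echo "instance B sees GPUs: $CUDA_VISIBLE_DEVICES")
```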
Regards
Milan
Colorist / VFX artist / Motion Designer
macOS - Windows 7 - Cinema 4D R19.068 - GTX1070TI - GTX780
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
smicha wrote:
You are the man, Tutor. I am sorry if I repeat myself. Is my drawing correct so?
Yes - your drawing is correct. Your risers are the same size as mine, and many of my 90+ risers of that type came from the very same source that you referenced last week.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.