As far as I know, your earlier configuration was the most powerful Nvidia/Octane 2 GPU-rendering, maxed-out Windows system on the planet! If you have a build diary of all that you did to get it there, consider reviewing it now and trying to replicate it as closely as possible. As to the future, you can probably go even further in a purely performance-based sense by making a bold move - be bold enough to swap in Linux for Windows on your main compute rig(s). If you have two or more rigs, you could keep the Linux rig(s) purely as render slave(s) tied to a system running whichever OS you most prefer, as in, e.g., Octane Network Rendering (see https://www.youtube.com/watch?v=Vvf_-toAOU8) or Deadline (https://deadline.thinkboxsoftware.com). There's usually more than one way to get to a chosen spot. Keep me posted on your most excellent journey.
Notiusweb wrote: ...
I am maxed out as far as GPU speed goes then.
If I want faster, I have to get Volta!
But that might pull me down to 7 or 6 GPUs... which maybe nets me the same, or only slightly better, performance after all... F#$%!
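For the render-slave route above, a simple first check is that the master can actually reach each slave over the network. Below is a minimal Python sketch of that check; the hostnames and port number are placeholders rather than real Octane or Deadline defaults, so substitute whatever your own network-render setup listens on.

```python
# Minimal sketch: confirm the master can reach each render slave over TCP
# before launching a distributed render. Hostnames and port are placeholders,
# not real Octane/Deadline defaults -- substitute your own.
import socket

SLAVES = ["linux-slave-01", "linux-slave-02"]  # hypothetical hostnames
RENDER_PORT = 48000                            # placeholder port number

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in SLAVES:
    state = "up" if is_reachable(host, RENDER_PORT) else "unreachable"
    print(f"{host}: {state}")
```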
Best Practices For Building A Multiple GPU System
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
Because I have 180+ GPU processors in 16 tweaked/multi-OS systems - Character limit prevents detailed stats.
Master Tutor,
I just realized something that is different, despite my workstation being identical in its setup...
"Time passage"
So what does that mean? I have updated Nvidia's drivers since I last used the 13-GPU setup. One reason is that the Pascals were not supported by the older drivers...
Another is that Octane, as well as the other 3D apps I am using, now relies on the later drivers to run.
I am thinking this could indeed be a factor here, as in: driver support for later cards = more demand from the driver per card.
I wonder if there is a way to outmaneuver that...
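One way to pin that down is to log the exact driver version and per-card state every time you benchmark, so a driver bump between runs can't slip by unnoticed. A minimal sketch, assuming the pynvml bindings for Nvidia's NVML library are installed (pip install nvidia-ml-py):

```python
# Minimal sketch: record the installed driver version and per-GPU memory use,
# so driver changes between benchmark runs can be ruled in or out.
import pynvml

def as_str(value):
    # Older pynvml releases return bytes, newer ones return str.
    return value.decode() if isinstance(value, bytes) else value

pynvml.nvmlInit()
try:
    print("Driver:", as_str(pynvml.nvmlSystemGetDriverVersion()))
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = as_str(pynvml.nvmlDeviceGetName(handle))
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.used / 2**20:.0f} MiB in use")
finally:
    pynvml.nvmlShutdown()
```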

Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
Notiusweb,
We appear to be of one mind. I have Fermi, Kepler, Maxwell 1, Maxwell 2, and Pascal GPUs. So far, increases in GPU driver functionality/featurefulness have had the effect of lowering the number of GPUs manageable within a system's IO space. Newer GPU drivers that support more featureful GPUs can take you to places that your old drivers can't reach. Octane, and some of the other speedy GPU renderers that I use, such as FurryBall, TheaRender, and Redshift, get incrementally "upgraded" to let us use the functionality of more recent GPUs. Notably, Otoy is dropping support for Fermi in Octane 4 because those GPUs lack the capabilities that newer hardware, and software written for it, provide. I still have about 30 Fermi GPUs and intend to use them all until they've all died. That means I have to limit them to the uses for which they were designed, and accordingly rely on older drivers that support their more limited featureset. Luckily (and unluckily) for me, in a limited sense, not all 3D renderers are upgraded in the same way or at the same pace. So I can fully understand that you might consider retrograding; I do it myself. Luckily, in the case of my Fermis, there's also out-of-core rendering (with a speed hit).
It may be unfortunate that you have to segregate your GPUs by family so as not to drag down the functionality/featurefulness of your newer GPUs. Over time that can defeat the very reason for doing it, because the newer, faster GPUs may need to be separated from the older, slower ones so that their speed and features don't take a noticeable hit. Thus, in the end, you might wind up right where you are now - unless your practice from the start was to max out each system with GPUs of a single family.
More GPU functionality/featurefulness comes at a cost: more/enhanced inputs are necessary for more/enhanced outputs, which means more data to be managed. There has to be better coordination, which I seriously doubt will occur any time soon, between OS and software creators and system hardware and GPU manufacturers, so that IO space does not get diminished in the name of "Progress." I remember that in the olden (CUDA) days (fall of 2013), zz1000 got 18 GPUs running on one system. I remain committed to finding out what maximum is possible today using the same motherboard zz1000 used, but with newer GPUs.
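If you do end up segregating by family, the grouping can be scripted rather than tracked by hand. A minimal sketch, again assuming pynvml; it buckets cards by CUDA compute capability, whose major version maps roughly onto family (2.x Fermi, 3.x Kepler, 5.x Maxwell, 6.x Pascal):

```python
# Minimal sketch: bucket installed GPUs by CUDA compute capability, whose
# major version maps roughly onto family (2=Fermi, 3=Kepler, 5=Maxwell,
# 6=Pascal). Assumes pynvml is installed (pip install nvidia-ml-py).
from collections import defaultdict
import pynvml

FAMILY_BY_MAJOR = {2: "Fermi", 3: "Kepler", 5: "Maxwell", 6: "Pascal"}

def as_str(value):
    # Older pynvml releases return bytes, newer ones return str.
    return value.decode() if isinstance(value, bytes) else value

pynvml.nvmlInit()
try:
    groups = defaultdict(list)
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
        family = FAMILY_BY_MAJOR.get(major, f"compute {major}.{minor}")
        groups[family].append(as_str(pynvml.nvmlDeviceGetName(handle)))
    for family, names in sorted(groups.items()):
        print(f"{family}: {', '.join(names)}")
finally:
    pynvml.nvmlShutdown()
```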
Because I have 180+ GPU processors in 16 tweaked/multi-OS systems - Character limit prevents detailed stats.
Who wants to try? ))
ComBox Technology - immersion cooling system for GPU mining
https://www.youtube.com/watch?v=lYKl4DTQUCM
AMD Threadripper 1950X / 64GB RAM / RTX 3080 Ti + RTX 2070 / Samsung SSD 870 EVO 500GB
LightWave 3D
LOL... my first thought is fire hazard, with liquid and electricity together; my second thought is, I can't get more than 8 GPUs anyway, and they don't get that hot anymore since Pascal, so why bother.
promity wrote: Who wants to try? ))
ComBox Technology - immersion cooling system for GPU mining
https://www.youtube.com/watch?v=lYKl4DTQUCM
Titan Z's got really hot since they were dual-GPU cards with one fan, so watercooling those was helpful. But I haven't seen my external Titan X Pascals on air go above 78°C, and that is rare even at full blast...
The Titan Xp, BTW, is a champ... man, that card runs about 5 degrees cooler than the Pascal Titan X, which was itself about 5 degrees cooler than the regular old-school Titan X. I never see the Xp go above 71-72°C.
Okay, but if there is some serious mega-overclock going on, then I would be very interested!...
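For anyone wanting to verify temperature figures like these under load, here is a minimal polling sketch, assuming pynvml is installed (pip install nvidia-ml-py); it samples each card's core temperature once per second:

```python
# Minimal sketch: sample each card's core temperature once per second, to
# verify readings like the 71-78 C figures above while rendering.
import time
import pynvml

def as_str(value):
    # Older pynvml releases return bytes, newer ones return str.
    return value.decode() if isinstance(value, bytes) else value

pynvml.nvmlInit()
try:
    for _ in range(10):  # ten one-second samples; adjust as needed
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            temp = pynvml.nvmlDeviceGetTemperature(
                handle, pynvml.NVML_TEMPERATURE_GPU)
            name = as_str(pynvml.nvmlDeviceGetName(handle))
            print(f"GPU {i} ({name}): {temp} C")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```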
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
- FrankPooleFloating
- Posts: 1669
- Joined: Thu Nov 29, 2012 3:48 pm
Guys, I have a window of opportunity to upgrade to Win10Pro before a poop-storm of multiple large projects starts... Has the GPU memory problem been fixed yet for folks running Win10? (It hogs way more VRAM than Win7x64Pro did.)
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles
Nope. Still only 9GB usable out of 11GB on a 1080 Ti.
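An easy way to check the reservation on your own cards is to compare total against free VRAM at idle, using the stock nvidia-smi tool that ships with the driver. A minimal sketch wrapping it from Python:

```python
# Minimal sketch: compare total vs. free VRAM per card to see how much the
# OS/driver (WDDM on Win10) reserves. Uses the stock nvidia-smi CLI.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,memory.total,memory.free",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
# Each output line: index, name, total MiB, free MiB.
for line in result.stdout.strip().splitlines():
    print(line)
```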
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
- FrankPooleFloating
- Posts: 1669
- Joined: Thu Nov 29, 2012 3:48 pm
Crap! I can't remember - was that just for the main GPU with monitors attached, or for all of them?
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles
All of them, sadly.

3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
- FrankPooleFloating
- Posts: 1669
- Joined: Thu Nov 29, 2012 3:48 pm
Well, I have decided to go for it and upgrade to Win10Pro regardless... However, I hit a snag when upgrading... I get the error where I don't have enough System Reserved Partition space available (only 7MB free - Win10 needs 15MB). So after tons of research and finding a shitload of conflicting information, I have determined that I am only going to trust my pals here on the Octane forum for a definitive answer on whether I can extend the System Reserved Partition in Disk Management, as opposed to buying some dodgy partition software that may or may not even be needed... or even work, or that could completely screw me and my C:\ drive...
But one thing that is throwing me off in particular is the first and second paragraphs here: https://docs.microsoft.com/en-us/window ... sic-volume -- I just don't know what to make of them...
And even though I have backed up as much as possible, and could reinstall Win7 and all my apps if needed (or just buy a new copy of Win10Pro on a thumb drive and install anew from that), I really have little time for that nightmare scenario, and I'm pretty hesitant to pull the trigger on even trying to expand the SRP unless I have a little more confidence than I currently do...
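Before touching anything, it's worth at least confirming the partition layout from the command line. A minimal, read-only sketch that feeds diskpart a listing script from Python (run from an elevated prompt; "select disk 0" is an assumption, so check the "list disk" output first):

```python
# Minimal read-only sketch: list disks and partitions via diskpart to confirm
# the layout before attempting anything. Run from an elevated prompt.
# "select disk 0" is an assumption -- check the "list disk" output first.
import subprocess

DISKPART_COMMANDS = "list disk\nselect disk 0\nlist partition\n"

result = subprocess.run(
    ["diskpart"],
    input=DISKPART_COMMANDS,   # diskpart reads commands from stdin
    capture_output=True,
    text=True,
)
print(result.stdout)
```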
The screenshot below is from my own system, not some jpg I grabbed. Please advise. Thanks guys! *I'm really hoping for answers from Tutor or smicha here, if possible*
[Attachment: Disk Management screenshot]
Last edited by FrankPooleFloating on Fri Nov 02, 2018 6:48 pm, edited 1 time in total.
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles