Tutor,
Again - thank you so much for the information you've provided. I did some research and the Lian Li A75 seemed a great choice for my next build, but unfortunately I must have two PSUs for the watercooled 7x 980 Ti, and the only case choice left for the X10DRX is the D8000. Since we (with Yam) are going with 7 cards (not 8 - the X10DRX has its first 8 slots open for 16x cards), the Asus Z10PE D8 is currently our target, together with the Phanteks Enthoo Primo with dual-PSU support.
My further questions: what is the maximum number of GPUs (or rather graphics cards) you have managed to connect to a single PSU? From what I can see, I'll be able to connect 2 Xeons and 4x 980 Ti to a 1600W Supernova T2 and the other 3x 980 Ti to a 1000W T2 or P2. According to my calculations, 7x 980 Ti should draw no more than 1400W during rendering - any thoughts about it?
Best Practices For Building A Multiple GPU System
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
smicha wrote: Tutor,
Again - thank you so much for the information you've provided. I did some research and the Lian Li A75 seemed a great choice for my next build, but unfortunately I must have two PSUs for the watercooled 7x 980 Ti, and the only case choice left for the X10DRX is the D8000. Since we (with Yam) are going with 7 cards (not 8 - the X10DRX has its first 8 slots open for 16x cards), the Asus Z10PE D8 is currently our target, together with the Phanteks Enthoo Primo with dual-PSU support.
My further questions: what is the maximum number of GPUs (or rather graphics cards) you have managed to connect to a single PSU? From what I can see, I'll be able to connect 2 Xeons and 4x 980 Ti to a 1600W Supernova T2 and the other 3x 980 Ti to a 1000W T2 or P2. According to my calculations, 7x 980 Ti should draw no more than 1400W during rendering - any thoughts about it?
Hello Smicha,
I'm assuming (1) that your reference to "a single PSU" means a 1600W PSU, (2) that your reference to the maximum number of GPUs refers to GPUs with a TDP of 250W and a PSU that is also powering the motherboard, RAM, etc., and (3) that this relates to running only Octane, unless otherwise specified. With that understanding, my answer is, leaving little to no room for overclocking, a maximum of five (5) for a dual-CPU system and six (6) for a single-CPU system [please keep in mind, however, that all of my systems, including my MacPros, each also have a 450W FSP Booster X PSU, which I closely manage to stay below 1750W per circuit, so that five could become six and that six could become seven, but leaving no headroom for overclocking]. So, I agree that one will "be able to connect [and render in Octane with] 2 Xeons and 4x 980 Ti from a 1600W Supernova T2 and [power the] other 3x 980 Ti from [a single] 1000W T2 or P2 [PSU], [leaving headroom for overclocking]." Using 0.8 as a factor that accounts for Octane's rendering load means that "7x 980 Ti should draw no more than 1400W during rendering," assuming that Octane's power efficiency hasn't changed significantly from the final V2 to the V3 alpha.
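For what it's worth, here's a rough back-of-the-envelope version of that budget as a minimal Python sketch; the 250W TDP, the 0.8 rendering-load factor, and the CPU/board allowance are illustrative assumptions, not measured figures:

```python
# Rough PSU budgeting sketch - illustrative assumptions, not measurements.
GPU_TDP_W = 250        # assumed TDP per 980 Ti
RENDER_FACTOR = 0.8    # assumed fraction of TDP drawn while rendering in Octane
CPU_BOARD_W = 400      # rough allowance for 2 Xeons, RAM, board, drives, fans

def render_draw_w(num_gpus, include_cpu_board=False):
    """Estimated draw while rendering, in watts."""
    gpus = num_gpus * GPU_TDP_W * RENDER_FACTOR
    return gpus + (CPU_BOARD_W if include_cpu_board else 0)

print(render_draw_w(7))                          # ~1400 W for 7 cards alone
print(render_draw_w(4, include_cpu_board=True))  # 4 cards + CPUs on the 1600W T2
print(render_draw_w(3))                          # remaining 3 cards on the 1000W PSU
```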
P.S. It may take me a couple of days to run timed tests on the scene/heavy-compile file that Yam asked me to run, because I haven't yet installed or tried the V3 alpha and I'll be running the test file between my current projects. I do look forward to testing the V3 alpha with that complex scene file.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
Thank you again so much, Tutor. I added some thoughts on Yam's post.
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
Tutor, I'd be interested to see your rig running OR3. I don't remember if you have any cards connected at PCIe 1x, but a couple of us with 1x connections are getting crashes on scenes where the PC actually freezes and reboots. I have one scene in particular that I sent to developer Abstrax where I can reproduce it over and over again, even at different "time-'til-freeze" speeds, depending on resolution (higher res = faster crash). And we don't see the crash when using a 16x connection. I also found that all it takes is for one card to be at 1x, and I get the crash on the scenes that 'provoke' it. I was wondering, if crashes occur at 1x and not at 16x, would they occur at 4x and 8x? You know how I would test? I would send you the scene and you would run it at 4x.
And the key thing is, it never ever occurs when using V2, or in general when using the PC. SeekerFinder posted a comment in the development build that made me think that the move to CUDA 7 may be behind it, as it is new CUDA code for Octane.
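(For anyone who wants to double-check which link width each card is actually negotiating before running one of the 'provoking' scenes, a minimal sketch along these lines will print it per GPU - it assumes nvidia-smi from the NVIDIA driver is installed and on the PATH:)

```python
# Minimal sketch: print current vs. maximum PCIe link width for each GPU.
# Assumes the NVIDIA driver's nvidia-smi tool is installed and on the PATH.
import subprocess

QUERY = "index,name,pcie.link.width.current,pcie.link.width.max"
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=" + QUERY, "--format=csv,noheader"],
    text=True,
)
for line in out.strip().splitlines():
    idx, name, cur, max_ = [field.strip() for field in line.split(",")]
    note = "  <-- narrow (1x) link" if cur == "1" else ""
    print(f"GPU {idx} ({name}): running x{cur} of x{max_}{note}")
```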
As is written by Mother Goose:
"bits and bytes bit the GPU, that made the build go rat-tat-too...
and when coders code with new code new, the code could-go then coo-coo-coo" .

Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
Notius, could You provide the test scene (or part of it) that is most likely causing the crash on Your system?
I have a 1x splitter (backplate from Amfeltec), 4-GPU oriented, & then 8x risers + a motherboard that runs 16x, 8x & 1x native, plus Thunderbolt devices.. - I'd love to test things out on something that is more than likely to crash..
(feel free to drop a PM if You would not like to share here).
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
Since Glimpse, who is eminently qualified, has offered to help, I'm assuming that you won't need my assistance now. By the way, my first significant contact with V3 was yesterday, and I am also assuming that Glimpse currently has a lot more experience with V3 than I do. If any of my assumptions aren't correct, please let me know. My only comment is that while the move to CUDA 7 might be playing a role, if the problem isn't occurring when you use V2 and CUDA 7, then a deficiency in V3's present coding would appear to me to be the more likely culprit.
Notiusweb wrote: Tutor, I'd be interested to see your rig running OR3. I don't remember if you have any cards connected at PCIe 1x, but a couple of us with 1x connections are getting crashes on scenes where the PC actually freezes and reboots. I have one scene in particular that I sent to developer Abstrax where I can reproduce it over and over again, even at different "time-'til-freeze" speeds, depending on resolution (higher res = faster crash). And we don't see the crash when using a 16x connection. I also found that all it takes is for one card to be at 1x, and I get the crash on the scenes that 'provoke' it. I was wondering, if crashes occur at 1x and not at 16x, would they occur at 4x and 8x? You know how I would test? I would send you the scene and you would run it at 4x.
And the key thing is, it never ever occurs when using V2, or in general when using the PC. SeekerFinder posted a comment in the development build that made me think that the move to CUDA 7 may be behind it, as it is new CUDA code for Octane.
As is written by Mother Goose:
"bits and bytes bit the GPU, that made the build go rat-tat-too...
and when coders code with new code new, the code could-go then coo-coo-coo" .
glimpse wrote: Notius, could You provide the test scene (or part of it) that is most likely causing the crash on Your system?
I have a 1x splitter (backplate from Amfeltec), 4-GPU oriented, & then 8x risers + a motherboard that runs 16x, 8x & 1x native, plus Thunderbolt devices.. - I'd love to test things out on something that is more than likely to crash..
(feel free to drop a PM if You would not like to share here).
Thanks Glimpse for fielding this ball.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
The more information we have, Tutor, the better we can pin down the real causes.. we all have different systems & mileage may vary depending on those different builds.. (like with the recent discovery about scene compilation).. It's impossible to guess from one or two runs on similar systems what is happening... You can only start guessing (blind shooting), but.. as we get more & more data, some patterns start to repeat & it gets easier to notice them. So please, if You have some time, share Your insights on this topic as well.
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
glimpse wrote: The more information we have, Tutor, the better we can pin down the real causes.. we all have different systems & mileage may vary depending on those different builds.. (like with the recent discovery about scene compilation).. It's impossible to guess from one or two runs on similar systems what is happening... You can only start guessing (blind shooting), but.. as we get more & more data, some patterns start to repeat & it gets easier to notice them. So please, if You have some time, share Your insights on this topic as well.
All excellent observations. I'll join in, trying to replicate such usage on a MacPro, on a couple of Supermicros (a 32-core server [ X9QR7-TF+/X9QRi-F+ ] and a 16-core workstation [ X9DRX ]), and on a Gigabyte or EVGA X79 system, but I won't be able to start until tomorrow. It may take a few days to fit in tests on four of my systems.
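So the runs from our very different builds stay comparable, here is a minimal sketch of how one might log each test; the file name, columns, and example entry are purely illustrative assumptions, not an agreed format:

```python
# Minimal sketch for logging comparable Octane V3 test runs across systems.
# File name, columns, and the example values are illustrative assumptions.
import csv
import datetime
import pathlib

LOG = pathlib.Path("octane_v3_link_width_tests.csv")
FIELDS = ["timestamp", "system", "gpu", "link_width", "resolution",
          "octane_version", "result"]  # result: "ok", "freeze", or "reboot"

def log_run(system, gpu, link_width, resolution, octane_version, result):
    """Append one test run to the shared CSV log."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
            "system": system, "gpu": gpu, "link_width": link_width,
            "resolution": resolution, "octane_version": octane_version,
            "result": result,
        })

# Hypothetical example entry:
log_run("X9DRX workstation", "GTX 980 Ti", "x4", "1920x1080", "3.0 alpha", "ok")
```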
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
Tutor,
If this is the right place to ask: what is the distance between the top of the Lian-Li D8000 and the top edge of an X9DRX? I am curious whether there is room only for fans at the top or if by any chance an extra-thin radiator would fit.
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
- Tutor
- Posts: 531
- Joined: Tue Nov 20, 2012 2:57 pm
- Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute
Smicha - surely, you jest. If you aren't allowed to ask me a question about cooling a multiple-GPU system (to be built for Yam) in an Off Topic forum thread focused on the Best Practices For Building A Multiple GPU System (and that's surely mainly all that we've been discussing most recently), then we're just going to have to break some rules, crack some eggs, and, if we must, confront some stiffs and straighten them out further.
smicha wrote: Tutor,
If this is the right place to ask:
Now it's obvious to me that you've been spying on me, because I just took a break from working on installing a water-cooling radiator beneath the inner fans at the top of one of my Lian-Li D8000s. While on that break I reviewed my messages, and what awaits me? Your post asking me whether there's room only for fans at the top or if by any chance an extra-thin radiator would fit. Well, at least through your spy glass the radiator at the top of that Lian-Li prevented you from seeing the fans. In any event, as to the two fans nearest the case's door, the radiator sits so high above the motherboard that only length, width, and fitting points matter - thickness will not likely be a concern. As to the distance from the top of the case to the motherboard (depicted in the picture below), it's 2 inches.
smicha wrote: ... what is the distance between the top of the Lian-Li D8000 and the top edge of an X9DRX? I am curious if there is room only for fans at the top or if by any chance an extra-thin radiator would fit?
P.S. Your application requires this kit - http://www.ebay.com/itm/Lian-Li-D8000-2 ... SwR0JUNAGN - which is sold separately, and the D8000 can take two kits. There's room for two long radiators and two sets of paired 120mm fans. That's where I bought my kits before today, and it's also where I ordered another kit earlier today for another Corsair Hydro Series™ H100i GTX Extreme Performance Water/Liquid CPU Cooler - 240mm (from http://www.newegg.com/Product/Product.a ... 6835181090) to cool the CPUs in that system. Hey, bet you didn't see me placing that order. But in any event, stop that spying on me or reading my mind! I know that many think that it's a short and easy read, my having only one page of content there. Nevertheless, mind reading is just spying without external devices. It's much more forbidden than your asking me proper questions in a proper manner in the proper thread in the proper forum.
Always at your service.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.