8 GPU, how?

Generic forum to discuss Octane Render, post ideas and suggest improvements.
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
BKEicholtz
Licensed Customer
Posts: 80
Joined: Tue Dec 07, 2010 11:14 pm

Have you checked out Trenton Systems?

http://www.trentonsystems.com/pci-expre ... -backplane

It is not an inexpensive option, but seems to have a lot of computing potential. I like the idea of custom building the system, but my current research has not determined whether these GPU backplanes will work well with a desktop chassis. They are designed for server chassis.
ASRock Extreme11 | i7 3970x 5.0 Ghz | 32gb RAM | (4) EVGA SC Titans
glimpse
Licensed Customer
Posts: 3740
Joined: Wed Jan 26, 2011 2:17 pm

BKEicholtz wrote:..but my current research has not determined whether these GPU backplanes will work well with a desktop chassis. They are designed for server chassis.
They do work - someone on this forum already has one of these (maybe an older model).
There's a thread; just search the forum =)
Tutor
Licensed Customer
Posts: 531
Joined: Tue Nov 20, 2012 2:57 pm
Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

Snoopy wrote:How can you have 8 GPUs? What kind of external case/controller do you need?

My motherboard allows 4 GPUs, how can I add 4 more?

Is there anybody here using a pci-e expansion chassis?

Cubix? Magma? StarTech? Netstor?

Thanks in advance!

Snoopy
I'm looking into those same questions you've posed, for my thread here: [ http://forums.macrumors.com/showthread.php?t=1333421 ]. What I've found out is that, if you don't want to go the multiple 4 GPU system route,

(1) then going the 8-slot PCI-e server route costs about the same as going the external 8-slot PCI-e chassis route, because the chassis is in the $7-8K range while the Tyan 8-GPU server costs about $5.2K from SabrePC [ https://www.sabrepc.com/p-3748-tyan-b70 ... ebone.aspx ] and elsewhere. Add enough RAM from SuperBiiz to make the server useful, add an HD or two, and get some low-powered Xeons from eoptionsonline [ https://www.eoptionsonline.com/default.aspx ], and you'll be good to go for a little less than the price of the external chassis route - except for one thing. There's a gorilla in the corner constantly shouting "Feed Me." All of these solutions are designed for Teslas, which have a much lower TDP (230-235W). The same applies to Quadro cards - their TDP is much lower than that of the comparable GTX card. In fact, if you're talking about overclocking GTX Titans or Fermis, you're talking about roughly twice that amount, and the power supplies that come with the external chassis or the server are insufficient to feed those top-of-the-line beasts whose names begin with GTX, especially when overclocked. The flip side is that you could, and may have to, underclock the GTX card to roughly 70-85% of stock to more closely mimic the TDP of the Tesla/Quadro comparator. So it really boils down to following the wise advice posted above by gabrielefx - don't put all of your Titan eggs in one basket unless you have a sound backup plan in place. You may have to hack another 1500-1600W power supply or two into the picture, because even the server's 2000-2400W power supply is insufficient to run 8 overclocked top-of-the-line GTX cards. Overclocked top-of-the-line GTX cards can each consume 350-550W, depending on the card [ see, e.g., [ http://www.tweaktown.com/reviews/5402/e ... dex22.html ] and note that 8 x 350 = 2,800W while 8 x 550 = 4,400W - ouch! ]; there's a rough power-budget sketch at the end of this post. So to avoid burning down the structure where your 8 Titans are all housed snugly together, call your electrician and tell him or her that you're Santa (you're about to pay for their Christmas), because you've read the fine print in the caution for the 2,000-2,400W PSUs in the servers:
"Power Supply Type ERP1U
Efficiency PFC
Serviceability Hot-swap
Input Range 100-127V AC (Low-Line Voltage) / 200-240V AC (High-Line Voltage)
Frequency 60 Hertz
Output Watts 2400W [(2+1) 2400W @200-240V], Max. 12Vdc@ 199.6A / 3000W [3 x1000W @100-127V], Max. 12Vdc@ 249.6A; Note: Only one AC inlet allowed per circuit breaker"
[ http://www.tyan.com/product_SKU_spec.as ... =600000188 ]. If you're running at 120V, that 8-slot PCI-e server is pulling from all three internally housed PSU modules (1000W each at low-line voltage, per the spec above). Depending on how you muster enough power to run 8 OC'ed Titans, you could have at least four AC inlets drawing about 1kW each, which means putting your system on four separate circuit breakers - i.e., if you choose to add a 1500-1600W power supply or two to the mix to reach the grail of 4kW for 8 OC'ed Titans or Fermis;

(2) then going the four-slot external chassis route has similar cracks in the roadway, because those chassis are not designed for the power requirements of top-of-the-line GTX cards either - they too are designed for the much lower TDP of the Tesla and Quadro, which don't burn power pumping out a steady video stream. Those chassis come with a 1000W PSU, or if you're somewhat lucky a 1250W PSU, so you'll have to swap out the factory PSU in the 4-slot chassis and replace it with a much beefier one. For what it will cost you to buy a single 4-slot PCI-e chassis and install a powerful enough PSU in it, you could have bought one of these [ http://www.provantage.com/supermicro-sy ... UP9347.htm ] or, better yet, one of these [ https://www.superbiiz.com/detail.php?name=SY-74GRTPT ], and for just a few hundred more dollars added the finishing touches to have another complete system. Most importantly, for running up to 4 power-hungry cards you'd get more power and an equally powerful backup PSU to boot. You'd also get an additional 8x slot for a half-length card such as the GT 640 4GB that you can use for system interactivity. You could also easily add one or two FSP booster PSUs from Newegg [ http://www.newegg.com/Product/Product.a ... 6817104054 ] for additional overclocking overhead; and/or

(3) then you might, nevertheless, adopt the wait-and-see approach, as I'm doing. Maybe, if and when Gnif lets us know exactly how to give Titans a Tesla/Quadro personality makeover like the ones contemplated here [ http://www.eevblog.com/forum/projects/h ... nterparts/ ], yielding the following:

A GTX Titan becoming a Tesla K20X;
A GTX 690 becoming a Tesla K10 or Grid K2 or Quadro K5000;
A GTX 680 becoming a Tesla K10 or Grid K2 or Quadro K5000;
A GTX 670 becoming a Tesla K10 or Grid K2 or Quadro K5000;
A GTX 650 becoming an NVIDIA GRID K1 or Quadro K600 or Quadro K2000;
A GT 640 becoming an NVIDIA GRID K1 or Quadro K600 or Quadro K2000;
A GTS 450 becoming an NVIDIA GRID K1 or Quadro K600 or Quadro K2000 (pre 2011 model); and
A GTX 580 becoming a Tesla M2090,

you could then more easily get that Tesla/Quadro TDP (but, e.g., at the loss of video display if that's a feature [or a non-feature, if you prefer] of the Quadro or Tesla card yours is now emulating). You'd also gain the ability to connect two systems with InfiniBand cards from Mellanox and use the RDMA for GPUDirect feature of Teslas to make the separate systems operate as if they were one for CUDA-based rendering, over a two-way InfiniBand network. In Titans, certain Tesla features have been disabled (i.e., the Titan drivers don't activate them). RDMA for GPUDirect is a Tesla feature that enables a direct path of communication between a Tesla GPU and another peer device on the PCI-e bus of your computer, without CPU intervention; device drivers can enable this functionality with a wide range of hardware devices. For instance, the Tesla card can be allowed to communicate directly with your Mercury Accelsior card or with another Tesla card without getting your Xeon or i7 involved. Titans do not support RDMA for GPUDirect, but from Gnif's work it may soon be possible to give Titans a personality makeover by changing their identities to those of Teslas; then you could take advantage of RDMA for GPUDirect. This personality makeover has already been done to GTX 680s. Of course, to connect more than two systems [or in my case, if I did not want to aggregate them in pairs, eight systems], you'll need to buy an InfiniBand switch, which is very expensive unless you're eBay lucky - but all of your systems would then be perceived as one for rendering. You'd just need an open PCI-e slot in each system for the InfiniBand card.
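As a side note, if you want to see how your cards actually hang off the PCI-e tree before counting on any peer-to-peer / GPUDirect-style transfers, newer NVIDIA drivers can print a topology matrix through nvidia-smi. A minimal Python sketch to dump it (assuming nvidia-smi is on your PATH and your driver is recent enough to have the topo subcommand) might look like this:

```python
import subprocess

def gpu_topology():
    """Dump the GPU/PCIe topology matrix reported by the NVIDIA driver.

    Assumes nvidia-smi is on the PATH and the installed driver is new
    enough to support the 'topo' subcommand; older drivers will simply
    fail and we return the error text instead.
    """
    try:
        result = subprocess.run(
            ["nvidia-smi", "topo", "-m"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout
    except (OSError, subprocess.CalledProcessError) as exc:
        return "Could not query topology: {}".format(exc)

if __name__ == "__main__":
    # Entries such as PIX / PXB / PHB / SYS show how each pair of GPUs
    # is linked across the PCIe tree, which is what constrains any
    # peer-to-peer traffic between them.
    print(gpu_topology())
```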

Addition - Here's a comment posted on Newegg's site by an owner of two GTX 580s that makes my point about overclocking demanding lots of watts and about the 4- and 8-slot chassis being underpowered when it comes to top-of-the-line OC'ed GTX cards: "I bought 2 Classified Ultra's with stock cooling. After installing and seeing that these cards have a Dual Bios feature (2nd Bios for serious overclocking), I bought 2 waterblocks and turned them into Hydro Coppers. I was able to crank these high enough to score a 13,085 in 3Dmark11. According to EVGA, an overclock on the core to 1070 does not even come close to how these cards can perform. I was running a 1200 watt Silverstone Strider PSU, and the system would shut down due to lack of power before I could O.C. these cards more. This is an amazing card, especially in SLI." That's with just two overclocked cards on a 1200W PSU, and we're talking about four or eight Titans. So, even granting that GTX Titans are more electrically efficient than GTX 580s, it's not just the number of slots that poses an obstacle to building a large Titan render farm - the amount of electricity you can safely tap also plays a key role. Having more open PCI-e slots to fill just increases the challenge.
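And here is the rough power-budget sketch I promised above - nothing more than the arithmetic spelled out, using the per-card ranges quoted in this post (350-550W for an overclocked top-of-the-line GTX card, roughly 235W for a Tesla-class card); the base-system overhead figure is just my own assumption:

```python
import math

# Back-of-the-envelope power budget for a multi-GPU render box.
# Per-card figures are the rough ranges quoted in the post above,
# not measurements; BASE_SYSTEM_W is an assumed CPU/board/drive overhead.
WATTS_PER_OC_GTX = (350, 550)   # overclocked top-of-the-line GTX, low/high
WATTS_PER_TESLA = 235           # Tesla/Quadro-class TDP mentioned above
BASE_SYSTEM_W = 400             # assumed overhead for CPU, board, drives, fans

def total_watts(cards, per_card_w):
    """Total draw for `cards` GPUs plus the assumed base-system overhead."""
    return cards * per_card_w + BASE_SYSTEM_W

def circuits_needed(watts, volts=120, amps=20, derate=0.8):
    """Household circuits needed, loading each breaker to only 80% (derate)
    of its rating, as is usual practice for a continuous load."""
    usable = volts * amps * derate      # e.g. 120 * 20 * 0.8 = 1920 W
    return math.ceil(watts / usable)

if __name__ == "__main__":
    for cards in (4, 8):
        lo = total_watts(cards, WATTS_PER_OC_GTX[0])
        hi = total_watts(cards, WATTS_PER_OC_GTX[1])
        print("{} OC'd GTX cards: {}-{} W, {} x 20A/120V circuits at the high end"
              .format(cards, lo, hi, circuits_needed(hi)))
        print("{} Tesla-class cards: {} W".format(cards, total_watts(cards, WATTS_PER_TESLA)))
```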
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
machineyes
Licensed Customer
Posts: 12
Joined: Sun Jul 14, 2013 3:28 am

Thank you all for this very insightful information! Does anyone happen to know whether Gen3 vs. Gen2 backplanes make a big difference?
16 core xeon Z820
128 gigs of ram
Maximus 2.0 (quadro k-5000 + Tesla K20) + 4 gtx titans in a cubix gen3
software used with Octane: Houdini & C4D
UnCommonGrafx
Licensed Customer
Posts: 199
Joined: Wed Mar 13, 2013 9:14 pm

I agree on the second case, particularly since Otoy is going to make Octane network-able.
Two workstations, one as the main and one that can serve as a backup, and you ought to be where you want.
i7-4770K, 32gb ram, windows 8.1, GTX Titan and gt 620 for display
tonycho
Licensed Customer
Posts: 391
Joined: Mon Jun 07, 2010 3:03 am
Location: Surabaya - Indonesia

How about the TurboBox Pro from Netstor?


http://www.netstor.com.tw/_03/03_02.php?MTEx
http://www.antoni3D.com
Win 7 64 |GTX 680 4 Gb (Display) | 3 x GTX690 4Gb | Intel i5 |Corsair 16GB
artdude12
Licensed Customer
Posts: 85
Joined: Mon Feb 11, 2013 5:16 pm
Location: Chicago

Zimshady, just curious about your temps with the cards that close together. I have the same mobo and have two Titans spaced one slot apart. My temps have been between 60-80°C when rendering. I'm considering water cooling.
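In case it helps with comparing, this is roughly what I use to log temps while a render runs - a small sketch built on the pynvml bindings (I'm assuming the nvidia-ml-py / pynvml package is installed; power readings may not be exposed on every GeForce card):

```python
import time
import pynvml  # from the nvidia-ml-py / pynvml package

def log_temps(interval_s=5, samples=12):
    """Print temperature (and power draw, where available) for every GPU
    every few seconds, e.g. while Octane is rendering."""
    pynvml.nvmlInit()
    try:
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
                   for i in range(pynvml.nvmlDeviceGetCount())]
        for _ in range(samples):
            readings = []
            for i, h in enumerate(handles):
                temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
                try:
                    watts = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0  # mW -> W
                    readings.append("GPU{}: {}C {:.0f}W".format(i, temp, watts))
                except pynvml.NVMLError:
                    # some GeForce cards don't report power draw
                    readings.append("GPU{}: {}C".format(i, temp))
            print(" | ".join(readings))
            time.sleep(interval_s)
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    log_temps()
```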
Thanks.
i7 4930K_Asus_RampageIV-Extreme_64GB_3-Titan_NVIDIA-337.88_Win7-64
BorisGoreta
Licensed Customer
Posts: 1413
Joined: Fri Dec 07, 2012 6:45 pm

I have 3 TITANs in the main case and 4 TITANs in the Netstor expansion box, so it's not 8 but 7 cards. I really recommend the Netstor box because it works without any problems at all. Windows recognizes all 7 cards and scaling is linear: Octane works exactly 7 times faster than with a single TITAN.

This is on Windows 8.1.
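If anyone wants to confirm that the driver really sees every card (internal plus expansion box) before launching Octane, a quick listing through the pynvml bindings does it - this assumes the nvidia-ml-py / pynvml package is installed:

```python
import pynvml  # from the nvidia-ml-py / pynvml package

# List every GPU the driver exposes, with name and total memory, so the
# cards in the expansion box can be checked against the internal ones.
pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):      # older pynvml versions return bytes
            name = name.decode()
        total_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
        print("GPU {}: {} ({:.0f} GB)".format(i, name, total_gb))
finally:
    pynvml.nvmlShutdown()
```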
(Attachments: PB285039.jpg, PB285040.jpg)
gabrielefx
Licensed Customer
Posts: 1701
Joined: Wed Sep 28, 2011 2:00 pm

Nice rig!
But for $2200 I'd buy a second workstation:
TJ11
1500w PSU
cpu
24gb
2 ssd
mb
Windows 7 Pro
quad Titan Kepler 6GB + quad Titan X Pascal 12GB + quad GTX1080 8GB + dual GTX1080Ti 11GB
karanis
Licensed Customer
Posts: 79
Joined: Sat Jul 23, 2011 11:21 pm
Location: Ankara / TURKEY

Gabriel, I agree with you, but I have points to both disagree and agree on.

Pros or Cons
1. Distributing computation across multiple workstations gives you a backup for consistency and workflow.
2. You can also distribute UPSes.
3. Your work never cuts off completely; it only slows down in catastrophic scenarios.

Cons or Pros
1. Every additional workstation brings more license fees: the main software, Octane Standalone, Octane plugins, etc.
2. Bigger UPSes cost less relative to their capacity.
3. You get much shorter design times on your scene and on seeing what the final result will look like.
4. The number of workstations you deal with multiplies the headache and the effort of pulling everything together.
5. Managing files over the network also adds workflow time.
6. The need for speed is endless. We always get used to what we have and seek more.

My conclusion:
Working on a full-throttle machine is lovely. Insane...

My question:
Mmm... Has anybody tried an "Asus P9X79-E WS"-style mobo connected to 4 Netstors with 4 780 Tis in each??? :D
Win7 64 & Slackware 14 64 | 3x Zotac 580 amp & 1x MSI 680 | i7 3930K @4.8 | 32 GB | Asus rampage extreme IV