Notiusweb wrote: Tutor et al,
What are your thoughts on SSDs being used in available PCI-e lanes as far as best practices for a multi-GPU rig, ...
http://computer.howstuffworks.com/pci-express1.htm states:
“Each lane of a PCI Express connection contains two pairs of wires -- one [pair] to send and one [pair] to receive. Packets of data move across the lane at a rate of one bit per cycle. A x1 connection, the smallest PCIe connection, has one lane made up of four wires. It carries one bit per cycle in each direction. A x2 link contains eight wires and transmits two bits at once, a x4 link transmits four bits, and so on. Other configurations are x12, x16 and x32.”
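If it helps to see that arithmetic spelled out, here's a minimal sketch of the description quoted above (just the quoted numbers, nothing vendor-specific):

```python
# A minimal sketch of the wire/bit arithmetic in the quote above, assuming
# the howstuffworks description: each lane is 4 wires (one differential pair
# per direction) and carries 1 bit per cycle in each direction.
def pcie_link(lanes):
    wires = lanes * 4          # 2 pairs per lane, 2 wires per pair
    bits_per_cycle = lanes     # 1 bit per lane per cycle, per direction
    return wires, bits_per_cycle

for width in (1, 2, 4, 8, 16):
    wires, bits = pcie_link(width)
    print(f"x{width}: {wires} wires, {bits} bit(s) per cycle each direction")
```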
I liken PCI-e lanes to modern expressways. When traveling the expressway, I frequently see large, medium, and small trucks, luxury cars, sports cars, large family cars, small cars, minivans, and motorcycles. “Peripheral devices that use PCIe for data transfer include graphics adapter cards, network interface cards (NICs), storage accelerator devices and other high-performance peripherals.” [ http://searchdatacenter.techtarget.com/ ... CI-Express ] Those high-performance peripherals can come in many forms and serve many purposes.
CPUs determine how many PCI-e lanes can be supported, and the more CPUs per system (usually the higher-priced ones), the more PCI-e lanes that can be supported. Intel labels its Xeon processors such that one can tell the number of CPUs that can co-exist by the first digit after "E5." In many ways, my E5-4650 V1s have virtually the same specs [ http://ark.intel.com/products/75289/Int ... e-2_40-GHz ] as the E5-2680 V1s [ http://ark.intel.com/products/75277/Int ... e-2_80-GHz ], which support a max of 40 PCI Express lanes. At best, for the higher-priced and higher-numbered CPUs, support for up to 40 lanes has been the max from V1 to V5 of the E5s. The number of CPUs that can co-exist does differ. */ For example, systems that support the E5-4600s can hold up to four CPUs, so four E5-4650s can support a maximum of 160 PCI-e lanes (4 x 40). Systems that support the E5-2600s can hold up to two high-end CPUs and thus can provide up to 80 lanes (2 x 40). So, if one is inclined to have enough PCI-e lanes to support more GPUs and other peripherals that use PCI Express, one should look into acquiring a motherboard that supports more CPUs. (There's a rough lane-budget sketch after the footnote below.)

Moreover, the kind of connection one uses can affect PCI-e lane availability. For example, using certain splitter cards (like Amfeltec’s x4 splitter card) and certain riser cables (x1, x4 or even x8) can help reduce PCI-e lane needs. There is also a caution: motherboard manufacturers seemingly abhor free/unused PCI-e lanes and tend to add more functionality that relies on PCI-e lanes for data transmission. Don’t forget that motherboard designers/manufacturers have very important roles to play. Even USB and SATA data can travel down PCI-e lanes, even though you don’t see a USB or SATA card occupying one of your system’s PCI-e slots. Additionally, game systems aren’t known for being populated with Xeon CPUs, and there’s no legal requirement that any manufacturer make its motherboards support the max lane capability of the highest-end CPUs. There are bound to be situations where someone buys a 40-lane CPU (or two or four of them), along with the appropriate motherboard, and yet cannot take full advantage of them because that functionality hasn’t been implemented fully. So pre-purchase investigation is required, and compromises are likely.

In the end, I fall back on an observation I made earlier: judge a motherboard’s potential to satisfy a user’s GPU (and, with a twist for your particular question, SSD) needs by the number and “x” designation of the visible PCI-e slots. To be sure, one may be able to satisfactorily run more GPUs and other PCI-e based cards than there are open slots on the motherboard, but the question of "how many more" is left to ingenuity and a lot of luck. That’s why the three motherboards that I last purchased each have eleven x8-sized slots, with one of the eleven being only x4 electrically. How I have populated and will populate those slots (and there’ll always be at least one SSD card in one or two of them) will greatly depend on my ingenuity and luck, or as Seekerfinder says, "... it's trial and error." However, to reduce error, I'd recommend that a purchaser who's following our path just get the motherboard that has the greater number of visible PCI-e slots and be happy with any surplus GPU installations.
*/ Likewise, the E5-1600 series supports only one CPU per system. [ As just one example, the E5-1680 V3 supports up to 40 lanes - http://ark.intel.com/products/82767/Int ... e-3_20-GHz ]
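To make the lane arithmetic above concrete, here's a rough sketch. The 40-lanes-per-CPU figure comes from the linked ark.intel.com pages; the number of lanes eaten by onboard devices is purely an assumption and varies by motherboard:

```python
# Back-of-the-envelope lane budget, assuming 40 lanes per E5-class CPU as on
# the ark.intel.com pages linked above. The 8 lanes "reserved" for onboard
# devices (NICs, SATA/USB controllers, etc.) is an illustrative guess, not a
# vendor figure - check your particular motherboard's block diagram.
def lane_budget(sockets, lanes_per_cpu=40, reserved_for_onboard=8):
    total = sockets * lanes_per_cpu
    usable = total - reserved_for_onboard
    return total, usable

for sockets in (1, 2, 4):      # E5-1600 / E5-2600 / E5-4600 class systems
    total, usable = lane_budget(sockets)
    print(f"{sockets} CPU(s): {total} lanes total, ~{usable} left for slots "
          f"(~{usable // 16} GPUs at x16, or ~{usable // 8} at x8)")
```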
Notiusweb wrote: ... as in does it work well, or is it a burden on the rig's ability to perform GPU functions. Or if you tried, what have you found works well or what doesn't work well?
For now, I'm using one of my Supermicro X9-DRXs (11 PCI-e slots). It works as well as I expected it to, and I haven't found it to be any noticeable burden on the rig's ability to perform GPU functions. Since I'm doing animations, an SSD is essential to getting the smooth playback speeds that I need, especially for large-format projects; although in the near future, I might also dedicate one of my non-GPU-rendering systems mainly to final animation review and thus install one or more additional SSDs in it.
glimpse wrote: Guys, I'm wondering, how many GPUs have you managed to plug into a single PSU at most? (yeah, I'm aware of wattage), but.. what is the highest number of GPUs you've managed to connect? =)
See below.
For my systems (many of which have dual-GPU processor cards), it depends on various factors: what else the system has to power, the number of processors on the cards, the type of rendering one does (I use GPU-only rendering on some projects, GPU/CPU hybrid rendering on others, and even simultaneous GPU and CPU rendering using different renderers depending on project needs), and whether one overclocks and the degree thereof. So I'd suggest that one look closely at the wattage needs for one's particular usage, one's total system needs, and the particularities of the GPUs one owns (there's a rough wattage sketch below). But in general, my experience has been that five GPU processors per 1600W PSU is tops for my usages.
glimpse wrote: Ghm, no one tried to plug in more? A 1600W PSU seems to be overkill for 5 GPUs sipping under 1000W in total. I've heard of some stability issues with more than 5 GPUs and I'm curious if that has anything to do with reality..
itou31 wrote: On my side: 5 (2 Titan Blacks and 3 780 Tis) on a 1600W LEPA PSU.
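For what it's worth, here's how I'd frame the wattage arithmetic. Everything in it (the base-system draw, the 80% headroom rule, the per-card wattage) is an assumption for illustration, not a measurement of any particular rig:

```python
# Rough per-PSU GPU count estimate. All wattages here are illustrative
# assumptions - check your own cards' actual draw (especially if overclocked)
# and your system's base draw before buying anything.
def gpus_per_psu(psu_watts, gpu_watts, base_system_watts=200, headroom=0.80):
    # Keep total draw to roughly 80% of the PSU rating for stability headroom.
    budget = psu_watts * headroom - base_system_watts
    return max(int(budget // gpu_watts), 0)

print(gpus_per_psu(1600, 200))   # ~200W cards -> 5 per 1600W PSU, as reported above
print(gpus_per_psu(1600, 300))   # hotter or overclocked cards -> 3
```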