Notiusweb wrote:....
Hi Tutor,
1) Does the X9DRX let you set the BCLK (base clock)? I found that setting it higher or lower than default can impact performance and stability.
With 13 GPUs running full tilt on a render, 1 or 2 would sometimes disconnect, so I have played with this in the past to try to improve stability.
The default is usually 100 MHz in my BIOS.
Note this only concerns the stability of multiple GPUs that are already functional in the system; it does not add IO space.
I found this relationship:
+BCLK = higher CPU performance, lower GPU stability
-BCLK = higher GPU stability, lower CPU performance
2) Do some GPUs inherently use more or less IO space than others, or is it the connection scheme that uses more or less IO space (i.e., an x1 vs. x4 vs. x8 vs. x16 connection)?
Supermicro multi-processor systems that allow slight CPU clock tweaking = the DAX line - e.g.,
https://www.supermicro.com/products/mot ... DAX-iF.cfm.
https://www.supermicro.com/manuals/moth ... L-1366.pdf .
X(9)(10)DRXs don’t provide that luxury.
Determinants of the speed of data (I/O) movement:
x1 (example - one cable of an Amfeltec Splitter, or
http://i.ebayimg.com/images/g/f78AAOxy6 ... s-l500.jpg )
x4 (example -
http://i.ebayimg.com/images/a/T2eC16h,! ... s-l500.jpg )
x8 (example -
http://i.ebayimg.com/images/g/93MAAOxy4 ... s-l500.jpg )
x16 (
http://www.ebay.com/itm/NEW-PCIe-Expres ... 1117914385 and see also the bottom male PCI-e protrusion on most GPUs)
x16 is considered 16 times faster than x1; x8 is 8 times faster than x1 but only half as fast as x16 for moving data between the GPU and CPU memory. For us this mainly affects how fast the GPU receives projects and sends back the render results, not the speed at which the GPU renders the project. CUDA core count and GPU clock settings are what drive rendering speed, although faster "x" values are of some benefit to animation rendering, where multiple frame loads are needed.
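To put rough numbers on those ratios, here is a minimal sketch (Python; the per-lane figures are the standard theoretical PCIe 2.0/3.0 rates, and real-world throughput will be somewhat lower):
[code]
# Rough theoretical one-direction PCIe bandwidth by link width (not measured).
# PCIe 2.0: 5 GT/s, 8b/10b encoding    -> ~500 MB/s per lane
# PCIe 3.0: 8 GT/s, 128b/130b encoding -> ~985 MB/s per lane
PER_LANE_MB_S = {"2.0": 500.0, "3.0": 984.6}

def link_bandwidth_mb_s(gen, lanes):
    """Theoretical one-direction bandwidth of a PCIe link in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

for lanes in (1, 4, 8, 16):
    print("PCIe 3.0 x%-2d: ~%7.0f MB/s" % (lanes, link_bandwidth_mb_s("3.0", lanes)))
[/code]
So an x16 slot moves scene data roughly sixteen times faster than an x1 riser, which matters most when frames are being loaded and results returned constantly.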
Over- and under-clocking on systems that allow such tweaks, such as gaming or DAX systems, affects the PCIe transfer rate [general rule: overclocking the CPU usually = faster PCIe bus, and underclocking usually = slower PCIe bus; but particular bios enhancements allow for hemi, demi, semi stages of detachment */]. The connection scheme (i.e., x1 vs. x4 vs. x8 vs. x16) generally has nothing directly to do with IO space; but I can imagine situations where a system on the brink of instability might be saved by decreasing the speed of data throughput, or pushed over the edge by increasing it.
2) Do some GPUs inherently use more or less IO space than others?
Yes, some GPUs, particularly the latest ones, need more IO space. IO space is essentially what a GPU demands of the system as a whole: the system must know each GPU's identity and its resource needs, and must be able to satisfy those needs, in order to boot cleanly and run stably. If the GPUs crave too much IO space, the system becomes unstable. My Supermicro systems have bioses that will disable some GPUs to keep enough resources available to tell me the nature of the problem. Too much IO space consumption will muck up the system's bios - which is what I recently caused by not heeding the warnings soon enough. IO space consumption appears to be growing fastest for GPUs with the Ti designation (especially the GTX 780 Tis and 980 Tis), but overall, as GPU cards become more complex and feature-rich, IO needs will likely increase for most of them.
**/ And keep in mind that IO space consumption isn't limited to GPUs; so getting a system to accommodate a certain number of GPUs might require disabling other, non-GPU-related features. But in the end, no motherboard can accommodate an infinite number of GPUs - THERE IS A LIMIT FOR EVERY MOTHERBOARD MADE THUS FAR. I chose the Supermicro DRXs because their capacity (when combined with a Linux OS) currently appears to be the highest.
*/ The Nehalem and Westmere CPU generations decoupled the CPU's speed from the PCIe bus speed - see, e.g.,
http://www.insanelymac.com/forum/topic/ ... ch-scores/. Intel ended this with Sandy Bridge and later CPUs, but gave what I call "hemi, demi, semi stages of detachment" to some (mainly gamer) systems - see, e.g.,
http://www.overclock.net/t/1198504/comp ... ck-edition . The net result was an end to extreme CPU overclocking.
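As a rough illustration of that coupling - a sketch only, assuming the PCIe reference clock tracks BCLK 1:1, which is the usual Sandy Bridge-and-later behavior when no such detachment stage is available - one can estimate how far a BCLK tweak pushes the PCIe bus off spec:
[code]
# Sketch (assumption): the PCIe reference clock tracks BCLK 1:1, so over- or
# under-clocking BCLK over- or under-clocks the PCIe bus by the same ratio.
DEFAULT_BCLK_MHZ = 100.0      # typical bios default
PCIE3_PER_LANE_MB_S = 984.6   # theoretical PCIe 3.0 per-lane rate at spec clock

def pcie_at_bclk(bclk_mhz, lanes=16):
    """Estimated PCIe reference clock (MHz) and x<lanes> throughput (MB/s)."""
    ratio = bclk_mhz / DEFAULT_BCLK_MHZ
    return bclk_mhz, ratio * lanes * PCIE3_PER_LANE_MB_S

for bclk in (97.0, 100.0, 103.0):
    ref_clk, mb_s = pcie_at_bclk(bclk)
    print("BCLK %5.1f MHz -> PCIe ref clock ~%5.1f MHz, x16 ~%7.0f MB/s"
          % (bclk, ref_clk, mb_s))
[/code]
That proportionality is why a small +BCLK that helps the CPU can simultaneously push marginal risers or GPUs out of spec and off the bus, which matches the +/-BCLK relationship Notiusweb describes above.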
**/ If one desires more precise information about IO space consumption, he/she should delve into the study of BARs (Base Address Registers), beginning here -
https://en.wikipedia.org/wiki/PCI_configuration_space. Earlier in this thread I have referenced other sites with more detailed BAR info, such as -
http://resources.infosecinstitute.com/p ... nsion-rom/ . Those with a deeper interest should just begin by searching this thread for "io space." For those who don't want to dive that deeply, just remember to get a system whose bios allows PCIe resources to be mapped above the 4G boundary, sometimes called "above 4G" (or "above 4G decoding") functionality. On systems that have it, it's usually found in the PCI-e settings of the bios configuration options. Enable it before adding more than one GPU: boot, enter the bios options, open the PCI-e options, and turn it on there. Don't buy a motherboard that lacks this functionality if you want to maximize GPU resources; I recommend downloading the motherboard's manual and checking for it before you purchase.
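For those on Linux who want to see the consumption directly, here is a minimal sketch that reads the standard /sys/bus/pci/devices/*/resource files and totals the memory (MMIO) and legacy I/O-port space each display-class device claims (the class prefix and flag bits are the stock Linux kernel values; run it on the box in question):
[code]
# Sketch: total the MMIO and legacy I/O-port space each display-class (GPU)
# PCI device claims, using the standard Linux sysfs "resource" files.
import glob, os

IORESOURCE_IO, IORESOURCE_MEM = 0x100, 0x200   # resource flag bits

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    with open(os.path.join(dev, "class")) as f:
        if not f.read().startswith("0x03"):     # 0x03xxxx = display controller
            continue
    mem = io = 0
    with open(os.path.join(dev, "resource")) as f:
        for line in f:
            start, end, flags = (int(x, 16) for x in line.split())
            if end == 0:                        # unused BAR slot
                continue
            size = end - start + 1
            if flags & IORESOURCE_MEM:
                mem += size
            elif flags & IORESOURCE_IO:
                io += size
    print("%s: %8.1f MiB MMIO, %d bytes of I/O ports"
          % (os.path.basename(dev), mem / 2.0 ** 20, io))
[/code]
Comparing those totals across cards shows why a board can run out of room below the 4G boundary once several Ti-class cards are installed, and why enabling the above-4G option buys headroom.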
FYI: I have 180+ GPU processors in 16 tweaked, multi-OS systems - the forum's character limit prevents me from posting detailed stats.