External Graphics Cards PC

Forums: External Graphics Cards PC
A public forum for discussing and asking questions about the demo version of Octane Render.
Forum rules
For new users: this forum is moderated. Your first post will appear only after it has been reviewed by a moderator, so it will not show up immediately.
This is necessary to prevent the forum from being flooded with spam.

Re: External Graphics Cards PC

Postby Tutor » Wed Jul 01, 2015 6:01 pm

Notiusweb wrote:
Post by smicha » Wed Jul 01, 2015 9:18 am
Notiusweb,

You said something important about Titan X and risers: if I put a primary GPU (Titan X) on a riser, will the system boot?

Hi Smicha,

In the cases of my Titan X, Titan Z, and 660 Ti as a primary GPU loading through the powered USB 3.0 riser on my ASRock X79 Extreme 11, the answer is NO.

I have just recently experienced failure in trying different things to open up lane 2.
In various scenarios I have run them as the primary GPU in lane 1 on the riser and they would not boot into Windows. They get into BIOS, but then just stall after that. One time the Windows splash screen froze; another time it started Startup Recovery and said it could not repair. I also tried loading the Titan X out of lane 2 once, but it was almost being handled like a second monitor: it worked once, installed drivers, and asked to restart, but then would never proceed to boot. I tried the whole process again after telling BIOS to load lane 2 as the primary GPU, but then I ran into the no-boot problem again. It could be only my board, or the risers I am using, but they won't work off of the USB 3.0 riser as functional primary GPUs. They might work off of some other type of riser, or off these risers with some other motherboard, but I don't know...

I did see that Polish Ginger had rig photos on page 7 of this thread where the primary GPU appears to be connected to the riser and tested in different slots. But there were problems encountered, and then on page 8, when Polish Ginger's photos show the Amfeltec cluster, the primary GPU looks like it is attached directly to the board, because PG makes reference to the cluster holding the 980 Ti's. I would bet the riser was not allowing the primary GPU to boot properly on that motherboard as well during the 'page 7 phase'.

On the side, I took a look at recent motherboards by ASRock and saw that a lot seem to have 6 PCIe lanes, but the gap between lanes 1 and 2 is two slots wide. I am wondering if it is common that lane 2 never gets utilized unless you have a single-slot card. The Titan X and Z are power hungry, but I couldn't get my 660 Ti working that way either. Maybe a powered, full-length non-USB 16x riser would work?... Sorry to throw out more questions, but I would say if one were going to build a rig, they could probably already test with their own current rig and cards whether this approach is viable.

In other news...

GippyCar11.png


GippyCar8.png


Notiusweb,

Please supply pic(s) of risers from which system does not boot and pic(s) of any riser(s) from which system does boot, indicating also from which slots boot fails or succeeds. Thanks.
Last edited by Tutor on Wed Jul 01, 2015 6:16 pm, edited 3 times in total.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
Tutor | Licensed Customer | Posts: 531 | Joined: Tue Nov 20, 2012 2:57 pm | Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

Re: External Graphics Cards PC

Postby Tutor » Wed Jul 01, 2015 6:07 pm

A Polish Ginger wrote:
Smicha,
Thanks for the recommendations; I didn't know about PLX chips. I am going to try out the P9 X79 E-WS, which does have 2 PLX chips, only because I don't feel like buying a new CPU if I can help it. Also, the GPU cluster ships tomorrow, but I don't know how fast it will arrive. Fingers crossed that I'll get it this weekend, but it will most likely be next week. Canada to the States.

Notiusweb,
Thanks. I was originally planning on using the 1st PCIe lane for the display card, because I have 6 PCIe lanes and the separation between lanes 1 and 2 is big enough to fit a dual-slot card, though that won't be the case with the new board. I currently have the P9 X79 WS, so I'm stepping up from that to a new board, and hopefully I can at least get all 6 GPUs up and running with little trouble. The new board should come this weekend, so I'll update once I'm able.

I will attempt your suggestion, but on the current board I was only able to get those 5 GPUs to work once. I tried it again this last weekend and it decided not to work with me, so that is what led me to do that single-riser test. My thinking is: since each lane (except lane 1) does not work with a single riser and a single GPU connected to it, I can think of 2 possible problems.

1. The riser works in lane 1 but not in the others, so the bandwidth to lanes 2-6 is too low and they cannot function properly with the 1x USB 3.0 riser.

2. The riser itself needs more bandwidth (4x, 8x, or 16x) in order for those lanes to accept it.

These are complete guesses, so I could be completely wrong. Is there a way to manually set bandwidth on the PCIe lanes? I know that the CPU assigns bandwidth automatically, but I was thinking that if you set it to x1, x4, x8, or x16 manually it would take the guesswork away from the CPU and make things more fluid. Though I'm just guessing here. What really bothers me is that 4 GPUs connected directly don't work anymore, because last time I got 4 980 Ti's to work with no problem; I just plugged them in and that was it. Which also leads me to believe the bandwidth priority of this motherboard is screwed up.

I'll update once the new board arrives and I have the time to install it to see if I have the same issues, hopefully I don't.

Also, as to you waiting on the response from your BIOS manufacturer: do you mean the motherboard manufacturer, or is there a different company that makes the BIOS? When I contacted ASUS about maximum GPU load they kept saying 4 only. No matter how I phrased the question they kept telling me 4 only on all their motherboards. It was over chat support, so I don't know how to judge his credibility/knowledge on the subject. I might have gotten a bad person, but since what we are doing is breaking/bending rules, I don't know that your average technical support will know the details rather than just reading what is on the box. Perhaps if I had explained the situation better we would've gotten somewhere, but that is something to try another day. You'll just have to keep calling until you get someone who knows what you are talking about. This is assuming that you're talking about the motherboard manufacturer; if not, then those people are most likely well versed... hopefully.


Polish Ginger,

If you're using any risers, please supply pic(s) of risers from which system will not boot and pic(s) of any riser(s) from which system does boot, indicating also from which slots boot fails or succeeds. Thanks.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
Tutor | Licensed Customer | Posts: 531 | Joined: Tue Nov 20, 2012 2:57 pm | Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

Re: External Graphics Cards PC

Postby Notiusweb » Thu Jul 02, 2015 7:42 am

Postby Tutor » Wed Jul 01, 2015 6:01 pm

Notiusweb,

Please supply pic(s) of risers from which system does not boot and pic(s) of any riser(s) from which system does boot, indicating also from which slots boot fails or succeeds. Thanks.


Here is the riser type I have:
http://www.amazon.com/RIF6-PCI-E-Adapte ... 4NYXGPCV94

In every instance I tried, it fails to boot beyond BIOS into the OS, although it can reach BIOS. I tried it in PCIe lanes 1 and 2, with each set as the primary display in turn. I tried this with the Titan X, Titan Z, and 660 Ti, and all failed to reach the OS.
I also loaded my Titan X one time over HDMI through lane 2 while NOT set as the primary display. It loaded once, asked to update the driver for the card, and requested a restart (it never appeared to have access to the driver itself; the lack of Nvidia icons and screen coloring suggested it had not yet installed). Once I rebooted, it would not boot into the OS again. It was almost as if it served merely to initialize driver installation for the card, with no intention of operating as an ongoing display.

As such, I can only use them to connect external GPU CUDA devices, which, in my case, works in any PCIe lane.
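
If you want to double-check which physical slot or riser each of those CUDA devices is actually hanging off of, here is a minimal sketch, assuming Python and the NVIDIA driver's nvidia-smi tool are on the PATH (the query fields are standard nvidia-smi ones, but the exact output format can vary by driver version):

Code:
# Minimal sketch: map each GPU the driver exposes to its PCI bus ID,
# so you can tell which physical slot/riser it is sitting on.
# Assumes the NVIDIA driver's nvidia-smi utility is on the PATH.
import subprocess

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=index,name,pci.bus_id", "--format=csv,noheader"],
    universal_newlines=True,
)
for line in out.strip().splitlines():
    print(line)  # e.g. "0, GeForce GTX TITAN X, 0000:02:00.0"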
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
Notiusweb | Licensed Customer | Posts: 1285 | Joined: Mon Nov 10, 2014 4:51 am

Re: External Graphics Cards PC

Postby Tutor » Thu Jul 02, 2015 7:48 am

Notiusweb wrote:
Postby Tutor » Wed Jul 01, 2015 6:01 pm

Notiusweb,

Please supply pic(s) of risers from which system does not boot and pic(s) of any riser(s) from which system does boot, indicating also from which slots boot fails or succeeds. Thanks.


Here is the riser type I have:
http://www.amazon.com/RIF6-PCI-E-Adapte ... 4NYXGPCV94

In every instance I tried, it fails to boot beyond BIOS into the OS, although it can reach BIOS. I tried it in PCIe lanes 1 and 2, with each set as the primary display in turn. I tried this with the Titan X, Titan Z, and 660 Ti, and all failed to reach the OS.
I also loaded my Titan X one time over HDMI through lane 2 while NOT set as the primary display. It loaded once, asked to update the driver for the card, and requested a restart (it never appeared to have access to the driver itself; the lack of Nvidia icons and screen coloring suggested it had not yet installed). Once I rebooted, it would not boot into the OS again. It was almost as if it served merely to initialize driver installation for the card, with no intention of operating as an ongoing display.

As such, I can only use them to connect external GPU CUDA devices, which, in my case, works in any PCIe lane.


Thanks for the info.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
Tutor | Licensed Customer | Posts: 531 | Joined: Tue Nov 20, 2012 2:57 pm | Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

Re: External Graphics Cards PC

Postby Notiusweb » Thu Jul 02, 2015 9:15 am

My update:

I now have the Titan X with the 5 Titan Z's getting past BIOS. In a nutshell, it took a Titan X-compatible BIOS with an Above 4G Decoding option. I had to disable the Marvell eSATA option in the BIOS; it is some sort of PCIe hot-plug external SATA option, which was enabled by default in certain BIOS versions but not the latest one I got. So once I found the difference, I found the culprit.

Now, my problem is the OS. From what I am reading, I am not going to get more than 10 GPUs through Win 7 64. The thing is, I can isolate my Z's to the point where, as long as the total is not more than 10, they all work in any configuration. I read in Bitcoin and gaming forums that in this scenario it does not make sense to mod the registry, because there is just a hard limit. Thus, once it hits 11, one GPU won't work.
It appears, however, that Win 8.1 would net me one more GPU somehow, as long as my board supports it, which I think it does given that it already shows me the 'problem' GPU. So right now my option would be to update to 8.1, but for now I'm happy just to get the 4 1/2 Z's working, which gives me an extra 2,880 CUDA cores.

BUT, if anyone knows how to beat this without going to Win 8.1, please share your ideas!
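
One quick sanity check on where the ceiling sits is to ask the CUDA driver API directly how many devices it will initialize, and compare that with the count Device Manager shows. A minimal sketch, assuming Windows with the NVIDIA driver installed and Python available (cuInit and cuDeviceGetCount are the standard CUDA driver API calls; everything else here is illustrative):

Code:
# Minimal sketch: ask the CUDA driver API how many devices it can actually use,
# which is what matters for Octane; a GPU can show up in Device Manager yet not
# be usable by CUDA once the OS runs out of IO space. Assumes Windows and an
# installed NVIDIA driver (nvcuda.dll); on Linux the library is libcuda.so.
import ctypes

cuda = ctypes.WinDLL("nvcuda.dll")        # CUDA driver API
assert cuda.cuInit(0) == 0, "cuInit failed - driver problem"

count = ctypes.c_int()
assert cuda.cuDeviceGetCount(ctypes.byref(count)) == 0
print("CUDA-usable GPUs:", count.value)   # compare with Device Manager's total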

OctaneRender4.jpg
Titan X with 4 1/2 Titan Z (10 GPU)


And just for fun...
if I am to mathematize, under a Win 7 64 10-GPU limit, for just straight CUDA rendering, would one be better served by having 10 Titan X's, 10 GTX 980 Ti's, or 5 Titan Z's?

Working VRAM (GB)

Titan X - 12 GB
GTX 980 Ti - 6 GB
Titan Z - 6 GB (per GPU)

The difference is an additional 6 GB of VRAM in favor of the Titan X vs. the Titan Z or GTX 980 Ti.

CUDA

Titan X - 3,072 CUDA per card x 10 = 30,720 CUDA
GTX 980 Ti - 2,816 CUDA per card x 10 = 28,160 CUDA
Titan Z - 5,760 CUDA per card x 5 = 28,800 CUDA

The difference is an additional 1,920 CUDA cores in favor of the Titan X vs. the Z, and 2,560 CUDA cores in favor of the Titan X vs. the GTX 980 Ti.

PCIe slots needed:

Titan X - 2 slots per 1-GPU card x 10 cards = 20 slots for 10 GPUs
GTX 980 Ti - 2 slots per 1-GPU card x 10 cards = 20 slots for 10 GPUs
Titan Z - 3 slots per 2-GPU card x 5 cards = 15 slots for 10 GPUs

However, assuming you need the primary GPU in the motherboard, you will be using a working riser for the rest, and each riser handles 1 card:

Titan X - 2 slots for the primary display card + 1 riser per 1-GPU card x 9 cards = 11 slots for 10 GPUs
GTX 980 Ti - 2 slots for the primary display card + 1 riser per 1-GPU card x 9 cards = 11 slots for 10 GPUs
Titan Z - 3 slots for the primary display card + 1 riser per 2-GPU card x 4 cards = 7 slots for 10 GPUs

The Titan Z will theoretically save you 4 slots if connected by risers. However, with a working splitter solution this can be dealt with in any number of ways (e.g. Amfeltec GPU clusters).

Power:

Titan X - 250 W per card x 10 = 2,500 W
GTX 980 Ti - 250 W per card x 10 = 2,500 W
Titan Z - 375 W per card x 5 = 1,875 W

I know you may not hit the ceiling with the cards, but the ceiling would be in favor of the Z by 625 W.

Price (US$)
This is just the average stock card, not an OC version, and of course not necessarily real-world pricing (I used current Newegg prices as of July 2015); it's really just an attempt at comparing (tallied in the sketch below)... actually Newegg often has a limit of 1 per customer... :evil:

Titan X - $1,050 per card x 10 = $10,500
GTX 980 Ti - $680 per card x 10 = $6,800
Titan Z - $1,600 per card x 5 = $8,000
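
Pulling those numbers together, here is a small Python sketch of the same arithmetic (it only restates the per-card figures quoted above, so the same caveats about prices and TDP ceilings apply):

Code:
# Sketch of the comparison above in one place, using only the per-card numbers
# quoted in this post (July 2015 Newegg prices, stock TDPs); purely illustrative.
cards = {
    "Titan X":    dict(gpus_per_card=1, cuda=3072, vram_gb=12, watts=250, price_usd=1050),
    "GTX 980 Ti": dict(gpus_per_card=1, cuda=2816, vram_gb=6,  watts=250, price_usd=680),
    "Titan Z":    dict(gpus_per_card=2, cuda=5760, vram_gb=6,  watts=375, price_usd=1600),
}
TARGET_GPUS = 10  # the practical Win 7 64 ceiling discussed above

for name, c in cards.items():
    n = TARGET_GPUS // c["gpus_per_card"]   # cards needed to reach 10 GPUs
    print("%-10s %2d cards: %6d CUDA, %2d GB/GPU, %5d W, $%d"
          % (name, n, n * c["cuda"], c["vram_gb"], n * c["watts"], n * c["price_usd"]))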

So, which is best?
None...
The best is CPU rendering...go old school...
Or, if you wanna' b new school, OpenCL...
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
Notiusweb | Licensed Customer | Posts: 1285 | Joined: Mon Nov 10, 2014 4:51 am

Re: External Graphics Cards PC

Postby smicha » Thu Jul 02, 2015 11:19 am

If you can handle/fit/make them work, go with 10x Titan X. I assembled two computers lately - 2x X, 2x Z - and own 3x 780 6GB plus a Titan. My thoughts about the 980 Ti have been expressed somewhere on this forum, but shortly speaking it is a crippled Titan X chip with 50% less VRAM and worse performance at 1500 MHz than the X has at 1400 MHz. The Titan Z is a great card for saving slots and space, but 6 GB makes it less attractive than the X. It's a pity we no longer have the 780 6GB, which scores about 100 when OC'd and cost 350 EUR (this is the price I paid).

As for power draw: 2x Z OC'd (on water) draw about 1,250 W; 2x X draw 480 W (entire system, with a 5930K). So in terms of power efficiency the Xs are great. To handle 10 Xs you'd need 2x 1500 W PSUs (the 1600 W Super Flower Titanium is so cool) and you'd score about 1500 (OC).
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
smicha | Licensed Customer | Posts: 3151 | Joined: Wed Sep 21, 2011 4:13 pm | Location: Warsaw, Poland

Re: External Graphics Cards PC

Postby Tutor » Thu Jul 02, 2015 10:05 pm

Notiusweb wrote:
My update:

I now have the Titan X with the 5 Titan Z's getting past BIOS. In a nutshell, it took a Titan X-compatible BIOS with an Above 4G Decoding option. I had to disable the Marvell eSATA option in the BIOS; it is some sort of PCIe hot-plug external SATA option, which was enabled by default in certain BIOS versions but not the latest one I got. So once I found the difference, I found the culprit.

Now, my problem is the OS. From what I am reading, I am not going to get more than 10 GPUs through Win 7 64. The thing is, I can isolate my Z's to the point where, as long as the total is not more than 10, they all work in any configuration. I read in Bitcoin and gaming forums that in this scenario it does not make sense to mod the registry, because there is just a hard limit. Thus, once it hits 11, one GPU won't work.
It appears, however, that Win 8.1 would net me one more GPU somehow, as long as my board supports it, which I think it does given that it already shows me the 'problem' GPU. So right now my option would be to update to 8.1, but for now I'm happy just to get the 4 1/2 Z's working, which gives me an extra 2,880 CUDA cores.

BUT, if anyone knows how to beat this without going to Win 8.1, please share your ideas!

OctaneRender4.jpg


And just for fun...
if I am to mathematize, under a Win 7 64 10-GPU limit, for just straight CUDA rendering, would one be better served by having 10 Titan X's, 10 GTX 980 Ti's, or 5 Titan Z's?

Working VRAM (GB)

Titan X - 12 GB
GTX 980 Ti - 6 GB
Titan Z - 6 GB (per GPU)

The difference is an additional 6 GB of VRAM in favor of the Titan X vs. the Titan Z or GTX 980 Ti.

CUDA

Titan X - 3,072 CUDA per card x 10 = 30,720 CUDA
GTX 980 Ti - 2,816 CUDA per card x 10 = 28,160 CUDA
Titan Z - 5,760 CUDA per card x 5 = 28,800 CUDA

The difference is an additional 1,920 CUDA cores in favor of the Titan X vs. the Z, and 2,560 CUDA cores in favor of the Titan X vs. the GTX 980 Ti.

PCIe slots needed:

Titan X - 2 slots per 1-GPU card x 10 cards = 20 slots for 10 GPUs
GTX 980 Ti - 2 slots per 1-GPU card x 10 cards = 20 slots for 10 GPUs
Titan Z - 3 slots per 2-GPU card x 5 cards = 15 slots for 10 GPUs

However, assuming you need the primary GPU in the motherboard, you will be using a working riser for the rest, and each riser handles 1 card:

Titan X - 2 slots for the primary display card + 1 riser per 1-GPU card x 9 cards = 11 slots for 10 GPUs
GTX 980 Ti - 2 slots for the primary display card + 1 riser per 1-GPU card x 9 cards = 11 slots for 10 GPUs
Titan Z - 3 slots for the primary display card + 1 riser per 2-GPU card x 4 cards = 7 slots for 10 GPUs

The Titan Z will theoretically save you 4 slots if connected by risers. However, with a working splitter solution this can be dealt with in any number of ways (e.g. Amfeltec GPU clusters).

Power:

Titan X - 250 W per card x 10 = 2,500 W
GTX 980 Ti - 250 W per card x 10 = 2,500 W
Titan Z - 375 W per card x 5 = 1,875 W

I know you may not hit the ceiling with the cards, but the ceiling would be in favor of the Z by 625 W.

Price (US$)
This is just the average stock card, not an OC version, and of course not necessarily real-world pricing (I used current Newegg prices as of July 2015); it's really just an attempt at comparing... actually Newegg often has a limit of 1 per customer... :evil:

Titan X - $1,050 per card x 10 = $10,500
GTX 980 Ti - $680 per card x 10 = $6,800
Titan Z - $1,600 per card x 5 = $8,000

So, which is best?
None...
The best is CPU rendering...go old school...
Or, if you wanna' b new school, OpenCL...


Your update is excellent. Which GPU is best depends on many variables, but I consider the following to be the top three (not necessarily in the order listed):
(1) Your Immediate Rendering Needs -
If you do mainly non-4K projects, then 10x GTX 980 Tis may be the best buy. If you currently do, or in the immediate future intend to do, 4K (or larger) projects, then 10x GTX Titan Xs may be the best buy.
(2) Your Available Financial Resources -
The tighter your financial resources, the more it matters to use what you already own and can comfortably purchase in the near term, and the more performance per cost matters. This would also tend to favor 10x 980 Tis or 10x AMD R9 Fury Hybrids (AMD cards still tend to perform a little better at OpenCL chores than Nvidia GPUs). Those Furies + Octane V3 may prove to be worthy competitors, but that depends on a time window that may be too long for some.
(3) Your Desire To Achieve The Greatest Degree Of Future Resistance -
This may favor 10x Titan X (12 GB), or waiting for (a) Pascal (12 GB) {Pascal is predicted to arrive next year and to be many times more powerful than Maxwell, but if you've got sufficient funds and heavy immediate needs, then this doesn't matter that much and the Titan X may win out} or (b) the post-R9 Fury AMD GPUs that may offer more VRAM, particularly if you do or are considering doing large-format work (4K, 8K, etc.), but again the time window may be too long for some.

There are two other variables that don't so much affect which GPUs, but their number:
(1) Your Preferred OS -
This may affect the maximum number of GPUs that you can get running properly. Linux [free] will likely support maxing out the motherboard's/CPU's IO space potential better than Windows Server [expensive] which is better than the later/latest versions of Windows standard [moderately priced], which are better than Mac OSX Mavericks (10.9), which is better than Mac OSX Yosemite (10.10), which is likely better than will be OSX El Capitan (10.11) [the Mac line appears to be ever tightening user-based enhancement] [free]. Nvidia-released OSX driver updates are tied to particular OSX updates which further constrains GPU choice by requiring those who desire to use the latest GPUs to also have the latest Mac OS edition and updates. Mac systems tend to have fewer GPU slots, which is indicative of less IO space for GPUs, and tends to result in such systems being able to handle fewer GPUs successfully.
(2) Your System Construction/Modification Abilities/Preferences And Preferred Hardware Build -
There are few pre-built systems that can handle > 8 GPU processors without the need for additional individual mods; moreover, it appears that for the near-term the more powerful GPU rendering systems will be had by the big studios and those adventurous enough to roll their own. However, there may be many for whom rendering-as-a-service [like Octane Cloud] is a satisfactory alternative, but their number remains to be seen.

Factors that tend to limit the number of GPUs, such as having a limited number of GPU slots and satisfying GPU power requirements, may make getting the faster, more expensive GPUs preferable (and, in an ironic sense, less costly, since the maximum number of installable GPUs is smaller), and that may tend to favor Titan Zs and Xs.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
Tutor | Licensed Customer | Posts: 531 | Joined: Tue Nov 20, 2012 2:57 pm | Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

Re: External Graphics Cards PC

Postby A Polish Ginger » Sun Jul 05, 2015 8:56 am

Hello again, gentlemen, sorry for the late response. I'm in California right now at a convention, so I've been enjoying myself. Anyway, to the point.

Can you run a primary Display GPU on a riser...

Yes.

However, I did not test it for a prolonged period of time, so I do not know how reliable it is, but it did work. A downside (at least with my powered 1x-to-16x USB 3.0 riser) is that there was a little bit of screen "lag". It was slight and still operable, but I could see that getting increasingly worse if you are rendering with that GPU.

Note that I only had that GPU plugged in and nothing else, so I do not know if that makes a difference, but I just wanted to put that out there.

Sincerely,
A Polish Ginger
Win 7 64 | GTX Titan, GTX 980 ti x5 | i7 4930K | 32 GB
A Polish Ginger | Posts: 14 | Joined: Sun Mar 08, 2015 7:32 pm | Location: Chicago, IL

Re: External Graphics Cards PC

Postby Notiusweb » Mon Jul 06, 2015 1:14 pm

Postby Tutor » Thu Jul 02, 2015 10:05 pm

There are two other variables that don't so much affect which GPUs, but their number:
(1) Your Preferred OS -
This may affect the maximum number of GPUs that you can get running properly. Linux [free] will likely support maxing out the motherboard's/CPU's IO space potential better than Windows Server [expensive] which is better than the later/latest versions of Windows standard [moderately priced], which are better than Mac OSX Mavericks (10.9), which is better than Mac OSX Yosemite (10.10), which is likely better than will be OSX El Capitan (10.11) [the Mac line appears to be ever tightening user-based enhancement] [free]. Nvidia-released OSX driver updates are tied to particular OSX updates which further constrains GPU choice by requiring those who desire to use the latest GPUs to also have the latest Mac OS edition and updates. Mac systems tend to have fewer GPU slots, which is indicative of less IO space for GPUs, and tends to result in such systems being able to handle fewer GPUs successfully.


Tutor, in my situation it appears I got past the BIOS goalie and am in an OS setting where 11 attached GPUs are recognized, but only 10 of them will work. Do you think that my experienced 10-GPU working limit is caused by the OS itself? Or would the BIOS/motherboard still be the underlying cause? Or might it be a combination of both, either independently or with one contingent upon the other?
One of the complexities I have come across is the idea that a BIOS generally is set up to support 7-8 GPUs, and websites have claimed Windows has a 4-, 6-, or even 8-GPU limit. But that OS limit can't be right if I have 10 working. So what is going on? Are the motherboard/BIOS and OS able to stretch the number of GPUs depending on the IO space available? It would almost seem there are 2 separate IO space realms to work with: (1) the motherboard/BIOS (i.e. eliminating the Marvell eSATA option, enabling Above 4G Decoding), and (2) the OS (I have no idea what makes OS IO space stretch... however, apparently Win 8 beats the Win 7 limit by 1, for whatever stretching the motherboard/BIOS has already yielded).
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
Notiusweb | Licensed Customer | Posts: 1285 | Joined: Mon Nov 10, 2014 4:51 am

Re: External Graphics Cards PC

Postby Tutor » Tue Jul 07, 2015 8:53 pm

Notiusweb wrote:
Postby Tutor » Thu Jul 02, 2015 10:05 pm

There are two other variables that don't so much affect which GPUs, but their number:
(1) Your Preferred OS -
This may affect the maximum number of GPUs that you can get running properly. Linux [free] will likely support maxing out the motherboard's/CPU's IO space potential better than Windows Server [expensive] which is better than the later/latest versions of Windows standard [moderately priced], which are better than Mac OSX Mavericks (10.9), which is better than Mac OSX Yosemite (10.10), which is likely better than will be OSX El Capitan (10.11) [the Mac line appears to be ever tightening user-based enhancement] [free]. Nvidia-released OSX driver updates are tied to particular OSX updates which further constrains GPU choice by requiring those who desire to use the latest GPUs to also have the latest Mac OS edition and updates. Mac systems tend to have fewer GPU slots, which is indicative of less IO space for GPUs, and tends to result in such systems being able to handle fewer GPUs successfully.


Tutor, in my situation it appears I got past the BIOS goalie and am in an OS setting where 11 attached GPUs are recognized, but only 10 of them will work. Do you think that my experienced 10-GPU working limit is caused by the OS itself? Or would the BIOS/motherboard still be the underlying cause? Or might it be a combination of both, either independently or with one contingent upon the other?
One of the complexities I have come across is the idea that a BIOS generally is set up to support 7-8 GPUs, and websites have claimed Windows has a 4-, 6-, or even 8-GPU limit. But that OS limit can't be right if I have 10 working. So what is going on? Are the motherboard/BIOS and OS able to stretch the number of GPUs depending on the IO space available? It would almost seem there are 2 separate IO space realms to work with: (1) the motherboard/BIOS (i.e. eliminating the Marvell eSATA option, enabling Above 4G Decoding), and (2) the OS (I have no idea what makes OS IO space stretch... however, apparently Win 8 beats the Win 7 limit by 1, for whatever stretching the motherboard/BIOS has already yielded).



For our purposes, it's more like a great orchestra - everyone plays an important part - and like a great orchestra the sum isn't just additive, it's exponential. A good BIOS, in my opinion and given our mutual goal to maximize the number of GPUs per system, is one that allows the system to boot no matter how many GPUs a user has installed and that allows the user to divert resources from lower-preference functions to maximize IO space allocation. If the BIOS is a "good" one, then through the lens of the OS we can get more definitive information about the reasons particular GPU(s) aren't functioning properly, and about what may be the cause, if we delve into the information provided by the Device Manager. Thus, while the BIOS and the OS are surely separate, they're thankfully not fully independent of one another, because the OS, when operating properly in a multi-GPU environment, is in part dependent on the BIOS and also helps us determine what we may need to modify in the BIOS.

Regarding your question about what's going on when some have claimed that Windows has a 4-, 6-, or even an 8-GPU limit, my answer is that I don't know enough about those situations to make an educated guess. There are, e.g., different Windows OSes (such as server versus non-server, and different version numbers and builds/updates of each), and there may have been many other unknown variables at play. In a general sense, a good motherboard BIOS is able to stretch the number of GPUs by allowing the system operator to reduce the number of options vying for IO space and to enable Above 4G decoding to maximize use of all available IO space. The OS doesn't stretch IO space, but it can fail to take advantage of all such space.

The easiest way that I know of to test whether you have to settle for a working limit of ten GPUs is to:
(1) download a free version of Linux (I like Linux Mint) and install it to a separate drive (you should remove any drives with another OS on them, to avoid Linux installing files to your Windows OS drive);
(2) download and install CUDA for Linux; and
(3) download and install the Linux version of Blender. Using Blender 3D, see whether you can run Mike Pan's BMW project file (just Google it) and whether you can get all 11 GPUs participating in a Cycles GPU render (see the sketch below). If you can render in Cycles with 11 GPUs, then your OS of choice is likely the holdback. If the same problem prevails, then you're likely at the hardware limit (unless you can disable more peripheral functions, but that's not expanding IO space in the absolute; it only expands usable IO space by freeing some IO space from supporting certain unnecessary functions).
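
A rough sketch of step (3) as a script, assuming Python, that blender and nvidia-smi are on the PATH, that CUDA devices have already been enabled in Blender's user preferences, and that the .blend filename below is just a placeholder for wherever you saved Mike Pan's file:

Code:
# Rough sketch: kick off a background (no-GUI) Cycles render of the BMW
# benchmark, then poll per-GPU utilization to see how many GPUs actually
# participate. Assumes Linux with blender and nvidia-smi on the PATH and
# CUDA devices already enabled in Blender's user preferences; the .blend
# filename is hypothetical.
import subprocess, time

render = subprocess.Popen(["blender", "-b", "bmw_mike_pan.blend", "-f", "1"])
time.sleep(60)  # give Cycles time to build the scene and start on the GPUs
busy = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=index,utilization.gpu", "--format=csv,noheader"],
    universal_newlines=True)
print(busy)     # one line per GPU; non-zero utilization means it is rendering
render.wait()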

Also, if you look at the pics that I most recently posted to my thread [ Best Practices For Building A Multiple GPU System ] here [ download/file.php?id=44606&mode=view ] and here [ download/file.php?id=44605&mode=view ] that depict the hardware layout of my Supermicros, you'll see what is at the center of it all - the CPUs. The systems with the largest number of PCIe slots have more than one CPU. Systems with more than one CPU have more IO space potential than a system with a single CPU of the same kind on the same motherboard. Probably not surprisingly, motherboard manufacturers find ways to allocate a lot more IO space in the multi-CPU setups by providing more IO-space-consuming peripherals, allocated among the available CPUs. Thus, in a multi-CPU system, if one doesn't install CPUs in all of the available CPU slots, then some otherwise available functionality (and likely IO space potential) may be missing.

Moreover, keep in mind that a system's BIOS is a lot more expansive and complex than what we see when we boot into it. It's a lot like looking into a mansion's living room window and using that view to try to assess what's in the 12 bedrooms, the two kitchens, the two dining rooms, the 14 bathrooms, etc. The BIOS screens that we can boot into by holding down the "Delete" key allow us to control only some of the BIOS functions that the manufacturer has decided to give us access to, and to see some condition states deemed important over which we have no direct control in the BIOS. There's a lot more, including the wizard, behind the curtains. The OS's Device Manager is just another window where we can see what has likely gone wrong, but there's not a whole lot that we can do to correct the situation using only the Device Manager. It's mainly a state-view and diagnostic tool.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
Tutor | Licensed Customer | Posts: 531 | Joined: Tue Nov 20, 2012 2:57 pm | Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute