External Graphics Cards PC

A public forum for discussing and asking questions about the demo version of Octane Render.
Forum rules
For new users: this forum is moderated. Your first post will appear only after it has been reviewed by a moderator, so it will not show up immediately.
This is necessary to avoid this forum being flooded by spam.
User avatar
Tutor
Licensed Customer
Posts: 531
Joined: Tue Nov 20, 2012 2:57 pm
Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

A Polish Ginger wrote: Gentlemen, I have to say that I'm just dumb..... I didn't have the 5th 980 Ti plugged into the PSU................ it was so simple that I started to laugh :lol:

But yeah, Phase 1 of the DTRM (Dream Team Render Machine) is in operation.
Notiusweb wrote:
Anyway, your progress is ahead of mine...
HA, you've got good jokes. I was here struggling with a mere 6 GPUs when you have more than 10. I'm nowhere near your level, so don't sell yourself short. I hope that someday I'll have around that number, but keep up the updates.

Not just Notiusweb, but anyone that is going through this struggle: I've learned so much just browsing this forum that I wish to thank you all from my nonexistent soul (haha, get it). But seriously, keep it up, gentlemen!

Sincerely,
A Polish Ginger

I. Just to be sure, am I correct that you got six GPUs now working after you powered that previously non-working card?

II. Regarding the registry hack, here's how you do it:
"Issue 9. Windows and the Nvidia driver see all available GPU's, but OctaneRender™ does not.

There are occasions when using more than two video cards where Windows and the Nvidia driver properly register all cards, but OctaneRender™ does not see them. This can be addressed by updating the registry. Since this involves adjusting critical OS files, it is not supported by the OctaneRender™ Team.

1) Start the registry editor (Start button, type "regedit" and launch it.)

2) Navigate to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}

3) You will see keys for each video card starting with "0000" and then "0001", etc.

4) Under each of the keys identified in step 3 (one per video card), add two DWORD values:
DisplayLessPolicy
LimitVideoPresentSources
and set each value to 1

5) Once these have been added to each of the video cards, shut down Regedit and then reboot.

6) OctaneRender™ should now see all video cards."
[ http://render.otoy.com/universe.php#51Troubleshooting ]

This not only gets Octane to see all of your installed video cards {subject to real IO limits, which it can't cure}, but it also removes the caution icon in Device Manager that indicates there is an issue with a video card. I've applied the hack to all of my systems that use Octane, even the ones that didn't display any Device Manager issues, as well as to systems that run only Redshift3d, TheaRender and FurryBall when Device Manager indicated that they had issues.
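For anyone who'd rather not click through regedit by hand, the same steps can be captured in a .reg file. This is only a sketch of what the Otoy instructions above describe, not an official Otoy file: the number of "0000"-style subkeys varies per machine (one per video card profile), so extend it to match what you actually see under the class key, and back up the registry first.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}\0000]
"DisplayLessPolicy"=dword:00000001
"LimitVideoPresentSources"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}\0001]
"DisplayLessPolicy"=dword:00000001
"LimitVideoPresentSources"=dword:00000001

; ...repeat the block above for each additional "000N" video-card subkey...
```

Double-clicking the file merges it into the registry; reboot afterward, just as in step 5.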
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
User avatar
A Polish Ginger
Posts: 14
Joined: Sun Mar 08, 2015 7:32 pm
Location: Chicago, IL

Tutor wrote: I. Just to be sure, am I correct that you got six GPUs now working after you powered that previously non-working card?
Yes, I had the VGA power cables plugged into the GPU only and not into the PSU, so the card could be seen by the OS but couldn't be used (because not enough power was reaching it, is my guess). I'm just glad it was a simple fix.

As for the registry hack, thank you, I'll be sure to try it, but there is something I find odd. I have 0000-0006, which would mean seven cards, which I do not have. So that is weird. Also, to add the DWORD values, do you just right-click on the folders "0000, 0001, etc." and not "inside" the folder?

Sincerely,
A Polish Ginger
Attachments
This
Win 7 64 | GTX Titan, GTX 980 ti x5 | i7 4930K | 32 GB
User avatar
Tutor
Licensed Customer
Posts: 531
Joined: Tue Nov 20, 2012 2:57 pm
Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

Notiusweb wrote:Hello all! Here is my latest:

Up to this point, whenever I tried to boot with > 10 GPUs I got a "bF" error, e.g. using a Titan X as primary GPU and 5 external Titan Z GPUs (11 CUDA devices total).
My BIOS manufacturer, ASRock, suggested using an older revision BIOS in order to enable "Above 4G Decoding" (a ROM option allowing usage of 64-bit devices; see https://en.wikipedia.org/wiki/PCI_hole and https://en.wikipedia.org/wiki/Memory-mapped_I/O). Until now I could not try it because the older revision BIOS they were suggesting did not support the GTX Titan X. So I subsequently tried the older revision BIOS with my GTX 660 Ti, and I have confirmed that under it I can indeed get the PC to boot with 5 Titan Z's. There is an option under NorthBridge Configuration, "Above 4G Decoding", which is "Disabled" by default; when set to "Enabled", the system booted past POST and into Windows with 5 Titan Z's!

Upon doing so I hit what appears to be an OS snag at 10 GPUs. I'm thinking it is a Win 7 64 I/O thing, but I didn't bother to sort through it; I just wanted to see if this whole thing is even possible. In my case, I am interested in having the Titan X (12GB) in the picture as the primary GPU because there is a noticeable improvement versus the 660 Ti (3GB) in apps like SketchUp, Blender, and Daz Studio. Plus I used to get a lot of errors with the 660 Ti, such as OpenGL errors and timeouts, that I no longer see using the Titan X.

Some details during my testing:
-newest revision BIOS P3.30M, with 4 Titan Z - 660 Ti / Titan X as primary boots successfully
-newest revision BIOS P3.30M, with 5 Titan Z - 660 Ti yields beeping error "d4", whereas Titan X yields "bF"
-older revision BIOS P3.30F, no 4G enabled, with 4 Titan Z - 660 Ti yields error "bF"
-older revision BIOS P3.30F, 4G enabled, with 4 or 5 Titan Z - 660 Ti boots successfully
-older revision BIOS P3.30F - Titan X as primary doesn't/can't boot, yields error "b2"

Conclusion: the GTX Titan X cannot boot as primary GPU with the older revision BIOS P3.30F. The newest revision BIOS, P3.30M, allows the Titan X to boot as primary, but it has no "Above 4G Decoding" option in either the Boot Menu or NorthBridge Configuration. As such, I cannot try the 4G option with a Titan X in the mix. I am now asking whether ASRock can add the 4G option to the newest revision BIOS, and I can pass the data on to them regarding the success with the 660 Ti.

I guess my testing won't be a true "Can I do it?" scenario, it will be more of a "Can I do it the way I want it?" scenario, as I am interested in the Titan X as the primary GPU.
In the meantime, here is a girl with a sword and an eagle floating mid air...


Regards!
What you've hit I'd call an IO space snare rather than an "OS snag of 10 GPU." Although I don't have the motherboard that you're using, I doubt that it has 10 PCIe slots. Moreover, none of us should be surprised when a motherboard manufacturer tells us that the motherboard has a GPU limit which is less than what we've got running. When the motherboard was made, the uses that we now put them to likely weren't even contemplated. The BIOSes most likely come from American Megatrends, Inc. [ http://www.ami.com ] and are modified slightly by the motherboard manufacturers to suit their design goals, which in most cases fall significantly short of what we're doing. Thus, you're not at fault in the least.

Remember that the motherboard manufacturers can modify the AMI BIOS, but they have little incentive to do so at our request for old motherboards, because they can reap more profit from selling us their latest and greatest. I have two EVGA SR-2 motherboards that each have seven single-width-spaced PCIe slots. The most GPUs that I (and only a few others) have been able to get that motherboard to accommodate is 10. Sounds similar to your situation, doesn't it? But did ASRock or EVGA build those motherboards with the idea that their users would be running 10 GPUs on them? I think the answer is certainly not. So my words to you are: "Congratulations. Job well done, because getting that motherboard to recognize 11 GPUs appears to be beyond its IO space capabilities, and your getting it to recognize 10 GPUs is far beyond what it was purposely built to handle."
Also, rather than considering merely the number of lanes a system provides (and the fact that dual-CPU systems can provide more lanes than a single-CPU system), think also about the IO space needed to serve all PCIe-fed devices/components, and the fact that dual-CPU systems can provide more IO space headroom than a single-CPU system. SATA, USB, and ethernet are just a few of the features that need IO space and use PCIe resources. Devices such as Amfeltec GPU chassis and splitters DO NOT INCREASE IO SPACE; what they do is give us users points of connection to the PCIe lanes. In fact, Amfeltec states:
"The motherboard limitation is for all general purpose motherboards. Some vendors like ASUS supports maximum 7 GPUs, some can support 8.
All GPUs requesting IO space in the limited low 640K RAM. The motherboard BIOS allocated IO space first for the on motherboard peripheral and then the space that left can be allocated for GPUs.

To be able support 7-8 GPUs on the general purpose motherboard sometimes requested disable extra peripherals to free up more IO space for GPUs. [Emphasis added]

The server type motherboards like Super Micro (for example X9DRX+-F) can support 12-13 GPUs in dual CPU configuration [Emphasis added]. It is possible because Super Micro use on motherboard peripheral that doesn’t request IO space."

What reasonably priced motherboard has eleven PCIe slots and has been shown to run up to 18 GPUs under CUDA v5.5? Hints: (1) reread the last paragraph, (2) then see my pic below of my workbench, and (3) lastly, read my thread starting here: http://render.otoy.com/forum/viewtopic. ... &start=180 .

Tentatively, I've currently installed: (1) one external SATA host card for connection to my two 4x external hard drive RAID arrays, (2) two OCZ Storage Solutions RevoDrive 350 Series 960GB PCI Express Generation 2 x8 solid state drives (RVD350-FHPX28-960G, to run in RAID 0), (3) two Amfeltec GPU Oriented x4 Splitter cards, and (4) one 240G OWC Mercury Accelsior_E2 PCI Express SSD for alternatively running the system under another of my favorite OSes when I don't need Windows. That still leaves me five completely empty PCIe slots (and remember that each of the two splitters allows me to connect 4 GPUs).

What you don't yet see are the GPUs that I intend to install. I'm fabricating assemblies to store the GPUs internally inside my Lian Li PC-D8000 chassis [ https://www.google.com/search?q=Lian+Li ... 60&bih=743 ]. Note that the Lian Li has a slide-out motherboard tray [ http://www.lian-li.com/en/dt_portfolio/pc-d8000/ ] for ease of further customization and accommodates exactly eleven single-wide PCIe slots perfectly {I did have to add 5 motherboard supports, but otherwise that mystery motherboard fits perfectly}.

Admittedly, I'm not sure at this stage whether I can leave all of those storage-related cards connected and still have the system support 13 video cards, half of which will be Titan Zs, in an environment with 4 connectors (splitter card #1) + 4 connectors (splitter card #2) + 5 connectors (empty slots using x8 to x16 PCIe slot adapters [ http://render.otoy.com/forum/download/f ... &mode=view ], which I hope will arrive this week from across the pond and will then be connected to x16 to x16 powered riser cables).
Only time and trial will determine that. Any overruns (i.e., GPUs that exceed that motherboard's IO space limit, or storage-related cards {they also consume IO space} that I decide to forgo in the interest of GPU maximization) will go inside my second build.
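The IO-space arithmetic behind Amfeltec's numbers can be sketched in a few lines. The figures here are assumptions for illustration only (real BIOSes differ in what they reserve), but they show why consumer boards top out around 7-10 GPUs while a stripped-down server board like the X9DRX+-F can reach 12-13:

```python
# Back-of-envelope sketch of the legacy I/O-space budget.
# Illustrative assumptions, not measurements: real BIOS reservations vary.
LEGACY_IO_SPACE = 64 * 1024   # x86 legacy port I/O space: 64 KiB total
BRIDGE_WINDOW = 4 * 1024      # PCI bridge I/O windows are 4 KiB aligned

def max_gpus(io_reserved_for_peripherals):
    # Each GPU behind its own bridge/root port costs one 4 KiB I/O window;
    # whatever the onboard peripherals claim first is unavailable to GPUs.
    free = LEGACY_IO_SPACE - io_reserved_for_peripherals
    return free // BRIDGE_WINDOW

# A desktop board whose onboard peripherals claim a lot of I/O space:
print(max_gpus(32 * 1024))   # -> 8
# A server board that keeps most peripherals out of I/O space:
print(max_gpus(12 * 1024))   # -> 13
```

This is why "Above 4G Decoding" matters: it moves GPU BARs into 64-bit memory space, but the tiny legacy I/O budget can still be the binding constraint.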
Attachments
Supermicro X9DRX+-Fs.png
Last edited by Tutor on Mon Jun 29, 2015 12:43 pm, edited 6 times in total.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
User avatar
Tutor
Licensed Customer
Posts: 531
Joined: Tue Nov 20, 2012 2:57 pm
Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

A Polish Ginger wrote:
Tutor wrote: I. Just to be sure, am I correct that you got six GPUs now working after you powered that previously non-working card?
Yes, I had the VGA power cables plugged into the GPU only and not into the PSU, so the card could be seen by the OS but couldn't be used (because not enough power was reaching it, is my guess). I'm just glad it was a simple fix.

As for the registry hack, thank you, I'll be sure to try it, but there is something I find odd. I have 0000-0006, which would mean seven cards, which I do not have. So that is weird.
Not weird at all. Each card that you have installed in the past leaves a profile there (think "fingerprints"). I simply delete the profiles that aren't relevant. But you can leave them there until you get comfortable with the process.

A Polish Ginger wrote:Also, to add the DWORD values, do you just right-click on the folders "0000, 0001, etc." and not "inside" the folder? ... .
On my version of Windows, I click on the folder to open it in a pane to the right. Then, in that right pane, I right-click in empty space to bring up a selector that lets me add a 32-bit DWORD value (among others) and enter what Octane suggests, i.e., DisplayLessPolicy or LimitVideoPresentSources (they have to be entered separately, so I have to do this twice for each GPU). Then I double-click on each entry that I've just created (the value appears at the bottom of the right pane where I created it) to give it the numerical value "1" in the popup window that appears.

P.S. In your Windows version, it appears to be even easier: just right-click and select to add a 32-bit DWORD. But don't forget to give it that "1" assignment.
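Tutor's click-by-click procedure can also be scripted. Below is a sketch using Python's standard winreg module; the profile count of six matches A Polish Ginger's rig and is an assumption you'd adjust to the "000N" subkeys on your own machine. The registry writes only run on Windows, and as with any registry edit, back up first:

```python
import sys

# GUID of the Display Adapters device class (from the Otoy instructions above).
GPU_CLASS_KEY = (r"SYSTEM\CurrentControlSet\Control\Class"
                 r"\{4D36E968-E325-11CE-BFC1-08002BE10318}")

def profile_paths(count):
    # Build the "0000", "0001", ... subkey paths, one per video-card profile.
    return ["%s\\%04d" % (GPU_CLASS_KEY, i) for i in range(count)]

if sys.platform == "win32":
    import winreg
    for path in profile_paths(6):  # adjust 6 to your number of profiles
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0,
                            winreg.KEY_SET_VALUE) as key:
            # The two DWORDs Octane's troubleshooting guide calls for, set to 1.
            winreg.SetValueEx(key, "DisplayLessPolicy", 0,
                              winreg.REG_DWORD, 1)
            winreg.SetValueEx(key, "LimitVideoPresentSources", 0,
                              winreg.REG_DWORD, 1)
```

Run it from an elevated (administrator) prompt, then reboot, just as with the manual procedure.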
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
User avatar
Notiusweb
Licensed Customer
Posts: 1285
Joined: Mon Nov 10, 2014 4:51 am

Hi Tutor, love looking at the Supermicro X9DRX+-F picture. If I lived across the street I would bring my GPUs over and we could shut down the local power grid!...

Just wondering:
(1) what is your primary display/boot GPU? Is it a low-power / low-performing card? (In my case my Titan X is a really demanding GPU, needing its own special BIOS, not booting off a riser...my #2 slot is unused because of this...and it sucks power away from the mobo. But I put up with it because 12GB of VRAM really makes using art apps fun.)
(2) does the Supermicro X9DRX+-F support 'Quad SLI', as in a gaming config? I know mobos have molex pins to help power these configurations so they run stable. So, does the Supermicro X9DRX+-F have a well-distributed power arrangement option? Titan Z's are power beasts. "With great power comes great incompatibility"
(3) no USB 3.0, right? Is that a trade-off for power or I/O space, or is it just the manufacturing age of the board?
(4) how do you like the BIOS UI and support?

Sorry, if I'm conducting an interview 8-)
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
User avatar
Tutor
Licensed Customer
Posts: 531
Joined: Tue Nov 20, 2012 2:57 pm
Location: Suburb of Birmingham, AL - Home of the Birmingham Civil Rights Institute

Notiusweb wrote: (1) what is your primary display/boot GPU? Is it a low power / low performing card? ... .
I haven't yet completed the build, so I haven't finalized my choice of primary display/boot GPU. I plan to install at least eighteen GPU processors, including six Titan Zs and six other single-processor GPUs, in my first Supermicro X9DRX+-F build to run Redshift3d, FurryBall and TheaRender alongside OctaneRender (the only one of the four with a firm 12-GPU-processor license limit). For my other systems that I use as render masters, I typically use the EVGA GT640 4G (DDR3) for interactivity while designing and tweaking assets/scenes, and I might use one for this build. The GT640 is a low-power (TDP = 65W), relatively high-memory (4G), thin (single-slot), low-price (I purchased them 2 yrs. ago for about $80 {USD} each), but decently performing (900 MHz core) card. I've got seven of them in my current network configuration. However, for a current purchaser of such a GPU, I'd recommend the EVGA GT 740 4G (DDR3) [the GT 640 4G and GT 740 4G (in the single-slot versions) will raise your OctaneBench score by only 8-15 points { https://en.wikipedia.org/wiki/List_of_N ... 700_Series }]. Keep in mind, however, that one of my purposes in using the Supermicro X9DRX+-F is to consolidate my many GPUs into as few systems as possible to improve rendering performance and to reduce software licensing costs, power usage and administrative chores.
Notiusweb wrote:(2) does the Supermicro X9DRX+-F support 'Quad SLI', as in a gaming config? ... .
I'm not sure whether the Supermicro X9DRX+-F supports Quad or 4-way SLI (or any other SLI config.), but I doubt that it does, since the manual doesn't mention SLI. That's one thing I haven't been concerned with at all, since I do not plan to use these systems for gaming and all of my GPU rendering software recommends against using SLI for rendering.
Notiusweb wrote:So, does the Supermicro X9DRX+-F have a well-distributed power arrangement option?


Not exactly sure what "a well-distributed power arrangement option" entails, particularly in the context of my build, which will likely employ four 1600-watt PSUs. */ But the Supermicro X9DRX+-F does have the standard ATX 24-pin input power connector, plus two 8-pin power input connectors and a 4-pin power input connector. The only other motherboards I'm familiar with that have as many power input options are EVGA's SR-2s and SR-Xs. The LEPA 1600W PSU has the power output connectors to feed all four of the Supermicro X9DRX+-F motherboard's power inputs; not all PSUs support all four.
Notiusweb wrote:(3) no USB 3.0 right? Is that a trade-off for power, I/O space, or is it just the manufacture age of the board
Right, no USB 3.0 out of the box. But why no USB 3.0, I have no idea. It might be age-related. I don't see there being a power or IO space trade-off in choosing USB v2 over USB v3. In any event, there are PCIe slots galore for installing the latest and greatest of whatever card can be fed from PCIe, including USB, SATA, network, etc.
Notiusweb wrote:(4) how do you like the BIOS UI, or support
Before buying any motherboard or GPU, I download its manual(s) and read it/them from beginning to end. I also like being able to perform digital searches of the manual. The BIOS for my two Supermicro X9DRX+-Fs is almost a mirror image of the BIOS for my two Supermicro SuperServer SYS-8047R-TRF+s, so I'm very familiar with it. I appreciate its features/options and know how to tweak them. Supermicro's support is the best that I've seen in my 30+ years of learning and using computer technology.
Notiusweb wrote:Sorry, if I'm conducting an interview 8-)
No Problem. One of my life's missions is to aid/assist all of my cousins, close and distant - no matter where they are. Love is not bounded by either time or space.


*/ Please keep in mind that if I don't consolidate, 18 GPUs would have to be distributed across more systems and those 1600W PSUs would also have to power those other systems' needs, apart from those of the GPUs within them. So although the number of PSUs may be reduced little, if at all, the system overhead power requirements are cut drastically - allowing for more power for the GPUs when I tweak them.
Because I have 180+ GPU processors in 16 tweaked/multiOS systems - Character limit prevents detailed stats.
User avatar
Notiusweb
Licensed Customer
Posts: 1285
Joined: Mon Nov 10, 2014 4:51 am

Here is my latest - Still in BIOS world

I received BIOS P3.30N with a 4G option (made me happy, with 'N' after 'M' LOL). They stated it was based off the P3.30M, which to me was the only BIOS which thus far had supported the Titan X.

When I tested, both the Titan X and 660 Ti booted by themselves as primary with 4G into Windows.

However, when I booted with > 10 GPU cores, both the Titan X and the 660 Ti gave a "d4" error (PCI resource allocation).
I was able to confirm too, through process of isolation, that all cards are functioning independent of one another. So under 4G option, if < 10, fine. If > 10, "d4".

What is interesting is that on the BIOS revisions with the 4G option that the Titan X could not boot from, P3.30F and P3.30K, the 660 Ti posted with > 10 GPUs under the 4G option (it is here that I have not yet tackled the apparent OS limit of 10).

So I replied back with all of this (more succinctly, I just said I wanted the Titan X to boot with > 10 GPU cores under 4G the way the 660 Ti had on P3.30F and P3.30K...), and I will see what they say.
I told them I welcome the testing and assistance, they are very responsive. :D

Tutor,
(1) That sounds like a magnificent arrangement. I will share my own Titan Z experience: stock cards run 'nicer' than the overclocked versions. Not faster, but the OC versions' temps average ~10+ degrees higher when rendering. I feel I need to monitor the OC versions, whereas the stock ones just roll right along. Also, if buying new or used, try to get the EVGA versions. I think PNY may no longer honor the warranty on cards no longer in production through PNY, whereas with EVGA this is not the case, and if you have the serial # you can even look up how much warranty time is left on their website. Just my thoughts.
(2) I do not use my PC for gaming either, however much like from our bitcoin cousins, I actually have gotten a lot of info from our gaming cousins. They encounter, and cause, many problems! My board has two 4-pin molex plug sites for quad-SLI, and when I saw this I had imagined it would give my PCIE lanes support as I try my own little expansion. While I have not noticed any difference since adding the molex, I am imagining I might be alleviating some strain on the board with this "power arrangement option".
(3) I have tried Iray for Daz Studio, and V-Ray and Indigo Render for SketchUp. Neither is as fast as OR, but they are in-house products that work well with their host apps. From what I have seen, OR for Daz is very nice, with some evolution needed, but really fun to use. However, I think I heard that the SketchUp version is not receiving as much attention in development. I saw the plugin for Cinema4D in a couple of videos; it looks really nice too. I guess I need only look at the forums to see how they are doing.
(4) 30+ years - you know, I have done PC art and music as a hobby for about 15 years, and my iPad now gives a lot of the current stuff a solid run. What must rendering have been like 15 years ago? It must have been heartbreaking to have waited and waited on a render, only to find a lighting artifact when it was done, or to say to yourself, artistically, "Hmmm...I think now it would be better actually this way...."
(5) Yep, I am happy to be of a world with many minds. I think we are beginning to link them more effectively together with the internet; maybe one day technology can truly merge them. I hope this would be a peaceful way of interacting with others for parts of the day. I never forget that my own body is not just "me", but rather a collaboration of tiny cells, composed of even smaller systems, each with their own protocols, interests, and, dare I say, on some level, intelligence. And everyone else has this too.
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
User avatar
smicha
Licensed Customer
Posts: 3151
Joined: Wed Sep 21, 2011 4:13 pm
Location: Warsaw, Poland

Notiusweb,

You said something important about the Titan X and risers: if I put a primary GPU (Titan X) on a riser, will the system boot?
3090, Titan, Quadro, Xeon Scalable Supermicro, 768GB RAM; Sketchup Pro, Classical Architecture.
Custom alloy powder coated laser cut cases, Autodesk metal-sheet 3D modelling.
build-log http://render.otoy.com/forum/viewtopic.php?f=9&t=42540
User avatar
Notiusweb
Licensed Customer
Posts: 1285
Joined: Mon Nov 10, 2014 4:51 am

Post by smicha » Wed Jul 01, 2015 9:18 am
Notiusweb,

You said something important about the Titan X and risers: if I put a primary GPU (Titan X) on a riser, will the system boot?

Hi Smicha,

In the case of my Titan X, Titan Z, and 660 Ti as primary GPU loading through the powered USB 3.0 riser on my ASRock X79 Extreme 11, the answer is NO.

I have just recently experienced failures trying different things to open up slot 2.
In various scenarios I have run the cards as primary GPU in slot 1 on the riser, and they would not boot into Windows. They get into the BIOS, but then just stall after that. One time the Windows splash screen froze; another time it started Startup Recovery and said it could not repair. I also tried loading the Titan X out of slot 2 once, but it was almost being handled like a second monitor: it worked once, installed drivers, asked to restart, but then would never proceed to boot. I tried the whole process again after telling the BIOS to load slot 2 as the primary GPU, but then I ran into the no-boot problem again. It could be only my board, or the risers I am using, but they won't work off the USB 3.0 riser as functional primary GPUs. It might work off some other type of riser, or with these risers on some other motherboard, but I don't know...

I did see that Polish Ginger had rig photos on page 7 of this thread, where the primary GPU appears to be connected to the riser and tested in different slots. But there were problems encountered, and then on page 8, where Polish Ginger's photos show the Amfeltec cluster, the primary GPU looks like it is attached directly to the board, because PG mentions that the cluster is holding the 980 Ti's. I would bet the riser was not allowing the primary GPU to boot properly on that motherboard during the 'page 7 phase' as well.

On the side, I took a look at recent motherboards by ASRock and saw that a lot seem to have 6 PCIe slots, but the gap between slots 1 and 2 is 2 slots wide. I am wondering if it is common that slot 2 never gets utilized unless you have a single-slot card. The Titan X and Z are power hungry, but I couldn't get my 660 Ti working that way either. Maybe a powered, full-length non-USB x16 riser would work?... Sorry to throw out more questions, but I would say that anyone planning to build a rig could probably already test with their own current rig and cards whether or not it is viable.

In other news...
What do you want...these leather seats are hot...
Hot like your GPU...
Win 10 Pro 64, Xeon E5-2687W v2 (8x 3.40GHz), G.Skill 64 GB DDR3-2400, ASRock X79 Extreme 11
Mobo: 1 Titan RTX, 1 Titan Xp
External: 6 Titan X Pascal, 2 GTX Titan X
Plugs: Enterprise
User avatar
itou31
Licensed Customer
Posts: 377
Joined: Tue Jan 22, 2013 8:43 am

With the risers that use a USB 3.0 cable, only 4 signals (differential TX and differential RX) from the first PCIe lane are carried. And I think there are many other signals, like clock/sync etc., that are needed for a card to boot and be defined as the primary GPU.
I7-3930K 64Go RAM Win8.1pro , main 3 titans + 780Ti
Xeon 2696V3 64Go RAM Win8.1/win10/win7, 2x 1080Ti + 3x 980Ti + 2x Titan Black
Post Reply

Return to “Demo Version Questions & Discussion”