"It's better to keep your mouth shut and appear stupid than open it and remove all doubt."Seekerfinder wrote: Ah. You were the PCIe riser cable in someone else's rendering. As we all know, PCIe risers are very important...

"It's better to keep your mouth shut and appear stupid than open it and remove all doubt."Seekerfinder wrote: Ah. You were the PCIe riser cable in someone else's rendering. As we all know, PCIe risers are very important...
Says the man with 2955 posts...glimpse wrote:"It's better to keep your mouth shut and appear stupid than open it and remove all doubt."Seekerfinder wrote: Ah. You were the PCIe riser cable in someone else's rendering. As we all know, PCIe risers are very important...
Seekerfinder wrote: Tutor,
Do I understand correctly that you got that 00 error before loading the system with a bunch of rendering cards? I think Asus service worldwide seems to be a problem.
I also like the Tyan boards.
Seeker

Tutor wrote: Nope. I had installed the OSes, drivers, etc. Then I loaded up seven of my Titans (the original ones) on risers planted in the seven PCIe slots; that's when things went very bad, with BIOS corruption. Between my Asus nightmare and my subsequent purchases of Tyan & Supermicro servers, I ran the Titans successfully in one of my two old EVGA SR-2 systems (purchased back in 2010). Those are motherboards with seven X16-sized PCIe slots. There, the Titans ran without any incident. The SR-2s are now running GTX 590s. The BIOSes for the SR-2s are known to be somewhat quirky; but what does that say about my Asus servers' BIOS? I still smell their stench.

Thanks, Tutor. It really seems that it's hit and miss with some of these systems. It generally does seem that dual-CPU boards handle lane management much better than single-CPU boards, though.
Best,
Seeker

Seekerfinder wrote: ... Thanks, Tutor. It really seems that it's hit and miss with some of these systems. It generally does seem that dual-CPU boards handle lane management much better than single-CPU boards, though.
Best,
Seeker

I fully agree. Also, after re-reading the posts on this and some of the previous pages, to sum it up: it seems to me that the freedom we've exercised in choosing the many different systems we've put into service as multi-GPU rendering systems comes at the price of some frustration, where a single component that works in one system may appear not to work in another. Determining what's specifically at fault, given the many other hardware/software variables at play (including even version differences), can be nightmarish. Thus, it appears that purchasing what is sold specifically as a GPU server might be the best way to go if one wants more comfort that the hardware isn't the source of the problem. The downside of going that route, however, is that the entry price is much higher. Decisions, decisions!
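
On that "what's specifically at fault" problem: before tearing a rig apart, it helps to dump what the driver actually negotiated per card, since a riser or slot that trained down to x1 can look exactly like a "flaky" GPU. A minimal Python sketch, assuming NVIDIA cards with nvidia-smi on the PATH (the query fields below are standard nvidia-smi ones):

Code:
import subprocess

# Standard nvidia-smi query fields: per-GPU index, name, and PCIe link state.
FIELDS = "index,name,pcie.link.gen.current,pcie.link.width.current,pcie.link.width.max"

out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

for row in out.strip().splitlines():
    idx, name, gen, cur_w, max_w = [f.strip() for f in row.split(",")]
    # A card whose current width is below its max is the first thing to check.
    note = "  <-- trained below max link width" if cur_w != max_w else ""
    print(f"GPU {idx}: {name} | PCIe gen {gen}, x{cur_w} (max x{max_w}){note}")

A card stuck at x1 behind a riser that should run x8 or x16 shows up immediately this way, before any render-time weirdness does.
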
Tutor wrote: ... Thus, it appears that purchasing what is sold specifically as a GPU server might be the best way to go if one wants more comfort that the hardware isn't the source of the problem. ...

Agreed. But it would also be really good to try and 'distill' what in most cases *should* work in a multi-GPU system. Try and reduce the unknowns, even if it means a narrower set of options for builders of new rigs. We're going to try and do that, at least to a degree, with the GPU Turbine. Right now, we're at dual CPU and dual PSU.
Best,
Seeker

Seekerfinder wrote: ... Agreed. But it would also be really good to try and 'distill' what in most cases *should* work in a multi-GPU system. Try and reduce the unknowns, even if it means a narrower set of options for builders of new rigs. ...

Again, I fully agree with you. What you advocate has been, and will continue to be, the primary focus of what we've been doing and will be doing here. But for those who want to minimize the "it should have worked" moments (i.e., those who want a minimum of frustration/experimentation) and have the means, I'm just giving them another option. I'd rather err by being over-inclusive when it comes to providing options.

glimpse wrote: ** another idea, but I do not know how much ground it has... Notius, have you tried to ground multiple PSUs & those expansion units?

Tom, were you referring to a physical ground wire here? It's interesting that you ask, if that is what you meant: Amfeltec first asked me to attach a ground wire to the PSU, then to the Amfeltec chassis, and then to the PC chassis. I actually tried it twice with a 10-gauge ground wire, with no effect. When you find yourself doing unusual things to make something work normally, you start to feel like you have a problem with that arrangement.

Seekerfinder wrote: ... We're going to try and do that, at least to a degree, with the GPU Turbine. Right now, we're at dual CPU and dual PSU.

How is the Turbine arranged? Does it connect externally to a motherboard, like a GPU splitter with one cable running to a PCIe slot, or is the motherboard integrated into it structurally somehow, with the GPUs connected independently to the motherboard?

Notiusweb wrote: Tom, were you referring to a physical ground wire here? ... I actually tried it twice with a 10-gauge ground wire, with no effect. ...

Yeah, I was referring to this. A while ago, a guy on an FB group was in a panic because he couldn't even start his computer (a mining-rig-like setup with many GPUs); it wasn't getting anywhere, not booting at all (he was using a splitter from Amfeltec). I was about ready to send him my splitter (as I had a working, tested unit sitting unused), but then Amfeltec support suggested grounding things up, and that solved the problem. So I was thinking maybe that (a missing ground somewhere) has something to do with the stability in your case as well.

Not just "another," but an excellent idea for all of us out-of-the-boxers to first try to bust the dead system/external component ghosts.glimpse wrote:...
** another idea, but I do not know how much ground it has..- Notius have You tried to ground multiple PSUs & those explansion units?
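
For the rigs in that story that at least manage to boot, it also helps to pin down which layer is losing the cards before reaching for ground wires: compare what the PCIe bus enumerated against what the NVIDIA driver claims. A rough Linux-only sketch, assuming lspci and nvidia-smi are installed (the counting logic is illustrative, not Amfeltec-specific):

Code:
import subprocess

# What did the BIOS/PCIe bus enumerate? Compute cards may show up as
# "3D controller" rather than "VGA compatible controller".
lspci = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
on_bus = [l for l in lspci.splitlines()
          if "NVIDIA" in l and ("VGA" in l or "3D controller" in l)]

# What does the NVIDIA driver actually see? ("nvidia-smi -L" prints one
# "GPU n: ..." line per device the driver has claimed.)
smi = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout
driver_gpus = [l for l in smi.splitlines() if l.startswith("GPU ")]

print(f"enumerated on the bus: {len(on_bus)}, visible to the driver: {len(driver_gpus)}")
if not on_bus:
    print("nothing on the bus -> suspect riser/slot/power/grounding first")
elif len(driver_gpus) < len(on_bus):
    print("enumerated but not claimed -> suspect driver/OS configuration")

If nothing shows on the bus at all, the problem sits upstream of the OS entirely (riser, slot, power, ground); if the cards enumerate but the driver drops them, software is the better suspect.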