
17 PCI Express 2.0 x16

Posted: Sun Dec 16, 2012 8:22 pm
by glimpse
I was googling around the other day and tripped through some bitcoin forums.. after a while I found a company that makes some amazing products =) & here's one of them:

“...the BPG8032 PCI Express 2.0 backplane redefines the world of PICMG 1.3 slot-based servers by maximizing PCI Express I/O expandability... For high-performance GPGPU applications, this backplane supports up to nine NVIDIA Tesla 20-series GPUs for cluster computing applications inside a single 5U, 19″ rackmount computer enclosure when using the TSB7053 single board computer.”

post itself: http://blog.trentonsystems.com/new-back ... -2-0-slots

backplane BPG8032: http://www.trentonsystems.com/products/ ... ne-bpg8032
& their SHB, TSB7053: http://www.trentonsystems.com/products/ ... 13/tsb7053

just sharing in case anyone is geekily interested in this kind of thing.. price aside =).. it's actually amazing how much power You can pack into one system =)

Re: 17 PCI Express 2.0 x16

Posted: Sun Dec 16, 2012 8:26 pm
by Refracty
so you will need 3 PSUs and a lot of current :)

Re: 17 PCI Express 2.0 x16

Posted: Sun Dec 16, 2012 8:57 pm
by bepeg4d
wow, with 9x GTX 690 you could have 18 GPUs for Octane :o
ciao beppe

Re: 17 PCI Express 2.0 x16

Posted: Sun Dec 16, 2012 9:16 pm
by glimpse
Refracty wrote:so you will need 3 PSUs and a lot of current :)
why three, when You can have one more for redundancy! =p

http://www.trentonsystems.com/products/ ... er/trc5005
bepeg4d wrote:wow, with 9x GTX 690 you could have 18 GPUs for Octane :o
ciao beppe
actually it supports 17 PCIe 2.0 x16 links.. but stuffing it full of cards would only be possible if they had a single-slot design..
source: http://www.trentonsystems.com/downloads ... csheet.pdf

p.s. if You'd rather not use an SHB, there's an option for PCI Express expansion products.
http://blog.trentonsystems.com/pci-expr ... capability

Re: 17 PCI Express 2.0 x16

Posted: Mon Dec 17, 2012 8:15 am
by gabrielefx
Are these systems certified for Teslas?
I would go for Tyan barebones, fewer slots but PCIe 3.0.

Re: 17 PCI Express 2.0 x16

Posted: Mon Dec 17, 2012 8:24 am
by glimpse
gabrielefx wrote:Are these systems certified for Teslas?
I would go for Tyan barebones, fewer slots but PCIe 3.0.
reading their website inside out, I see they have a lot of encouragement from big players..
so I assume their product is verified, but I haven't seen that exact word anywhere =)..

though they claim that their product supports up to.. nine K20 units on one backplane =)..
..if that scales up as nicely as in Arion, we'd have the power of roughly.. ~eighteen GTX 580s..

from what I've read.. nVidia claims the cards run 2-2.5 times faster than the Fermi generation
out of the box (but they're talking 'bout DP)..
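
just to spell out that back-of-the-envelope math (the ~2x K20-vs-GTX 580 factor is my own guess taken from the claims above, not a measured Octane/Arion number =):

# rough scaling sketch, assuming one K20 ~ 2x a GTX 580 (hypothetical factor)
k20_per_backplane = 9
k20_vs_gtx580 = 2.0                       # assumed speed-up, not a benchmark
gtx580_equivalent = k20_per_backplane * k20_vs_gtx580
print(gtx580_equivalent)                  # -> 18.0, i.e. roughly "eighteen GTX 580s"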

Re: 17 PCI Express 2.0 x16

Posted: Mon Dec 17, 2012 12:49 pm
by César
This is very interesting. If I understand correctly, you just have to connect the big PCIe backplane to your computer with a PCIe extension cable and it works?

I can't find the price of this PCIe backplane, do you know how much it costs?


I was thinking about buying a PCIe expander, and I found other solutions along the same lines:

Cubix stuff: https://www.cubixgpu.com/Online-Store (between 2000 and 7000 €)

Cyclone stuff: http://www.cyclone.com/products/expansi ... /index.php
They sell backplanes too, but I can't find the price.


The spacing of the PCIe slots seems to be important: you can only put ~8 video cards on the Trenton 17-slot backplane because cards like a GTX or Tesla are thick. Some backplanes have more widely spaced slots for that reason: http://www.cyclone.com/products/expansi ... /index.php
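
To spell out the geometry (I assume dual-slot coolers on every card, this is not from the Trenton datasheet):

# how many dual-width cards fit on a backplane with N single-width slots
total_slots = 17
card_width_in_slots = 2                   # a GTX or Tesla with a dual-slot cooler
cards_that_fit = total_slots // card_width_in_slots
print(cards_that_fit)                     # -> 8, matching the ~8 cards above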


I am very interested to know whether buying a backplane with a power supply is more economical than the very expensive Cubix box.

Re: 17 PCI Express 2.0 x16

Posted: Mon Dec 17, 2012 7:12 pm
by glimpse
Cesar, these goodies are not going to come cheap.. GPU computing is in demand from big players - corporations buying in big quantities, building some of the fastest computers in the world.. this market is occupied by OEMs, and those sell fully equipped systems (with the intention to earn, of course..=) for high prices..

Companies that produce this stuff don't bother to publish the price of a single unit, as the product itself is not meant to be sold to enthusiasts. To reinforce that, I can mention that it's a bit problematic to utilise all those expansion slots with GPUs, not only because of density issues, but also 'cos drivers or BIOSes (don't know too much 'bout that..) don't let You easily utilise the available power - don't even dream about plug and play if You're willing to stuff in more than eight.. unless You buy the product directly from them & get some kind of support..

so.. let's get back to the price. as far as I managed to find out, a new backplane might cost between $1-2k, plus You need an SHB or a PCIe connection unit.. Depending on the SHB You're looking at, the price follows along.. You can choose an i3/i5/i7-powered board with DDR* - that might come rather 'cheap' =), though if You lean towards the unit that has dual 8-core Xeons (E5) with ECC.. & all the other goodies.. =p the price goes up steeply enough

anyway.. this is a more or less standardised field.. so you can use pieces from different vendors and plug them into one unit. That lets You look around for some used stuff online. in this thread (https://bitcointalk.org/index.php?topic=64450.0) a guy manages to find a used backplane for $600 and an SHB for $200.. then a PSU.. - read more if You want..

even if this comes out as a really loud solution.. You can use something like PCoIP & have a thin client on Your table, while keeping this beast in a cage somewhere in a cellar or storage room (if You have good enough cooling there =) maybe in the fridge? =p ..ok.. too much offtopic..

anyway, You will find a lot of discussion here & in other forums about why this path soon becomes seriously complicated =)..




I try to do the math from time to time & look at different solutions.. by far, from my own calculations, one of the cheapest ways is to grab a dual-socket MB, put in some low-end CPUs, as You don't need them that much (unless You want to combine this with high-end Xeons for CPU rendering..), and slot in somewhere around 7 cards. This should work 'out of the box'.. without getting too deep into BIOS/driver issues etc.. plus everything is in one box/rig/place.. & the best thing is that You can easily find everything and order it at once from major online retailers.. so if.. let's say You have a need for some serious power.. order online, and in a few days everything will be delivered.. You will feel Your pockets lighter or even see some debt on a credit card =p but.. this will come cheaper than the solution I've started this thread about.. 'cos You're paying way, way more for density, insane expandability, etc..
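
here's a minimal sketch of the kind of math I mean.. every number below is a placeholder You'd swap for real retailer prices, not a quote from anyone:

# per-GPU cost of a DIY dual-socket rig vs. the dense backplane route
# all prices are hypothetical placeholders, not real quotes
def cost_per_gpu(base_cost, gpu_price, gpu_count):
    return (base_cost + gpu_price * gpu_count) / gpu_count

diy   = cost_per_gpu(base_cost=1500, gpu_price=500, gpu_count=7)   # board + low-end CPUs + PSUs
dense = cost_per_gpu(base_cost=5000, gpu_price=500, gpu_count=9)   # backplane + SHB + chassis
print(round(diy), round(dense))           # the dense box costs more per GPU - You pay for density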

Re: 17 PCI Express 2.0 x16

Posted: Tue Dec 18, 2012 10:21 am
by César
Thanks for your complete answer!

So, if I understand correctly, it would be less expensive to use a PCIe expansion cable connected to our current computer instead of building another one with the SHB. But still around €1000.


Maybe one day, if cloud computing becomes available for Octane, it will be possible to use an additional computer on the LAN to increase speed?
Building another computer with 4 PCIe x16 slots with good spacing might not be too expensive.

Re: 17 PCI Express 2.0 x16

Posted: Tue Jan 29, 2013 9:26 am
by boris
WP_000225.jpg
we got one of these a couple of weeks ago when we were in a hurry to do an animation. the deadline was so near (like two weeks) that we decided to invest a part of our salary into such a thing.
it's really rock-solidly manufactured and trenton's support is amazing.
it's connected to my host via a 3m cable; the host itself has another two gtx 680s and one tesla m2070, all three water-cooled. the animation was rendered out with arion but will probably behave the same with octane (testing right now). We did not notice a significant bottleneck from the one-to-four pcie configuration.
we need those 4gb per card, that's why we've chosen the gtx 680. One good thing about those gainward 2.5-slot designs is that their cooling is astonishing. placed like shown in the picture, their temperature won't exceed 60 degrees celsius and therefore the fans stay really quiet. no need for watercooling.
the future plan is to put, for example, 6 cards directly on the backplane and another 6 on top of them using pcie riser cables.
unfortunately my Asus P6T7 won't be able to address all those cards and won't boot. see http://fastra2.ua.ac.be/?page_id=214 for further details.
So what we think we need is a host board with a bios that is capable of addressing devices above the 4gb space (64-bit), preferably a UEFI bios board. we will test this soon :)
another open question is: will nvidia drivers allow, say, 14 cards in one system? I contacted nvidia support via chat, and the guy said that there is no built-in limitation on the driver side. but that's no guarantee for me and I haven't found a windows system on the internet with more than 8 cards yet.
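one easy sanity check once a test box is up: just enumerate what the driver actually exposes. a minimal sketch using pycuda (my own tool choice here, nothing nvidia support suggested):

# list every CUDA device the driver exposes - if 14 cards are installed
# but fewer show up here, the limit sits in the BIOS/driver, not in octane
import pycuda.driver as cuda

cuda.init()
count = cuda.Device.count()
print("driver exposes", count, "CUDA device(s)")
for i in range(count):
    dev = cuda.Device(i)
    print(i, dev.name(), dev.total_memory() // (1024 ** 2), "MiB")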
another thing to consider:
- the bottleneck in such a system will probably be the bus between host system memory and the processor(s). therefore I believe a dual-processor board will be an advantage for handling the traffic to/from the memory blocks, because there are then two memory paths and not only one.

cheers
boris