smicha wrote: And here is something I don't get (ASUS X99 WS)
40-Lane CPU:
7 x PCIe 3.0/2.0 x16 (single x16 or dual x16/x16 or triple x16/x16/x16 or quad x16/x16/x16/x16 or seven x16/x8/x8/x8/x8/x8/x8)
28-Lane CPU:
7 x PCIe 3.0/2.0 x16 (single x16 or dual x16/x16 or triple x16/x16/x16 or quad x16/x16/x16/x16 or seven x16/x8/x8/x8/x8/x8/x8)
A 28-lane CPU doing x16/x16/x16/x16? How does that work?
"However, the X99-E WS uses a pair of PLX chips on the motherboard to expand the usable number of PCI-Express lanes dramatically. A full four PCI-Express x16 cards can be used at full speed, or up to seven with two at full x16 speed and the other slots at x8."
(source: http://www.pugetsystems.com/parts/Mothe ... E-WS-10664)
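That explains it - the PLX switches make up the difference. Here's a rough sketch of the lane math in Python (the four x16 slots come from the quote above; how they are split across the two switches is my assumption, since the specs only say "PLX chips"):

cpu_lanes = 28            # 40 on the bigger Haswell-E/Xeon parts
slots = [16, 16, 16, 16]  # four GPU slots, each wired x16 behind the switches

downstream = sum(slots)   # 64 electrical lanes presented to the cards
print("Slots present %d lanes, CPU provides %d" % (downstream, cpu_lanes))
print("Oversubscription: %.1fx" % (downstream / cpu_lanes))

# A PCIe switch gives every card a full x16 electrical link, but all traffic
# still funnels through the switch's uplink to the CPU, so peak *aggregate*
# bandwidth stays capped by the CPU's own lanes. Any single card can still
# burst at full x16 as long as the others are quiet.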
These workstation motherboards are really heavy duty, and I'm starting to respect those designs more and more - now I understand where the price comes from. Each additional PLX chip alone costs the company $30-40, so it's no surprise the price tags on complete boards climb to $350-500.
However, I need to read more about those PLX chips, picking up the thought from my previous post, which I came across while reading about NVMe PCIe drives:
"..with the help of PCIe switches it's possible to grant all devices the lanes they require (although the maximum bandwidth isn't increased, but switches allow full x16 bandwidth to the GPUs when they need it)."
...I'm really starting to get curious about their usefulness in multi-GPU rigs.
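If you want to see what the slots actually negotiate on a running box, here's a minimal sketch (assumes Linux with sysfs mounted; the attribute names are standard kernel ones, though not every device exposes them):

#!/usr/bin/env python3
from pathlib import Path

# Print the negotiated PCIe link width and speed for every PCI device,
# so you can see what the GPUs behind the PLX switches trained at.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # some devices don't expose these attributes
    print("%s: x%s @ %s" % (dev.name, width, speed))

Keep in mind the link speed often drops at idle for power saving, so check it while the cards are under load if you care about the Gen 3.0 number.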