Scene data reaches the slaves almost immediately; there is no long wait while net memory is transferred.
Saving 5 to 10 seconds per frame in a scene that renders in 2 to 3 minutes is a considerable advantage.
Even if you do not have many network GPUs, it is a good idea to build a 10Gb network.
To build mine I bought switches and GBICs inexpensively on eBay: a 24-port 10Gb switch, ten 10Gb NICs, and DAC cables for less than $500 in total.
If you are interested in extending your network rendering further, consider it.
The 10Gb NIC is fantastic.
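As a rough back-of-envelope check of the claim above (the per-frame figures are midpoints assumed from the numbers in the post, and the frame count is hypothetical):

```python
# Rough estimate of the speedup from faster scene uploads.
# Per-frame numbers are assumed midpoints of the figures quoted above.
render_time_s = 150.0   # 2-3 minutes per frame, midpoint
upload_saving_s = 7.5   # 5-10 seconds saved per frame, midpoint

saving_pct = 100.0 * upload_saving_s / render_time_s
print(f"~{saving_pct:.0f}% less wall time per frame")   # ~5%

frames = 1000           # hypothetical animation length
hours_saved = frames * upload_saving_s / 3600
print(f"~{hours_saved:.1f} hours saved over {frames} frames")  # ~2.1 hours
```

About 5% per frame does not sound like much, but over a long animation it adds up to hours.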
Moderator: juanjgon
- BorisGoreta
- Posts: 1413
- Joined: Fri Dec 07, 2012 6:45 pm
- Contact:
I was considering it, but to make it work you must sacrifice one PCI slot in your workstation and one in each node (I have two nodes), which means three fewer GPUs.
What brand did you get ?
What are the real life transfer speeds you get ?
19 x NVIDIA GTX http://www.borisgoreta.com
LB6M + Mellanox 10Gb + DAC

BorisGoreta wrote: I was considering it, but to make it work you must sacrifice one PCI slot in your workstation and one in each node (I have two nodes), which means three fewer GPUs.
What brand did you get?
What are the real life transfer speeds you get?
I do have to sacrifice one PCIe slot, but I intend to solve that with a PCIe riser rig in the future: I will connect six PCIe x16 riser cables to the X99-E WS main board and install one 10G NIC.
As the number of slaves increases and scenes get larger, the net memory data grows, so the transfer time gets longer and efficiency drops.
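A minimal sketch of why that efficiency drop happens: with one shared link, every slave needs its own copy of the scene, so upload time scales with slave count. The scene size and link speeds below are assumed example figures, not measurements from this thread.

```python
def upload_time_s(scene_gb: float, n_slaves: int, link_gbps: float) -> float:
    """Time to push a full scene copy to every slave over one shared link.

    Assumes ideal throughput (no protocol overhead) and decimal GB/Gb.
    """
    total_bits = scene_gb * 8e9 * n_slaves  # each slave needs the whole scene
    return total_bits / (link_gbps * 1e9)

# A hypothetical 4 GB scene pushed to 4 slaves:
print(upload_time_s(4, 4, 1))    # 1 Gb link  -> 128.0 s
print(upload_time_s(4, 4, 10))   # 10 Gb link -> 12.8 s
```

On a 2-3 minute frame, a two-minute upload wipes out most of the slaves' contribution, which is exactly the situation a 10Gb link avoids.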
- BorisGoreta
You could get a NIC with two outputs to the switch (Intel has such models) and configure one to send data to one slave and the second to the other. This is what I did with my 2 stock NICs on the motherboard, which doubled the transfer speed to the nodes from 128MB/s to 256MB/s.
I had situations where the slaves got their data uploaded only seconds before the actual frame finished rendering, so their contribution was minimal. The scene had ocean in it so it changed for every frame, that is why new data had to be pushed to slaves for every frame.
I plan to purchase new MBs with such fast NICs built in so I can save the PCI slots for GPUs.
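One way to get that "one NIC per slave" split on Linux is to give each port its own subnet, so traffic to each slave always leaves through a dedicated port. This is only a sketch; the interface names and addresses are hypothetical, not taken from the thread.

```shell
# Sketch (Linux iproute2; hypothetical interface names and addresses):
# put each NIC port on its own subnet so each slave has a dedicated link.
ip addr add 192.168.10.1/24 dev eth0   # port 1 -> slave 1's subnet
ip addr add 192.168.20.1/24 dev eth1   # port 2 -> slave 2's subnet

# Give slave 1 an address in 192.168.10.0/24 and slave 2 one in
# 192.168.20.0/24; uploads to each slave then saturate separate ports
# instead of sharing one link.
```

The same idea works whether the two ports are on one dual-port card or on two motherboard NICs.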
BorisGoreta wrote: You could get a NIC with two outputs to the switch (Intel has such models) and configure one to send data to one slave and the second to the other. This is what I did with my 2 stock NICs on the motherboard, which doubled the transfer speed to the nodes from 128MB/s to 256MB/s.
I had situations where the slaves got their data uploaded only seconds before the actual frame finished rendering, so their contribution was minimal. The scene had ocean in it so it changed for every frame, that is why new data had to be pushed to slaves for every frame.
I plan to purchase new MBs with such fast NICs built in so I can save the PCI slots for GPUs.
The only way to keep all your slots and still have a 10G NIC is an expensive motherboard with 10G on board, like the X99-E WS/10G.
Since that is expensive, the alternative is to give up one slot and use a cheap card:
http://www.ebay.com/itm/LOT-OF-2-MNPA19 ... Sw-itXq5DH
OR
http://www.ebay.com/itm/Mellanox-MNPH29 ... Sw8gVX3F2x
Could using one of these products save money?