So just to clarify then: when 2.0 is out, will we be able to use both distributed (still frame) and sequence (animated) rendering over a number of different boxes directly from inside Cinema 4D, without transferring to the standalone? Will the accessible GPUs show up under the C4D plugin's Devices tab?
Thanks
Octane Render 2.0 for Cinema 4D previews
I think it's about GPUDirect, which lets Kepler cards connect over LAN,
so I don't think it has to export the whole scene to all of the cards;
it wouldn't make sense otherwise.
- indexofrefraction
ah ok, thanks for clarifying aoktar,
i thought this was a 1.5x feature already, but for 2.0 it's still VERY good news!
is 2.0 close? weeks? months? a little hint?

Mac Pro (2012) 2x6 Core | 24GB | 1 x Geforce GTX580/3GB
enigmasi wrote:I think it's about GPUDirect that kelpler is able connect by LAN
so, I don't think it has to export whole scene to all of cards
it wouldn't make sense otherwise

It's not GPUDirect, but our own implementation. We distribute render data to other computers via LAN. We try to do it as efficiently as possible (using compression, caching, differential updates etc.), but it's still quite a bit of data, so 1 GBit/s Ethernet (or higher) is highly recommended. Fortunately 1 GBit/s is quite cheap these days.
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
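The "compression, caching, differential updates" approach abstrax mentions can be sketched roughly like this. This is purely an illustration of the general idea (hash scene data in chunks, re-send only chunks that changed, compress what goes over the wire); the chunk size, hashing, and function names are invented for the sketch and are not Octane's actual implementation.

```python
import hashlib
import zlib

CHUNK = 64 * 1024  # 64 KiB chunks; arbitrary size chosen for the sketch


def diff_update(scene_bytes, cache):
    """Yield (chunk_index, compressed_chunk) for chunks changed since last send.

    `cache` maps chunk index -> hash of the last-sent chunk, so unchanged
    chunks cost nothing on the wire (the caching/differential part);
    changed chunks are zlib-compressed before sending (the compression part).
    """
    for i in range(0, len(scene_bytes), CHUNK):
        chunk = scene_bytes[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        index = i // CHUNK
        if cache.get(index) != digest:
            cache[index] = digest
            yield index, zlib.compress(chunk)


# First send transfers everything; a small edit re-sends only the touched chunk.
cache = {}
scene = bytearray(b"\x00" * 200_000)          # ~200 KB of dummy scene data
first = list(diff_update(bytes(scene), cache))   # all 4 chunks go out
scene[70_000] = 0xFF                             # edit lands inside chunk 1
second = list(diff_update(bytes(scene), cache))  # only chunk 1 is re-sent
print(len(first), len(second))
```

The same idea explains why geometry edits are the expensive case in network rendering: a big geometry change dirties many chunks, while camera or material tweaks touch very little data.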
indexofrefraction wrote:ah ok, thanks for clarifying aoktar,
i thought this is a 1.5x feature already, but also for 2.0 it is VERY good news !
is 2.0 close? weeks? months? a little hint?

The beta is becoming more and more stable, so I think it's not months anymore, but weeks.
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
abstrax wrote:It's not GPUDirect, but our own implementation. We distribute render data to other computers via LAN. We try to do it as efficiently as possible (using compression, caching, differential updates etc.), but it's still quite a bit of data, so 1GBit/s Ethernet (or higher) is highly recommended. Fortunately 1GBit/s is quite cheap these days.

Awesome, do you guys have any numbers for that, in terms of percentage decrease over direct PCI-E cards doing the work?
Rig#1 Win 10 x64 | GTX 1080Ti | GTX 1080Ti | GTX 1080Ti | i7 7900K 4.7GHz | 64GB
Rig#2 Win 10 x64 | GTX 1080Ti | GTX 1080Ti | GTX 1080Ti | i7 3930K 4.4GHz | 32GB
Rig#3 Win 10 x64 | GTX 1070| GTX 1070| GTX 1070| i7 2600K 4.8GHz | 32GB
brasco wrote:Awesome, do you guys have any numbers for that, in terms of percentage decrease over direct PCI-E cards doing the work?

After all data is transferred, the numbers add up without loss. How long the data transfer takes depends on the scene, the number of slaves and the network. In other words: if you make big changes in geometry and render only a few samples per pixel, it doesn't work too well, but if you render hundreds or thousands of samples per pixel, it scales nicely.
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
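The scaling behaviour abstrax describes is classic amortization: each scene update pays a fixed transfer cost, and the sampling work after it parallelizes across GPUs. A toy model (all numbers invented; `effective_speedup` is not an Octane API) makes the point:

```python
def effective_speedup(n_gpus, transfer_s, samples, sample_time_s):
    """Toy amortization model, not Octane's real numbers.

    serial:   time one GPU would need for the sampling work
    parallel: fixed transfer cost + sampling split across n_gpus
    Returns the effective speedup over a single local GPU.
    """
    serial = samples * sample_time_s
    parallel = transfer_s + serial / n_gpus
    return serial / parallel


# Few samples per update: the transfer dominates and 8 GPUs scale poorly.
print(round(effective_speedup(8, 2.0, 16, 0.05), 2))
# Many samples per update: the transfer is amortized, scaling is near-linear.
print(round(effective_speedup(8, 2.0, 4000, 0.05), 2))
```

With the invented numbers above, 16 samples after a 2-second transfer is actually slower than rendering locally, while 4000 samples get most of the way to the ideal 8x; that matches the "few samples bad, thousands of samples nice" rule of thumb from the post.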
abstrax wrote:After all data is transferred, the numbers add up without loss. How long the data transfer takes, depends on the scene, the number of slaves and the network. In other words: If you make big changes in geometry and render only a few samples per pixel, it doesn't work too well, but if you render hundreds or thousands of samples per pixel, it scales nicely.

Man, that is awesome to hear.
Going to have to rethink my render node strategy.

cheers
brasc
Rig#1 Win 10 x64 | GTX 1080Ti | GTX 1080Ti | GTX 1080Ti | i7 7900K 4.7GHz | 64GB
Rig#2 Win 10 x64 | GTX 1080Ti | GTX 1080Ti | GTX 1080Ti | i7 3930K 4.4GHz | 32GB
Rig#3 Win 10 x64 | GTX 1070| GTX 1070| GTX 1070| i7 2600K 4.8GHz | 32GB
brasco wrote:Man, that is awesome to hear.
Going to have to rethink my rendernodes strategy
cheers
brasc

I have to say, network rendering is an unmatched feature in the Live Viewer. It's also good for animations, since it updates according to the scene content. Very good job!

Octane For Cinema 4D developer / 3d generalist
3930k / 16gb / 780ti + 1070/1080 / psu 1600w / numerous hw