
OcDS and Network Rendering - what is needed?
Got it running... render time cut to 40% now... BUT I got the same weird issue that you have... those way too bright colors... you can see exactly in the viewport when the first slave inputs arrive... then the color changes from normal to this whitish whatever.
Any ideas how to avoid this?

Unfortunately I don't have any official feedback on this from t_3 (which does not surprise me, as he does not often communicate until a new version is ready). So, as I wrote a few posts above, the only workaround I have found is to use the scene export feature in the System tab of the plugin and export the scene to .orbx, which you can then render in Octane Render Standalone, where network rendering works like a charm...
So basically I have two Titans X in my primary rig, and when I work in DAZ I have one of them dealing with the Octane viewport. This is enough to get a preview of the scene while I'm actively working with it. When I'm all good to go for the final render, I export the scene to .orbx and do the full-size, high-quality render there, with both Titans X in my master node plus the old original Titan and the 780 Ti in the slave.
It is not ideal: if you find any issue in the scene after exporting, and it is geometry/pose/morph related, you need to switch back to Studio, fix the issue and do the export again.

Unfortunately bepeg4d says that for him the network rendering feature in the plugin works without problems, so apparently it is something that does not happen to everyone, and I fear that since it can't be easily replicated on any machine, t_3 will have a hard time finding the cause and fixing it...
But at least there is a workaround for the moment... Cumbersome, but it works, so the extra money spent on the additional full Standalone license (which in my opinion is a totally stupid licensing model, as I will never run Standalone on the slave node) is not a wasted investment.
Birdovous
Master: Core i7 2600K, 32GB RAM, 2x EVGA GTX Titan X (SC)
Slave 1: Core i5 4460, 16GB RAM, 2x EVGA GTX 1080 Ti SC2
Slave 2: Core i7 9700K, 64GB RAM, 2x ASUS RTX 2080 Ti
Hi,
I can confirm that with my setup, net rendering works with no issues, except that I'm not able to temporarily stop the slave without quitting the app on the slave itself.
Unfortunately I have to move my second PC to another location, so I can't do any further tests with it (there is no net render installer for Mac and Linux currently), and I have to do the same trick of exporting to Standalone for now.
Losing textures is really weird, I would expect a slave error instead.
Also, my slave has less VRAM than the master, so this should not be the culprit.
I suppose the network is on gigabit Ethernet; could there be some other network activity that slows down the transfer?
ciao beppe
The thing is, it is not just a texture loss. It seems that no color information whatsoever is either transferred to the slave or taken into account by the slave while rendering. I tried creating a new scene with just a primitive cube and applying a simple diffuse material with no textures, only an RGB color in the diffuse node. As soon as the slave kicks in and starts sending rendered data back to the master, the image in the viewport gets washed out/desaturated, as the slave seemingly sends back just a 'clay' mesh with all surfaces being white.
In my master I have two Titans X with 12GB each. In my slave I have a first-generation Titan (6GB) and a 780 Ti (3GB). The scenes I render usually need less than 3GB, so it should not be a VRAM issue. The log on the slave is clean, no error messages.
No significant traffic on the network.
I may still try to check whether, at least judging by the network activity, the entire scene gets transferred. With Standalone (rendering my normal scenes exported from DAZ as .orbx), the slave usually receives a blob of data about 1.5GB in size at a 700-900 Mbps rate. I may try to see whether there is the same/similar traffic pattern with the plugin alone, which would indicate that both geometry and textures were at least transferred to the slave.
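For anyone who wants to repeat that check, here is a rough sketch of what I have in mind (Python with the psutil package; the interface name "Ethernet" and the 120-second window are placeholders for my own setup, so adjust them to your slave's NIC):
[code]
# Rough sketch: watch how many bytes the slave's NIC receives while the
# master pushes the scene, to compare the plugin's transfer with Standalone's.
# Assumes the psutil package is installed; the interface name and the
# watch duration are placeholders.
import time
import psutil

IFACE = "Ethernet"   # replace with the slave's actual interface name
DURATION = 120       # seconds to watch after kicking off the render

start = psutil.net_io_counters(pernic=True)[IFACE].bytes_recv
t0 = time.time()

while time.time() - t0 < DURATION:
    time.sleep(5)
    now = psutil.net_io_counters(pernic=True)[IFACE].bytes_recv
    received = now - start
    elapsed = time.time() - t0
    print(f"{elapsed:6.1f}s  {received / 1e6:8.1f} MB received "
          f"({received * 8 / elapsed / 1e6:6.1f} Mbps average)")
[/code]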
Now if only the slave could do some more extensive logging. As it is now, it is pretty much useless for tracking down issues.

I guess no word from t_3 on this through your channel?
Birdovous
Master: Core i7 2600K, 32GB RAM, 2x EVGA GTX Titan X (SC)
Slave 1: Core i5 4460, 16GB RAM, 2x EVGA GTX 1080 Ti SC2
Slave 2: Core i7 9700K, 64GB RAM, 2x ASUS RTX 2080 Ti
Hi,
The last time I contacted him, he said he was evaluating the possibility of making Octane's own network GUI available to connect to the slaves, at least as an alternative option.
ciao beppe
Well, my current setup is 2x 980 Ti in the master now, and 2x 780 Ti in the slave... all of them 6GB VRAM. The network is running pretty well... around 90-96 MB/s.
bird already pointed out what is happening, when, and what might be the reason for it, so I won't repeat that.
The .orbx export takes me around 15 minutes (local, with SSD) and sometimes it can't be opened in Standalone afterwards, giving an error message. AND if you find some mistakes in it, you have to do the same procedure over and over again. Compared to a complete render time of 30-40 minutes that solution is a joke, of course... at least for me.
Would be nice if you could bump that to t_3, maybe it's just a small & easy fix.

Small update: today I did a little test, just to be sure it's not about my network setup. I replaced my "big" switch (worth 400 euros), from which everything starts in my basement (both master and slave are directly connected there), with a cheap and crappy 10 euro thingie... I thought maybe some of those oh-so-fancy features of the big one caused the trouble, but no. With the cheap one there is no difference either.
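If anyone else wants to rule out their LAN without swapping hardware, a quick point-to-point throughput test between master and slave does the same job. A minimal sketch in Python (plain sockets; the port and the transfer size are arbitrary placeholders):
[code]
# Quick and dirty point-to-point throughput test between master and slave,
# so you can rule out the LAN without swapping switches.
# Run "python nettest.py server" on the slave first, then
# "python nettest.py client <slave-ip>" on the master.
# Port and transfer size are arbitrary placeholders.
import socket
import sys
import time

PORT = 50007
CHUNK = 1024 * 1024       # 1 MB per send
TOTAL = 500 * CHUNK       # push roughly 500 MB in total

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            received = 0
            t0 = time.time()
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                received += len(data)
            secs = time.time() - t0
            print(f"received {received / 1e6:.0f} MB in {secs:.1f}s "
                  f"= {received * 8 / secs / 1e6:.0f} Mbps")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
[/code]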

The problem is not in the network topology or in any of the OSI layers. The simple fact that everything works fine with Standalone (even with all the network cards in my master box enabled) proves that the issue lies either in the code of the plugin itself or in the code of the OcDS network slave node.
So I think that without direct intervention from t_3, who, as the developer of the plugin, is the only person with actual knowledge of the internal mechanics of his own code, all anyone with the same problem can do is use the workaround of exporting to Standalone, where everything works fine. Cumbersome, but it works.
Birdovous
Master: Core i7 2600K, 32GB RAM, 2x EVGA GTX Titan X (SC)
Slave 1: Core i5 4460, 16GB RAM, 2x EVGA GTX 1080 Ti SC2
Slave 2: Core i7 9700K, 64GB RAM, 2x ASUS RTX 2080 Ti
gaazsi wrote: not for everyone. Seems the maximum file size for those exported files is 2GB; if it is, or rather would be, bigger, it is not possible to open it in Standalone later on.
That 2GB sounds odd. Truth is, the biggest export file I have rendered so far was about 1.8GB. After I get home I shall try a bigger file to see whether it behaves the same on my end as well.
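If that really is a hard limit (a 32-bit size field somewhere would be the obvious suspect, but that is just my guess), a trivial pre-check on the exported file at least saves you from a failed open in Standalone. A minimal sketch, with the path being a placeholder of course:
[code]
# Minimal sketch: warn when an exported .orbx is over 2 GiB (the limit gaazsi
# reports) before wasting time trying to open it in Standalone.
# The path below is only a placeholder for your own export location.
from pathlib import Path

LIMIT = 2 * 1024 ** 3                      # 2 GiB, assuming a 32-bit size limit
orbx = Path(r"D:\exports\scene.orbx")      # placeholder path

size = orbx.stat().st_size
print(f"{orbx.name}: {size / 1024 ** 3:.2f} GiB")
if size >= LIMIT:
    print("warning: over 2 GiB, Standalone will probably refuse to open it")
[/code]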
Birdovous
Master: Core i7 2600K, 32GB RAM, 2x EVGA GTX Titan X (SC)
Slave 1: Core i5 4460, 16GB RAM, 2x EVGA GTX 1080 Ti SC2
Slave 2: Core i7 9700K, 64GB RAM, 2x ASUS RTX 2080 Ti