So, still no slave licences..

preciousillusion
Licensed Customer
Posts: 94
Joined: Mon Aug 12, 2013 7:19 pm
Location: Stockholm

calus wrote:The technical limit is not about slowness, this is about stability ;)
Sometimes Otoy claim that's the reason, sometimes they claim it will kill puppies or whatever. You've posted the real reason below, which Otoy has all but explicitly confirmed.
calus wrote:With the right hardware and service, it's simpler to certify something;
that's why the enterprise version of OctaneEngine will only work with 10GE and Linux hardware servers certified by both Otoy and NVIDIA,
and can afford to up the GPU limit to 200.
How will they sell these licenses without a limit on the regular ones?
calus wrote:Right :)
but if Otoy removes the GPU limit from the Octane license, I'm sure PreciousIllusion will sue them because he can't make Octane work with 50 GPUs and random hardware... :D
Nah, I'm good. I'm gonna keep letting Otoy do the suing around here.
calus
Licensed Customer
Posts: 1308
Joined: Sat May 22, 2010 9:31 am
Location: Paris

preciousillusion wrote:
calus wrote:Right :)
but if Otoy removes the GPU limit from the Octane license, I'm sure PreciousIllusion will sue them because he can't make Octane work with 50 GPUs and random hardware... :D
Nah, I'm good. I'm gonna keep letting Otoy do the suing around here.
Arf, my bad, I should have seen this one coming :D

Honestly, I'm sure there's more than one reason for the 20 GPU limit; marketing might be one of them, along with technical reasons.
Pascal ANDRE
preciousillusion
Licensed Customer
Posts: 94
Joined: Mon Aug 12, 2013 7:19 pm
Location: Stockholm

calus wrote:Honestly, I'm sure there's more than one reason for the 20 GPU limit; marketing might be one of them, along with technical reasons.
There might be more than one reason, and it's one thing not to support a given number of GPUs, but it becomes very problematic when you're not allowed to use more.
All they have to do is say "Octane officially supports X GPUs", and if someone wants to use more they're on their own. Problem solved.
milanm
Licensed Customer
Posts: 261
Joined: Tue Apr 30, 2013 7:23 pm

Goldorak wrote:
rappet wrote:Maybe a poll could be started where the number of planned or wished-for nodes can be selected?
My personal wish would be 1 master with 3 slaves... Is that 3 or 4 nodes?

Cheers,
That would be 4 nodes. And yes, a poll might be helpful as a high-level overview of what users on this thread are looking for. Besides node-count preferences, we would also like to better understand how users intend to spread their GPUs across nodes - i.e., a dedicated render slave box, or a free workstation with additional GPUs?

I will also review this thread with the team this week and discuss some other options we can potentially consider.
Here's some feedback. It's from TWO YEARS ago though.
viewtopic.php?f=24&t=40732

Thanks to geo_n for bumping this.

Regards
Milan
Colorist / VFX artist / Motion Designer
macOS - Windows 7 - Cinema 4D R19.068 - GTX1070TI - GTX780
geo_n
Licensed Customer
Posts: 350
Joined: Tue Feb 02, 2010 5:47 am

Can't believe how old the network render poll is viewtopic.php?f=24&t=40732&start=60
Goldorak
OctaneRender Team
Posts: 2321
Joined: Sun Apr 22, 2012 8:09 pm

geo_n wrote:Can't believe how old the network render poll is viewtopic.php?f=24&t=40732&start=60
The feedback wasn't conclusive in the poll, and much of the data was pre-V3. I think we now have a good sense about some of the areas we should be addressing in addition to the planned subscription offering we are rolling out, including:

- rent-to-own option (as an alternative to pure rentals, but perhaps without continuous support/updates)
- allow more flexibility in network rendering configurations with a single per-account network rendering price point (for example, some users want more than two nodes, e.g. 3+, even if it comes to the same GPU count)

The second point above is something we are working on from a technical perspective in the licensing system. V3 does allow for an implementation of this, specifically for headless rendering across two nodes, but it is not tested yet, and we can't provide an ETA until it is.

Regarding the current 20 GPU limit, we have found that support issues go way, way up as we enable more complexity in network rendering.

We ideally want users to keep network rendering to small node counts (even though we make less money on this model with boxed licenses). There is an outlier case where users spread network rendering across 20 nodes, with 1 GPU each, which is a challenge to support (and not how network rendering should be used on non-server boxes if at all possible).

It was much worse in V2 when losing a single GPU killed the whole render, which V3 addressed. If you are going to invest in machines to use all 20 GPUs w/ V3, then consider 2x 4U servers w/ 8x GPUs in each node, which will likely max out the power of most offices and provide ideal stability. Our DTLA office is in a former Pacbell building, with a huge power supply and an A/C server room; but even so, a 2x4U 16-GPU config is about the limit of what we can reliably power (overheating is an issue even in an AC server room) before moving to a DC.

The >20 GPU option is being bundled with specific and fully tested server configs (starting with NVIDIA VCA boxes). ORC is further meant to offer complementary and/or alternative options to this right now for users who don't have access to a DC of their own.
Lewis
Licensed Customer
Posts: 1100
Joined: Tue Feb 05, 2013 6:30 pm
Location: Croatia

Goldorak wrote:
It was much worse in V2 when losing a single GPU killed the whole render, which V3 addressed.
Is that really working now, or is that planned to be supported in a future release of 3.x?
It seems that Boris is having exactly that problem, and that the render STOPS when 1 node dies?

viewtopic.php?f=36&t=57412
--
Lewis
http://www.ram-studio.hr
Skype - lewis3d
ICQ - 7128177

WS AMD TRPro 3955WX, 256GB RAM, Win10, 2 * RTX 4090, 1 * RTX 3090
RS1 i7 9800X, 64GB RAM, Win10, 3 * RTX 3090
RS2 i7 6850K, 64GB RAM, Win10, 2 * RTX 4090
Goldorak
OctaneRender Team
Posts: 2321
Joined: Sun Apr 22, 2012 8:09 pm

Lewis wrote:
Goldorak wrote:
It was much worse in V2 when losing a single GPU killed the whole render, which V3 addressed.
Is that really working now, or is that planned to be supported in a future release of 3.x?
It seems that Boris is having exactly that problem, and that the render STOPS when 1 node dies?

viewtopic.php?f=36&t=57412
This is unrelated to V2 limitations (still, network rendering issues are always complex, and about half the time the cause is outside our code).

To confirm: since the first V3 build, the film buffer is only on your master node and cannot be 'lost' unless that host goes down. Losing slaves no longer wipes the render like it did in V2. V3 also allows you to pause/save/resume renders because of this same framework.
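Roughly, the pattern being described is: the master alone owns the accumulated film, slaves only contribute tiles, and a dead slave just gets its in-flight work requeued. A minimal sketch of that idea in Python (FilmBuffer, render_tile and the fake slave here are illustrative stand-ins, not Octane's actual internals):

import random

class FilmBuffer:
    """Accumulated samples live only on the master node."""
    def __init__(self, num_tiles):
        self.samples = {t: 0 for t in range(num_tiles)}

    def merge(self, tile_id, contribution):
        # Merge a slave's partial result. If a slave dies before sending,
        # nothing already stored in the buffer is affected.
        self.samples[tile_id] += contribution

class FakeSlave:
    """Stand-in for a network render slave that occasionally drops out."""
    def __init__(self, name):
        self.name, self.alive = name, True

    def render_tile(self, tile_id):
        if random.random() < 0.1:      # simulate the node dying mid-tile
            self.alive = False
            return None
        return 1                       # one unit of samples for this tile

def master_render(num_tiles, slaves):
    film, pending = FilmBuffer(num_tiles), list(range(num_tiles))
    while pending:
        live = [s for s in slaves if s.alive]
        if not live:                   # all slaves gone: film survives intact on the master
            break
        tile = pending.pop()
        result = live[0].render_tile(tile)
        if result is None:             # slave died: requeue the tile, don't wipe the film
            pending.append(tile)
        else:
            film.merge(tile, result)
    return film

if __name__ == "__main__":
    film = master_render(32, [FakeSlave(f"slave-{i}") for i in range(3)])
    print(sum(film.samples.values()), "tile contributions kept on the master")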
preciousillusion
Licensed Customer
Posts: 94
Joined: Mon Aug 12, 2013 7:19 pm
Location: Stockholm

Goldorak wrote:
geo_n wrote:Can't believe how old the network render poll is viewtopic.php?f=24&t=40732&start=60

The feedback wasn't conclusive in the poll, and much of the data was pre-V3. I think we now have a good sense about some of the areas we should be addressing in addition to the planned subscription offering we are rolling out, including:

- rent-to-own option (as an alternative to pure rentals, but perhaps without continuous support/updates)
- allow more flexibility in network rendering configurations with a single per-account network rendering price point (for example, some users want more than two nodes, e.g. 3+, even if it comes to the same GPU count)

The second point above is something we are working on from a technical perspective in the licensing system. V3 does allow for an implementation of this, specifically for headless rendering across two nodes, but it is not tested yet, and we can't provide an ETA until it is.

Still no news, basically no answers.

Goldorak wrote:Regarding the current 20 GPU limit, we have found that support issues go way, way up as we enable more complexity in network rendering.

We ideally want users to keep network rendering to small node counts (even though we make less money on this model with boxed licenses). There is an outlier case where users spread network rendering across 20 nodes, with 1 GPU each, which is a challenge to support (and not how network rendering should be used on non-server boxes if at all possible).

When rendering still frames I may or may not need more than 20 GPUs, but animations with thousands of frames are another story. And for animations, why would you use Octane's network render, or any other approach that doesn't distribute the frames?
Am I supposed to lock up my workstation for batch rendering? No, you use a render manager.
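For comparison, per-frame distribution is conceptually just a shared frame queue that every node pulls whole frames from, so throughput scales with node count and no per-frame GPU coordination is needed. A rough Python sketch under that assumption (the node names and the stand-in "render" step are hypothetical, not any particular render manager's API):

import queue
import threading

def node_worker(name, frames, results):
    # Each render node pulls whole frames from a shared queue until it is empty.
    while True:
        try:
            frame = frames.get_nowait()
        except queue.Empty:
            return
        results.append((name, frame))    # stand-in for "render and save this frame"

if __name__ == "__main__":
    frames = queue.Queue()
    for f in range(1, 101):              # a 100-frame animation
        frames.put(f)
    results = []
    nodes = [threading.Thread(target=node_worker,
                              args=(f"node-{i:02d}", frames, results))
             for i in range(1, 21)]      # 20 render nodes, as in the setup below
    for t in nodes:
        t.start()
    for t in nodes:
        t.join()
    print(f"{len(results)} frames rendered across {len(nodes)} nodes")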
In my case I use C4D's Team Render Server with 20 dedicated render servers, and I can add 10 workstations during nights/weekends; with any other render engine that's not a problem. And even if I were allowed to use all my GPUs, the cost would be insane.
A little example:

If I buy licenses for the just-released Cycles4D, it would cost me about $1600 total for the main license and 28 additional slave licenses (2 are included).

If I go with Thea Render, it would be only about $800 for the same setup.

What would it cost with Octane? A freakin' $18,000. Eighteen thousand dollars.
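Back-of-the-envelope, using only the totals above and the 30-machine farm (20 render servers + 10 workstations) described earlier, the per-node figures work out roughly as follows (derived numbers, not vendor list prices):

setup_nodes = 30                    # 20 dedicated render servers + 10 workstations
totals = {"Cycles4D": 1600, "Thea": 800, "Octane": 18000}   # approximate totals quoted above

for engine, total in totals.items():
    print(f"{engine:8s} ~${total:>6,} total -> ~${total / setup_nodes:5,.0f} per node")
# Octane comes out roughly 11x Cycles4D and 22x Thea for the same farm.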

Goldorak wrote:It was much worse in V2 when losing a single GPU killed the whole render, which V3 addressed. If you are going to invest in machines to use all 20 GPUs w/ V3, then consider 2x 4U servers w/ 8x GPUs in each node, which will likely max out the power of most offices and provide ideal stability. Our DTLA office is in a former Pacbell building, with a huge power supply and an A/C server room; but even so, a 2x4U 16-GPU config is about the limit of what we can reliably power
How is what works in your server room relevant?
Are you on 110 or 240 volts? Assuming you're completely maxing out your power draw, that would mean something like 36 amps if you're on 110 V, but only 16 if you're on 240. In my server room, which is nothing fancy, I have 4 separate 16 A outlets and 4 10 A ones, and I'm on 240 V, so that's 25 kW, enough for 135 GTX 1080s.
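Spelling out that arithmetic (the ~4 kW draw for a 2x4U/16-GPU rig and the ~180 W per GTX 1080 are assumed figures; the voltages and outlet ratings are the ones above, and the card count lands close to the 135 quoted):

draw_w = 4000                            # assumed total draw of 2x 4U, 8 GPUs per node
for volts in (110, 240):
    print(f"{draw_w} W at {volts} V -> {draw_w / volts:.0f} A")
# prints ~36 A at 110 V vs ~17 A at 240 V, close to the figures above

capacity_w = (4 * 16 + 4 * 10) * 240     # 4x 16 A + 4x 10 A circuits at 240 V
print(f"server room capacity: ~{capacity_w / 1000:.0f} kW")
print(f"~{capacity_w // 180} GTX 1080s at an assumed ~180 W each")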

Goldorak wrote:overheating is an issue even in an AC server room
An AC can provide 500 W of cooling, or 10 kW, or 4.7 kW, or any other number, so that's a pretty useless statement.
geo_n
Licensed Customer
Posts: 350
Joined: Tue Feb 02, 2010 5:47 am

Goldorak wrote: - allow more flexibility in network rendering configurations with a single per-account network rendering price point (for example, some users want more than two nodes, e.g. 3+, even if it comes to the same GPU count)
There's a 20 GPU limit on one license, correct?
Why not let customers spread those 20 GPUs across a network? 4 GPUs on 5 computers, etc.
If you don't think it's fair, then limit network render nodes to fewer than 10.