
Re: Compiling scene time not related only to the PCIe speed?

Posted: Mon Feb 08, 2016 11:39 pm
by prehabitat
BorisGoreta wrote:But the point is that Octane developers are super smart and they can write code that utilizes 100% of the CPU if they want to.
Completely agree. What I was saying above is that even WHEN they do, you may still want to reconsider your system build: since you have moved your biggest parallelisation workload (rendering) to the GPU, your CPU no longer needs to be a short (GHz), wide (cores) powerhouse.
Although I admit: if this 2-3 minute compile still outweighs the few-seconds speed hit on all your single-thread-constrained processes all day long, you may still opt for CPU cores > CPU GHz.
BorisGoreta wrote:And the benefit will be hours and hours saved not waiting for the scene to compile, so I think the benefit is well worth the effort to improve on this.
It all depends on what kind of parallelisation Otoy thinks is achievable for these tasks. Abstrax could answer in more detail, although it'd be lost on me. There may be some parts of the process which just can't be broken into pieces without negating the benefit with the extra overhead of putting it all back together... (also, I suspect the process would need to be smart enough to vary itself on everything from a 4-core machine to a 36-core machine...)
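To put rough numbers on that overhead argument: Amdahl's law says the serial part of the compile caps the speedup no matter how many cores you add. A back-of-the-envelope sketch (the 80% parallel fraction is purely my guess, not anything Otoy has said):

Code:
// amdahl.cpp - rough speedup estimate for an N-core compile
// The 0.80 parallel fraction below is an assumption for illustration only.
#include <cstdio>

int main() {
    const double p = 0.80;  // assumed fraction of the compile that parallelises
    for (int n : {2, 4, 8, 12, 16, 36}) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("%2d cores -> %.2fx\n", n, speedup);
    }
    return 0;
}

With those made-up numbers, 36 cores only buys about 4.5x over one core (the ceiling is 5x), which is why extra coordination overhead can eat the gains so quickly.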

Could someone with a multi-core system constrain the process to fewer of the cores in their CPU by limiting it with Windows Task Manager's affinity setting? (i.e. test with 2 cores, 4 cores, 8 cores, 12 cores, etc.)... This would be interesting, since all other things would be equal.
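For anyone who'd rather script it than click through Task Manager, something along these lines should do the same thing (a Windows-only sketch; the PID is whatever Task Manager shows for the Octane process, and it assumes fewer than 64 logical cores):

Code:
// affinity.cpp - pin an already-running process (e.g. Octane) to its first N logical cores
// Usage: affinity <pid> <cores>
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 3) { fprintf(stderr, "usage: affinity <pid> <cores>\n"); return 1; }
    DWORD pid = (DWORD)atoi(argv[1]);
    int cores = atoi(argv[2]);                      // e.g. 2, 4, 8, 12...
    DWORD_PTR mask = ((DWORD_PTR)1 << cores) - 1;   // set bits 0..cores-1
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (!h || !SetProcessAffinityMask(h, mask)) {
        fprintf(stderr, "failed to set affinity (try running as administrator)\n");
        return 1;
    }
    CloseHandle(h);
    printf("pinned pid %lu to %d cores - now time the compile\n", (unsigned long)pid, cores);
    return 0;
}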

Tutor... ??

Re: Compiling scene time not related only to the PCIe speed?

Posted: Mon Feb 08, 2016 11:46 pm
by BorisGoreta
I would get the same speed with a single 4-core CPU as I do now with dual Xeons (16 cores total), because they are roughly the same clock: 3.1 GHz vs. 3.8 GHz turbo. It wouldn't make it faster, just cheaper. But this is not the way to go, since CPU clocks aren't getting much higher every year, while core counts keep increasing.

Re: Compiling scene time not related only to the PCIe speed?

Posted: Tue Feb 09, 2016 5:35 am
by Tutor
prehabitat wrote:
BorisGoreta wrote:But the point is that Octane developers are super smart and they can write code that utilizes 100% of the CPU if they want to.
Completely agree. What I was saying above is that even WHEN they do, you may still want to reconsider your system build: since you have moved your biggest parallelisation workload (rendering) to the GPU, your CPU no longer needs to be a short (GHz), wide (cores) powerhouse.
Although I admit: if this 2-3 minute compile still outweighs the few-seconds speed hit on all your single-thread-constrained processes all day long, you may still opt for CPU cores > CPU GHz.
BorisGoreta wrote:And the benefit will be hours and hours saved not waiting for the scene to compile, so I think the benefit is well worth the effort to improve on this.
It all depends on what kind of parallelisation Otoy thinks is achievable for these tasks. Abstrax could answer in more detail, although it'd be lost on me. There may be some parts of the process which just can't be broken into pieces without negating the benefit with the extra overhead of putting it all back together... (also, I suspect the process would need to be smart enough to vary itself on everything from a 4-core machine to a 36-core machine...)

Could someone with a multi-core system constrain the process to fewer of the cores in their CPU by limiting it with Windows Task Manager's affinity setting? (i.e. test with 2 cores, 4 cores, 8 cores, 12 cores, etc.)... This would be interesting, since all other things would be equal.

Tutor... ??

Prehabitat,

Thanks for pointing out a feature/benefit of overclocking that I forgot to specifically mention earlier; namely, its impact (a speed increase) on what is now called QPI (previously known as the front-side bus). That, along with increasing the CPU core clock and RAM speed, can have a great impact on making the computer faster for many chores, including scene compilation.

I also agree with you (and anyone else who counsels) that it may be best for many Octane users to consider that the "CPU no longer needs to be a short (GHz), wide (cores) powerhouse." However, for some that might not be the case. I consider it to be more an issue of evaluating what one really needs for what she/he does.

Much of my creative output is training animations and videos about employment law. When I purchased my 32-core systems back in 2013, I used CPUs exclusively for rendering that output, and my GPU usage, other than for video display output, then involved only acceleration of multi-frame/video work, such as with AE and related plugins. It was only after I had made those purchases that I began using GPUs for animation rendering. Although I now find myself using GPU renderers such as, but not limited to, Octane, Thea, FurryBall and Redshift */ more and more for animations, I still do lots of CPU renderings and still use my GPUs for certain video production rendering chores. I'm still a little more comfortable using CPU renderers for animation than GPU renderers, because CPU renderers are still a little more familiar to me and more feature-filled/mature/better integrated in my animation/video workflows than GPU renderers are presently. So for me there hasn't yet been a switch to GPU-only rendering, even though GPU rendering is generally much faster.

Moreover, I have used and will continue to use my 17 systems **/ for simultaneous CPU and GPU rendering chores. I still consider it one of my advantages that within that system count I have (a) high-cored big-iron systems from Supermicro and Tyan, (b) overclocked (i) EVGA and Gigabyte X79 (4-core and 6-core) i7s and (ii) EVGA SR-2s, each with massively tweaked Westmere Xeon 5690s (each 2x6 cores), and (c) my favorite: modded Mac Pros (not the latest trash cans, but rather Apple's abandoned big iron of 2007 to 2012 vintage). I run all three of the most popular OSes on them. In sum, I don't regret that my 32-core systems are now slower than anyone else's at compiling a particular Octane scene.

*/ I probably, more than most, view my renderers the way I view my household tools. I.e., I couldn't get my household chores done with just, e.g., one saw, one pair of pliers, one hammer, one screwdriver or one wrench. I've got particular ones of certain varieties/characteristics for the particular jobs I've learned to perform.

**/ That count of 17 doesn't include my vintage Ataris, Amigas (with Video Toasters) and Macintoshes that I still use for music, video and 2D animation production. This year I'll be 63, and I have been using, tweaking and repairing computers since 1985. I've seen computer models, brands and great software applications come and then go (particularly when I least wanted them to go). That's partly why, early in my creative journey, I began learning to be very self-sufficient, particularly about my hardware, software and other tools; to never stop learning about, among other things, new technologies; and to appreciate the beauty of having the right tools for particular jobs/workflows.

Re: Compiling scene time not related only to the PCIe speed?

Posted: Wed Feb 10, 2016 6:29 am
by grimm
As Abstrax said, there are problems with the compiling on macOS and Linux, so that is part of the reason my system is so slow. I did try the scene with version 2.25 just for fun. The differences from version 3 alpha 4 are interesting.

Load time is much slower: 15+ seconds vs. 7+ seconds. I think this was improved in version 3.
Compile time is pretty much the same.
Ms/sec is much lower: 2.1 vs. 3.35, which made the render slower: 5 min vs. 2.35 min. This definitely shows how much version 3 has improved speed-wise.
Memory use was lower: 3 GB vs. 3.6 GB, which makes sense with the kernel changes in version 3.

Jason

Re: Compiling scene time not related only to the PCIe speed?

Posted: Wed Feb 10, 2016 7:29 am
by prehabitat
grimm wrote:Ms/sec is much lower: 2.1 vs. 3.35, which made the render slower: 5 min vs. 2.35 min. This definitely shows how much version 3 has improved speed-wise.
Wow, great to know that's the improvement we (might) get on a real scene! ;)

Re: Compiling scene time not related only to the PCIe speed?

Posted: Wed Feb 10, 2016 7:31 am
by smicha
prehabitat wrote:
grimm wrote:Ms/sec is much lower: 2.1 vs. 3.35, which made the render slower: 5 min vs. 2.35 min. This definitely shows how much version 3 has improved speed-wise.
Wow, great to know that's the improvement we (might) get on a real scene! ;)
When v3 came out, I also reported that for my scenes it is twice as fast (PT).

Re: Compiling scene time not related only to the PCIe speed?

Posted: Sat Feb 13, 2016 9:07 am
by a1x
Thank you all for the info. Quite interesting and definitely helpful when building a new system.

But isn't all the loading time and the time it takes to convert a scene mostly irrelevant when using 3 or 4 instances of Octane to max out the GPUs?


At least, that is what I do every day when rendering animations: I start as many instances as it takes to max out my GPUs. And as long as my CPU and PCIe lanes are enough to do that, I don't really care how long one instance or another needs to precalculate to get the GPUs going.
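For what it's worth, this fire-and-forget launch is easy to script. A hypothetical sketch (the install path and scene names are mine, and it assumes your Octane build accepts a scene file as a command-line argument):

Code:
// launcher.cpp - hypothetical sketch: start several Octane instances in parallel
#include <windows.h>
#include <string>
#include <vector>

int main() {
    // hypothetical paths - substitute your own install dir and scenes
    std::string exe = "C:\\Program Files\\OTOY\\OctaneRender\\octane.exe";
    std::vector<std::string> scenes = {"shotA.orbx", "shotB.orbx", "shotC.orbx"};

    for (const auto& scene : scenes) {
        std::string cmd = "\"" + exe + "\" \"" + scene + "\"";
        STARTUPINFOA si = {};
        si.cb = sizeof(si);
        PROCESS_INFORMATION pi = {};
        // CreateProcessA wants a writable command-line buffer
        if (CreateProcessA(nullptr, &cmd[0], nullptr, nullptr, FALSE,
                           0, nullptr, nullptr, &si, &pi)) {
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);  // fire and forget; each instance precalculates on its own
        }
    }
    return 0;
}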

Of course it's something else when you hit the limit of the GPU RAM...

Or does it make the rendering slower at the end of the day because of "task switching" on the GPUs?

:?:

Re: Compiling scene time not related only to the PCIe speed?

Posted: Sat Jul 02, 2016 11:30 pm
by Liketheriver
Hi,

I have tested the bench file on my 2600K with Octane 3.01: 11 seconds to load the orbx file, 58 seconds to compile the scene (the first time it took only 25 seconds, don't know why...).

Re: Compiling scene time not related only to the PCIe speed?

Posted: Sun Jul 03, 2016 7:03 pm
by karanis
You will never understand these things if you do not code.

Please use the CUDA Toolkit and try coding just simple things.

You'll understand.
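For example, here is about the simplest CUDA program there is: adding two vectors on the GPU (a minimal sketch; build with nvcc vec_add.cu). Even a toy like this shows how much host-side setup surrounds the actual kernel.

Code:
// vec_add.cu - minimal CUDA example: add two vectors on the GPU
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);  // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);   // 256 threads per block
    cudaDeviceSynchronize();                        // wait for the GPU to finish
    printf("c[0] = %.1f (expect 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}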