matej wrote:Hm, I don't know... this news scares me a little, because it looks like GPU programming is a lot harder and (currently) more limited than CPU programming. This will undoubtedly bring other limitations in the future.
Anyway, I hope for the best. This new algorithm could be a huge marketing asset if it delivers such power.
And thanks for the straightforward explanation.
Hey,
It's actually not that bad. The issue is very simple: parallelization.
GPUs require algorithms to be parallelized, and some algorithms are easy to parallelize, others aren't.
We tried to parallelize MLT, which failed, so we developed a parallel algorithm instead.
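To sketch why some algorithms parallelize easily and others don't (a minimal illustration, not Octane's actual code): per-pixel shading is independent work that maps straight onto many cores, while an MLT-style Markov chain of mutations is sequential, because each step needs the result of the previous one. The `shade` and `mutate` functions here are hypothetical stand-ins for those two kinds of workload.

```python
from multiprocessing import Pool

def shade(pixel):
    # Independent per-pixel work: no pixel depends on any other,
    # so this is trivially parallel ("embarrassingly parallel").
    return pixel * 0.5

def mutate(state):
    # A Markov-chain-style step: the next state depends entirely
    # on the previous one (a toy linear-congruential update here).
    return (state * 1103515245 + 12345) % 2**31

if __name__ == "__main__":
    pixels = list(range(8))

    # Easy to parallelize: each pixel is shaded independently.
    with Pool(4) as p:
        shaded = p.map(shade, pixels)

    # Hard to parallelize: step N needs the output of step N-1,
    # so the chain must run sequentially on one worker.
    state = 1
    for _ in range(8):
        state = mutate(state)
```

The first loop scales with the number of cores; the second cannot, no matter how many cores you have, which is the basic reason a chain-based algorithm like MLT resists a naive GPU port.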
That's the only reason. It's not about 'limits' or being 'harder'. GPUs speak Russian, CPUs speak English.
A masterwork novel in Russian might not be translatable into English with the same spirit, emotion and philosophy intact. Languages are not just forms of communication; they also carry perceptual differences between the cultures that speak them.
We tried to translate a masterwork Russian novel into English, but we were not happy with the outcome, so we decided to write a new novel in English instead.
That's my non-technical explanation.
Radiance
BTW: today we implemented 3 new things in Octane that would have taken 3 days to implement on a CPU, as GPUs have lots of neat little toys in them that are designed for graphics. (Some of them are mentioned in the release announcement of the 3rd pre 2.3, coming tomorrow in the RC forum.)