That is really funny, getting pinged by their IP address.
Probably some bored PTC employee at lunchtime.
GPU rendering is not the magic bullet it was promised to be
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
Win 10. Threadripper 1920X 32gb Ram GTX 1080Ti GTX 980Ti 2xGTX 780
http://ajdesignstudio.co.nz/
- PhilLawson
- Posts: 34
- Joined: Thu Jan 21, 2010 3:48 pm
Like others have said - Lux's goal wasn't to write off the likes of Octane, merely to quash the rumors of 10x-100x speed increases. I think the fireflies were removed the same way we currently remove them, and since the image is scaled down to fit the video resolution anyway, they would be even harder to see. I still enjoy Octane, I can clearly see the benefits where it does excel, and I look forward to getting a chance to use the next release.
I think, however, it's bad taste posting the private details of a customer - if this (Refractive Software) is a business, then you should at least abide by business standards and handle such a presentation professionally. Make your own test between the two and post the results.
EDIT: Names have now been removed - many thanks.

Last edited by PhilLawson on Thu Aug 12, 2010 1:36 pm, edited 1 time in total.
Vista 64bit | Nvidia 580GTX 1.5GB | Intel Core Quad 9550 2.83GHz | 8GB RAM
I'd have to agree with you there, PhilLawson. Luckily this is not a public post.
Win 10. Threadripper 1920X 32gb Ram GTX 1080Ti GTX 980Ti 2xGTX 780
http://ajdesignstudio.co.nz/
- PhilLawson
- Posts: 34
- Joined: Thu Jan 21, 2010 3:48 pm
Like I said - I have nothing but respect for what the guys are doing building Octane, I just think it was unnecessary to post the details. A simple "they did purchase it" would have sufficed. Although not technically internet-public, we are still the general public.
EDIT: It wasn't even 'they' it was another user testing all the GPU renders for them.

Last edited by PhilLawson on Thu Aug 12, 2010 8:44 am, edited 1 time in total.
Vista 64bit | Nvidia 580GTX 1.5GB | Intel Core Quad 9550 2.83GHz | 8GB RAM
Hi guys,
If I compare a quad-core CPU doing 0.3 megasamples/sec against a GTX480 doing 4 megasamples/sec,
both with path tracing, maxdepth = 1024 and rrprob = 5, it means GPUs *are* much faster.
If you compare a GPU running path tracing to a CPU renderer running a biased photon map or light cache, that is simply unfair.
What if tomorrow we have GPU renderers that do photon mapping and light caching?
These algorithms are certainly adaptable to the GPU and they *will* come soon.
Radiance
Win 7 x64 & ubuntu | 2x GTX480 | Quad 2.66GHz | 8GB
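[Editor's note: the speedup implied by the figures quoted above can be checked with quick arithmetic. A minimal sketch, assuming the quoted throughputs are representative:]

```python
# Figures quoted in the post above (same algorithm and settings on both):
cpu_msamples = 0.3   # quad-core CPU, path tracing, megasamples/sec
gpu_msamples = 4.0   # GTX480, path tracing, megasamples/sec

# Relative speedup of the GPU over the CPU for this workload.
speedup = gpu_msamples / cpu_msamples
print(f"GPU speedup: {speedup:.1f}x")  # roughly 13x
```

That lands at the low end of the rumored 10x-100x range - a real advantage, but only when both sides run the same unbiased algorithm.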
Is that the same code that you're running for that test Radiance?
If so, surely the code would be optimised for the parallel computing power of the gpu, correct? Not saying there's anything wrong with that, I'm thinking of converting some of the Digital Fusion plugins I wrote to use CUDA as you get lots of per pixel calculations that would benefit immensely from parallel processing as well.
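[Editor's note: per-pixel image operations like the Fusion plugins mentioned above are a textbook fit for data parallelism, because each pixel is computed independently of its neighbours. A minimal NumPy sketch of a hypothetical gain/offset filter (illustrative only, not the actual plugin code) shows the shape of the work a CUDA kernel would do, one thread per pixel:]

```python
import numpy as np

def gain_offset(image, gain=1.2, offset=0.05):
    # Hypothetical per-pixel operation: every output pixel depends only on
    # the corresponding input pixel, so all pixels can run in parallel.
    # The vectorized expression below stands in for one GPU thread per pixel.
    return np.clip(image * gain + offset, 0.0, 1.0)

frame = np.full((1080, 1920, 3), 0.5, dtype=np.float32)  # mid-grey test frame
result = gain_offset(frame)
print(result[0, 0])  # each channel: 0.5 * 1.2 + 0.05 = 0.65
```

Work like this is "embarrassingly parallel", which is exactly why both renderers and compositing plugins see large gains on the GPU.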
As said beneath the video - It's their view. So this is not a technology review.
Of course they're going to push their product no matter what, and bite at anyone that seems to have a chance.
They use ugly ways to win over mass opinion, as every big company does. I just "love" marketing :[
http://www.visnevskis.com
Vista64/Ubuntu, GTX470/580
I'd like to start by saying I am a fan of modo and have owned it since version 1.03, and I was a member of the Luxology forums before they were the Luxology forums.
But this truly is bad form: posting a comparative test of a closed, unknown 501 beta (a non-public beta) against a public beta, with the backing of Intel plus unknown new developer code and their state-of-the-art hardware versus older hardware.
The bottom line here: Refractive is a new business and is about to finish its first piece of software, called Octane.
Why not give a small token of respect from one commercial business to another and just wait until it reaches version 1?
At the very least, are there no other non-commercial GPU renderers out there to test against?
Vista 64| 660 GTX | 6600 Quad | 8GB DD2|Maya 2011
Garrick wrote: Why not give a small token of respect from one commercial business to another and just wait until it reaches version 1?
There's no respect or honesty in big-world competition. These are companies, not people. They act as machines, searching for any chance or weakness.
Autodesk and Adobe do the same. Except they don't feel threatened and are a bit more clever, so it's not so obvious.
Last edited by kivig on Thu Aug 12, 2010 11:13 am, edited 1 time in total.
http://www.visnevskis.com
Vista64/Ubuntu, GTX470/580
havensole wrote: we are dealing with an unbiased engine versus a biased engine in lux (correct me if I am wrong on that).
[...]
I also don't have to spend time doing photon mapping or adjusting a ton of parameters to get something usable.
Come on now, have you actually watched the video? They tested the modo renderer using Monte Carlo. Of course you need to compare unbiased vs. unbiased; anything else would just be silly.
richardyot wrote: I own licenses for Modo, Octane and Maxwell, and I did a test on the Luxology forums a few weeks ago (before I knew anything about Luxology conducting their own tests). You can see the results here:
http://forums.luxology.com/discussion/t ... x?id=48218
I approached the testing with a completely open mind - I was pretty excited about GPU rendering, in fact - but I came to the exact same conclusion that Luxology did.
Since people have been getting incredibly defensive about their purchase, I assume you will be called dishonest or a shill for Luxology as well.