I have been watching the GPU revolution in CGI with great anticipation - no doubt this is the most revolutionary step since raytracing itself.
Big congrats to Terrence and Eric as this is s t u n n i n g. CPU rendering is sooo last year. Not only have you shattered the envelope but you've put a stamp on it and posted it too.
Love the system, its workflow, the render results - just beautiful. All other renderers I've seen just got their A$$ handed to them on a plate. Not to mention the speed, which is like having fifty Cray X-MPs on your desktop. Amazing, just amazing.
I have a couple of questions I hope you guys can find a sec to answer:
1. Any chance that we could pay now and get an unlocked "save" and "load" version of the beta? Reason is: the system inevitably crashes and Titanics everything you've done with it, which means that an hour's work can disappear in the blink of an eye - a little frustrating at times. Or maybe just a "single" save (as in, it will allow only one scene in a specific directory)? My credit card is ready!
2. Any chance the system can remember the load paths for images and save you having to trawl back through the directories every time?
3. I can't seem to figure out how the system assigns textures as regards the graph editor - is there any way to see the node breakout of the imported mesh texture? (As it is, I have to rebuild it in Octane if I want to use the same texture for spec and diffuse etc., but the great importing function from 3ds Max means that textures come in perfectly - it would be great to see them in the graph editor.)
4. Are area lights currently available? (I see emitters mentioned but not in the program - am I missing something?)
5. The ability eventually to import camera data (from a matchmoving program) would be very useful.
6. Will chaining GPU cards eventually be possible?
7. Any suggestions on the best card (GeForce 275? Is this the best?)
Thank you guys very much - I looove this thing - it's amazing. Count me in as a customer.
Best
J.
Wow! Octane is the future. No doubt. Thoughts and requests.
jamestmather wrote: CPU rendering is sooo last year.
Not really.

You have to remember that GPU rendering has some limits due to the architecture.
You can only load up to the memory cap of the card, which is typically 1-2 GB. For small scenes this is OK, but huge production scenes won't come anywhere near fitting into 2 GB. A complex shader with 4K textures will consume that in no time.
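For a rough sense of scale, here is a back-of-the-envelope budget (the texture format and the map count per material are assumptions for illustration, not Octane's actual numbers):

    #include <cstdio>

    // Rough VRAM budget for "a complex shader with 4K textures".
    int main()
    {
        const double MiB   = 1024.0 * 1024.0;
        double oneMap      = 4096.0 * 4096.0 * 4.0;  // one 4K RGBA8 map = 64 MiB
        double withMips    = oneMap * 4.0 / 3.0;     // full mip chain adds ~1/3
        double perMaterial = withMips * 4.0;         // diffuse + specular + bump + normal
        printf("one 4K map:        %4.0f MiB\n", oneMap / MiB);          // ~64
        printf("with mipmaps:      %4.0f MiB\n", withMips / MiB);        // ~85
        printf("one material:      %4.0f MiB\n", perMaterial / MiB);     // ~341
        printf("materials per 2GB: %4.0f\n", 2048.0 * MiB / perMaterial); // ~6
    }

About six materials like that and a 2 GB card is full before any geometry has even loaded.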
Then there is incoherent ray handling, render-time displacement and so forth.
GPU renderers are cool for some stuff, for sure, but they won't replace traditional offline renderers any time soon.
Amiga 1000 with 2MB memory card
[gk] wrote: GPU renderers are cool for some stuff for sure, but they won't replace traditional offline renderers any time soon.
I'll take an educated guess that this isn't true. Have a look at Wenzel Jakob's thesis (2007), "Accelerating the bidirectional path tracing algorithm using a dedicated intersection processor".
Furthermore, I guess Octane Render will probably be the only renderer that runs on the GPU only. When you take a closer look at Arion, LuxRays (the upcoming GPU acceleration for LuxRender) and others, you'll figure out that the acceleration comes from having the GPU handle the ray intersections. The point is to find a good way to feed the GPU so it won't sit on its arse most of the time. There shouldn't be any limit on the scene, though, because the GPU only has to handle ray intersections, not loads of textures and whatnot.
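To picture the split being described here - the CPU keeps the scene and does the shading, the GPU only answers "where does this ray hit?" - a minimal CUDA sketch (hypothetical types, brute force over a triangle list instead of the acceleration structure a real renderer would use):

    #include <cuda_runtime.h>

    struct Ray { float3 o, d; };         // origin, direction
    struct Tri { float3 v0, v1, v2; };   // one triangle
    struct Hit { float t; int tri; };    // hit distance, triangle index (-1 = miss)

    __device__ float3 sub(float3 a, float3 b)  { return make_float3(a.x-b.x, a.y-b.y, a.z-b.z); }
    __device__ float  dot3(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    __device__ float3 cross3(float3 a, float3 b)
    {
        return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x);
    }

    // Moeller-Trumbore ray/triangle test - the "ray intersection" work itself.
    __device__ bool intersect(const Ray& r, const Tri& t, float& tHit)
    {
        const float EPS = 1e-7f;
        float3 e1 = sub(t.v1, t.v0), e2 = sub(t.v2, t.v0);
        float3 p  = cross3(r.d, e2);
        float  det = dot3(e1, p);
        if (fabsf(det) < EPS) return false;   // ray parallel to triangle plane
        float  inv = 1.0f / det;
        float3 s = sub(r.o, t.v0);
        float  u = dot3(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;
        float3 q = cross3(s, e1);
        float  v = dot3(r.d, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;
        tHit = dot3(e2, q) * inv;
        return tHit > EPS;
    }

    // One thread per ray: the CPU ships a big batch of rays over, the kernel
    // returns one hit record per ray, and the CPU shades from there.
    __global__ void intersectBatch(const Ray* rays, int nRays,
                                   const Tri* tris, int nTris, Hit* hits)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nRays) return;
        Hit best = { 1e30f, -1 };
        for (int j = 0; j < nTris; ++j) {
            float t;
            if (intersect(rays[i], tris[j], t) && t < best.t) { best.t = t; best.tri = j; }
        }
        hits[i] = best;
    }

The batching is the whole game: rays have to arrive in lots big enough to hide the transfer latency, which is exactly the "feeding the GPU" problem above.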
I dunno about iray and V-Ray RT or the others, but till I've been proven wrong I'd say they are using the GPU to accelerate certain specific things, not running the whole renderer on the GPU.
Well, this year will bring a lot of new stuff, and we will see hardware and software popping up over the next few years that will make us feel like we're in wonderland. So have a seat and enjoy the show. ;o))
take care
psor
"The sleeper must awaken"
We have to see if tracing can be offloaded to the GPU efficiently.
I've just heard programmers say that you need massive bridge bandwidth to handle certain aspects if you want to keep the scene in system resources and only use the GPU for raytrace acceleration.
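A quick illustration of why the bus matters (the record sizes here are illustrative assumptions; PCIe 2.0 x16 tops out around 8 GB/s in practice):

    #include <cstdio>

    // Cost of shipping one bounce worth of rays to the GPU and hits back.
    int main()
    {
        const double rays     = 1920.0 * 1080.0;  // one ray per pixel at 1080p
        const double rayBytes = 32.0;             // origin + direction + t-range
        const double hitBytes = 16.0;             // t, primitive id, barycentrics
        const double busBps   = 8.0e9;            // ~8 GB/s, PCIe 2.0 x16

        double totalBytes = rays * (rayBytes + hitBytes);
        printf("out: %.0f MB  back: %.0f MB  bus time: ~%.1f ms per bounce\n",
               rays * rayBytes / 1e6, rays * hitBytes / 1e6,
               totalBytes / busBps * 1e3);        // ~66 MB, ~33 MB, ~12 ms
    }

Multiply that ~12 ms by several bounces and hundreds of samples per pixel and the bus, not the tracing, becomes the wall unless transfers overlap with compute.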
Caustic Graphics already has an alternative solution for this. They solved the bandwidth issues using an FPGA raytrace accelerator and use the GPU purely for shaders. The FPGA (running at 100 MHz in the developer setup) should get a 10x boost on all rays handled in the consumer model. This system can handle incoherent rays, render-time displacement, and twisting and morphing meshes perfectly.
They developed their own Caustic CL standard; the system can be wrapped around any renderer in any host application and give you massive render speed boosts.
The first render engine to be implemented is the high-end renderer Brazil r/s, and we have already seen the system work inside 3ds Max directly. I've yet to see any other "direct in host app" massive-speedup solution that can handle infinitely large and complex scenes.
I'm not a programmer and thus don't know a lot of the technical details, but I work side by side with developers every day.
Amiga 1000 with 2MB memory card
Here are my replies. jamestmather wrote:
1. Any chance that we could pay now and get an unlocked "save" and "load" version of the beta?
2. Any chance the system can remember the load paths for images?
3. Is there any way to see the node breakout of the imported mesh texture in the graph editor?
4. Are area lights currently available?
5. The ability eventually to import camera data (from a matchmoving program) would be very useful.
6. Will chaining GPU cards eventually be possible?
7. Any suggestions on the best card (GeForce 275? Is this the best?)
1. No, sorry - this is a limitation of the demo version.
2. Yes, I'm fixing this; it's been requested by many.
3. Currently not. The nodes are inside the pins - pins with internal graphs are triangles, not round pins. I'm adding an option to collapse these into the graph editor in the next versions.
4. Area lights are in development and will be supported in the first betas early next month.
5. This will be possible with a metadata material name in the OBJ file, or through the RIB file format. The first option will be in the next demo, the second in the betas next month.
6. Yes, in the first betas.
7. We currently recommend the GTX260 with 1792MB RAM, as this has the best price/performance ratio and gives you a large amount of memory to load complex scenes. The GTX275 is better but more expensive.

Radiance
Win 7 x64 & ubuntu | 2x GTX480 | Quad 2.66GHz | 8GB
- jamestmather
Thank you. Looking forward to the beta.
One last question (promise) - I understand the main difference between the Quadro and GeForce is the amount of available RAM. When daisy-chaining is introduced, will the cards' memory also be cumulative, or will each card's individual RAM be a limiting scene factor? (In short: will, say, 3 GeForce 275s allow the user to load a scene that is 1792 x 3 = 5376MB, or is it just an iterative speed boost, still limited to 1792MB?)
Thanks again.
jamestmather wrote: When daisy-chaining is introduced, will the cards' memory also be cumulative, or will each card's individual RAM be a limiting scene factor?
No - each GPU needs its own copy of the scene data.
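In CUDA terms, a minimal sketch of what that means (illustrative code, not Octane's actual loader): the same scene buffer is uploaded to every card, so capacity stays bounded by one card's VRAM while only the sampling speed scales.

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        size_t sceneBytes = 512ull << 20;        // pretend the scene is 512 MB
        void*  sceneHost  = malloc(sceneBytes);  // geometry + textures, built once

        int nDevices = 0;
        cudaGetDeviceCount(&nDevices);
        for (int dev = 0; dev < nDevices; ++dev) {
            cudaSetDevice(dev);
            void* sceneDev = nullptr;
            if (cudaMalloc(&sceneDev, sceneBytes) != cudaSuccess) {
                // A card that can't hold the scene is simply out - the other
                // cards' memory doesn't pool to cover the shortfall.
                printf("GPU %d: scene does not fit\n", dev);
                continue;
            }
            // Identical copy on every device; each GPU then renders its own
            // share of the samples against its private copy.
            cudaMemcpy(sceneDev, sceneHost, sceneBytes, cudaMemcpyHostToDevice);
        }
        free(sceneHost);
        return 0;
    }

So three GTX275s render roughly three times faster, but the loadable scene is still capped at one card's 1792MB.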

Radiance
Win 7 x64 & ubuntu | 2x GTX480 | Quad 2.66GHz | 8GB