This is a good scene for a benchmark; if Sam wants, we can use it for a new section of this forum.
(Sorry for my English, I'm Italian.)
Thor.
Old GPUs vs New GPUs fight
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
Heh, ask Radiance, not me.
And ask Tommy5, it's not my scene!
Octane is going to need scenes once we're able to save/load.
We'll need free scenes for everyone to try Octane,
and also some benchmark scenes.

http://Kuto.ch - Samuel Zeller - Freelance 3D Generalist and Graphic designer from Switzerland
- gpu-renderer
- Posts: 106
- Joined: Sat Jan 30, 2010 9:45 am
If you take a triple PCI-slot motherboard,
each slot with a dual GTX 495 (6 GPUs, 3072 cores),
you should get around 20 seconds for that render, which is truly amazing and far less costly than a PC render farm.
But the real test will be support for displacement mapping <<--- this really adds 3D depth to grass, bubbled glass, wood grain and other complex surface textures.
Also, texture size and output resolution will alter the benchmark.
I'm hoping that Octane will be able to support as many GPUs as the motherboard can handle, as this will save so much time and the end results can truly be perfected.
Could you provide a max resolution in pixels for us and compare the render times, detailing RAM amounts and time used for your 3D model? (Great render, too.)
1280x900
1600x1200
1920x1200
4000x2800
8000x3800
and as far as you can take the resolution.
I work at 300 ppi for prints onto A0 for large 3D industrial layout plans, averaging 10 million polys with 2 GB of textures and all the effects: bump map, specular, diffuse, gloss, bloom and displacement.
13800 x 9900: try that output if you have the RAM :0)
The hard part: is Octane really limited to GPU RAM? At 4 GB per GPU, an ASUS Mars GTX 280 in SLI costs around 890 pounds. The only other option is the next-gen Fermi cards with 6 GB of (shared) VRAM, but they cost way too much compared to the standard GTX range.
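For a rough sense of scale, here is a small sketch of those resolutions against the 20-second, 3072-core figure above. The assumptions are mine, not from the thread: render time scaling roughly linearly with pixel count and inversely with CUDA core count, and a 16-byte-per-pixel float RGBA accumulation buffer; a real progressive path tracer will only follow this to first order.

```python
# Back-of-the-envelope sizing for the resolutions listed above.
# Assumptions (mine, not from the thread): render time scales roughly
# linearly with pixel count and inversely with CUDA core count, and the
# accumulation buffer is float RGBA (16 bytes per pixel).

RESOLUTIONS = [
    (1280, 900),
    (1600, 1200),
    (1920, 1200),
    (4000, 2800),
    (8000, 3800),
    (13800, 9900),   # ~A0 at 300 ppi
]

BYTES_PER_PIXEL = 16          # float RGBA accumulation buffer (assumed)
BASELINE_PIXELS = 1280 * 900  # reference point for the scaling estimate
BASELINE_SECONDS = 20.0       # hypothetical 6-GPU time quoted in the post
BASELINE_CORES = 3072         # 6 GPUs x 512 cores

def estimate(width, height, cores=BASELINE_CORES):
    pixels = width * height
    buffer_mb = pixels * BYTES_PER_PIXEL / 2**20
    seconds = BASELINE_SECONDS * (pixels / BASELINE_PIXELS) * (BASELINE_CORES / cores)
    return pixels, buffer_mb, seconds

for w, h in RESOLUTIONS:
    px, mb, s = estimate(w, h)
    print(f"{w}x{h}: {px/1e6:5.1f} Mpx, ~{mb:6.0f} MB buffer, ~{s/60:6.1f} min (rough)")
```

Under those assumptions the A0-sized frame alone needs a ~2 GB accumulation buffer and runs to tens of minutes even on six GPUs, which is why the VRAM question matters.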
i7 920 2.66ghz quad core, gtx 285, asus p6t, 6gb OCZ 1600mhz ram, Windows 7 64bit ultimate. Nvidia cuda Driver v3 Nvidia display drivers V193.13. Octane beta 2
gpu-renderer wrote: …could you provide a max resolution in pixels for us and compare the render times, detailing RAM amounts and time used for your 3D model (great render, too)…

It's not my render, I can't provide anything.
Ask Tommy5, but he would need the beta version to save/load.
http://Kuto.ch - Samuel Zeller - Freelance 3D Generalist and Graphic designer from Switzerland
- gpu-renderer
- Posts: 106
- Joined: Sat Jan 30, 2010 9:45 am
Well, a thumbs up at each res, noting the time for a 2048-sample image.
I've used 280 MB on a model so far and it seems to be OK; crank up the res and obviously it takes longer, but it's still way ahead of a quad core.
i7 920 2.66ghz quad core, gtx 285, asus p6t, 6gb OCZ 1600mhz ram, Windows 7 64bit ultimate. Nvidia cuda Driver v3 Nvidia display drivers V193.13. Octane beta 2
Very interesting requirements @gpu-renderer.
Do you currently do A0 @ 300 dpi, or is that what you would like to do?
Do you print this at your company on an HP photo plotter?
300 dpi seems pretty heavy?...
I think you are going to need 3x 6 GB Teslas plus a Quadro for display for that sort of mission. However, if you think of it as the equivalent of, say, 45 quad-cores (plus whatever performance margin Fermi brings, 2x?), then it is a cheap expense.
1600 MB for 10M polys + 2600 MB for the A0 film + 1800 MB textures = 6 GB.
Even if you cut it down to A1 @ 200 dpi, you'd still need a 4 GB Quadro.
Of course such a large render size isn't available at the moment, and progress is going to be slow relative to a 1024x768 render.
It remains to be seen how much host RAM you need to load three 6 GB GPUs, but I guess you would do it sequentially with the same data plus some accumulating management space in RAM, so 16 or even 24 GB I would think.
The Fermi generation might assist, in that the cards can exchange some data.
This might come in handy to propagate changes sideways to each seeded set rather than only via the CPU. I don't know if @radiance would code especially for that, though.
It hasn't been talked about yet, but I suppose you would need to suspend rendering momentarily, make a change, upload it to each GPU and then resume.
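As a sanity check on that 6 GB budget, a minimal sketch: the 1600 MB (geometry) and 1800 MB (textures) figures are taken as quoted, and only the A0 film buffer is recomputed, under my own assumption of 16-24 bytes of accumulation state per pixel.

```python
# Sanity check of the film-buffer term in the 6 GB budget above.
# Geometry (1600 MB) and textures (1800 MB) are taken as-is from the post;
# only the A0 film buffer is recomputed, assuming 16-24 bytes per pixel
# of accumulation state (my assumption, not Octane's actual layout).

MB = 2**20
width, height = 13_800, 9_900            # A0 at 300 ppi, from the thread
pixels = width * height                  # ~137 Mpx

for bytes_per_px in (16, 20, 24):        # float RGBA, +sample count, +variance
    film_mb = pixels * bytes_per_px / MB
    total_gb = (1600 + film_mb + 1800) / 1024
    print(f"{bytes_per_px} B/px: film ~{film_mb:.0f} MB, total ~{total_gb:.1f} GB")

# 20 B/px reproduces the ~2600 MB film figure; the totals land around
# 5.4-6.4 GB, consistent with the 6 GB estimate above.
```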
i7-3820 @4.3Ghz | 24gb | Win7pro-64
GTS 250 display + 2 x GTX 780 cuda| driver 331.65
Octane v1.55
Well, I can say that for one of my renders that I've been trying with most of the renderers out there (LuxRender and the like) vs. Octane, the difference is dramatic. I had a render going last night that I set before I went to bed, and after 12 hours it was maybe 25% done. Octane, on the other hand, did the rendering in about 15-20 minutes. Generally I like the output of Octane better anyway, so all in all my money is going to them. The system I've been testing on is Windows 7, an Intel Q9450 quad, 4 GB RAM, and a BFG Tech 8800 GT with 512 MB VRAM.
As said before, and in other places, the host CPU gets tasked by other things running in the background and by general OS-dependent apps, so it is not often that a rendering app gets full use of the CPU 100% of the time. You guys can correct me if I'm wrong, but I would imagine that using the GPU bypasses this natively, so that the application can use the full resources available 100% of the time.
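Just to put a number on that comparison, a tiny sketch of the implied speedup, assuming (perhaps optimistically) that the CPU render would have continued at the same pace to 100%:

```python
# Implied speedup from the numbers above. Assumes the CPU render would
# have continued at the same pace to completion, which is optimistic
# for an unbiased renderer that cleans up noise slowly at the end.

cpu_hours_for_quarter = 12
cpu_hours_total = cpu_hours_for_quarter / 0.25    # ~48 h projected
octane_minutes = (15 + 20) / 2                    # midpoint of the quoted 15-20 min

speedup = cpu_hours_total * 60 / octane_minutes
print(f"projected CPU time ~{cpu_hours_total:.0f} h, "
      f"Octane ~{octane_minutes:.0f} min, speedup ~{speedup:.0f}x")
# -> roughly 165x on this particular scene and hardware
```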
System 1: EVGA gtx470 1280Mb and MSI gtx470 1280 in Cubix Xpander for Octane, AMD 945, 4Gb Ram
All systems are at stock speeds and settings.
"I would imagine that using the GPU bypasses this natively"
That's true, CUDA uses 100% of the GPU when it's available.
Someone needs to make a simple scene (like a box open on top, with one glass sphere and one metal sphere inside and diffuse walls)
and render it using LuxRender and Octane at the same size, the same path tracing settings and the same HDRI for lighting, with no depth of field,
and then post screenshots with render time and samples here!
Test one could be 20 minutes of rendering, then stop both renderers and check the difference in the number of samples between the two.
Test two could be to let them render until 4096 samples and then check the render time.
Someone who has LuxRender and Octane, can you do it?
And if someone has Maxwell or Fryrender as well, run the same tests.
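If whoever runs it wants a single comparable number out of both tests, a minimal sketch along those lines; the sample counts and times below are placeholders to be filled in by hand, not real measurements.

```python
# Reduce both proposed tests to one comparable number: samples per second.
# The figures below are placeholders, not measured results.

def samples_per_second(samples, seconds):
    return samples / seconds

# Test 1: fixed 20-minute run, note the sample count each renderer reached.
octane_sps = samples_per_second(samples=4096, seconds=20 * 60)   # placeholder
lux_sps    = samples_per_second(samples=512,  seconds=20 * 60)   # placeholder

# Test 2: fixed 4096-sample target, note the wall-clock time instead,
# e.g. samples_per_second(4096, measured_seconds).

print(f"Octane {octane_sps:.2f} samples/s, Lux {lux_sps:.2f} samples/s, "
      f"ratio {octane_sps / lux_sps:.1f}x")
```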
http://Kuto.ch - Samuel Zeller - Freelance 3D Generalist and Graphic designer from Switzerland
pixelrush wrote: …You print this at your company on a HP photo plotter? 300 dpi seems pretty heavy?…

I always print A0 at 150 dpi on photo paper (Canon IPF 8000) and the result is quite marvelous for archviz competitions.
Work Station : MB ASUS X299-Pro/SE - Intel i9 7980XE (2,6ghz 18 cores / 36 threads) - Ram 64GB - RTX4090 + RTX3090 - Win10 64
NET RENDER : MB ASUS P9X79 - Intel i7 - Ram 16GB - Two RTX 3080 TI - Win 10 64
The Blender benchmark file would be pretty good. It uses a reflective sphere with some translucent cubes on a diffuse surface. It's what I've been using to test other rendering apps, though I don't think I've tried it in Octane yet. Will do tonight.
System 1: EVGA gtx470 1280Mb and MSI gtx470 1280 in Cubix Xpander for Octane, AMD 945, 4Gb Ram
All systems are at stock speeds and settings.