Kepler test build [Obsolete]

A forum where development builds are posted for testing by the community.
Forum rules
NOTE: The software in this forum is not 100% reliable; these are development builds and are meant for testing by experienced Octane users. If you are a new Octane user, we recommend using the current stable release from the 'Commercial Product News & Releases' forum.
JimStar
OctaneRender Team
Posts: 3816
Joined: Thu Jul 28, 2011 8:19 pm
Location: Auckland, New Zealand

matej wrote:Octane on OpenCL
That is the only right way. Octane should move away from proprietary CUDA as fast as possible, instead of binding its users to GeForce cards alone... Otherwise, the competitor who first achieves the same feature set with OpenCL will take the lead. I will be among the first to buy an OpenCL renderer (with a bunch of AMD cards) if it has the same features as Octane (and I would be forced to leave Octane, as it can't work with those AMD cards)... Proprietary standards are evil. :evil:
At the moment Octane is the leader in features (among GPU renderers), and it would be a shame if it lost that lead due to the proprietary decisions of NVidia's marketing...

Just my IMHO.
Timmaigh
Licensed Customer
Posts: 168
Joined: Mon Nov 01, 2010 9:52 pm

I am still hopeful that they will later release a bigger, more GPGPU-friendly chip. They have to, after all; they need a new generation of Teslas.

And even if it does not look like it right now, there are a few upsides to Nvidia. There is surely a reason why Radiance chose CUDA over OpenCL in the first place, and without CUDA there would perhaps be no Octane at all. Their drivers are also generally better than ATi's.
And in my particular case, I was thankful they made the GTX 590 only 29 cm long, so it would still fit in my case. AMD, on the other hand, went plain brute-force with the Radeon 6990. Even if it hypothetically supported Octane, I would not get it: at 31 cm it would not fit in my case, it was WAY louder than the GTX 590, and AFAIK you could not turn off the multi-GPU mode, which I normally do with the 590...
Intel Core i7 980x @ 3,78GHz - Gigabyte X58A UD7 rev 1.0 - 24GB DDR3 RAM - Gainward GTX590 3GB @ 700/1400/3900 Mhz- 2x Intel X25-M G2 80GB SSD - WD Caviar Black 2TB - WD Caviar Green 2TB - Fractal Design Define R2 - Win7 64bit - Octane 2.57
x3studio
Licensed Customer
Posts: 17
Joined: Mon Feb 22, 2010 4:20 pm
Location: Poland

That is one big FAIL by NVidia!

If we consider that the Radeon 7990 will have 7.6 TFLOPS of single-precision and 1.8 TFLOPS of double-precision compute power, then NV is seriously in deep sh%%t.

Octane should be ported to OpenCL as soon as possible.

Ubuntu 9.10 64 bit, GeForce 275, Core2Quad, 6 GB Ram
Dom74
Licensed Customer
Posts: 155
Joined: Sat Oct 23, 2010 7:16 pm

I've tested this CUDA 4.2 release on a GTX 480, and it's slower than CUDA 4.1: from 4 million samples/sec I drop down to 3.2 million samples/sec.
Maybe Kepler is not dead yet...
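For what it's worth, that reported drop works out to a 20% slowdown; a trivial sketch to check the arithmetic (assuming both figures are steady-state samples/sec):

```python
# Percentage slowdown going from CUDA 4.1 to CUDA 4.2 on the GTX 480,
# using the samples/sec figures reported above.
before, after = 4.0e6, 3.2e6
slowdown = (before - after) / before
print(f"{slowdown:.0%} slower")  # prints "20% slower"
```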
Win7 x64 - 3DSMAX 2019 - 32GB RAM - 1080 TI
mbetke
Licensed Customer
Posts: 1294
Joined: Fri Jun 04, 2010 9:12 am
Location: Germany

For OpenCL, I guess the whole core would need to be recoded?
PURE3D Visualisierungen
Sys: Intel Core i9-12900K, 128GB RAM, 2x 4090 RTX, Windows 11 Pro x64, 3ds Max 2024.2
t_3
Posts: 2871
Joined: Tue Jul 05, 2011 5:37 pm

ok; some more numbers:

as already mentioned, the current octane 4.2 cuda build is slower than the previous 4.1 - but how much slower it is also seems to be scene dependent...

benchmark scene (pathtracing):

Code:

gtx 680 @ default clocks on cuda 4.2   ...   ~1.68ms/sec   ...    43%
gtx 560 @ default clocks on cuda 4.2   ...   ~1.79ms/sec   ...    46%
gtx 570 @ default clocks on cuda 4.2   ...   ~2.27ms/sec   ...    58%   ...    72%
gtx 570 @ default clocks on cuda 4.1   ...   ~3.14ms/sec   ...    80%   ...   100%
gtx 580 @ default clocks on cuda 4.1   ...   ~3.92ms/sec   ...   100%
complex scene, 1 million tris, hdri environment, 3 mesh lights, misc. mats including sss & specular (pmc):

Code:

gtx 560 @ default clocks on cuda 4.2   ...   ~0.59ms/sec   ...    52%
gtx 680 @ default clocks on cuda 4.2   ...   ~0.64ms/sec   ...    56%
gtx 570 @ default clocks on cuda 4.2   ...   ~0.77ms/sec   ...    68%   ...    81%
gtx 570 @ default clocks on cuda 4.1   ...   ~0.95ms/sec   ...    83%   ...   100%
gtx 580 @ default clocks on cuda 4.1   ...   ~1.14ms/sec   ...   100%
overclocking headroom was around 15% with my card.

so, currently it seems the 680 could match a 570 if cuda/octane gets more optimized; of course it is unpredictable what future cuda optimizations from nvidia will offer, but it would take quite a lot to even match a 580, thus i don't expect the gk104 chip to bring notably more power than a 580 anytime soon... still, it lifts the texture limit, and will bring 1gb more vram soon...
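A minimal sketch of how the percentage columns in the tables above are derived, assuming the throughput figures read as megasamples/sec (the card names and values are the posted path-tracing results; each entry is normalized against the fastest card, and the extra column against the fastest cuda 4.2-capable comparison point):

```python
# Reproduce the percentage column of the path-tracing table above:
# each card's throughput divided by the fastest entry (gtx 580 on cuda 4.1).
results = {
    "gtx 680 / cuda 4.2": 1.68,
    "gtx 560 / cuda 4.2": 1.79,
    "gtx 570 / cuda 4.2": 2.27,
    "gtx 570 / cuda 4.1": 3.14,
    "gtx 580 / cuda 4.1": 3.92,
}

fastest = max(results.values())
for card, msamples_per_sec in results.items():
    print(f"{card}: {msamples_per_sec / fastest:.0%}")

# The second percentage column for the gtx 570 compares cuda 4.2
# against cuda 4.1 on the same card:
print(f"gtx 570, 4.2 vs 4.1: {2.27 / 3.14:.0%}")
```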
The obvious is that which is never seen until someone expresses it simply

1x i7 2600K @5.0 (Asrock Z77), 16GB, 2x Asus GTX Titan 6GB @1200/3100/6200
2x i7 2600K @4.5 (P8Z68 -V P), 12GB, 1x EVGA GTX 580 3GB @0900/2200/4400
necko77
Posts: 323
Joined: Thu Jan 21, 2010 11:27 am
Location: Bosnia&Hercegovina

thank you for your time t_3
i'm going to clean the forum, it looks very dirty :)
after a while . . . it's clean!
ArchiCad, Blender, Moi3d
GTX 580 3GB
Win 7, 64 Bit
pixelrush
Licensed Customer
Posts: 1618
Joined: Mon Jan 11, 2010 7:11 pm
Location: Nelson, New Zealand

Yeah, well, that's fairly disgusting performance, considering.
In other words it runs at about 40% of what it really ought to have been capable of. :cry:
I guess we now wait to see how fast the Quadros/Teslas are.
Certainly at the moment it looks like Nvidia has a policy of deliberately crippling CUDA on GeForce, like they have done with OpenGL for pro apps.
Bastards :P
i7-3820 @4.3Ghz | 24gb | Win7pro-64
GTS 250 display + 2 x GTX 780 cuda| driver 331.65
Octane v1.55
Dom74
Licensed Customer
Posts: 155
Joined: Sat Oct 23, 2010 7:16 pm

Hey NVIDIA, copy that:
We don't want to buy any Quadro or Tesla, because it's way too expensive for the same silicon with minor changes!
Win7 x64 - 3DSMAX 2019 - 32GB RAM - 1080 TI
t_3
Posts: 2871
Joined: Tue Jul 05, 2011 5:37 pm

x3studio wrote:That is one big FAIL by NVidia!

If we consider that the Radeon 7990 will have 7.6 TFLOPS of single-precision and 1.8 TFLOPS of double-precision compute power, then NV is seriously in deep sh%%t.
only the 7990 is a dual-gpu solution...

if we consider that the upcoming gk110 will have 2 tflops of dp power from a single gpu, and the gk104 already generates a little over 3 tflops of raw sp performance from a single chip, i don't think nvidia needs to worry seriously or deeply ;) esp. because for large-scale gpgpu computing people usually count flops per watt, so the gk110 might be a clear winner in this area.

the more interesting question is how long it will take until cuda/compilers/engines take better advantage of the raw kepler performance, and what is ultimately possible with the current gk104 design. afaik the gk110 will not only have more shaders, but will again have a different design (and notably higher prices), most probably better suited for gpgpu... and the question is whether nvidia cares about "small business" gpu rendering needs :?
The obvious is that which is never seen until someone expresses it simply

1x i7 2600K @5.0 (Asrock Z77), 16GB, 2x Asus GTX Titan 6GB @1200/3100/6200
2x i7 2600K @4.5 (P8Z68 -V P), 12GB, 1x EVGA GTX 580 3GB @0900/2200/4400