Octane and GTX 1080 release

glimpse
Licensed Customer
Posts: 3740
Joined: Wed Jan 26, 2011 2:17 pm

Seekerfinder wrote:
glimpse wrote: have you ever had any issues running cards from different architectures? No..
Erm, yes actually. The performance degradation from Fermi to Kepler was HUGE.
Sometimes I ask myself after a post like this: are you joking, or are you really that short-sighted?

How does the 680 have anything to do with performance degradation compared to the 580?

Firstly, the 580 was a HIGH-END GPU, the best the Fermi architecture could offer, while the 680 was just a MID-range unit from Kepler.

The 680 used less power and matched the high-end card from the previous architecture. How is that inferior???..
Plus it offered an extra 1GB of VRAM (which was important for some people, because there was no out-of-core texture functionality back then..)

Just bending facts with false claims is stupid. The 680 doesn't lack any performance.. it was meant for games (and the optimisations were made there), and it filled the gap between the 580 and the 780 (the high-end Fermi and high-end Kepler GPUs).

It's marketing at most, not any technological issue or "performance degradation"..

It's like saying "hey, my Toyota doesn't beat a Lexus" (both brands owned by the same company).. well, that was the plan =) if you can't figure it out yourself.

Most likely, the reason Nvidia scaled things that way was to satisfy the server market with Tesla GPUs equipped with the high-end chip (and then skipped a Maxwell refresh there). Now with Pascal they focus again on servers (the HPC market) and give you a 1080 equipped with a mid-level GPU..

Go ahead and claim the 1080 (a mid-range SKU) has bad performance because it's barely faster than the 980Ti (a high-end SKU) - the new card beats the older generation and does that using only a single 8-pin.. so again, better efficiency, same as it was in the 580/680 case..

P.S. Check your arguments next time before making comments:
Performance of the 580 in OctaneBench: ~63 https://render.otoy.com/octanebench/sum ... 1x+GTX+580
Performance of the 680 in OctaneBench: ~52 https://render.otoy.com/octanebench/sum ... 1x+GTX+680
If you wish to compare apples to oranges (or a high-end to a mid-range SKU), the high-end Kepler architecture GPUs are the
780, hitting ~81 on most systems https://render.otoy.com/octanebench/sum ... 1x+GTX+680
while the 780Ti hits ~105 https://render.otoy.com/octanebench/sum ... GTX+780+Ti

By your logic, does the 980 scoring "only" ~100 points in OctaneBench also suffer "performance degradation" compared to the 780Ti?
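(For reference, here is a minimal sketch of the per-card vs per-core comparison being argued over in this thread, using the approximate OctaneBench scores quoted above together with each card's public CUDA core count; the score values are the rough figures cited in this post, not fresh measurements.)

```python
# Rough per-card vs per-core comparison of the OctaneBench figures quoted above.
# Scores are the approximate values cited in this post; core counts are the cards' public specs.
cards = {
    "GTX 580 (Fermi)":     {"score": 63,  "cores": 512},
    "GTX 680 (Kepler)":    {"score": 52,  "cores": 1536},
    "GTX 780 (Kepler)":    {"score": 81,  "cores": 2304},
    "GTX 780 Ti (Kepler)": {"score": 105, "cores": 2880},
    "GTX 980 (Maxwell)":   {"score": 100, "cores": 2048},
}

for name, c in cards.items():
    per_core = c["score"] / c["cores"]
    print(f"{name:21} score ~{c['score']:>3} | {c['cores']:>4} cores | ~{per_core:.3f} points per core")
```

On those numbers the whole-card scores are fairly close, but the 680 delivers only roughly a quarter to a third of the 580's per-core throughput, which is the "per core" degradation the rest of the thread argues about.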
Seekerfinder
Licensed Customer
Posts: 1600
Joined: Tue Jan 04, 2011 11:34 am

glimpse wrote: Sometimes I ask myself after a post like this: are you joking, or are you really that short-sighted?
Mmm. You know, Glimpse, your posts normally seem to look for some positive angle, and I like that. But sometimes, when you disagree with someone, your posts get all personal and demeaning. That is not necessary. If you want to make a counter-argument, go right ahead; that is what these forums are for. But try to keep the demeaning comments aside.

I do not have time to respond to all your points right now, but I will respond to this statement:
glimpse wrote: The 680 used less power and matched the high-end card from the previous architecture. How is that inferior???..
Comparing CUDA cores to CUDA cores is what counts most when it comes to rendering speed. Some of us, after spending a lot of money converting to Octane, got a real shock from the performance degradation between those two architectures. Of course Nvidia has to move forward with new architectures. But I will repeat that I think it should be a higher priority for any GPU rendering developer to engage their users about new architectures, how they are being tested, and so on.

Seeker
Win 8(64) | P9X79-E WS | i7-3930K | 32GB | GTX Titan & GTX 780Ti | SketchUP | Revit | Beta tester for Revit & Sketchup plugins for Octane
Timmaigh
Licensed Customer
Posts: 168
Joined: Mon Nov 01, 2010 9:52 pm

glimpse wrote:
Seekerfinder wrote:
glimpse wrote: have you ever had any issues running cards from different architectures? No..
Erm, yes actually. The performance degradation from Fermi to Kepler was HUGE.
Sometimes I ask myself after a post like this: are you joking, or are you really that short-sighted?
[...]
By your logic, does the 980 scoring "only" ~100 points in OctaneBench also suffer "performance degradation" compared to the 780Ti?
I don't think you understand, Glimpse. Of course the 680 was technically not a true successor to the 580; it was a 104-class chip at half the die size. But the super-important things to consider were:

- It was sold at pretty much the same price as the 580. So if you bought it to replace a 580 and it performed worse than the 580 - and it did - I don't think you cared about things like it being only a mid-range chip, or more power efficient, etc... it performed worse, period. For 500 USD/EUR... the same as you had already paid for the 580.

- The 680 had 1536 CUDA cores, which were akin to 768 Fermi cores (because of architectural differences, no more shader hot-clocks, etc.), and it performed about 30 percent better in games than the 580. So by all accounts it was perfectly normal to expect it would fare better in Octane too (see the sketch after this post). It did not. Hence the performance degradation he is talking about. And IF the 1080 does not perform better in Octane than the 980Ti now, despite generally being faster in games and being sold for pretty much the same price as the 980Ti (599/699 USD vs 649 USD; here in Europe I expect the same price as a 980Ti, thus 700-800 EUR), those words will apply again, and it won't matter one bit that it's a chip of a different lineage (104, not 100/200).

Long story short, you have to take prices into account. Even if the 1080 is technologically not of the same lineage as the 980Ti, as long as it's priced pretty much the same it has no business performing worse. If it does: no buy, sorry. I could not care less whether it's more power efficient or optimised for games. I am not buying it to get a cheaper electricity bill or to play games at higher FPS.
Intel Core i7 980x @ 3,78GHz - Gigabyte X58A UD7 rev 1.0 - 24GB DDR3 RAM - Gainward GTX590 3GB @ 700/1400/3900 Mhz- 2x Intel X25-M G2 80GB SSD - WD Caviar Black 2TB - WD Caviar Green 2TB - Fractal Design Define R2 - Win7 64bit - Octane 2.57
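(To illustrate why expectations were higher on paper, here is a rough back-of-the-envelope sketch, not taken from the thread itself, using the two cards' published reference shader clocks; the factor of 2 FLOPs per core per cycle is the usual fused multiply-add assumption.)

```python
# Back-of-the-envelope peak single-precision throughput for the GTX 580 vs GTX 680.
# Fermi shaders ran at a "hot clock" (~2x the core clock); Kepler dropped the hot clock,
# which is why 1536 Kepler cores are often equated to ~768 Fermi-style hot-clocked cores.

def peak_gflops(cores: int, shader_clock_mhz: float) -> float:
    # 2 FLOPs per core per cycle (one fused multiply-add)
    return cores * shader_clock_mhz * 2 / 1000.0

gtx580 = peak_gflops(512, 1544)    # GTX 580: 512 cores, ~1544 MHz shader (hot) clock
gtx680 = peak_gflops(1536, 1006)   # GTX 680: 1536 cores, ~1006 MHz base clock

print(f"GTX 580 peak: ~{gtx580:.0f} GFLOPS")   # ~1581 GFLOPS
print(f"GTX 680 peak: ~{gtx680:.0f} GFLOPS")   # ~3090 GFLOPS
```

On paper the 680's peak throughput is roughly double the 580's, which is exactly why its lower OctaneBench score felt like a step backwards to people rendering with it.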
protovu
Licensed Customer
Posts: 476
Joined: Thu Sep 11, 2014 7:30 pm

I have to agree. Quite put off by the tone. Not friendly. Not professional. Kind of disappointing.
FrankPooleFloating
Licensed Customer
Posts: 1669
Joined: Thu Nov 29, 2012 3:48 pm

Tim, I think you meant the 680's 1536 cores are akin to the 580's 512 cores, since Kepler has triple the number of cores of Fermi, but each is about 1/3 as powerful... :? Or have I had this wrong all this time?
Win10Pro || GA-X99-SOC-Champion || i7 5820k w/ H60 || 32GB DDR4 || 3x EVGA RTX 2070 Super Hybrid || EVGA Supernova G2 1300W || Tt Core X9 || LightWave Plug (v4 for old gigs) || Blender E-Cycles
voon
Licensed Customer
Posts: 527
Joined: Tue Dec 17, 2013 6:37 pm

Seekerfinder wrote: Erm, yes actually. The performance degradation from Fermi to Kepler was HUGE.
Was it? Or do you merely mean the performance "per core", since Fermi cores are more complex (which can be compensated for by having more of the smaller cores in Kepler)?

I remember the Fermi 500 series being strong, and what came after not being much of a revelation... but not necessarily much slower in total?
prehabitat
Licensed Customer
Posts: 495
Joined: Fri Aug 16, 2013 10:30 am
Location: Victoria, Australia

I think the original question about performance degradation is completely valid.

However, I also think that no one will be able to speak to it until the cards are released and the Otoy devs have had a chance to throw the code at them..

There might be some clues we can find. As noted above, Kepler was claimed/marketed as being more efficient/simpler/(cooler) than Fermi, which translated into real-world consequences in operating temperatures and energy consumption, but also in output per core.

Are similar claims made about Pascal vs Maxwell?
Win10/3770/16gb/K600(display)/GTX780(Octane)/GTX590/372.70
Octane 3.x: GH Lands VARQ Rhino5 -Rhino.io- C4D R16 / Revit17
Seekerfinder
Licensed Customer
Posts: 1600
Joined: Tue Jan 04, 2011 11:34 am

voon wrote:
Seekerfinder wrote: Erm, yes actually. The performance degradation from Fermi to Kepler was HUGE.
Was it? Or do you merely mean the performance "per core", since Fermi cores are more complex (which can be compensated for by having more of the smaller cores in Kepler)?

I remember the Fermi 500 series being strong, and what came after not being much of a revelation... but not necessarily much slower in total?
Yes, I was referring to CUDA cores, since that is the core architecture Octane is built on. Remember that Kepler had 3x more CUDA cores, yet delivered less performance than Fermi. A number of Octane users were surprised that Otoy seemed not to be ready for that transition.

So again, it's a perfectly valid question. People need to make decisions on new rigs, and knowing that there would be a degradation of a third of the rendering power per core is not insignificant and will certainly influence purchase decisions.

Seeker
Win 8(64) | P9X79-E WS | i7-3930K | 32GB | GTX Titan & GTX 780Ti | SketchUP | Revit | Beta tester for Revit & Sketchup plugins for Octane
prehabitat
Licensed Customer
Posts: 495
Joined: Fri Aug 16, 2013 10:30 am
Location: Victoria, Australia

Seekerfinder wrote: ... Remember that Kepler had 3x more CUDA cores, yet delivered less performance than Fermi. A number of Octane users were surprised that Otoy seemed not to be ready for that transition.
As above, this line of questioning is valid. However, I wonder whether the difference in performance between a Fermi core and a Kepler core comes down to the software implementation or to the physical internals of the different cores.

From memory, Nvidia made up for the "change" in Kepler core mechanics by putting things closer together and making it all run off a loop rather than a two-way bus... it's been a while, though.
What I'm saying is that I place the blame on "Nvidia marketing" rather than on some shortfall in the Otoy devs' ability to cope with the "change".

After all, if it were high-level enough that software tweaks would help, you'd expect the Fermi/Kepler gap to have narrowed significantly over the last 12 months. Yet Kepler performance per core has remained (close to) constant relative to any Fermi core benchmark since its release; otherwise your Kepler cores would have approached parity with Fermi's, and they haven't.

If I stick to my call of "marketing" issues: once we get a Pascal core to test Octane on, we'll instantly know whether our Pascal cores are oranges, like our Maxwell and Kepler cores were, or apples like Fermi, or bananas...

In marketing's defence: the internal workings of a processor's architecture are extremely difficult to explain/spin for the masses... Knowing what you know now about the difference between Fermi and Kepler, how would you "market to sell" Kepler??
Win10/3770/16gb/K600(display)/GTX780(Octane)/GTX590/372.70
Octane 3.x: GH Lands VARQ Rhino5 -Rhino.io- C4D R16 / Revit17
Seekerfinder
Licensed Customer
Posts: 1600
Joined: Tue Jan 04, 2011 11:34 am

prehabitat wrote: Knowing what you know now about the difference between Fermi and Kepler, how would you "market to sell" Kepler??
Yeah, it's interesting. The way CUDA works for rendering and other GPGPU applications is somewhat different from mainstream gaming, which is by far Nvidia's biggest GTX market. Energy efficiency (= heat) is a big deal for any processor, and since the Kepler architecture apparently performed better in some games, Kepler was a step up in that regard. It's just that the implications of the new architecture were negative for CUDA core efficiency where rendering is concerned.
prehabitat wrote: In marketing's defence: the internal workings of a processor's architecture are extremely difficult to explain/spin for the masses...
Exactly! This is why it would be welcome if Otoy just spent a bit of time explaining the implications of an upcoming architecture to its users. As developers, surely they have some insight that the "masses" (over 150k Octane users!) don't. That is why I always find it strange to see a comment like "we'll test it as soon as we get one of the new cards". What guarantee is there that we won't get something like "Oh sorry guys, the new architecture turns out just not to work great with Octane, but hey, look, light fields!!"?

Again, I think it is not unreasonable to ask Otoy to consider a small write-up prior to the public release of a new architecture, confirming compatibility and rough performance expectations. Mainstream tech media does that for other applications; why can't we as Octane users get the same benefit?

Seeker
Win 8(64) | P9X79-E WS | i7-3930K | 32GB | GTX Titan & GTX 780Ti | SketchUP | Revit | Beta tester for Revit & Sketchup plugins for Octane