CUDA 5.0 is available
Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB
Octane 2022.1.1 nv535.98
x201t - gtx580 - egpu ec
Dell G5 - 16GB - dgpu GTX1060 - TB3 egpu @ 1060 / RTX 4090
Octane Render experiments - ♩ ♪ ♫ ♬
- PolderAnimation
- Posts: 375
- Joined: Mon Oct 10, 2011 10:23 am
- Location: Netherlands
Now let's hope OTOY can give us some more performance with Kepler.
(Btw, I'm nr. 12 @ tweakers)
Win 10 64bit | RTX 3090 | i9 7960X | 64GB
Haha, good to see you here.
Nvidia has updated its CUDA programming language to take advantage of its Kepler-based GPUs.
Nvidia's Kepler architecture brought with it a number of advances in GPGPU computing, with the firm touting the ability to 'vary the parallelism' depending on workload characteristics. Now the firm has released CUDA 5 to support such features, including GPU Direct and the Nsight Eclipse integrated development environment (IDE).
Nvidia's CUDA 5 programming language will be of particular interest for those developing applications that run on Kepler GPGPUs, with the firm claiming the code changes required to make use of Kepler's dynamic parallelism features are minimal. The firm also introduced the ability to directly access libraries from object code.
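As a rough illustration of what dynamic parallelism looks like in practice (a minimal sketch, not code from the article; the kernel names and sizes are made up), a kernel running on a Kepler-class GPU with compute capability 3.5+ can launch further kernels itself, without a round trip to the CPU. Such code is built with nvcc -arch=sm_35 -rdc=true and linked against cudadevrt:

#include <cuda_runtime.h>

// Child kernel: trivial per-element work.
__global__ void childKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

// Parent kernel: with dynamic parallelism it launches the child itself,
// sizing the grid on the device instead of returning to the CPU first.
__global__ void parentKernel(float *data, int n)
{
    if (threadIdx.x == 0 && blockIdx.x == 0)
        childKernel<<<(n + 255) / 256, 256>>>(data, n);
}

int main()
{
    const int n = 1024;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));
    parentKernel<<<1, 32>>>(d_data, n);   // the host launches only the parent
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}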
However, Nvidia's biggest new feature in CUDA 5 is GPU Direct, which allows CUDA applications to access memory on other GPUs over the PCI-Express bus and the network card. According to Nvidia, the technology for accessing the memory of other GPUs through the PCI-Express bus can be used to lower the latency of memory accesses rather than to increase the local memory available to CUDA applications.
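To give a flavour of the peer-to-peer part of GPU Direct (again a minimal sketch rather than code from the article; the device indices 0 and 1 and buffer sizes are assumptions), one GPU can be granted direct access to another GPU's memory, and data can be copied between the cards without staging through host RAM:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // Check whether GPU 0 can address GPU 1's memory directly over PCI-E.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        printf("Peer access between GPU 0 and GPU 1 is not supported\n");
        return 0;
    }

    const size_t bytes = 1 << 20;
    float *buf0, *buf1;

    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaDeviceEnablePeerAccess(1, 0);   // GPU 0 may now read/write GPU 1's memory

    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // Copy directly between the two cards, bypassing host memory.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}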
The Achilles' heel of GPGPU accelerators is their relatively small amount of local memory. When working with large datasets that can span tens of gigabytes, the card has to request data from main memory, resulting in a significant bottleneck as the GPU waits to be fed data.
Nvidia also announced its Nsight Eclipse IDE, which includes not only CUDA syntax highlighting but also a debugger and a code profiler that the firm claims can identify performance issues in code. The firm said that its Nsight IDE is available for Linux and Mac OS X.
While Nvidia is making CUDA the programming language of choice for those who use Kepler GPGPUs, the wider market is moving towards OpenCL, a language that Nvidia does support but clearly not in the same way as its own proprietary CUDA language. Even though Nvidia's CUDA language is popular with researchers, the firm may find that some of these CUDA-only features will also have to be made available to OpenCL should it want to see continued growth.
The Inquirer

Octane 2022.1.1 nv535.98
x201t - gtx580 - egpu ec
Dell G5 - 16GB - dgpu GTX1060 - TB3 egpu @ 1060 / RTX 4090
Octane Render experiments - ♩ ♪ ♫ ♬
Sorry to disappoint you, but CUDA 5 doesn't magically improve performance. We have already tried it. The current builds are the fastest we can get without algorithmic changes.
Cheers,
Marcus
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Ah, too bad. Well, keep up the good work!!

Octane 2022.1.1 nv535.98
x201t - gtx580 - egpu ec
Dell G5 - 16GB - dgpu GTX1060 - TB3 egpu @ 1060 / RTX 4090
Octane Render experiments - ♩ ♪ ♫ ♬
Will we get any improvement in terms of the allowed number of textures? Currently, I reach the max on my big London city scene...
I would like to animate the scene, or at least the camera, so I can't delete the textures each time the camera moves. All objects and characters are detailed (vegetation will also come).
Below is a recent low-sample screenshot taken during rendering.
The node graph is very crowded, and it becomes more and more difficult to place the instances. I am constantly panning.

French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
ROUBAL wrote: Will we get any improvement in terms of the allowed number of textures? Currently, I reach the max on my big London city scene... I would like to animate the scene, or at least the camera, so I can't delete the textures each time the camera moves. All objects and characters are detailed (vegetation will also come). Below is a recent low-sample screenshot taken during rendering. The node graph is very crowded, and it becomes more and more difficult to place the instances. I am constantly panning.

If you use only Kepler GPUs, then yes, with the upcoming release you will get more textures (144 LDR RGBA, 68 LDR greyscale, 10 HDR RGBA, 10 HDR greyscale); otherwise not. The reason is that Kepler cards allow more texture references. We hope to solve this problem in the future for both Fermi and Kepler cards, but we are not there yet.
The next version will also deal differently with excessive texture counts. You will not get a pop-up anymore, but the colouring in the render status will indicate that you are trying to use too many textures. All textures that don't fit within the texture limit will be rendered black.
Regarding the node graph editor: yes, that's one of the many items on our to-do list.
Cheers,
Marcus
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra
Thanks Marcus, I had just submitted a feature request about zooming and auto-panning in the graph editor!
About textures, no pop-up warnings will be welcome! I have just spent at least an hour deleting small textures here and there, and I got warning after warning !o)
I will stay on Fermi cards until I am able to change my workstation... so at least half a year at best!
I would never have thought that I would create such big scenes, but I love scenes with life in them, and Octane produces such vivid images that I can't stop building larger and larger... and that requires a lot of optimization!
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.