Texture_Overload

GeoPappas
Licensed Customer
Posts: 429
Joined: Fri Mar 26, 2010 5:31 pm

ROUBAL wrote: As each compression adds new artifacts, if you save an image in jpg after working on it, the uncompressed image will be bigger after each new compression!
I don't believe that is correct.

The size in memory of an uncompressed image depends on three things: width, height, and color depth (bits per pixel). For example, an image of 4000 x 3000 pixels at 24 bits/pixel will take up 36,000,000 bytes of memory. It doesn't matter whether the image is all black, all white, a black-to-white gradient, or a mixture of all 16 million colors.

Compressing an image on disk only saves disk space. Some compression formats are lossy (part of the image data is discarded during compression), such as JPG, and some are lossless, such as PNG. But once the image is decompressed it occupies the same amount of memory no matter how it was stored, as long as the color depth (bits/pixel) is the same.
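A quick sanity check of that arithmetic (a minimal Python sketch; the figures are just the example values from the post):

```python
def uncompressed_size_bytes(width, height, bits_per_pixel):
    """Raw bitmap size: one fixed-size value per pixel, regardless of content."""
    return width * height * bits_per_pixel // 8

# The example above: 4000 x 3000 pixels at 24 bits/pixel.
print(uncompressed_size_bytes(4000, 3000, 24))  # 36000000 bytes (~34.3 MiB)
```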
User avatar
matej
Licensed Customer
Posts: 2083
Joined: Fri Jun 25, 2010 7:54 pm
Location: Slovenia

If using compressed images in Octane were that easy or feasible to implement, Radiance would probably already be working on it. Instead of just a simple memory lookup, the decompression algorithm would need to run every time some pixel info is needed in the pipeline - and that is width * height * no. of times a texture appears in the calculation * no. of textures, per refresh cycle. My educated guess is that the speed impact would be far from minimal.
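To put rough numbers on that formula (a back-of-the-envelope Python sketch; all the scene figures below are hypothetical):

```python
# Hypothetical scene: if textures lived compressed in VRAM, every texel
# fetch would need a decode step instead of a plain memory lookup.
width, height   = 1920, 1080   # render resolution
num_textures    = 50           # textures used by the scene
avg_appearances = 4            # avg. times each texture is sampled per pixel

fetches_per_refresh = width * height * num_textures * avg_appearances
print(f"{fetches_per_refresh:,} texel fetches per refresh cycle")
# -> 414,720,000 potential decode operations per refresh, versus zero
#    extra work when textures are stored uncompressed.
```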

The whole node-based-materials thing is there (also) to save memory on texture data, which will become even easier once the new features are implemented.
**STK** wrote:Yep, try to be more creative
Exactly
SW: Octane 3.05 | Linux Mint 18.1 64bit | Blender 2.78 HW: EVGA GTX 1070 | i5 2500K | 16GB RAM Drivers: 375.26
cgmo.net
**STK**
Licensed Customer
Posts: 16
Joined: Sat May 29, 2010 12:04 am

OK, let's see...
Last edited by **STK** on Fri Apr 01, 2011 5:14 pm, edited 1 time in total.
Dual Xeon E5540 CPU 2.53 - 1x GTX470 - win7 64 -12GB ram
**STK**
Licensed Customer
Posts: 16
Joined: Sat May 29, 2010 12:04 am

OK, Mr. Knowledge, here are some extreme thoughts from people who want to see further than the end of their nose...
Attachments
10.1.1.37.100.rar
(437.63 KiB) Downloaded 147 times
Dual Xeon E5540 CPU 2.53 - 1x GTX470 - win7 64 -12GB ram
User avatar
matej
Licensed Customer
Posts: 2083
Joined: Fri Jun 25, 2010 7:54 pm
Location: Slovenia

**STK** wrote:OK, Mr. Knowledge, here are some extreme thoughts from people who want to see further than the end of their nose...
Since you obviously* read the paper you posted, can you point me to the section that talks about using compressed textures for rendering by decompressing them on the fly? :P

And could you explain how you know that:

=> the techniques described in the paper are relevant to a GPU renderer such as Octane
=> [if the above holds] the techniques described in the paper are applicable to and implementable on GPU architectures and CUDA
=> [if the above holds] Refractive Software is willing to throw money and time at inventing & implementing compression algorithms, instead of at features like nodes, render kernels & memory optimizations on geometry
=> [if the above holds] the users are willing to wait another year for Octane 1.0 because of this

(* that paper is not applicable to the problem in this thread)

---

On topic:
The devs are implementing instances, which will in some cases save a lot of memory. For everything else there is Masterc... er, nodes - which in some cases already allow you to create full-fledged materials (diffuse, specular, bump) from just one grayscale texture, or even without textures at all (see the sketch below).
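The idea behind that, sketched generically in Python/NumPy (the channel mappings here are purely illustrative, not Octane's actual node graph):

```python
import numpy as np

gray = np.random.rand(512, 512)  # stand-in for a single grayscale texture in memory

# Derive three material channels from the one texture with cheap remappings:
diffuse  = np.stack([gray * 0.8, gray * 0.6, gray * 0.4], axis=-1)  # tinted colour
specular = np.clip(gray * 1.5, 0.0, 1.0)                            # boosted highlights
bump     = gray                                                     # reused as a height map
# One texture's worth of memory, three material inputs.
```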

EDIT: can anyone point me to a renderer that uses compressed textures and decompresses them on the fly during rendering?
Last edited by matej on Fri Apr 01, 2011 6:03 pm, edited 1 time in total.
SW: Octane 3.05 | Linux Mint 18.1 64bit | Blender 2.78 HW: EVGA GTX 1070 | i5 2500K | 16GB RAM Drivers: 375.26
cgmo.net
User avatar
ROUBAL
Licensed Customer
Posts: 2199
Joined: Mon Jan 25, 2010 5:25 pm
Location: FRANCE
Contact:

@GeoPappas: you are right:
As each compression adds new artifacts, if you save an image in jpg after working on it, the uncompressed image will be bigger after each new compression!
I meant to say "the compressed image will be bigger after each new compression".

I agree that a BMP image will always have the same size, but even though PNG is lossless, a PNG converted from a JPG should be bigger than a PNG converted from a native BMP, because the JPG artifacts don't compress as well. At least, that is what my own tests showed.
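That comparison is easy to repeat with Pillow (a hedged sketch; "photo.bmp" stands in for any uncompressed source image, and the exact sizes depend on its content):

```python
import os
from PIL import Image

src = Image.open("photo.bmp").convert("RGB")    # clean, uncompressed source
src.save("from_bmp.png")                        # lossless PNG of the original pixels

src.save("lossy.jpg", quality=75)               # introduces JPEG artifacts
Image.open("lossy.jpg").save("from_jpg.png")    # lossless PNG of the artifacted pixels

print(os.path.getsize("from_bmp.png"), os.path.getsize("from_jpg.png"))
# The second PNG is typically larger: JPEG block noise compresses poorly.
```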
Last edited by ROUBAL on Fri Apr 01, 2011 6:03 pm, edited 1 time in total.
French Blender user - CPU : intel Quad QX9650 at 3GHz - 8GB of RAM - Windows 7 Pro 64 bits. Display GPU : GeForce GTX 480 (2 Samsung 2443BW-1920x1600 monitors). External GPUs : two EVGA GTX 580 3GB in a Cubix GPU-Xpander Pro 2. NVidia Driver : 368.22.
User avatar
dave62
Licensed Customer
Posts: 310
Joined: Tue Oct 05, 2010 6:00 pm

GeoPappas wrote:
ROUBAL wrote: As each compression adds new artifacts, if you save an image in jpg after working on it, the uncompressed image will be bigger after each new compression!
I don't believe that is correct.

The size in memory of an uncompressed image depends on three things: width, height, and color depth (bits per pixel). For example, an image of 4000 x 3000 pixels at 24 bits/pixel will take up 36,000,000 bytes of memory. It doesn't matter whether the image is all black, all white, a black-to-white gradient, or a mixture of all 16 million colors.

Compressing an image on disk only saves disk space. Some compression formats are lossy (part of the image data is discarded during compression), such as JPG, and some are lossless, such as PNG. But once the image is decompressed it occupies the same amount of memory no matter how it was stored, as long as the color depth (bits/pixel) is the same.
I think ROUBAL is somewhat right if we're talking about file format conversion. In detail, we have to keep in mind that JPG is not lossless: after decompressing a JPG, a lot of pixels have changed from their "original" values, usually in a way that makes any subsequent compression algorithm less effective when the image is saved again. Conversion between lossless formats probably wouldn't increase the image size.
But of course, loading a JPG or a lossless PNG into Octane wouldn't make any difference to the memory usage.
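A sketch of that generation loss with Pillow (the iteration count and quality setting are arbitrary, and "photo.bmp" is again a placeholder for any uncompressed source image):

```python
from PIL import Image

img = Image.open("photo.bmp").convert("RGB")
for generation in range(10):
    img.save("gen.jpg", quality=75)              # each save re-quantizes the pixels
    img = Image.open("gen.jpg").convert("RGB")   # decode: values drift from the original

# The accumulated artifacts are noise-like, so a lossless save of the result
# tends to be larger than a lossless save of the untouched original.
img.save("generation_10.png")
```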
- Mint 10 64bit nvidia drv 260.19.29/cudatoolkit3.0 intel q6600, 4gbRAM, GTX470 1,2GB
- Mint 10 64bit/ Win7 64bit nvidia drv 260.19.29/cudatoolkit3.2 amd X6, 16gbRAM, 2x GTX580 3GB
->Octane 2.44/ Blender2.5x
GeoPappas
Licensed Customer
Posts: 429
Joined: Fri Mar 26, 2010 5:31 pm

matej wrote: EDIT: can anyone point me to a renderer that uses compressed textures and decompresses them on the fly during rendering?
I don't think that you will find one. But that is because the vast majority of renderers out there are CPU-based engines, which don't have memory limit issues (when running on a 64-bit OS).

But GPU-based rendering is a different matter. Memory limits do come into focus for many people who load complex meshes, lots of textures, or highly detailed HDRI backgrounds. This is one of the major complaints against GPU rendering.

So I thought that it might be worth looking into.

There was another topic that discussed trying to bring the masses from the DAZ world over to Octane, but someone pointed out that many DAZ scenes contain lots of textures that would need gobs of memory. If Octane could get a handle on reducing memory usage, it might win a much larger customer base (which would bring in more money for development).
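For a sense of scale (a rough sketch; the texture count and sizes are hypothetical, but in the ballpark of a texture-heavy DAZ scene):

```python
def texture_vram_mb(count, width, height, bytes_per_texel=4):
    """Approximate VRAM for uncompressed 8-bit RGBA textures (mipmaps ignored)."""
    return count * width * height * bytes_per_texel / 1024**2

# e.g. 60 textures at 4096 x 4096:
print(f"{texture_vram_mb(60, 4096, 4096):.0f} MB")
# -> 3840 MB, far beyond the ~1.25 GB on a GTX 470.
```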
colorlabs
Licensed Customer
Posts: 209
Joined: Sun Aug 08, 2010 2:53 pm

I did some experiments with VRAM and image size here:

http://www.refractivesoftware.com/forum ... f=9&t=5059
Mac OS X 10.8.0 | ASUS GTX 580 1.5GB | MSI GTX 470 | ATI Radeon 6870 | Core i7 2.9Ghz | 16GB
User avatar
abstrax
OctaneRender Team
Posts: 5506
Joined: Tue May 18, 2010 11:01 am
Location: Auckland, New Zealand

To cut a long story short: we don't see a way to use compressed textures in Octane without making it slower than a standard CPU renderer.

Another problem: even if we used compressed textures, people would just throw even larger textures at Octane, which would result in the same problem again. -> You really have to be conscious about your usage of textures and geometry, and I think it will stay like that for quite a while.

Anyway, one thing I really don't understand: just 3-4 years ago almost no PC had more than 2GB of RAM or a 64-bit OS, yet people were rendering away happily. Why is that not possible anymore?

Cheers,
Marcus
In theory there is no difference between theory and practice. In practice there is. - Yogi Berra