andw wrote:
That's great! This will save about 66% of the system memory used for HDR textures compared to previous versions.
Just tested it by loading (not rendering) a real scene with large HDRIs in Environment nodes (two at 16K x 8K and two at 15K x 7.5K, all 96-bit):

Code: Select all
3.06.2        10.2 GB Working set
3.07 TEST 4    6.4 GB Working set

But now I have a question: are there any losses in terms of quality?

There shouldn't be any when the images are used as textures in rendering, since textures are uploaded to the GPU only as half float anyway. The only exception I can think of would be displacement mapping, but even there the difference between single float (32 bits per channel) and half float (16 bits per channel) should be negligible.
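To put a rough number on "negligible", here is a minimal numpy sketch (an editor's illustration, not OctaneRender code) that round-trips a million synthetic HDR radiance values through half float and measures the relative error introduced by the 16-bit representation. With half float's 11-bit significand, the worst-case relative error stays around 0.05%.

Code: Select all
# Illustration only: precision lost when a 32-bit float HDR value is
# rounded to a 16-bit half float (the format textures are uploaded in).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HDR radiance values spanning a wide dynamic range (0.001 .. 1000).
values32 = np.float32(10.0) ** rng.uniform(-3.0, 3.0, size=1_000_000).astype(np.float32)

# Round trip through half float, standing in for the GPU texture format.
values16 = values32.astype(np.float16).astype(np.float32)

rel_err = np.abs(values16 - values32) / values32
print(f"max relative error:  {rel_err.max():.2e}")   # ~4.9e-4, i.e. 2**-11
print(f"mean relative error: {rel_err.mean():.2e}")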

andw wrote:
The image's parameters are displayed as '16384x8192 pixels, 96-bit RGB HDR, 1048576 KB'.
But in the 'Import settings' dialog I see:
HDR texture bit depth: 16-bit float (the default combo box selection)
Source file format: 96-bit RGB HDR
Loaded as: 64-bit RGBA HDR
Could you clarify what these values mean?

96-bit HDR means the image file stores 3 channels (RGB) with 32 bits per channel, i.e. 3 x 32 bit = 96 bit per pixel. After loading, the image is stored with 4 channels (RGBA) at 16 bits per channel, i.e. 4 x 16 bit = 64 bit per pixel. On paper that doesn't look like a massive reduction in memory usage, but since RGB images are always loaded as RGBA images, they actually occupied 128 bit per pixel when loaded with single-float precision (32 bits per channel). On top of that, they still need to be converted to half float before being uploaded to the GPU, and the half-float result is cached so we don't have to redo the conversion every time the HDR image is uploaded again. This means that in the past we stored the image twice: once at 128 bit per pixel and once at 64 bit per pixel. Now we store it only once, at 64 bit per pixel, if it is loaded with 16 bits per channel.
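As a quick sanity check on that arithmetic (an editor's illustration, not Octane code), the sketch below computes the footprints for the 16384x8192 image from the question. The half-float RGBA copy comes out at 1,048,576 KB, which matches the size shown in the image parameters, and dropping the single-float RGBA copy that used to be kept alongside it saves roughly two thirds of the memory, i.e. the ~66% mentioned earlier.

Code: Select all
# Illustration only: in-memory footprint of a 16384x8192 RGB HDR image
# under the storage schemes described above.
WIDTH, HEIGHT = 16384, 8192
PIXELS = WIDTH * HEIGHT

def footprint_kib(channels: int, bits_per_channel: int) -> float:
    """KiB needed when every pixel stores `channels` x `bits_per_channel`."""
    return PIXELS * channels * bits_per_channel / 8 / 1024

file_96bit  = footprint_kib(3, 32)  # 3 x 32 bit RGB, as stored in the file
rgba_single = footprint_kib(4, 32)  # RGBA single float (old in-memory copy)
rgba_half   = footprint_kib(4, 16)  # RGBA half float (cached copy for the GPU)

print(f"96-bit RGB file data: {file_96bit:12,.0f} KB")
print(f"RGBA single float:    {rgba_single:12,.0f} KB")
print(f"RGBA half float:      {rgba_half:12,.0f} KB")  # 1,048,576 KB, as in the UI

old_total = rgba_single + rgba_half   # both copies kept in older versions
new_total = rgba_half                 # only the half float copy now
print(f"old vs new total:     {old_total:,.0f} KB -> {new_total:,.0f} KB "
      f"({1 - new_total / old_total:.0%} saved)")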

andw wrote:
Should I change that setting to 'Automatic' to load the image without any losses, or will the differences be negligible?

I would leave it at 16 bit.