From: Thorsten Kaufmann
Subject: Re: [Openexr-devel] Slow deep exr
Date: Tue, 30 May 2017 07:28:39 +0000
Hey Peter,

Thanks for the in-depth feedback. I was not aware that there can be both flat and deep data in a single image. Are you aware of any implementations that I could use to give this a try? E.g. does Nuke read these correctly, and is there a way to generate such images? I have a very strong use case for this, depending on how well it works.

Cheers,
Thorsten

---
Mackevision Medien Design GmbH
T +49 711 93 30 48 661
address@hidden
Geschäftsführer: Armin Pohl, Joachim Lincke, Jens Pohl
---

From: Openexr-devel [mailto:openexr-devel-bounces+address@hidden
On Behalf Of Peter Hillman

Hi Gonzalo,

Apologies for the confusion in my reply. Yes, this file only contains deep image data, not flat. If you use the standard EXR API to read the file, it will composite the samples together to provide a representation of the image (see ImfCompositeDeepScanLine.h in the OpenEXR source for more details). This is exactly what's happening in your viewer under the hood: all the deep RGBA data is being composited to give you a single "beauty" image representation. This compositing operation is slow because it has to process every deep sample in every scanline before it can be output to your framebuffer.

Storing deep and flat within the same file is possible and supported. In my previous reply I assumed this was exactly how this file had been written, with the deep and flat parts representing the same data. The main reason to do this is for speed: the deep and flat parts would represent the same data, but the flat part is much faster to read when deep samples are not required. Since the deep file is already quite large, storing the flat part in the file as well is probably justifiable. The flat image would be stored as part 0, so an image viewer would read that by default, and ignore the deepscanline representation of the same channels in part 1.

A viewer might offer a "deep image mode" to read the deep instead of the flat. In that case it would likely use the dedicated deep pixel API in OpenEXR so it can generate useful analytical data such as Thorsten's sample count visualisation or a 2.5D/3D representation of the data.

As Thorsten suggests, it may be a more common practice to write that non-deep representation of the data as a completely separate EXR file.
There are pros and cons to doing this: although this doesn't take more disk space than a single combined file, and makes it easy to remove the deep image if it is later deemed unnecessary, it leads to more files in the system and possible confusion about which deep image goes with which flat one, particularly if only one of the files is later overwritten.

Peter

On 30/05/17 02:37, Thorsten Kaufmann wrote: