
Re: [Openexr-devel] Deep data API: another batch of questions.


From: Peter Hillman
Subject: Re: [Openexr-devel] Deep data API: another batch of questions.
Date: Tue, 29 Apr 2014 20:28:10 +1200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0

The only standard to follow is to write samples according to the "InterpretingDeepPixels" document; in particular, you should take care that the "flattening" procedure it describes yields the correct RGBA image.

Beyond that specification, there are no official recommendations about how many samples you should write, or how and when to merge them. It makes sense to anticipate what operations might be applied to the deep image before it is flattened, including merging with other deep images, and to ensure there is enough information for those operations to work as expected without storing too many samples. Since it is hard for software developers to anticipate how images will be used, tools should provide options for end users to control the output.
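As a rough illustration of the flattening constraint (this is a sketch, not the official procedure from "InterpretingDeepPixels"; the struct and function names are made up): for non-overlapping, premultiplied samples, flattening amounts to sorting on depth and compositing front-to-back with the "over" operator.

```cpp
#include <algorithm>
#include <vector>

// Illustrative only: one deep sample with premultiplied colour.
struct DeepSample { float z, r, g, b, a; };

// Flatten one deep pixel: sort samples by depth, then composite
// front-to-back with "over".  Overlapping (volumetric) samples need
// the extra splitting/merging steps described in the document;
// this sketch ignores them.
void flattenPixel (std::vector<DeepSample>& samples,
                   float& R, float& G, float& B, float& A)
{
    std::sort (samples.begin(), samples.end(),
               [] (const DeepSample& x, const DeepSample& y)
               { return x.z < y.z; });

    R = G = B = A = 0.f;
    for (const DeepSample& s : samples)
    {
        R += (1.f - A) * s.r;   // premultiplied colour, so no extra
        G += (1.f - A) * s.g;   // multiply by the sample's own alpha
        B += (1.f - A) * s.b;
        A += (1.f - A) * s.a;
    }
}
```

Whatever merging strategy a renderer chooses for anti-aliasing samples, the result this procedure produces should match the intended flat RGBA image.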




On 29/04/14 19:27, Michel Lerenard wrote:
Thanks a lot, that did the trick.

I got my export code working, using both Tiled and Scanline images, depending on user choice.

I still have one question, which is more related to deep data than to OpenEXR itself: how should I write anti-aliasing sample values? I've seen that Houdini writes all anti-aliasing samples into the same data stack, and has an option to merge values. Is that the standard way to go (if there is one...)?

Thanks again for your help.

On 04/22/2014 09:51 AM, Peter Hillman wrote:
You can modify an existing FrameBuffer/DeepFrameBuffer object, using something like this (untested!) code:

outputfile.setFrameBuffer(...);
outputfile.writePixels(32);   // writes scanlines 0..31

/* later, after refilling the same buffer with scanlines 32..63 */

// Shift each slice's base pointer back by 32 rows, so that
// base + yStride * currentScanLine() still lands inside the reused buffer.
FrameBuffer myFrameBuffer = outputfile.frameBuffer();
myFrameBuffer["R"].base -= myFrameBuffer["R"].yStride*32;
myFrameBuffer["G"].base -= myFrameBuffer["G"].yStride*32;
myFrameBuffer["B"].base -= myFrameBuffer["B"].yStride*32;
outputfile.setFrameBuffer(myFrameBuffer);
outputfile.writePixels(32);   // writes scanlines 32..63


Just don't forget to call setFrameBuffer. Admittedly there's overhead, but not significant compared to the cost of writePixels.


On 22/04/14 19:42, Michel Lerenard wrote:
I had a look at the sources and the docs. I misunderstood your post yesterday: I thought you meant there was a way to modify the pointers of a framebuffer. Creating a new framebuffer and inserting new DeepSlices for every batch of lines was exactly what I was trying to avoid.
But reading your messages I guess there's no other option.

On 04/21/2014 12:19 AM, Peter Hillman wrote:
You will need to call setFrameBuffer before every call to writePixels, as you need to update the frame pointers.
The pointer you pass to Slice/DeepSlice is the memory location of pixel (0,0) in the image. This point will move in memory as you update your memory block with different scanlines.

Your first call is probably doing the right thing. For each subsequent call you need to set up a new FrameBuffer with yStride*currentScanLine() subtracted from the base pointer, where currentScanLine() is the y offset of the first scanline you are writing.

The library will only access the memory locations it needs to for writePixels() - there's no problem in passing an "illegal address" as a base pointer to setFrameBuffer, as long as (base+yStride*currentScanLine() + dataWindow.min.x*xStride) is always a valid location when writePixels() is called.

The above holds for xSampling=1 and ySampling=1; otherwise you may need to adjust the logic accordingly.
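To make the pointer arithmetic concrete, here is a small standalone sketch of the bookkeeping (the helper names are illustrative, not the OpenEXR API): the base handed to a slice is the address pixel (0,0) would have, so a reusable buffer that actually holds scanlines starting at firstScanLine needs the base shifted back by firstScanLine * yStride.

```cpp
#include <cstddef>

// Illustrative helpers, not OpenEXR API.  "base" may be an "illegal"
// address, as long as every row actually written falls inside the
// real buffer.
char* sliceBase (char* batchBuffer, int firstScanLine, std::size_t yStride)
{
    // Map logical row firstScanLine onto row 0 of the reusable buffer.
    return batchBuffer - (std::ptrdiff_t) firstScanLine * yStride;
}

// The address the library computes when writing row y
// (assuming xSampling == 1 and ySampling == 1):
char* rowAddress (char* base, int y, std::size_t yStride)
{
    return base + (std::ptrdiff_t) y * yStride;
}
```

With a 32-scanline buffer, `rowAddress (sliceBase (buf, 32, stride), 32, stride)` lands back on `buf`, which is exactly what the subtraction in the earlier FrameBuffer example achieves.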


On 19/04/14 21:18, Lerenard Michel wrote:
Hi,

Still trying to write deep data images, I'm struggling a bit with FrameBuffers.
As I need to write subsampled deep images, I cannot use Tiled images. I went for the scanline approach. My idea was to write batches of n scanlines, in increasing Y order.

This way I was thinking I would be able to limit the memory footprint:
OpenEXR would not need to cache data, and I would be able to reuse the same buffers for every batch of lines: one buffer for Z and one for each visible channel.

So I created a bunch of buffers, each sized image width * 32 scanlines (an arbitrary value). I planned to feed these buffers to the DeepSlices I added to the FrameBuffer.
Thing is, it appears the FrameBuffer/Slices cannot work that way: they need to have memory allocated for the whole image. I couldn't find any function limiting / defining the region I want to work on.

Here are my questions:
- Is the statement above correct?
- Should I work differently? It doesn't look like using several framebuffers would help; I'm out of ideas at the moment.

I can explain my process in more detail if that helps.


Thanks

Michel


_______________________________________________
Openexr-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/openexr-devel





