No, not a big problem, I just wanted to make sure it was required. I'm not yet very deep (excuse the pun) in the "comp" part of deep comp -- I've written the renderer part, and know it's outputting sorted, non-overlapping samples. As I write various reading software, I'm just trying to make sure that the added complexity of handling messy files (which I'm not generating) has a justification.

On Oct 29, 2013, at 4:43 PM, Christopher Horvath <address@hidden> wrote:

Hey Larry,
It's common to pre-comp work you're doing while compositing, or to work in stages. Not requiring a deep pixel (in Nuke, or in EXR2) to be "tidy" makes merging deep pixels trivial and keeps the amount of data significantly lower. By way of example, look at how the deep pixels in the paper carry more data points after being tidied. Peter Hillman had some specific examples showing how tidied volume renders became significantly larger than their untidied versions. Since the major downside, or limitation, of deep workflows is their increased data usage, every step that can prevent additional data bloat is one we should take!
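To make the trade-off concrete, here is a hedged sketch, not the OpenEXR or Nuke API: a deep pixel is modeled as a plain list of (z_front, z_back, alpha) tuples, and the function names (merge_untidy, tidy) are my own. It shows why merging untidy pixels is trivial, while tidying splits volumetric samples at every overlapping boundary and so can grow the sample count:

```python
# Hypothetical model, not a real OpenEXR API: a deep pixel is a list of
# (z_front, z_back, alpha) samples; volumetric samples have z_back > z_front.
from itertools import groupby

def merge_untidy(pixel_a, pixel_b):
    # Because pixels need not be tidy, merging is trivial concatenation:
    # no sorting or sample-splitting is required at merge time.
    return pixel_a + pixel_b

def tidy(pixel):
    # Tidying: split each volumetric sample at any boundary falling inside
    # it, then merge samples that share an identical depth range.
    boundaries = sorted({z for zf, zb, _ in pixel for z in (zf, zb)})
    split = []
    for zf, zb, a in sorted(pixel):
        if zf == zb:                      # point sample: nothing to split
            split.append((zf, zb, a))
            continue
        edges = [zf] + [z for z in boundaries if zf < z < zb] + [zb]
        for z0, z1 in zip(edges, edges[1:]):
            # Exponential absorption model: a sub-segment covering a
            # fraction of the sample carries a fraction of its density.
            frac = (z1 - z0) / (zb - zf)
            split.append((z0, z1, 1.0 - (1.0 - a) ** frac))
    merged = []
    for (z0, z1), group in groupby(sorted(split), key=lambda s: s[:2]):
        a = 0.0
        for _, _, ga in group:
            a += ga * (1.0 - a)           # composite coincident samples "over"
        merged.append((z0, z1, a))
    return merged
```

Merging two overlapping one-sample pixels yields an untidy pixel of two samples, but tidying the result splits them into three; on real volume renders that growth is far larger, which is why deferring tidying pays off.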
Given that the paper provides a performant code sample that will tidy and merge samples, one that client applications can use directly, is this a major concern?
Chris