Disable Sample Transform decoding by default #3018
Conversation
maryla-uc left a comment
I think that the addition of AVIF_IMAGE_CONTENT_SAMPLE_TRANSFORMS is a great change, but I'm not convinced about the avifdec flag.
Although libavif only supports "recipes" for using sato for bit depth extension, I think that the decoding code supports other use cases. Tying sample transforms decoding to bit depth does not seem very future-proof. Why not just add a separate flag?
wantehchang left a comment
I agree that returning a 16-bit output image is an incompatible change, so users need to opt in to this new feature. Thank you for thinking of this issue!
```c
// Mostly used for bit depth extensions to go beyond the underlying codec capability
// (e.g. 16-bit AVIF). Not part of AVIF_IMAGE_CONTENT_ALL as this is a rare use case.
// Has no effect without AVIF_IMAGE_CONTENT_COLOR_AND_ALPHA.
AVIF_IMAGE_CONTENT_SAMPLE_TRANSFORMS = (1 << 3),
```
I have two comments on this.

- It may be better to control this with a new boolean field in the avifDecoder struct. Since sample transforms also produce color and alpha, sample transforms are not a new content type.
- It may be better to restrict this control to bit depth extensions only rather than all sample transforms, if we can determine the output bit depth without applying the sample transforms. (Maryla also said this in her review comment.) I think we can get the bit depth from the 'pixi' box of the 'sato' derived image item, right?

In summary, please consider exposing this control as a new avifBool enableExtendedBitDepths field in the avifDecoder struct.
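For illustration, here is a minimal sketch of how a client might use the suggested opt-in. `enableExtendedBitDepths` is only the name proposed in this review and does not exist in libavif, so that line is commented out; the file name is a placeholder.

```c
#include <stdio.h>
#include "avif/avif.h"

int main(void) {
    avifDecoder * decoder = avifDecoderCreate();
    if (!decoder) return 1;
    // Proposed opt-in from this review; the field does not exist in libavif today,
    // so the line below is illustrative only.
    // decoder->enableExtendedBitDepths = AVIF_TRUE;
    if (avifDecoderSetIOFile(decoder, "extended_depth.avif") == AVIF_RESULT_OK &&
        avifDecoderParse(decoder) == AVIF_RESULT_OK &&
        avifDecoderNextImage(decoder) == AVIF_RESULT_OK) {
        // Under the proposal, depths above 12 would only be returned when opted in.
        printf("decoded bit depth: %u\n", decoder->image->depth);
    }
    avifDecoderDestroy(decoder);
    return 0;
}
```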
Thank you for the review.

- I disagree. Gain maps complement the primary color image by adding color information. Bit depth extensions, and thus Sample Transforms, also extend the color information in a similar way. The dimension is just bit depth instead of brightness, if I may say so.
  Also, avifImageContentTypeFlags is a bit mask rather than an enum, meaning the different avifImageContentTypeFlag values are orthogonal concepts that can each be toggled on or off, rather than distinct types.
  Each avifImageContentTypeFlag maps to a separate image item in AVIF v1 (primary color item, auxiliary alpha, derived gain map, derived sample transform).
- As of today, libavif applies any Sample Transform formula with at most 2 input items. Also, there could be plenty of different formulas for extending the bit depth, not only the few that are available at encoding right now. I do not think it is easy to precisely detect whether the decoded formula is a bit depth extension, and only that.

Maryla's comment "Tying sample transforms decoding to bit depth" was specifically about avifdec's --depth flag. I am not sure this is what you meant.
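As a concrete illustration of the bit-mask approach, here is a minimal sketch of how a client would opt in with the flag added by this PR, assuming the flag is OR'ed into avifDecoder::imageContentToDecode as the diff comment suggests; the file name is a placeholder.

```c
#include <stdio.h>
#include "avif/avif.h"

int main(void) {
    avifDecoder * decoder = avifDecoderCreate();
    if (!decoder) return 1;
    // Opt in to Sample Transform decoding on top of the default content.
    // AVIF_IMAGE_CONTENT_SAMPLE_TRANSFORMS is the flag added by this PR; it is not
    // part of AVIF_IMAGE_CONTENT_ALL, so it has to be requested explicitly.
    decoder->imageContentToDecode |= AVIF_IMAGE_CONTENT_SAMPLE_TRANSFORMS;
    if (avifDecoderSetIOFile(decoder, "bit_depth_extended.avif") == AVIF_RESULT_OK &&
        avifDecoderParse(decoder) == AVIF_RESULT_OK &&
        avifDecoderNextImage(decoder) == AVIF_RESULT_OK) {
        // With the opt-in, decoder->image->depth may now exceed the primary item's
        // coded depth (e.g. 16).
        printf("decoded bit depth: %u\n", decoder->image->depth);
    }
    avifDecoderDestroy(decoder);
    return 0;
}
```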
I am commenting on the change to the public header include/avif/avif.h.
If a 'sato' derived image item has one 8-bit AV1 image item and adds the constant 0 to the input image item, the 'sato' derived image item should have a 'pixi' box that indicates bit depth 8. Current clients of libavif can handle this output image. This is why we only need to disable by default the 'sato' derived image items whose 'pixi' box indicates a bit depth other than 8, 10, 12.
Since the output image is still represented with the avifImage struct, it seems that the only field in avifImage that may catch existing clients by surprise is the depth field. Are there other kinds of 'sato' derived image items that produce an output avifImage that current clients of libavif cannot handle?
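To make that criterion concrete, here is a rough sketch of the kind of check being suggested; the helper name and where it would be called from are hypothetical, not existing libavif code.

```c
#include "avif/avif.h"

// Hypothetical helper: a 'sato' derived image item whose 'pixi' box advertises one of
// these depths produces an avifImage that existing clients already handle, so applying
// it by default would not surprise them depth-wise.
static avifBool satoPixiDepthIsLegacyCompatible(uint32_t pixiDepth) {
    return (pixiDepth == 8 || pixiDepth == 10 || pixiDepth == 12) ? AVIF_TRUE : AVIF_FALSE;
}
```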
> Current clients of libavif can handle this output image.
Yes. The question is: should it still be applied?
With this specific example (noop 0 addition), it is obviously better NOT to enable Sample Transforms, as it would just make decoding slower for the same result.
> Are there other kinds of 'sato' derived image items that produce an output avifImage that current clients of libavif cannot handle?
I do not think so. But that does not mean applying them is beneficial.
libavif only supports Sample Transform derived image items that are NOT the primary item, and in the same altr group as the primary item. So there is always an alternative, and the primary item should be the default output of libavif in my opinion.
Advanced users can enable Sample Transform decoding to extract more information when needed.
I thought the only concern was breaking the clients unprepared to handle new bit depths. I did not realize the cost incurred by applying sample transforms was also a concern.
We need to provide guidance on whether libavif clients should enable sample transforms. Besides the two concerns mentioned above, are there other concerns?
I think we should constrain the new bit depths that libavif supports and document what they are. My code analysis shows that libavif malfunctions if decoder->data->meta->sampleTransformDepth > 16, so we need to at least disallow bit depths > 16.
Note: You can ignore the rest of this comment.
Re: AVIF_IMAGE_CONTENT_SAMPLE_TRANSFORMS vs. enableSampleTransforms: In your interpretation of avifImageContentTypeFlags based on the image item type, AVIF_IMAGE_CONTENT_SAMPLE_TRANSFORMS makes sense. I have a different interpretation of avifImageContentTypeFlags based on the output destination:
- AVIF_IMAGE_CONTENT_COLOR_AND_ALPHA: the output goes to decoder->image
- AVIF_IMAGE_CONTENT_GAIN_MAP: the output goes to decoder->image->gainMap->image
When sample transforms are enabled, the output still goes to decoder->image. Therefore sample transforms, unlike gain maps, do not behave like a new image content type.
One more thing: avifDecoderSampleTransformItemValidateProperties() performs the following check:
```c
for (uint8_t i = 0; i < pixiProp->u.pixi.planeCount; ++i) {
    if (pixiProp->u.pixi.planeDepths[i] != pixiProp->u.pixi.planeDepths[0]) {
        avifDiagnosticsPrintf(diag,
                              "Item ID %u of type '%.4s' has different depths specified by pixi property [%u, %u], this is not supported",
                              item->id,
                              (const char *)item->type,
                              pixiProp->u.pixi.planeDepths[0],
                              pixiProp->u.pixi.planeDepths[i]);
        return AVIF_RESULT_NOT_IMPLEMENTED;
    }
}
```
This check is redundant because avifParsePixelInformationProperty() already performs the same check.
I wrote:
> Possible solution: We can change avifDecoderSampleTransformItemValidateProperties() to disallow pixiProp->u.pixi.planeDepths[0] > 16.
Your PR #3038 disallows bit depths > 16 in avifParsePixelInformationProperty(). That is stricter and also works.
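For reference, a sketch of the kind of guard discussed above, reusing the identifiers from the snippet quoted earlier; the actual fix in #3038 lives in avifParsePixelInformationProperty() and may differ in wording and error code.

```c
// Sketch only: reject bit depths that the sample transform code cannot represent.
if (pixiProp->u.pixi.planeDepths[0] > 16) {
    avifDiagnosticsPrintf(diag,
                          "Item ID %u of type '%.4s' has pixi depth %u, but depths above 16 are not supported",
                          item->id,
                          (const char *)item->type,
                          pixiProp->u.pixi.planeDepths[0]);
    return AVIF_RESULT_NOT_IMPLEMENTED;
}
```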
> This check is redundant because avifParsePixelInformationProperty() already performs the same check.
Done in #3040
> This bypasses the bit depth check in avifImageCreate() that you mentioned.

Not really, because it goes through lines 6846 to 6855 in 0812f44:
dstImage->depth is likely 8 or 12 because it contains the primary coded image item. decoder->data->meta->sampleTransformDepth is 16 (or higher). So the condition is true, and avifImageCreate() is called with sampleTransformDepth bits. For sampleTransformDepth > 16, avifImageCreate() returns null and avifDecoderApplySampleTransform() returns the wrong AVIF_RESULT_OUT_OF_MEMORY error code.
If I understand correctly, the only way to bypass this would be for the user to explicitly set avifDecoder::image::depth to exactly sampleTransformDepth after parsing/decoding the primary item but before avifDecoderApplySampleTransform() is called.
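A condensed sketch of the flow being described, paraphrased from the referenced lines; the variable names and surrounding structure are approximate, not a verbatim quote of the libavif source.

```c
// Inside avifDecoderApplySampleTransform(), approximately:
if (dstImage->depth != decoder->data->meta->sampleTransformDepth) {
    // The decoded primary item (8 or 12 bits) is swapped for an image allocated at the
    // sample transform depth. For sampleTransformDepth > 16, avifImageCreate() fails,
    // so the function reports AVIF_RESULT_OUT_OF_MEMORY even though memory is not the
    // actual problem.
    avifImage * dst = avifImageCreate(dstImage->width,
                                      dstImage->height,
                                      decoder->data->meta->sampleTransformDepth,
                                      dstImage->yuvFormat);
    if (!dst) {
        return AVIF_RESULT_OUT_OF_MEMORY;
    }
    // ... copy metadata, allocate planes, and apply the transform into dst ...
}
```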
> Your PR #3038 disallows bit depths > 16 in avifParsePixelInformationProperty(). That is stricter and also works.
I consider this thread resolved.
Add AVIF_IMAGE_CONTENT_SAMPLE_TRANSFORMS.
16-bit avifImage may not be gracefully handled by all current avifDecoder users, so it is safer to disable the feature by default.