Since libjpeg-turbo does not support Exif, the only way it can embed
density information in a JPEG image is by using the JFIF marker, which
is only written if the JPEG colorspace is YCbCr or grayscale.
(Referring to the conversation under #793, we may need to further
restrict that to 8-bit-per-sample JPEG images, because the JFIF spec
requires 8-bit data precision.)
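As a minimal sketch using the underlying libjpeg API, which exposes the
density fields directly (assuming cinfo has already been initialized and
in_color_space set; error handling omitted):

    #include <jpeglib.h>

    /* Sketch: embed 300x300 dpi density information.  The fields below
       only reach the output file if the JFIF marker is written, and
       jpeg_set_defaults()/jpeg_set_colorspace() enable write_JFIF_header
       only for the YCbCr and grayscale JPEG colorspaces. */
    static void set_density(struct jpeg_compress_struct *cinfo)
    {
      jpeg_set_defaults(cinfo);
      cinfo->density_unit = 1;   /* 1 = dots/inch, 2 = dots/cm */
      cinfo->X_density = 300;
      cinfo->Y_density = 300;
    }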
This is just a code readability thing. The logic effectively caused the
data precision to default to 8 if -precision was not specified, but that
was not obvious.
This commit also modifies tjcomptest so that it tests TJComp with no
-precision argument, to ensure that the data precision defaults to 8.
Since the introduction of TJFLAG_NOREALLOC in libjpeg-turbo 1.2.x, the
TurboJPEG C API documentation has (confusingly) stated that:
- if the JPEG buffer pointer points to a pre-allocated buffer, then the
JPEG buffer size must be specified, and
- the JPEG buffer size should be specified if the JPEG buffer is
pre-allocated to an arbitrary size.
The documentation never explicitly stated that the JPEG buffer size
should be specified if the JPEG buffer is pre-allocated to a worst-case
size, but since focusing on one case does not exclude the other, it
also never explicitly
stated the reverse. Furthermore, the documentation never stated that
this was contingent upon TJPARAM_NOREALLOC/TJFLAG_NOREALLOC. However,
the compression and lossless transformation functions effectively
ignored the JPEG buffer size(s) passed to them and assumed that the
JPEG buffer(s) had been allocated to a worst-case size, if
TJPARAM_NOREALLOC/TJFLAG_NOREALLOC was set. This behavior was an
accidental and undocumented throwback to libjpeg-turbo 1.1.x, in which
the tjCompress() function provided no way to specify the JPEG buffer
size. It was always a bad idea for applications to rely upon that
behavior (although our own TJBench application unfortunately did.)
However, if such applications exist in the wild, the new behavior would
constitute a breaking change, so it has been introduced only into
libjpeg-turbo 3.1.x and only into TurboJPEG 3 API functions. The
previous behavior has been retained when calling functions from the
TurboJPEG 2.1.x API and prior versions.
Did I mention that APIs are hard?
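To illustrate the new TurboJPEG 3 behavior, a minimal sketch (assuming a
handle created with tj3Init(TJINIT_COMPRESS) and an 8-bit-per-sample
source image; error handling abbreviated):

    #include <stdlib.h>
    #include <turbojpeg.h>

    int compressPrealloc(tjhandle handle, const unsigned char *srcBuf,
                         int width, int height, int pixelFormat)
    {
      /* Worst-case size for the chosen subsampling level */
      size_t jpegSize = tj3JPEGBufSize(width, height, TJSAMP_420);
      unsigned char *jpegBuf = (unsigned char *)malloc(jpegSize);

      if (!jpegBuf) return -1;
      tj3Set(handle, TJPARAM_NOREALLOC, 1);
      tj3Set(handle, TJPARAM_SUBSAMP, TJSAMP_420);
      tj3Set(handle, TJPARAM_QUALITY, 90);
      /* With the TurboJPEG 3 functions, jpegSize is honored as the actual
         size of the pre-allocated buffer on entry; it receives the
         compressed size on return. */
      if (tj3Compress8(handle, srcBuf, width, 0, height, pixelFormat,
                       &jpegBuf, &jpegSize) < 0) {
        free(jpegBuf);
        return -1;
      }
      /* ... use jpegBuf (jpegSize bytes) ... */
      free(jpegBuf);
      return 0;
    }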
With respect to tj3Transform(), this addresses an oversight from
bb1d540a807783a3db8b85bab2993d70b1330287.
Note to self: A convenience function/method for computing the worst-case
transformed JPEG size for a particular transform would be nice.
- When transforming, the worst-case JPEG buffer size depends on the
subsampling level used in the destination image, since a grayscale
transform might have been applied. (See the sketch after this list.)
- Parentheses Police
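A sketch of such a helper (hypothetical; not part of the TurboJPEG API),
covering the axis-swapping and grayscale considerations noted above;
cropping is ignored for simplicity:

    #include <turbojpeg.h>

    /* Hypothetical helper: worst-case buffer size for the result of a
       lossless transform.  If the transform converts to grayscale, the
       destination subsampling is effectively TJSAMP_GRAY; if it swaps the
       X and Y axes, the destination dimensions are swapped. */
    static size_t transformedBufSize(int srcWidth, int srcHeight,
                                     int srcSubsamp,
                                     const tjtransform *xform)
    {
      int subsamp =
        (xform->options & TJXOPT_GRAY) ? TJSAMP_GRAY : srcSubsamp;
      int swapXY =
        (xform->op == TJXOP_TRANSPOSE || xform->op == TJXOP_TRANSVERSE ||
         xform->op == TJXOP_ROT90 || xform->op == TJXOP_ROT270);

      return swapXY ? tj3JPEGBufSize(srcHeight, srcWidth, subsamp) :
                      tj3JPEGBufSize(srcWidth, srcHeight, subsamp);
    }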
-copynone --> -copy none
Add '-copy all', even though it's the default.
-rgb, -bgr, -rgbx, -bgrx, -xbgr, -xrgb, -gray, -cmyk -->
-pixelformat {rgb|bgr|rgbx|bgrx|xbgr|xrgb|gray|cmyk}
(This is mainly so -gray won't interfere with -grayscale.)
Fix an ArrayIndexOutOfBoundsException that occurred when passing -dct
to the Java version without specifying the DCT algorithm (oversight from
24fbf64d31a0758c63bcc27cf5d92fc5611717d0.)
Lossless cropping is performed after other lossless transform
operations, so the cropping region must be specified relative to the
destination image dimensions and level of chrominance subsampling, not
the source image dimensions and level of chrominance subsampling.
More specifically, if the lossless transform operation swaps the X and Y
axes, or if the image is converted to grayscale, then that changes the
cropping region requirements.
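A minimal sketch (assuming a 640x480 4:2:0 source image and a handle
created with tj3Init(TJINIT_TRANSFORM); error handling omitted):

    #include <string.h>
    #include <turbojpeg.h>

    int rotateAndCrop(tjhandle handle, const unsigned char *jpegBuf,
                      size_t jpegSize, unsigned char **dstBuf,
                      size_t *dstSize)
    {
      tjtransform xform;

      memset(&xform, 0, sizeof(tjtransform));
      xform.op = TJXOP_ROT90;
      xform.options = TJXOPT_CROP;
      /* For a 640x480 4:2:0 source, the rotated image is 480x640 with
         16x16 iMCUs, so the region below is specified relative to (and
         aligned with) the *destination* image. */
      xform.r.x = 16;
      xform.r.y = 32;
      xform.r.w = 320;
      xform.r.h = 240;
      /* If *dstBuf is NULL, tj3Transform() allocates the destination
         buffer; free it with tj3Free(). */
      return tj3Transform(handle, jpegBuf, jpegSize, 1, dstBuf, dstSize,
                          &xform);
    }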
The JPEG-1 spec never uses the term "MCU block". That term is rarely
used in other literature to describe the equivalent of an MCU in an
interleaved JPEG image, but the libjpeg documentation uses "iMCU" to
describe the same thing. "iMCU" is a better term, since the equivalent
of an interleaved MCU can contain multiple DCT blocks (or samples in
lossless mode) that are only grouped together if the image is
interleaved.
In the case of restart markers, "MCU block" was used in the libjpeg
documentation instead of "MCU", but "MCU" is more accurate and less
confusing. (The restart interval is literally in MCUs, where one MCU
is one data unit in a non-interleaved JPEG image and multiple data units
in a multi-component interleaved JPEG image.)
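As a worked example of that arithmetic (assuming a typical 4:2:0 YCbCr
image with 1x1 chroma sampling factors):

    data units per interleaved MCU = (2*2) luma + (1*1) Cb + (1*1) Cr = 6

so a restart interval of 4 MCUs spans 24 data units in an interleaved
scan but only 4 data units in a non-interleaved scan.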
In the case of 9b704f96b2dccc54363ad7a2fe8e378fc1a2893b, the issue was
actually with progressive JPEG images exactly two DCT blocks wide, not
two MCU blocks wide.
This commit also defines "MCU" and "MCU row" in the description of the
various restart marker options/parameters. Although an MCU row is
technically always a row of samples in lossless mode, "sample row" was
confusing, since it is used in other places to describe a row of samples
for a single component (whereas an MCU row in a typical lossless JPEG
image consists of a row of interleaved samples for all components.)
TJ*PARAM_RESTARTBLOCKS technically works with lossless compression, but
it is not useful, since the value must be equal to the number of samples
in a row. (In other words, it is no different than
TJ*PARAM_RESTARTROWS, except that it requires the user to do more
math.)
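For instance, a minimal TurboJPEG 3 sketch:

    #include <turbojpeg.h>

    static tjhandle initWithRestartRows(void)
    {
      tjhandle handle = tj3Init(TJINIT_COMPRESS);

      /* Emit a restart marker every 2 MCU rows; this works for both lossy
         and lossless compression, unlike TJPARAM_RESTARTBLOCKS. */
      if (handle)
        tj3Set(handle, TJPARAM_RESTARTROWS, 2);
      return handle;
    }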
Due to an oversight in the multi-precision feature,
TJCompressor.srcBuf12 and TJCompressor.srcBuf16 were not set to null
in TJCompressor.setSourceImage(YUVImage) or
TJCompressor.setSourceImage(BufferedImage, ...). Thus, if an
application set a 12-bit or 16-bit packed-pixel buffer as the source
image then set a BufferedImage with integer pixels as the source image,
TJCompressor.compress() would compress from the 12-bit or 16-bit
packed-pixel buffer instead of the BufferedImage. The odds of an
application actually doing that are very slim, however.
... using the same matching algorithm as cjpeg/djpeg/jpegtran and, where
possible, the same abbreviations:
-a for -arithmetic
-b for -bmp
-cr for -crop
-g for -grayscale
-l for -lossless
-max for -maxmemory
-o for -optimize
-pre for -precision
-p for -progressive
-r for -restart
-s for -scale
-t for -transpose
-transv for -transverse
- "Optimized baseline entropy coding" = "Huffman table optimization"
"Optimized baseline entropy coding" was meant to emphasize that the
feature is only useful when generating baseline (single-scan lossy
8-bit-per-sample Huffman-coded) JPEG images, because it is
automatically enabled when generating Huffman-coded progressive
(multi-scan), 12-bit-per-sample, and lossless JPEG images. However,
Huffman table optimization isn't actually an integral part of those
non-baseline modes. You can forgo Huffman table optimization with
12-bit data precision if you supply your own Huffman tables. The spec
doesn't require it with progressive or lossless mode, either, although
our implementation does. Furthermore, "baseline" describes more than
just the type of entropy coding used. It was incorrect to say that
optimized "baseline" entropy coding is automatically enabled for
Huffman-coded progressive, 12-bit-per-sample, and lossless JPEG
images, since those are clearly not baseline images.
- "Progressive entropy coding" = "Progressive JPEG"
"Progressive" describes more than just the type of entropy coding
used. (In fact, both Huffman-coded and arithmetic-coded images can be
progressive.)
- Mention that TJPARAM_OPTIMIZE/TJ.PARAM_OPTIMIZE can be used with
lossless transformation as well (see the sketch after this list.)
- General wordsmithing
- Formatting tweaks
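Regarding TJPARAM_OPTIMIZE and lossless transformation, a minimal sketch
(assuming a TurboJPEG 3 transforming instance):

    #include <turbojpeg.h>

    static tjhandle initOptimizingTransformer(void)
    {
      tjhandle handle = tj3Init(TJINIT_TRANSFORM);

      /* Generate optimal Huffman tables for the transformed JPEG image */
      if (handle)
        tj3Set(handle, TJPARAM_OPTIMIZE, 1);
      return handle;
    }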