LMS (long, medium, short) is a color space that represents the response of the three types of cones of the human eye, named for their responsivity (sensitivity) peaks at long, medium, and short wavelengths.
The numerical range is generally not specified, except that the lower end is usually bounded by zero. It is common to use the LMS color space when performing chromatic adaptation (estimating the appearance of a sample under a different illuminant). It is also useful in the study of color blindness, where one or more cone types are defective.
Definition
The cone response functions $\bar{l}(\lambda)$, $\bar{m}(\lambda)$ and $\bar{s}(\lambda)$ are the color matching functions for the LMS color space. The chromaticity coordinates (L, M, S) for a spectral distribution $J(\lambda)$ are defined as:

\[
L = \int_0^\infty J(\lambda)\,\bar{l}(\lambda)\,d\lambda, \qquad
M = \int_0^\infty J(\lambda)\,\bar{m}(\lambda)\,d\lambda, \qquad
S = \int_0^\infty J(\lambda)\,\bar{s}(\lambda)\,d\lambda.
\]
The cone response functions are normalized to have their maxima equal to unity.
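A minimal numerical sketch of these integrals in Python, assuming the cone fundamentals and the sample's spectral power distribution are available as tabulated arrays on a common wavelength grid; all names are illustrative.

```python
import numpy as np

# Illustrative only: wavelengths_nm, lbar, mbar, sbar stand in for tabulated
# cone fundamentals (e.g. the Stockman & Sharpe / CIE 2006 data); spd is the
# sample's spectral power distribution on the same wavelength grid.
def lms_tristimulus(wavelengths_nm, spd, lbar, mbar, sbar):
    """Approximate L, M, S by integrating spd against each cone response."""
    dlam = np.gradient(wavelengths_nm)   # per-sample wavelength step
    L = np.sum(spd * lbar * dlam)
    M = np.sum(spd * mbar * dlam)
    S = np.sum(spd * sbar * dlam)
    return L, M, S
```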
XYZ to LMS
Typically, colors to be adapted chromatically will be specified in a color space other than LMS (e.g. sRGB). The chromatic adaptation matrix in the diagonal von Kries transform method, however, operates on tristimulus values in the LMS color space. Since colors in most color spaces can be transformed to the XYZ color space, only one additional transformation matrix is required to chromatically adapt any color space: the matrix that transforms colors from the XYZ color space to the LMS color space.[3]
In addition, many color adaptation methods, or color appearance models (CAMs), run a von Kries-style diagonal matrix transform in a slightly modified, LMS-like space instead. They may refer to it simply as LMS, as RGB, or as ργβ. The following text uses the "RGB" naming, but note that the resulting space has nothing to do with the additive color model called RGB.[3]
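As a concrete illustration, the following Python sketch applies the diagonal von Kries step using the Bradford matrix quoted later in this article; the function and variable names are illustrative rather than taken from any particular library.

```python
import numpy as np

# Bradford XYZ-to-"RGB" matrix (see the Bradford section below); any CAT
# matrix of the same form could be substituted here.
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def von_kries_adapt(xyz, xyz_white_src, xyz_white_dst, M=M_BFD):
    """Adapt an XYZ color from a source white point to a destination white point."""
    rgb = M @ xyz                          # sample in the sharpened LMS-like space
    rgb_ws = M @ xyz_white_src             # source white in the same space
    rgb_wd = M @ xyz_white_dst             # destination white in the same space
    rgb_adapted = (rgb_wd / rgb_ws) * rgb  # diagonal (per-channel) scaling
    return np.linalg.inv(M) @ rgb_adapted  # back to XYZ under the new illuminant
```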
The chromatic adaptation transform (CAT) matrices for some CAMs, in terms of CIE XYZ coordinates, are presented here. The matrices, in conjunction with the XYZ data defined for the standard observer, implicitly define a "cone" response for each cell type.
Notes:
- All tristimulus values are normally calculated using the CIE 1931 2° standard colorimetric observer.[3]
- Unless specified otherwise, the CAT matrices are normalized (the elements in a row add up to 1) so the tristimulus values for an equal-energy illuminant (X=Y=Z), like CIE Illuminant E, produce equal LMS values.[3]
Hunt, RLAB
The Hunt and RLAB color appearance models use the Hunt-Pointer-Estevez transformation matrix ($M_{\mathrm{HPE}}$) for conversion from CIE XYZ to LMS.[4][5][6] This is the transformation matrix that was originally used in conjunction with the von Kries transform method, and it is therefore also called the von Kries transformation matrix ($M_{\mathrm{vonKries}}$).
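In its commonly quoted, equal-energy-normalized form, the Hunt-Pointer-Estevez matrix is:

\[
M_{\mathrm{HPE}} = \begin{bmatrix} 0.38971 & 0.68898 & -0.07868 \\ -0.22981 & 1.18340 & 0.04641 \\ 0 & 0 & 1 \end{bmatrix}
\]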
Bradford's spectrally sharpened matrix (LLAB, CIECAM97s)
The original CIECAM97s color appearance model uses the Bradford transformation matrix ($M_{\mathrm{BFD}}$) (as does the LLAB color appearance model).[3] This is a “spectrally sharpened” transformation matrix (i.e. the L and M cone response curves are narrower and more distinct from each other). The Bradford transformation matrix was supposed to work in conjunction with a modified von Kries transform method which introduced a small non-linearity in the S (blue) channel. However, outside of CIECAM97s and LLAB this is often neglected and the Bradford transformation matrix is used in conjunction with the linear von Kries transform method, explicitly so in ICC profiles.[8]
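In its commonly quoted form, the Bradford matrix is:

\[
M_{\mathrm{BFD}} = \begin{bmatrix} 0.8951 & 0.2664 & -0.1614 \\ -0.7502 & 1.7135 & 0.0367 \\ 0.0389 & -0.0685 & 1.0296 \end{bmatrix}
\]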
A "spectrally sharpened" matrix is believed to improve chromatic adaptation especially for blue colors, but does not work as a real cone-describing LMS space for later human vision processing. Although the outputs are called "LMS" in the original LLAB incarnation, CIECAM97s uses a different "RGB" name to highlight that this space does not really reflect cone cells; hence the different names here.
LLAB proceeds by taking the post-adaptation XYZ values and performing a CIELAB-like treatment to get the visual correlates. On the other hand, CIECAM97s takes the post-adaptation XYZ value back into the Hunt LMS space, and works from there to model the vision system's calculation of color properties.
Later CIECAMs
A revised version of CIECAM97s switches back to a linear transform method and introduces a corresponding transformation matrix ($M_{\mathrm{CAT97s}}$):[9]
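(values as commonly reproduced from the revision paper cited above)

\[
M_{\mathrm{CAT97s}} = \begin{bmatrix} 0.8562 & 0.3372 & -0.1934 \\ -0.8360 & 1.8327 & 0.0033 \\ 0.0357 & -0.0469 & 1.0112 \end{bmatrix}
\]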
The sharpened transformation matrix in CIECAM02 ($M_{\mathrm{CAT02}}$) is:[10][3]
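(the corrected values, as given in the errata cited above and commonly reproduced)

\[
M_{\mathrm{CAT02}} = \begin{bmatrix} 0.7328 & 0.4296 & -0.1624 \\ -0.7036 & 1.6975 & 0.0061 \\ 0.0030 & 0.0136 & 0.9834 \end{bmatrix}
\]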
CAM16 uses a different matrix:[11]
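(values as commonly reproduced from Li et al. 2017)

\[
M_{\mathrm{CAT16}} = \begin{bmatrix} 0.401288 & 0.650173 & -0.051461 \\ -0.250268 & 1.204414 & 0.045854 \\ -0.002079 & 0.048952 & 0.953127 \end{bmatrix}
\]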
As in CIECAM97s, after adaptation, the colors are converted to the traditional Hunt–Pointer–Estévez LMS for final prediction of visual results.
Stockman & Sharpe (2000) Improved XYZ CMF's
From a physiological point of view, the LMS color space describes a more fundamental level of human visual response, so it makes more sense to define the physiopsychological XYZ by LMS, rather than the other way around.
A set of physiologically-based LMS functions was proposed by Stockman & Sharpe in 2000 and published by the CIE in a 2006 technical report (CIE 170).[12][13] The functions are derived from the Stiles and Burch[1] RGB CMF data, combined with newer measurements of the contribution of each cone type to the RGB functions. To adjust the 10° data to 2°, assumptions about photopigment density differences and data on the absorption of light by the pigment in the lens and the macula lutea are used.[14]
The Stockman & Sharpe functions can then be turned into a set of three color-matching functions similar to the CIE 1931 functions. [15] Let be the three cone response functions, and let be the new XYZ color matching functions. Then, by definition, the new XYZ color matching functions are:
where the transformation matrix is defined as:
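(values as published for the CIE 2006/2012 functions by the Color & Vision Research Laboratory and commonly reproduced)

\[
T = \begin{bmatrix} 1.94735469 & -1.41445123 & 0.36476327 \\ 0.68990272 & 0.34832189 & 0 \\ 0 & 0 & 1.93485343 \end{bmatrix}
\]

In expanded form, $\bar{x}_F = 1.94735469\,\bar{l} - 1.41445123\,\bar{m} + 0.36476327\,\bar{s}$, $\bar{y}_F = 0.68990272\,\bar{l} + 0.34832189\,\bar{m}$, and $\bar{z}_F = 1.93485343\,\bar{s}$.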
The derivation of this transformation is relatively straightforward.[16] The $\bar{y}_F(\lambda)$ CMF is the luminous efficiency function originally proposed by Sharpe et al. (2005), but then corrected (Sharpe et al., 2011[17]). The $\bar{z}_F(\lambda)$ CMF is equal to the S cone fundamental $\bar{s}(\lambda)$ originally proposed by Stockman, Sharpe & Fach (1999),[18] scaled to have an integral equal to that of the $\bar{y}_F(\lambda)$ CMF. The definition of the $\bar{x}_F(\lambda)$ CMF is derived from the following constraints:
- 1. Like the other CMFs, the values of $\bar{x}_F(\lambda)$ are all positive.
- 2. The integral of $\bar{x}_F(\lambda)$ is identical to the integrals of $\bar{y}_F(\lambda)$ and $\bar{z}_F(\lambda)$.
- 3. The coefficients of the transformation that yields $\bar{x}_F(\lambda)$ are optimized to minimize the Euclidean differences between the resulting $\bar{x}_F(\lambda)$, $\bar{y}_F(\lambda)$ and $\bar{z}_F(\lambda)$ color matching functions and the CIE 1931 $\bar{x}(\lambda)$, $\bar{y}(\lambda)$ and $\bar{z}(\lambda)$ color matching functions.
For any spectral distribution $J(\lambda)$, let $(L, M, S)$ be the LMS chromaticity coordinates for $J(\lambda)$, and let $(X_F, Y_F, Z_F)$ be the corresponding new XYZ chromaticity coordinates. Then:

\[
\begin{bmatrix} X_F \\ Y_F \\ Z_F \end{bmatrix} = T \begin{bmatrix} L \\ M \\ S \end{bmatrix},
\]

or, explicitly, $X_F$, $Y_F$ and $Z_F$ are the linear combinations of $L$, $M$ and $S$ given by the rows of $T$. The inverse matrix, which maps the new XYZ coordinates back to LMS, is shown here for comparison with the XYZ-to-LMS matrices given above for the traditional CIE 1931 XYZ space:
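Numerically inverting the matrix $T$ quoted above gives approximately:

\[
T^{-1} \approx \begin{bmatrix} 0.210576 & 0.855098 & -0.039698 \\ -0.417076 & 1.177261 & 0.078628 \\ 0 & 0 & 0.516835 \end{bmatrix}
\]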
The above development has the advantage of basing the new XYZ color matching functions on the physiologically-based LMS cone response functions. In addition, it offers a one-to-one relationship between the LMS chromaticity coordinates and the new XYZ chromaticity coordinates, which was not the case for the CIE 1931 color matching functions. The transformation for a particular color between LMS and the CIE 1931 XYZ space is not unique; rather, it depends strongly on the particular form of the spectral distribution $J(\lambda)$ producing the given color. There is no fixed 3×3 matrix which will transform between the CIE 1931 XYZ coordinates and the LMS coordinates, even for a particular color, much less the entire gamut of colors. Any such transformation will be an approximation at best, generally requiring certain assumptions about the spectral distributions producing the color. For example, if the spectral distributions are constrained to be the result of mixing three monochromatic sources (as was done in the measurement of the CIE 1931 and the Stiles and Burch[1] color matching functions), then there will be a one-to-one relationship between the LMS and CIE 1931 XYZ coordinates of a particular color. As of November 28, 2023, these are proposals that have yet to be ratified by the full TC 1-36 committee or by the CIE.
Applications
Color blindness
The LMS color space can be used to emulate the way color-blind people see color. An early emulation of dichromats was produced by Brettel et al. in 1997 and was rated favorably by actual patients. An example of a state-of-the-art method is Machado et al. 2009.[19]
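The pipeline shared by these methods can be sketched as follows; the matrices below are placeholders, and a real implementation would substitute the values published by the chosen method (e.g. Brettel et al. 1997 or Machado et al. 2009).

```python
import numpy as np

# Placeholder matrices: a real implementation would use the RGB-to-LMS matrix
# and the dichromat projection matrix published by the chosen method.
RGB_TO_LMS = np.eye(3)         # hypothetical linear-RGB -> LMS matrix
PROTAN_PROJECTION = np.eye(3)  # hypothetical projection removing the L dimension

def simulate_protanopia(rgb_linear):
    """Emulate a missing L cone by projecting the color in LMS space."""
    lms = RGB_TO_LMS @ rgb_linear
    lms_projected = PROTAN_PROJECTION @ lms           # collapse onto the dichromat's gamut
    return np.linalg.inv(RGB_TO_LMS) @ lms_projected  # back to linear RGB for display
```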
A related application is making color filters for color-blind people to more easily notice differences in color, a process known as daltonization.[20]
Image processing
JPEG XL uses an XYB color space derived from LMS; the structure of its transform is outlined below.
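The exact coefficients are defined in the libjxl reference implementation; structurally, after a per-channel nonlinearity is applied to LMS-like values $L'$, $M'$ and $S'$, the mapping is usually described as approximately:

\[
X = \tfrac{1}{2}\,(L' - M'), \qquad Y = \tfrac{1}{2}\,(L' + M'), \qquad B = S'.
\]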
This can be interpreted as a hybrid color theory where L and M are opponents but S is handled in a trichromatic way, justified by the lower spatial density of S cones. In practical terms, this allows for using less data for storing blue signals without losing much perceived quality.[21]
The color space originates from Guetzli's butteraugli metric[22] and was passed down to JPEG XL via Google's Pik project.
References
- 1. Stiles, WS; Burch, JM (1959). "NPL colour-matching investigation: final report". Optica Acta. 6.
- 2. "Stockman, MacLeod & Johnson 2-deg cone fundamentals (description page)"; data retrieval page.
- 3. Fairchild, Mark D. (2005). Color Appearance Models (2nd ed.). Wiley Interscience. pp. 182–183, 227–230. ISBN 978-0-470-01216-1.
- 4. Schanda, János, ed. (July 27, 2007). Colorimetry. p. 305. doi:10.1002/9780470175637. ISBN 9780470175637.
- 5. Moroney, Nathan; Fairchild, Mark D.; Hunt, Robert W.G.; Li, Changjun; Luo, M. Ronnier; Newman, Todd (November 12, 2002). "The CIECAM02 Color Appearance Model". IS&T/SID Tenth Color Imaging Conference. Scottsdale, Arizona: The Society for Imaging Science and Technology. ISBN 0-89208-241-0.
- 6. Ebner, Fritz (July 1, 1998). "Derivation and modelling hue uniformity and development of the IPT color space". Theses: 129.
- 7. "Welcome to Bruce Lindbloom's Web Site". brucelindbloom.com. Retrieved March 23, 2020.
- 8. Specification ICC.1:2010 (Profile version 4.3.0.0). Image technology colour management — Architecture, profile format, and data structure. Annex E.3, p. 102.
- 9. Fairchild, Mark D. (2001). "A Revision of CIECAM97s for Practical Applications" (PDF). Color Research & Application. Wiley Interscience. 26 (6): 418–427. doi:10.1002/col.1061.
- 10. Fairchild, Mark. "Errata for Color Appearance Models" (PDF): "The published MCAT02 matrix in Eq. 9.40 is incorrect (it is a version of the Hunt-Pointer-Estevez matrix). The correct MCAT02 matrix is as follows. It is also given correctly in Eq. 16.2."
- 11. Li, Changjun; Li, Zhiqiang; Wang, Zhifeng; Xu, Yang; Luo, Ming Ronnier; Cui, Guihua; Melgosa, Manuel; Brill, Michael H.; Pointer, Michael (2017). "Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS". Color Research & Application. 42 (6): 703–718. doi:10.1002/col.22131.
- 12. "CIE 2006 "physiologically-relevant" LMS functions (2-deg LMS fundamentals based on the Stiles and Burch 10-deg CMFs adjusted to 2-deg)". Color & Vision Research Laboratory, Institute of Ophthalmology. Retrieved October 27, 2023.
- 13. Stockman, Andrew (December 2019). "Cone fundamentals and CIE standards" (PDF). Current Opinion in Behavioral Sciences. 30: 87–93. Retrieved October 27, 2023.
- 14. "Photopigments". Color & Vision Research Laboratory, Institute of Ophthalmology. Retrieved November 27, 2023.
- 15. "CIE 2-deg CMFs". cvrl.ucl.ac.uk.
- 16. "CIE (2012) 2-deg XYZ "physiologically-relevant" colour matching functions". Color & Vision Research Laboratory, Institute of Ophthalmology. Retrieved November 27, 2023.
- 17. Sharpe, L.T.; Stockman, A.; et al. (February 2011). "A Luminous Efficiency Function, V*D65(λ), for Daylight Adaptation: A Correction". Color Research and Application. 36 (1): 42–46. doi:10.1167/5.11.3. S2CID 19361187.
- 18. Stockman, A.; Sharpe, L.T.; Fach, C.C. (1999). "The spectral sensitivity of the human short-wavelength cones". Vision Research. 39: 2901–2927. Retrieved November 28, 2023.
- 19. "Color Vision Deficiency Emulation". colorspace.r-forge.r-project.org.
- 20. Simon-Liedtke, Joschua Thomas; Farup, Ivar (February 2016). "Evaluating color vision deficiency daltonization methods using a behavioral visual-search method". Journal of Visual Communication and Image Representation. 35: 236–247. doi:10.1016/j.jvcir.2015.12.014. hdl:11250/2461824.
- 21. Alakuijala, Jyrki; van Asseldonk, Ruud; Boukortt, Sami; Szabadka, Zoltan; Bruse, Martin; Comsa, Iulia-Maria; Firsching, Moritz; Fischbacher, Thomas; Kliuchnikov, Evgenii; Gomez, Sebastian; Obryk, Robert; Potempa, Krzysztof; Rhatushnyak, Alexander; Sneyers, Jon; Vandervenne, Lode; Versari, Luca; Wassenberg, Jan (September 6, 2019). "JPEG XL next-generation image compression architecture and coding tools". In Tescher, Andrew G; Ebrahimi, Touradj (eds.). Applications of Digital Image Processing XLII. Vol. 11137. p. 20. Bibcode:2019SPIE11137E..0KA. doi:10.1117/12.2529237. ISBN 9781510629677.
- 22. "butteraugli/butteraugli.h at master · google/butteraugli". GitHub. Retrieved August 2, 2021.