
RGB can NOT describe colour

John, regarding inline industrial cameras: if the sensor is working in a controlled environment, its spectral response is known and an ICC profile can be generated for it, doesn't that make it predictable and suitable for color measurement?

Even if an RGB camera is working in such an environment, it still cannot obtain the spectral information needed to calculate accurate colour values.

If the RGB camera, scanner etc. is viewing materials that have known spectral reflection properties, then one has a better chance of obtaining accurate colour results. The reason is that if one knows the shape of the spectral curve coming from a material via transmission or reflection, then one does not have to measure the whole curve accurately but only a small part of it. That is enough information to determine the whole curve. This is helpful when working with known materials such as photographic film and process inks. But the real world is filled with materials with unknown reflective properties, and therefore an accurate measurement of the whole spectral curve is required.

Colour is a tricky subject because most people don't understand what it is. Colour does not exist in Nature. It does not exist in light. Colour is a perception in our minds, produced by light stimulus to the eyes and that information being processed in the brain.

Where do colour values come from? They are not a direct measurement but a calculation from measured information. Back in the 1930s, researchers developed mathematical functions based on how a group of people matched colours. The Lab values that we use today are a direct mathematical construction of that original work.

So to get colour values, one has to calculate the X, Y and Z tristimulus values. This is done by taking the three x, y and z colour matching functions determined from that work in the 1930s and applying them to the measured spectral response coming from the object, all across the visible spectrum. Once you obtain the X, Y and Z tristimulus values, these are used to directly calculate Lab values.
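The calculation described above can be sketched in a few lines of Python. This is a minimal illustration of the structure of the computation, not a metrology-grade implementation: the colour matching function and illuminant arrays are placeholders that a real implementation would replace with the published CIE tables, and the L*a*b* conversion is the standard CIE 1976 formula relative to a white point.

```python
import numpy as np

def spectrum_to_xyz(reflectance, illuminant, xbar, ybar, zbar):
    """Weight the reflected spectrum by the colour matching functions
    and sum across the visible range (a discrete version of the CIE
    tristimulus integrals)."""
    stimulus = reflectance * illuminant
    # Normalise so a perfect white reflector gets Y = 100.
    k = 100.0 / np.sum(illuminant * ybar)
    X = k * np.sum(stimulus * xbar)
    Y = k * np.sum(stimulus * ybar)
    Z = k * np.sum(stimulus * zbar)
    return X, Y, Z

def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    """Standard CIE 1976 L*a*b* conversion relative to the
    white point (Xn, Yn, Zn)."""
    def f(t):
        d = 6.0 / 29.0
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

With real CMF tables, a spectrophotometer's band measurements would feed straight into `spectrum_to_xyz`; a sanity check is that a perfect white reflector comes out at L* = 100, a* = 0, b* = 0.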

All of this is a strict mathematical operation, and in general it cannot be shortcut by using RGB response curves.

If one could make a camera that had filters duplicating the x, y and z colour matching functions, then one would be able to get colour values out of a three-sensor device. But such filters are not easy to make, and therefore devices like spectrophotometers are needed, which divide the whole spectrum into many sections, measure them independently, and apply the x, y and z colour matching functions.

With this method that spectrophotometers use, not only can one calculate the colour values but one can apply other functions to measure density etc.

The accuracy of the spectrophotometer will be related to how many sections can be measured along the visible spectrum. The more sections along the range that can be measured, the more accurate the calculation of XYZ will be. Graphic arts spectros might have 15 to 30 sections or zones while very expensive and more accurate spectros might have sections at each nm wavelength along the range which would be hundreds of zones. Basically graphic arts spectros will not be highly accurate.

Colour science has provided colour values that are not perfect indicators of how people see over the entire visual gamut, but it is amazing how well they have worked for practical purposes.

So when we talk about colour values such as Lab etc. one needs to be aware of how these values were generated. There is a strict mathematical definition and shortcuts are only valid under conditions that do not violate the math.
 
The camera does not absolutely need to have XYZ filters. Any linear combination of the XYZ filters can be directly translated into XYZ filters without introducing error.

The LMS filters are an example of such a linear combination. They are named after "long", "medium" and "short", and they correspond closely to the responses of the long-, medium- and short-wavelength cones in the eye.
 

Schnitzel,

The quick answer is "no".

In the mid-'90s we had such a system, based on an RGB camera. We put around 20 man-years into making the system stable at measuring light. It was a good densitometer, with somewhere around 1,000 of them in the field.

We realized that we needed to provide L*a*b* from this instrument. I put a full year into working on this, and had the help of two other sharp engineers (about half time), along with the services of about ten other engineers as needed. Based on our results, we decided to invest ~$1M in developing an inline spectro.

The first problem (simply stated): it is possible to create a CMY gray that (according to the camera) has exactly the same RGB values as a gray made with just black. The eye will see them as different. This issue exists with any color that can be made with various amounts of GCR.

Beyond that... you could conceivably make a transform that was "kinda good" and make it work on a high-gloss stock. If you then apply that same transform to data from mid-gloss stock or newsprint, it will completely fall apart.

Then bring in spot colors...
 

Interesting. For some reason I don't think this claim about translating LMS filter responses to XYZ is true, but I am not a colour scientist, so I will pose this question to a colour group and see what the consensus is.

In the meantime, do you have the math to show how the LMS values can be translated to XYZ? That would be of interest too.
 
I am looking for the mathematical formula to provide an exacting Lab value using any combination of the 12 different reds to provide a targeted Delta E of 'zero'. D

D Ink Man,

If I understand this correctly, you are looking for a quick and dirty kind of ink formulator.

If I can paraphrase, to make sure I am understanding correctly... You have created a bunch of similar recipes (same set of pigments, different concentrations) that come close to your target color. You proof these in your ink kitchen and measure the L*a*b* values.

From these, you want to figure out the recipe that will get you to the proper target color. It might be one of the recipes that you have already tried, but will likely be something somewhere between them.

Have I understood your question properly?

John
 

Absolutely correct John and thank you for the reply. Now, do you have the mathematic formula? Thank you in advance.

D Ink Man
 

Yes, I know how to do this. It's not very simple math, though. I would not ever want to try it by hand. It's rather involved, but Excel could be taught how to do it.
 
Please teach me, I want to be educated in things I do not know.
However, no pressure with that being said. I still have my eye and basic L*a*b* manipulability. You are a credit to our society and to the graphics and science/art of color. Thank you for being intelligible to all, John.

D Ink Man
 
LOL ... Like I said, it's involved.

My approach would be to assume the mathematical model that a unit change in each of the pigments would cause some corresponding change in L*, a*, and b*. For 5 inks, this would give 15 coefficients. Under this assumption, this can be solved through multiple linear regression.

Except that the characteristic equation is likely to be noninvertible, since with more than four inks, there are an infinite number of solutions to the problem - many ways to get there. The simple solution would be to use singular value decomposition to compute the Moore-Penrose pseudo-inverse. But, unfortunately, this function is not in Excel, so that would need to be a program.

Aside from the pseudo-inverse, additional constraints would make it invertible again. These might be "cheapest cost" or "don't adjust this ink", or you could do a spectral match to avoid issues with metamerism.

So, like I said, it's a bit involved. A couple of semesters of calc, of stats, and of linear algebra.
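As a minimal sketch of the approach John describes, the example below uses made-up sensitivity coefficients (the 15 numbers that would, in practice, come from regressing the proofed recipes) and computes the pigment adjustment with the Moore-Penrose pseudo-inverse, which NumPy implements via singular value decomposition.

```python
import numpy as np

# Toy sensitivity matrix: each column is the assumed change in
# (L*, a*, b*) per unit increase of one of five pigments. These
# coefficients are illustrative, not measured data.
J = np.array([
    [-2.0, -1.5, -0.5, -3.0, -1.0],   # dL* per unit of each pigment
    [ 1.2, -2.5,  0.8,  0.1, -0.4],   # da* per unit of each pigment
    [ 0.3,  1.8, -2.2,  0.5,  0.9],   # db* per unit of each pigment
])

measured = np.array([52.0, 14.0, -8.0])   # L*a*b* of the current proof
target   = np.array([50.0, 16.0, -6.0])   # L*a*b* of the target colour

# With 5 unknowns and only 3 equations the system is underdetermined:
# infinitely many recipe adjustments reach the target. The pseudo-inverse
# (computed via SVD inside np.linalg.pinv) picks the minimum-norm one.
delta = np.linalg.pinv(J) @ (target - measured)
```

The resulting `delta` is the smallest overall adjustment to the five pigment amounts that, under the linear model, lands exactly on the target; the extra constraints John mentions (cheapest cost, holding one ink fixed) would instead select a different member of that infinite solution family.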
 

WoW! Thank goodness I still have my actual physical eyesight to match the old-fashioned way. However, without going to Professor Corey's multiple semesters of high-tech mathematics, I feel I can blend my visual adjustments in concert with the corresponding Lab values to arrive at a reasonable DE value.

Here's something else that may help, dunno. Please check out the following link and tell me what you think in relation to the quest. Unfortunately it is not downloadable to a regular PC. It is an app for a cell phone. Thank you again John.

Apps by Shahar Klinger - Android

D Ink Man
 

Well I have to say I have learned something quite new for me. It has been an interesting exercise.

I have asked the question about the LMS curves being suitable for a camera on a color science group and I got a response from Danny Rich. He provided a link to a paper on this subject.

http://www.cvrl.org/people/stockman/pubs/2006 Physiological CMFs SS.pdf

So I learned that the LMS curves were developed fairly recently, with a method similar to the one used for developing the colour matching functions back in the 1930s. The clever adaptation of that earlier method used subjects who had colour vision deficiencies, in a way that made it possible for the L, M and S colour matching functions to be determined.

So you are right, the LMS curves could be used in a camera and they could provide the LMS tristimulus values that can be translated to XYZ and then to Lab values.

This was quite new to me, and it also seems to be new to the colour science community. I was told that the CIE will shortly confirm the LMS colour matching method as an official standard. Something like that.

Thanks for the education.
 
Ahhh.. Dr. Irwin Corey... I need to watch some YouTube videos of that guy!

I don't think the app can help, unfortunately. You're looking for something pretty specific.
 
Stephen, please feel free to put comments wherever it's convenient. I get comments on the blog, on LinkedIn, Facebook, and Twitter, wherever people see my links. Often I get private messages. If there are some interesting comments, I share them in a followup blog post.

Hi John, I have been waiting for the other questions to die down before making my comment on your blog piece.

One thing bugs me, it is the image of the hand painted colour patches taped on your monitor next to the CG patches created in MS Paint. You mention that the camera did not “see” the painted patches on the paper the same as how your “eyes” viewed the entire scene, where you perceived the painted sample on paper as being close to CG patches in MS Paint.

A digital camera image needs white balancing, and it is obvious that the white balance was not set for the paper. I am not sure if the WB was set for the monitor white or for the “scene” lighting - whatever that was! A drawback with digital photography is that it does not “like” having two or more different illuminants in the same scene, as one can only apply white balance against one light source. So if the paper is white balanced, then the computer screen colours would be off etc.

I am not refuting your findings or your post; indeed, I agree the human observer and the camera see things differently - it has always been this way with film or digital. It is just that to compare how a human perceives the scene, you would need to shoot in raw format (not JPEG) and then produce two different images - one white balanced for the paper and the other for the monitor image… then they would need to be masked together to show a composite image of both. Then it would be "fair" to compare the human observer and the camera results.

Even if/when this is done, there will be differences - but not as huge as your current image indicates.


Stephen Marsh
 
