
A question for a college project

sat10

New member
Hi, I was planning to make a project on the difference in color reproduction (quality) between top mobile phone models.

But from what I've seen during my research, it seems it is not possible to profile mobile phone cameras for color, because they use their own secret image processing algorithms and there are no proper standards against which to measure their performance. Is that correct, or is there some way I can go about it?

I was wondering if it is possible, by doing some measurements, to come to a conclusion about which algorithm is best in terms of color reproduction.

My prof said that standard color charts are not enough on their own, and I need to find out what standards these companies use.
 
In principle one cannot obtain accurate colours from an RGB device. No amount of algorithmic processing will totally overcome this.
 
In principle one cannot obtain accurate colours from an RGB device.

People do it all the time with monitors, Erik.

--------------------------------------------------

sat10, currently none of the OSes for mobile devices includes any kind of user-accessible color management. You cannot load color management software on the device, you can't load an ICC profile onto the device, and the device couldn't use an ICC profile even if one were present.

What metrics would you use to determine which device is "best" in terms of color reproduction? Would you be able to separate the "image processing algorithms" from the OS or from the device itself? If not, that would mean you could only comment on each complete system, which doesn't sound like what you want to do.
 
People do it all the time with monitors, Erik.

Rich, sat10 was asking about profiling the phone camera. One cannot capture accurate colour with an RGB device such as a camera or a scanner. I used the word "obtain" but I should have been clearer and said "capture".

You are right about monitors but they are not devices that capture images but ones that display images.
 
Interesting if one is interested in the display. Not so relevant to the camera.

Agreed. I missed that.

I tend to agree with Erik on camera profiling. I do think a competent profile could be generated for a studio environment with controlled lighting and subject matter. Though standard color targets are very limited, I did have some success in profiling cameras for photographing artwork (success in that it saved significant color correction time over scanning or photographing without profiling). For scanning, so long as the color target represented the material to be scanned, color accuracy of the reproduction relative to the original improved significantly.

For a mobile phone camera, I'm not sure what the point of profiling would be with the environment and subject matter constantly in flux.
 
For a mobile phone camera, I'm not sure what the point of profiling would be with the environment and subject matter constantly in flux.

Yes, nature does not have a colour space.

I assume that some kind of profile is needed, since the raw data from the image needs some processing to make it look reasonable. Profiling would be just a best guess at what it should look like.
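Just to make that concrete, here is a rough Python sketch of that kind of "best guess" rendering. Every number below is made up for illustration; real gains and matrices are exactly what profiling the actual sensor would provide.

Code:
import numpy as np

# Hypothetical linear raw RGB for one pixel, on a 0..1 scale.
raw_rgb = np.array([0.18, 0.21, 0.15])

# 1. White-balance gains (made-up daylight-ish values).
balanced = raw_rgb * np.array([2.0, 1.0, 1.6])

# 2. A 3x3 camera-to-display colour matrix. Purely illustrative numbers;
#    a real matrix would come from characterising the sensor.
cam_matrix = np.array([[ 1.6, -0.5, -0.1],
                       [-0.2,  1.5, -0.3],
                       [ 0.0, -0.4,  1.4]])
display_linear = np.clip(cam_matrix @ balanced, 0.0, 1.0)

# 3. A gamma curve so the linear data looks reasonable on screen.
print(display_linear ** (1.0 / 2.2))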

A simple test can be done to show that an RGB device is not so good at capturing colour. Take a photo or scan of one of those RHEM light indicators. By definition, the bands on that indicator are the same colour under D50. When I have scanned one of these indicators, the bands are clearly distinguishable from each other. I am guessing the same will happen if one takes a photo of it. This little test shows that the device cannot "see" the same colour as being the same.
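One could even put numbers on that test. A minimal sketch, assuming the scan is saved as rhem_scan.png and you know roughly where two bands sit in the image (the file name and coordinates are placeholders for your own scan):

Code:
import numpy as np
from PIL import Image

img = np.asarray(Image.open("rhem_scan.png").convert("RGB"), dtype=float)

band_a = img[50:150, 40:80].reshape(-1, 3).mean(axis=0)    # first band
band_b = img[50:150, 120:160].reshape(-1, 3).mean(axis=0)  # second band

# Under D50 the bands match visually. A large difference here means the
# scanner did not "see" them as the same colour.
print("band A mean RGB:", band_a)
print("band B mean RGB:", band_b)
print("difference:", band_a - band_b)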
 
Excluding expensive scanning back cameras and Foveon sensors (which never caught on in the mainstream), the overwhelming majority of cameras/phones use a sensor with a Bayer-pattern colour filter array, so at each photosite two of the three colour values are interpolated from neighbouring photosites anyway (not a true RGB capture)!

One may be interested in reading up on scene-referred vs. output/display-referred rendering (what may be considered "accurate" may not be "pleasing" to the human observer).

sat10, instead of colour, if you did a test of the "sharpness" of various similar phone cameras, that would be something that is much easier to quantify using an MTF target/ISO 12233 chart and Imatest software:

http://www.imatest.com/docs/sfr_mtfplot/
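As a rough illustration of what such software measures, here is a bare-bones numpy sketch of an edge-based MTF calculation. A synthetic edge stands in for a real capture, and this is nothing like as rigorous as Imatest's slanted-edge analysis:

Code:
import numpy as np

# Synthesize a blurred edge profile; with a real capture you would
# average the rows of a crop across a near-vertical edge.
x = np.arange(256)
edge = 1.0 / (1.0 + np.exp(-(x - 128) / 2.0))  # stand-in edge spread function

lsf = np.gradient(edge)            # line spread function
lsf /= lsf.sum()
mtf = np.abs(np.fft.rfft(lsf))     # modulation transfer function
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(lsf))  # spatial frequency, cycles per pixel

# MTF50: the frequency where contrast falls to 50% -- a common single
# "sharpness" number.
mtf50 = freqs[np.argmax(mtf < 0.5)]
print("MTF50 ~ %.3f cycles/pixel" % mtf50)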


Stephen Marsh
 
Thanks a lot for the help, guys. What I was talking about exactly was being able to find the color gamut of the phone cameras, which we have decided to obtain by photographing a GretagMacbeth ColorChecker test chart in studio conditions and then running the images through profiling software.

Now I'm scared that is not enough for a project, so I'm looking at other color parameters such as dynamic range. I was also thinking of measuring the phone displays, as in the article that meddington mentioned, but I'm not sure about the methodology there.
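For what it's worth, one crude way I could compare "gamut" once I have Lab values for the captured patches might be the area of their convex hull in the a*b* plane. A sketch, with random placeholder data standing in for real measurements:

Code:
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical Lab values for the 24 ColorChecker patches as captured by
# one phone. Random placeholders; substitute your real measurements.
rng = np.random.default_rng(0)
lab = np.column_stack([
    rng.uniform(20, 90, 24),   # L*
    rng.uniform(-60, 60, 24),  # a*
    rng.uniform(-60, 60, 24),  # b*
])

# Crude 2D "gamut size": area of the convex hull of the patches in the
# a*b* plane. Comparing this number across phones gives one rough metric.
hull = ConvexHull(lab[:, 1:3])
print("a*b* hull area: %.1f" % hull.volume)  # for 2D hulls .volume is the area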
 
Excluding expensive scanning back cameras ......, the overwhelming majority of camera’s/phones use a sensor with a similar array to a Bayer pattern, so the RB values are interpolated from the G anyway (not a true RGB capture)!

Stephen, even your references seem to state that they are RGB devices. Bayer's patent is based on an RGB display.
 
Stephen, even your references seem to state that they are RGB devices. Bayer's patent is based on an RGB display.

Erik, most digital cameras have a greyscale image sensor; the colour values are interpolated from the captured linear intensity levels that pass through the CFA (colour filter array). Yes, after processing the final output file is RGB, however the original raw capture is greyscale only, and the colour values are interpolated based on the filter pattern (not the same as a true per-pixel RGB capture from, say, a Foveon image sensor).

Then with all (most?) camera phones, the final image file is processed using “secret sauce” and one does not have access to the raw image data.

EDIT: LINK UPDATED

http://www.adobe.com/digitalimag/pdfs/understanding_digitalrawcapture.pdf
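On cameras that do expose their raw files, this is easy to verify with the rawpy library (the file name below is a placeholder):

Code:
import rawpy

# The undemosaiced data is a single greyscale mosaic; colour only
# appears after the CFA values are interpolated.
with rawpy.imread("example.dng") as raw:
    print("mosaic shape:", raw.raw_image.shape)  # (H, W): one value per photosite
    print("CFA layout:\n", raw.raw_pattern)      # 2x2 Bayer pattern indices
    rgb = raw.postprocess()                      # demosaic + colour processing
    print("rendered shape:", rgb.shape)          # (H, W, 3) after interpolation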



Stephen Marsh
 
Erik, most digital cameras have a greyscale image sensor; the colour values are interpolated from the captured linear intensity levels that pass through the CFA. [SNIP]

I'm sorry, but I don't think you are right. It does not make sense, and it was also not what I read in your initial references. As I understand it, there are RGB filters on the collectors.

Also, your Adobe link did not work.
 
I'm sorry, but I don't think you are right. It does not make sense, and it was also not what I read in your initial references. As I understand it, there are RGB filters on the collectors.

Also, your Adobe link did not work.

This is the correct link:
http://www.adobe.com/digitalimag/pdfs/understanding_digitalrawcapture.pdf

As Erik says, the photosensitive detectors are covered by a filter, typically red, green, or blue, arranged in a Bayer pattern. They provide the data used to form the final image. Each R, G, B channel is greyscale, just as each RGB channel in PShop is greyscale.

gordo
 
Hi, I was planning to make a project on the difference in color reproduction (quality) between top mobile phone models. [SNIP]
I was wondering if it is possible, by doing some measurements, to come to a conclusion about which algorithm is best in terms of color reproduction.

My prof said that standard color charts are not enough on their own, and I need to find out what standards these companies use.

I think you can measure the difference in color reproduction between the different mobile phone displays by feeding them a standard color test chart image with more patches than the Macbeth one, perhaps a variant on the Kodak IT8 Target (35mm) for Transparencies, and then calculating the deviation from the input values. That way you bypass each camera's processing and see how accurate the displays are relative to one another.

Then, if you photograph a standard target like the Macbeth chart with each of the phones under controlled lighting and export the images, you could bring them all into PShop and measure the Lab values of the patches to determine how they deviate from the measured target values.
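The per-patch deviation could then be a simple delta E calculation. A minimal sketch using CIE76, where the measured numbers are placeholders and the reference is the commonly published nominal value for the dark skin patch:

Code:
import numpy as np

def delta_e76(lab1, lab2):
    # CIE76 colour difference: plain Euclidean distance in Lab.
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Reference Lab for one ColorChecker patch vs. the value sampled in
# PShop from a phone capture.
reference = (37.99, 13.56, 14.06)  # nominal "dark skin" patch
measured = (35.20, 16.10, 12.40)   # hypothetical sampled value
print("dE76 = %.2f" % delta_e76(reference, measured))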

gordo
 
I did say that the image sensor was covered by a CFA, the most famous of which is the Bayer pattern.

The sensor only captures linear intensity levels of light. The image processor knows the pattern of the CFA and matches it up with the recorded intensity level from each photosite, arriving at a single red, green or blue value per photosite and thus per pixel. The missing colour values are then interpolated, delivering a final image with full RGB values for each pixel.
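A bare-bones bilinear demosaic sketch in Python makes that interpolation step concrete. It assumes an RGGB layout, and real image processors are far more sophisticated:

Code:
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    # Bilinear demosaic of an RGGB Bayer mosaic. A bare-bones sketch;
    # real pipelines use smarter, edge-aware interpolation.
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Masks marking where each colour was actually sampled.
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Weighted average of sampled neighbours (normalised convolution).
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    out = np.zeros((h, w, 3))
    for ch, mask in enumerate([r_mask, g_mask, b_mask]):
        weight = convolve(mask.astype(float), kernel, mode="mirror")
        out[..., ch] = convolve(mosaic * mask, kernel, mode="mirror") / weight
    return out

# A tiny synthetic mosaic, just to show the shapes involved.
rgb = demosaic_bilinear(np.arange(36, dtype=float).reshape(6, 6))
print(rgb.shape)  # (6, 6, 3): full RGB interpolated from one value per site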

ShortCourses- How a Digital Camera Works

GUILLERMO LUIJK >> TUTORIALS >> DCRAW TUTORIAL

Decoding raw digital photos in Linux

Do I really have this all wrong, or am I just crap at trying to explain this in short form?


Stephen Marsh
 
I did say that the image sensor was covered by a CFA, the most famous of which is the Bayer pattern. [SNIP]

I think you have explained it better now. The light is filtered to r, g or b before it hits the sensors.
 
