Color Sense & Sensibility

I’ve been “color correcting” the images taken with my camera phone for some time. I put “correcting” in quotes because it’s been done with my naked eye: manually, trying to match the image to my vision, or close enough.

Much of the time the images come out bluish. Color balance in the phone’s camera software is automatic, with no manual option, so there’s no way to get an even tone.

[image]
original (scaled)—taken 4/18, no more snow…
[image]
“corrected” (scaled, cropped)
But beyond the unsharpening to reduce JPEG blockiness (prior to scaling) and the RGB balancing, I came across a bigger problem: there were superfluous greens in the mix, and I’m not talking grass or moss. The wood-coated metal at the bottom of the original image above (click to enlarge) is supposed to look bluish-purplish to the eye (the way bare wood returns light under a cloudy sky), not the turquoise the camera produces. Similar greenish blues show up in the tree bark.

Part of “correcting” this has been color rotation: moving some of the blue channel into the red to compensate. For this particular scene, it took venturing outside a few times to see the desired colors. And I know I’ve gone too far when the whites on the trees turn orange…
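That kind of rotation can be sketched in a few lines of plain Python. The mix factor below is a made-up value, not whatever GraphicConverter actually applies; it just shows the idea of moving energy from the blue channel into the red:

```python
def rotate_blue_to_red(pixel, amount=0.15):
    """Shift a fraction of the blue channel into red.

    pixel:  (r, g, b) tuple, each channel 0-255.
    amount: fraction of blue to move (made-up default; in practice tuned by eye).
    """
    r, g, b = pixel
    moved = b * amount
    r = min(255, round(r + moved))  # warm the red channel up
    b = max(0, round(b - moved))    # cool the blue channel down
    return (r, g, b)

# A cold, bluish gray gets nudged warmer:
print(rotate_blue_to_red((120, 125, 150)))
```

Push `amount` too high and, as noted above, whites start drifting toward orange.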

Side note: all processing has been done with GraphicConverter, and sometimes ColorIt (for effects and color-grain reduction), running under the Mac emulator.

Well, it turns out, I was on to something with the color rotation.

A while back, I had noticed that the rods of the quartz heater, when lit, were showing up pinkish. To the human eye, however, an electrified quartz element glows orange. At first, I thought the camera was picking up ultraviolet light. So on Saturday, I did some testing…

[invheater.jpg]
(horiz.-sharpened to reduce motion blur)
The above image shows not only the pinkish cast, but also that the camera saw the rods visibly lit while my eyes did not; the heater was just starting up. The camera is sensitive to light in ways the human eye is not.

In further testing, I used my glasses as a filter (they have a UV coating). (I haven’t been wearing them, in part because of a vein above my right ear that gets pinched. Not wearing my glasses may have contributed to my vision loss…)

[image]

It would seem UV pickup was a problem, because the light coming through the lenses is darker. But upon wearing the glasses to view the new TV, the colors came out better. Now, how could that be? It turns out the glasses block some of the violet spectrum! No wonder I couldn’t fix the colors on the screen no matter the settings: the screen’s blue filter is letting through violet light! Maybe that part of the spectrum wasn’t anticipated with the LED backlighting…

Back to the camera: curious about what I was dealing with, I looked up what I could find on the camera’s spectral sensitivities, and how close they are (or would be) to human vision. Guess what I found?

First, I couldn’t find the exact camera model for the phone, but it’s a Sony, apparently. The smartphone is a Motorola Luge (2014 model), which is pretty much an enhanced version of the Droid Razr M (XT907), a 2012 model.

Second, spectral sensitivity differs from camera to camera. Some are better than others, of course. You can download the 2012 PDF I found, detailing the characteristics of several kinds of cameras, here. (NOTE: 9.6 MB!)

Third, not only do digital cameras fail to pick up UV light (to any significant degree, anyway), they also fall off within the violet spectrum. So despite color-correction software (also present in modern digital cameras), any given camera may not know blue from violet, or may not pick up violet at all. (Try shooting scenes under black light.) I know my camera doesn’t render violet as violet when viewing the TV: blue only.
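A toy comparison makes that violet falloff concrete. Both response curves below are invented triangular approximations (real curves are smooth and measured), but they capture the mismatch: the eye’s S cones peak near 420 nm and keep responding well below it, while a typical camera blue channel has largely fallen off by then:

```python
def sensor_blue_response(wavelength_nm):
    """Hypothetical camera blue channel: peak near 460 nm, hard cutoff at 420 nm."""
    if wavelength_nm < 420 or wavelength_nm > 520:
        return 0.0
    return 1.0 - abs(wavelength_nm - 460) / 60

def s_cone_response(wavelength_nm):
    """Rough S-cone shape: peak near 420 nm, with a tail into the violet."""
    if wavelength_nm < 380 or wavelength_nm > 520:
        return 0.0
    return max(0.0, 1.0 - abs(wavelength_nm - 420) / 70)

# At 400 nm (violet), the eye still responds but this sensor records nothing:
for wl in (400, 420, 460):
    print(wl, round(sensor_blue_response(wl), 2), round(s_cone_response(wl), 2))
```

With numbers like these, violet light either vanishes or gets lumped in with blue, which matches what the TV test showed.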

Lastly, the human eye does not see color in RGB. And this is important: no camera that captures RGB the way digital sensors do will see what we see, no matter the auto-correction software. Here’s why, and what I’ve come to understand about the human eye.

Cone cells (so named because their tops are physically cone-shaped) do not respond to RGB wavelength ranges, but to short, medium, and long wavelength ranges, peaking roughly in the violet-blue, green, and yellow-green. Rod cells, which account for the majority of low-light vision, have a narrower range. (Because rods interfere with color processing, they’re effectively deactivated at normal light levels.)

[image]
(R denotes rod sensitivity range)
SVG source: Wikipedia
The cone ranges overlap a great deal, and it’s in the subtraction of these natural overlaps that color arises: the cone signals are combined in an antagonistic (opponent) fashion to produce a relative color point. At the short end, we get violet thanks to a secondary response of the long-range (L) cones. That secondary bump is clipped in the graph above, but you can see L start to rise again toward the left.
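The opponent idea can be put in toy numerical form. The weights below are illustrative only, not physiological values; the point is that perceived color is carried by differences between cone signals rather than by the raw L, M, S responses, which is nothing like summing independent R, G, B channels:

```python
def opponent_channels(l, m, s):
    """Toy opponent-process model (illustrative weights, not physiological).

    l, m, s: normalized cone responses in [0, 1].
    Returns (luminance, red_green, blue_yellow) opponent signals.
    """
    luminance = l + m                # brightness carried mostly by L + M
    red_green = l - m                # positive = reddish, negative = greenish
    blue_yellow = s - (l + m) / 2    # positive = bluish, negative = yellowish
    return (luminance, red_green, blue_yellow)

# Monochromatic "yellow" light excites L and M about equally, S barely,
# so the blue-yellow channel swings strongly negative (toward yellow):
print(opponent_channels(0.8, 0.7, 0.05))
```

Because the channels are differences, two lights with different raw cone responses can land on the same color point, and small shifts in one cone signal change the perceived hue disproportionately.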

To produce one of the tens of millions of colors we can see in each roughly 1/50-second window of exposure, it takes hundreds to thousands of photon hits. Much of this dense signal processing is performed in the retina itself; the visual cortex then turns the eye’s signals into perceived colors.

In all, despite our advancements in technology, human vision is still far better than the conventional cameras we’ve made. At the very least, rod cells can each respond to a single photon, and they populate the eye to the count of roughly 125 million. Only modern night-vision gear has improved on our night vision, albeit at lower resolutions.

…Well, that’s my 900-word two cents. I now know that no digital camera I’m aware of will do photography justice, not just my own.

Thanks for reading, and if I’ve gotten anything wrong, feel free to tell me.

12 thoughts on “Color Sense & Sensibility”

  1. Fascinating! What an interesting post! I just put up a photo today from 2010, old Olympus Camera. Not nearly as many pixels as todays, but I still like it…..

  2. I don’t know anything much about photography and yet I still keep taking photos. I do know that human eyes also vary in their ability to “see” colors some seeing many more than others. Interesting article, thanks for putting all that together for us.

    1. Human vision is capable of perceiving over 600 levels per step. And although the blue range is the weakest, the total number of recognized colors may exceed 20 million, beyond the 10 million cited in the texts referenced by Wikipedia. The colors we are capable of seeing go beyond what we can name, and definitely beyond 24-bit “True Color.”

      1. I helped my sister study the human eye when she was learning it for a future of assisting an eye surgeon. It’s amazing stuff.

  3. I understood that our eyes interpret colors differently, but didn’t understand how. And I guess a very valid case can be made for “color correcting” a photo, especially in the photos you’ve pointed out. This takes the explanation of my two photos with my friend and I on the train another step further. Of course ours were two different camera phones taking the photos seconds apart, but it illustrates similarities. Very interesting post and very informative.

    1. Color correction, though limited, also exists in smartphones. I’d think it’s not just an improvement in the lens but also in the software, comparing the iPhone 6 to the iPhone 4 (the two phone cameras you mentioned).
