Each type of sensor (CCD or CMOS) has a rectangular grid
(the size of which determines the resolution) of photosites
(called picture elements, or pixels) that convert light to
electricity
CMOS sensors have an amplifier at each pixel; in a CCD, the
charge is shifted across the chip and converted to voltage by
a small number of shared amplifiers
Tradeoffs
Noise:
CMOS sensors are more susceptible to noise
Light Sensitivity:
CMOS sensors have lower light sensitivity (because the
per-pixel amplifier circuitry occupies part of each photosite,
so some of the light hits transistors instead of the photodetector)
Power Consumption:
CMOS sensors consume less power
Measuring Color
Using Layered Sensors (e.g., Foveon X3):
The wavelength-dependent absorption properties of the silicon
are used to separate the light among the three stacked layers,
each of which measures a different color
Using a Beam Splitter (e.g., 3CCD):
The beam splitter creates three paths for the light
Each path is directed to a different sensor with
a different filter (i.e., red, green, or blue)
Using a Rotating Filter:
Red, green, and blue filters are rotated in front of a
single sensor in quick succession
Using a Filter Array:
A fixed filter grid is placed in front of a single
sensor so that each photosite measures a different color
The Bayer Filter
Visualization: (figure of the Bayer pattern omitted)
Properties:
Has as many green pixels as red and blue combined
(because the eye is more sensitive to green than to red or blue)
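The counting property can be checked on a small sketch of the pattern (a hypothetical 4x4 tile; the actual Bayer mosaic repeats a 2x2 cell of green/red over blue/green):

```python
# A 4x4 tile of the Bayer pattern: rows alternate G/R and B/G filters.
pattern = [
    ["G", "R", "G", "R"],
    ["B", "G", "B", "G"],
    ["G", "R", "G", "R"],
    ["B", "G", "B", "G"],
]

# Count how many photosites measure each color.
counts = {}
for row in pattern:
    for color in row:
        counts[color] = counts.get(color, 0) + 1

print(counts)  # green sites equal red and blue sites combined
```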
A Simple Demosaicing Algorithm:
At each photosite, one color is measured exactly
The amounts of the other two colors are estimated by
interpolating the neighboring photosites that measured those colors
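A minimal sketch of that interpolation step, assuming the raw mosaic is stored as a map from photosite coordinates to a (measured color, value) pair (the data layout here is illustrative, not a standard format):

```python
def interpolate(raw, r, c, want):
    """Estimate channel `want` at photosite (r, c) by averaging the
    neighboring photosites that actually measured that channel.
    Assumes at least one matching neighbor exists."""
    neighbors = [(r + dr, c + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    samples = [raw[p][1] for p in neighbors
               if p in raw and raw[p][0] == want]
    return sum(samples) / len(samples)

# A 2x2 corner of a Bayer mosaic: (color measured, value recorded).
raw = {(0, 0): ("G", 100), (0, 1): ("R", 200),
       (1, 0): ("B", 40),  (1, 1): ("G", 120)}

# The green site at (0, 0) gets its red and blue values from neighbors.
red_at_00 = interpolate(raw, 0, 0, "R")
blue_at_00 = interpolate(raw, 0, 0, "B")
```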
Storage
In Raw Format:
Each pixel requires storage for its red, green, and blue
components
Representation of Each Component:
Typically one byte (256 different values)
Total Storage:
The resolution times three (e.g., a 10 megapixel camera would
require 30 megabytes for an image)
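The arithmetic behind the example, spelled out:

```python
# Back-of-the-envelope raw storage cost, assuming one byte per
# color component (as above).
pixels = 10_000_000              # a 10 megapixel sensor
bytes_per_pixel = 3              # one byte each for red, green, and blue
total_bytes = pixels * bytes_per_pixel
total_megabytes = total_bytes // 1_000_000
print(total_megabytes)           # 30
```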
Compression
The Idea:
Reduce the amount of space required to store an image
Approaches:
Lossless (e.g., .png)
Lossy (e.g., most JPEG algorithms)
A Simple Lossless Algorithm:
Run Length Encoding - replace a sequence of pixels of
the same color with the color and the number of times it
repeats
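A sketch of run-length encoding over a row of pixel values; the (value, count) pair representation used here is one common choice, not the only one:

```python
def rle_encode(pixels):
    """Replace each run of identical values with a (value, count) pair."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([p, 1])      # start a new run
    return [tuple(run) for run in runs]

def rle_decode(runs):
    """Invert the encoding: expand each (value, count) pair."""
    return [value for value, count in runs for _ in range(count)]

# A row with long runs of the same color compresses well.
row = ["w", "w", "w", "w", "b", "b", "w"]
runs = rle_encode(row)
print(runs)  # [('w', 4), ('b', 2), ('w', 1)]
```

Note that for data with few repeats (e.g., a noisy photograph), the encoded form can be larger than the original, which is one reason photographic formats use more sophisticated lossy schemes.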