|More at Microscopy|
If you look at the enlarged picture at the right, you see colored dots. These are the pixels – here in a drawing.
You must combine at least three pixels to get a “cell” with the full color information of a given point in the picture. Would circles line up like this naturally?
No – but we want to be square.
A natural, denser packing of such round pixels would produce a triangular structure; just try to place apples on a table.
To get optimal use of a surface, nature would use hexagons, like bees with their honeycombs or flies with their compound eyes. So why use round pixels? The reason here is the little lens on top of each, to gather rather than scatter the incoming light. More below.
And then, structurally: We want squares!
Why? We have to get light values (tiny little currents) from each pixel. So we connect the pixels to a matrix of wires, a grid of horizontal and vertical ones. We reach – we address – each pixel by its row and column, and don’t need a third set of diagonal wires.
This lets us stay square for the light sensing pixels.
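The saving from row-and-column addressing can be sketched with a toy calculation (my own illustration, not any real sensor’s readout circuit):

```python
# With a square grid, every pixel is reached by one row wire plus one
# column wire. A width x height sensor therefore needs width + height
# shared lines instead of one dedicated wire per pixel.

def wires_needed(width, height):
    per_pixel = width * height    # one dedicated wire for each pixel
    row_column = width + height   # shared row and column wires
    return per_pixel, row_column

dedicated, shared = wires_needed(4000, 3000)  # a 12-megapixel grid
# 12,000,000 dedicated wires collapse to just 7,000 shared ones.
```

That is why the grid wins: the wiring grows with the sum of the sides, not with their product.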
But: we must use four pixels for a cell, as we don’t want to waste space. You might use the fourth pixel to catch white light – just as ink-jet printers use three colors plus black. You form the pixels into little rectangles, but in principle you’ve got to address them four by four to “cell” them.
The picture comes from a proposed Fuji design here. There you can see another cell setup with 2 red, 2 blue and 5 green pixels to a cell.
The standard, though: when you’d like to be more compact and make do with just three types of pixels – green, red, and blue – you double the green ones.
Come back to Microscopy’s in-depth description and to this layout there: cells with a green bias of 2:1:1.
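That 2:1:1 green bias is the classic Bayer mosaic; a minimal sketch of its repeating tile (here the common RGGB arrangement, one of several possible orientations):

```python
from collections import Counter

def bayer_color(row, col):
    """Filter color at (row, col) in an RGGB Bayer mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Count the colors over one 2x2 cell:
cell = Counter(bayer_color(r, c) for r in range(2) for c in range(2))
# green appears twice, red and blue once each -> the 2:1:1 bias
```

Every 2×2 cell repeats across the whole sensor, so green always gets half the photosites.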
Incidentally: Bayer does not stand for the classic headache pills but for Kodak engineer Bryce Edward Bayer. You find more on Bayer and his invention in Wikipedia and in an obituary by the Telegraph. (Literally, Bayer stands for a Bavarian in German.) His patent is from 1976.
Look at another picture from the Microscope. It shows the pixels’ sensitivity (not the eye’s!) and gives a good explanation of why one red pixel suffices – and why digital pictures of red roses disappoint, as the red sensor takes them all.
There are more pixel setups, for example with the colors not side by side as in Bayer’s layout here, but one on top of the other, three-dimensionally. Look at the Foveon sensor as an example of this “thick” light sensing. And see an overview here.
Some sensors let the light go through and come back again – see backside illumination in an article about a Sony sensor here, with the interesting detail: sensor size 7.81 mm and pixel size 1.55 μm – divided, that offers placement for five million pixels.
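Taking those two numbers at face value, the bare division is easy to check; note that it only says how many pixel pitches fit along the 7.81 mm dimension – how that maps to the sensor’s total pixel count depends on the sensor’s geometry:

```python
# How many 1.55 um pixel pitches fit into 7.81 mm? Just the plain
# division from the article's two figures, nothing more.
sensor_size_m = 7.81e-3   # 7.81 mm
pixel_size_m = 1.55e-6    # 1.55 um
pitches = sensor_size_m / pixel_size_m
# about 5039 pixel widths fit along that one dimension
```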
Experts guess how Olympus makes 40 megapixels out of 16, and there are other rumours.
You can see a chip gallery – and understand even less having seen it.
Study the little round lenses on each pixel – I’d call them lenslets – and their shape.
A new type of picture is produced by light field cameras, elsewhere called plenoptic (from plenus, complete or full, and optic).
I conclude for myself:
• There are far more pixels than picture cells, usually four times as many. “Megapixels” are overstated.
• For picture quality, the number of pixels is just one factor, and not the most influential one. Pixel size and the technology of catching the light matter more.
• The manufacturers won’t tell or show you objectively what’s really there. The sensor is too small to look inside … – The official specifications of my camera just say: “1/2.3-type High Sensitivity MOS Sensor / Total Pixel Number 18.9 Megapixels / Primary Color Filter”. And I’m a writing type …
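Applying my first conclusion to that spec sheet – assuming, as argued above, that roughly four pixels feed one full-color cell – the back-of-the-envelope arithmetic looks like this:

```python
# Rough estimate only: total pixels divided by four pixels per cell.
total_megapixels = 18.9            # from my camera's official spec
cell_megapixels = total_megapixels / 4
# about 4.7 million full-color cells behind 18.9 "megapixels"
```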
|Print raster of a newspaper picture – “pixels” here, too. Retouched with Picasa in exactly one spot. Source.|
Next I might want to follow the picture from its raw status to JPEG. Imagine a landscape taken in “landscape” orientation having fewer bytes than when taken upright, or many more …
|turned right: 1420 kByte · Bonn, Kennedy bridge over the Rhine, original: 1390 kByte · turned left: 609 kByte|