Every digital camera takes blurry pictures. This is not because of any conspiracy between camera makers. It’s simple physics and good design. Here’s why:
Let’s say you have a really good lens with outstanding image quality. And let’s say you’re using that lens to photograph a pinpoint light source like a star on a really clear, still night. The lens won’t be able to focus the star to an infinitesimally small point, no matter how good it is. This is basic physics. Instead it will focus the star to a circle of some very small diameter, on the order of only a few microns for a good lens. For the sake of discussion let’s pick a number and say eight microns for our lens.

Now let’s say we stick a digital detector behind the lens and try to image the star. If the detector has pixels that are twelve microns across, the star will under-fill a pixel and will show up as a single pixel in the images. If the detector has pixels that are two microns across, the image of the star will fill multiple pixels, and it will be resolved as a small but blurry circle. The first case is called under-sampling: you’re not taking full advantage of the optical quality of the lens, and the resulting images may look somewhat jaggy. The second is called over-sampling: you’re trying to subdivide the light into too many pixels, and you wind up throwing half your resolution away because the optical quality of the lens isn’t up to the task.
In an ideal situation a camera’s pixel size should be about half of the finest detail the lens can resolve. It’s a balance between the two conditions described above: not sampled so finely that you run up against the optical limits of the lens, and not sampled so coarsely that details are lost inside a single pixel. The result is a fully resolved, but slightly fuzzy-looking image. This balance of how finely to sample an analog signal was formalized in the Nyquist-Shannon sampling theorem. The theorem says that if your sampling frequency is at least twice the highest frequency in your analog data, you can fully reconstruct the original signal. Put in photographic terms, you want your pixels to be about half the size of the smallest feature your optics can produce.
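The rule of thumb above can be sketched as a toy function. The thresholds and the category names are my own illustration of the article’s examples, not a formal optical criterion:

```python
def sampling_regime(spot_microns, pixel_microns):
    """Classify sampling given the lens's blur-spot size and the pixel size.

    Rule of thumb: pixels should be about half the size of the smallest
    spot the lens can produce (the Nyquist criterion, loosely applied).
    """
    ideal = spot_microns / 2          # Nyquist-ish pixel size
    if pixel_microns > spot_microns:  # spot fits inside one pixel
        return "under-sampled"
    if pixel_microns < ideal:         # more pixels than the optics can use
        return "over-sampled"
    return "well sampled"

# The article's numbers: an 8-micron blur spot on 12-, 2-, and 4-micron pixels.
print(sampling_regime(8, 12))  # under-sampled
print(sampling_regime(8, 2))   # over-sampled
print(sampling_regime(8, 4))   # well sampled
```

With the eight-micron spot from the earlier example, twelve-micron pixels under-sample, two-micron pixels over-sample, and four-micron pixels sit near the Nyquist sweet spot.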
Here’s how this usually works out in practice: a camera manufacturer takes a detector with a given pixel size and wants to build a camera around it. The specifications are handed to their optical designers: pixel size, desired focal length, desired maximum and minimum apertures, and so on. The optical designers design a lens for that detector and hand it to the mechanical designers, who build a camera body that brings the image the lens produces to a good focus on the detector. A camera built this way produces fully resolved, but slightly fuzzy-looking images. From the standpoint of sampling theory, this is ideal.
But from the standpoint of graphic design, it’s not. A Nyquist-sampled image may preserve as much of the original analog signal as possible, but it does so at the cost of not having any truly sharp edges. The images often lack that sharp, snappy look we associate with a really good picture. When you zoom in to the pixel level they wind up looking a little soft.
Because of this, one of the first things people like to do when bringing an image fresh off a camera into a program like Photoshop is to sharpen it up a little. Make it a little more snappy. There’s nothing wrong with this, but it needs to be done with some care. Over-doing the sharpening can result in an image that looks artificial, or just plain bad. One of the better tools for the job is the Unsharp Mask tool.
One question I hear frequently is, “If the tool sharpens the image, why is it called unsharp mask?” The reason is that the tool doesn’t increase the sharpness of the image. It decreases the unsharpness through the use of a slightly out of focus, or unsharp image. Here’s the idea behind it:
Every image has some fuzziness to it. In the film world it comes from basic physics: no lens is perfect, apertures cause diffraction, and so on. In the digital world you add Nyquist sampling to the equation. Either way, the fuzziness is there. So if you can subtract the fuzziness from the image, the sharpest parts should be what’s left. The trick is to make an unsharp image, or mask, to subtract.
In the digital world this is fairly straightforward. You take the original image, blur it out to some degree, and then subtract some percentage of that blurry image from the original. In the Unsharp Mask tool in Photoshop there are two sliders, Radius and Amount. These set how blurry your unsharp mask is, and how much of that is subtracted from the original image. From the previous description of Nyquist sampling, it should be apparent that the Radius needs to match the fuzziness of the image. It’s not arbitrary. In perfect Nyquist sampling, that radius should be close to 0.5 pixels. Designs rarely work out perfectly, though, so your camera’s numbers may vary. Likewise the amount to subtract is not arbitrary, and should be matched to the detector, lens, and aperture used. With both sliders, some experimentation is required.
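The blur-then-subtract procedure can be sketched in a few lines of plain Python, here on a single row of grayscale values, with a simple box blur standing in for Photoshop’s Gaussian. This is a toy illustration of the idea, not Photoshop’s actual algorithm, and the default radius and amount are placeholder values:

```python
def unsharp_mask(pixels, radius=1, amount=0.6):
    """Sharpen a row of grayscale values (0-255) with an unsharp mask.

    radius sets how blurry the mask is; amount sets how much of the
    difference between original and mask is added back.
    """
    n = len(pixels)
    # Build the unsharp mask: a box blur of width 2*radius + 1.
    blurred = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        blurred.append(sum(pixels[lo:hi]) / (hi - lo))
    # Subtract the mask from the original, scaled by amount, then clip.
    # sharpened = original + amount * (original - mask)
    return [min(255.0, max(0.0, p + amount * (p - b)))
            for p, b in zip(pixels, blurred)]

# A soft edge: dark values stepping up to bright ones.
row = [50.0] * 5 + [200.0] * 5
print(unsharp_mask(row))
```

Run on a step edge like this, the flat regions come out untouched while the pixels on either side of the edge overshoot (darker on the dark side, brighter on the bright side), which is exactly the exaggerated local contrast our eyes read as sharpness.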
I’ve continued to mention film in this article because unsharp masking is not strictly a digital tool. Like so many of the tools in Photoshop, unsharp mask has its origins in the film world. It’s a technique I’ve never used in the darkroom myself, but I’ve known photographers who have. By far the best description I’ve found is this article on unsharp masking by Alistair Inglis. It’s worth reading through his article even if you never intend to set foot in a darkroom. It will give you a better idea of why this technique works, and how it is being done in software. It’s interesting to see how much of the article focuses on keeping the original negative and the unsharp mask in perfect registration. This, of course, is not of great concern in the digital world where you can specify precisely where a given pixel will go. But in the world of film the ability to keep two or more images in perfect registration can make or break any number of techniques that have been developed over the years. It’s a fascinating article, and good food for thought.