The View Up Here

Random scribblings about kites, photography, machining, and anything else


Posts Tagged ‘Photoshop’

Panorama Workflow

Posted by Tom Benedict on 23/07/2010

I’ve had some opportunities to play with my camera on the ground as well as in the air, and to test a number of image sets on the software I’ve been using.  Two days ago my wife and I took our kids to Pololu Valley to go hiking.  On the off-chance the weather would be nice, I brought my KAP gear.  The weather was fantastic, with solid winds for kite flying, and beautiful partly-cloudy skies.  Time to play!

I ran about 5GB of images through the camera from various vantage points.  In creating the base images I tried to incorporate everything I had learned from the earlier experiments.  The resulting photographs turned out quite well, so I’m considering the new workflow to be a win.  I’m sharing it here in the hopes that someone else doing kite aerial photography will give it a try and take it even further.  Here are the details:

  • If you can shoot RAW, shoot RAW.  I can’t, but in the near future I’ll be able to.
  • Use Manual Exposure mode on your camera.  Set it on the ground, check it, and double-check the histograms to make sure you’re getting bullseye exposures.
  • Use at least 1/1000 second exposure speed.  I’m using 1/1250.
  • Use the slowest ISO setting you can to control noise.  This is of less concern with a DSLR, but every bit helps.  I made this set at ISO 80.
  • Use the sweet-spot aperture on your lens if possible.  My lens is sharpest around f/4 to f/5.  I couldn’t hold that aperture along with the other settings, so I’m shooting at a wider aperture than ideal.  But the lower noise at ISO 80 makes this a reasonable trade: I give up some sharpness for lower noise, and keep the fast shutter speed to avoid blur.
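One way to sanity-check a manual exposure like the one in the list above is to convert it to an exposure value and compare against a known benchmark like Sunny 16.  This is a sketch, not part of the original workflow; the f/4 aperture is an assumption based on the lens sweet spot mentioned above:

```python
import math

def exposure_value(aperture, shutter_s, iso=100):
    """Scene exposure value referenced to ISO 100.

    EV = log2(N^2 / t), shifted so that a faster ISO (which needs less
    light for the same settings) reports the same scene brightness.
    """
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

# Assumed settings from the list above: roughly f/4, 1/1250 s, ISO 80.
print(round(exposure_value(4.0, 1 / 1250, iso=80), 1))   # ~14.6
# Sunny 16 at ISO 100 (f/16, 1/125 s) lands near EV 15 for comparison:
print(round(exposure_value(16.0, 1 / 125), 1))
```

If the number you compute is wildly off from what the scene should meter at, that’s a hint to re-check the histograms before sending the camera up.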

All of my panoramas are made with the camera vertical (portrait orientation).  With a KAP rig this means either building the rig around a vertical camera (the Brooxes BEAK rig), using an L-bracket on a conventional rig, or having a dedicated Horizontal/Vertical axis on the rig.  I recently modified my rig to add a HoVer axis, so that’s the route I went.

The idea behind this technique, developed by a French KAPer who goes by the name Vertigo on the KAP forums, is to start the rig on a slow spin and trigger the shutter continuously.  With a sufficiently fast shutter speed, this works perfectly.  My A650IS makes one frame every 1.1 seconds; at a rotation rate of ten seconds per revolution, that works out just about perfectly.  I’m upgrading to a Canon EOS T2i DSLR in the near future, which has a much faster frame rate.  I’m planning to build an electronic release cable for that camera that will give me the same roughly one-frame-per-second rate my A650IS has, so I can continue to use this technique.

  • Start the rig rotating at a rate that gives you adequate overlap between images, and minimizes motion blur from the rotation, given the camera’s shutter speed.
  • Once the camera is rotating cleanly (no see-sawing on rotation, no jerkiness in the pan axis, no swinging around, etc.) trip the shutter.
  • Make at least two complete orbits of the camera, tripping the shutter non-stop the entire time.  This is for a few reasons:  First, it gives you plenty of frames to choose from in case one is blurry.  Next, it gives you a range of random tilt angles that you can use to fill in gaps later on.  Finally, if the rig starts to move, the second orbit will still produce a clean panorama.
  • If you want to make a larger panorama, change the tilt after two orbits and make two more orbits at the new tilt value.
  • While all of this is going on, do everything you can to minimize camera motion.
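The numbers behind the rotation step above can be worked out in advance.  Here’s a rough sketch; the 45° horizontal field of view is an assumption for a compact camera held vertical, not a figure from the original post:

```python
def orbit_numbers(fov_deg, rev_period_s, frame_interval_s, shutter_s):
    """Per-frame rotation step, frame-to-frame overlap, and the angle the
    camera sweeps during a single exposure (rotational motion blur)."""
    deg_per_s = 360.0 / rev_period_s
    step = deg_per_s * frame_interval_s           # rotation between frames
    overlap = (fov_deg - step) / fov_deg          # fraction shared with next frame
    blur = deg_per_s * shutter_s                  # degrees swept while shutter is open
    return step, overlap, blur

# Assumed: ~45 deg horizontal FOV (camera vertical), 10 s per revolution,
# one frame every 1.1 s, 1/1250 s shutter -- the numbers from the text.
step, overlap, blur = orbit_numbers(45.0, 10.0, 1.1, 1 / 1250)
print(f"{step:.1f} deg/frame, {overlap:.0%} overlap, {blur:.3f} deg of blur")
```

The overlap per orbit is thin by stitching standards, which is another argument for the second orbit: its frames land at different angles and fill in the seams.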

This should produce a nice set of images from which to work.  You may well end up using them all, so don’t toss any of them!

I use Autopano Pro for stitching.  Some of the tricks I’ve picked up will apply to other packages.  But if you find yourself scratching your head and thinking, “No, I’ve never seen that,” don’t sweat it.  Your software is different.  Skip that part.

One of the first problems I ran into is that Autopano Pro deals really well with point features, but not very well at all with linear features.  For example, it’ll match up individual stones on a beach like a champ, but it will produce lousy horizons if the horizon is just water and sky.  It makes no effort whatsoever to correct for lens distortions if the bulk of the picture is water and sky.

The fix I found was to use PTLens to correct lens distortions before using Autopano Pro.  PTLens is a $25 plug-in for Photoshop.  Even better, it’ll run as a stand-alone program and will batch process hundreds or even thousands of images at once.  If you’ve got a block of images you photographed as fodder for panorama stitching software, it’s no problem at all to batch process them all to remove lens distortions.  Water horizons should now be ramrod-straight lines across the frame.
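What PTLens does under the hood is fit a polynomial radial model to each lens and remap every pixel.  Here is a minimal sketch of that kind of remap for a single point; the coefficient value is illustrative, not a real PTLens database entry, and sign conventions for the coefficients vary between tools:

```python
def correct_radial(x, y, k1, k2=0.0):
    """Apply a Brown-style polynomial radial remap to one point.

    x, y are normalized coordinates measured from the image center.
    Barrel distortion pulls points toward the center, so undoing it
    pushes them back out -- here that corresponds to k1 > 0.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A horizon line near the top of a barrel-distorted frame bows toward the
# center; after correction the edge points move back out and the line is
# straight again -- which is exactly what the stitcher needs.
for x in (-0.6, 0.0, 0.6):
    print(correct_radial(x, 0.4, k1=0.08))
```

The center of the frame is untouched (r² = 0), which is why distortion errors always show up worst at the edges, right where stitching software needs the most help.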

So back to the process:

  • Run the entire image set through PTLens to remove barrel distortion, vignetting, and chromatic aberrations, but nothing else.
  • Process the images with Autopano Pro, or the panorama software of your choice.
  • Do everything you can to get completely horizontal, completely straight horizons for water.  Nothing kills a pano faster than a grossly errant horizon.
  • Save as 16-bit TIFF images.  16-bit workflow can be a real PITA, especially on a smaller machine, but it hides a lot of ills when it comes to large-scale processing like Levels and Curves.

At this point I open up the images in Photoshop.  I’m still using Photoshop 7.  I’ll upgrade to CS5 as soon as I can afford it.  But for now it still does everything I need.  Want is a whole ‘nuther story, but as far as my needs go, it’s fine.

  • View 100% and check for stitching errors.  Repair all of these with the rubber stamp or heal tools.
  • If your kite line shows up in the image, remove it using the same tools.
  • If you cropped your panorama wide enough to have gaps in ground or sky, open up all the images that went into the panorama, as well as the second orbit you made from that same location.  Use the rubber stamp tool to pull patches from any and all of the input images to repair problems on the panorama.  (This is one of the best reasons to make a second orbit!)  Since you used a fixed exposure, you should be able to rubber-stamp these into the panorama with no changes necessary.
  • Once the panorama is defect-free, look at your levels.  If you did your job setting the manual exposure on the ground, the exposure should be dead-nuts on, or need very little tweaking.
  • Do all your dodging and burning at this point to get the exposure just the way you want.  This can involve lots and lots of time, depending on how meticulous you are with your exposures.  If you’re the kind of person who got into photography in the days of film, and spent your afternoons in the positive darkroom dodging and burning the same negative over and over and over, you may be on this step for a while…

At this point the bulk of the workflow is complete.  But I would advise you not to stop here.  In Photoshop under the File menu is a command called File Info.  Click it.  It lets you edit the header information associated with your image.  At the very least I would fill out:

  • Title – The name of the original file on your computer.  Leave out the extension, since that can change without changing the image.
  • Author – Your name.  You’re the author of your image.
  • Caption – Describe the photograph clearly and concisely, and include enough information so that you could read it and know where on the planet you were when you made the photograph.
  • Copyright Status – Change this to “Copyrighted Work”.  The moment you tripped the shutter, your photograph was a copyrighted work.  Not marking this just sets you up for someone to use your photograph without your knowledge.  If you choose to license your photographs under the Creative Commons license, of course, you should set this appropriately.
  • Copyright Notice – Mine reads: Copyright © Tom Benedict
  • Date Created – The date you tripped the shutter on your camera to make the photographs that went into this image.
  • City / State / Province / Country – Fill them in.
  • Source – Give yourself some hints here.  Is it a straight shot?  Digital?  Film?  Stitched?  My digital panoramas are all marked “Digital-Stitched”.

The neat thing is that most of the photo sharing sites on the Internet will automagically read your header information and fill in their own forms for you.  You may still want to provide more information than this, but the base information will be there.

The even neater thing is that in the event someone downloads your photograph and puts it on their own site without your knowledge, your header information is indexed by most search engines.  Even better, when you challenge them and they claim the photograph is an “orphaned work”, you can demonstrate that they did not make an honest effort to find the photographer and ask for permission, since your info is all right there with the image.

So that’s it in a nutshell.  How well does it work?  See for yourself:

Pololu Valley Wetlands 2

– Tom


Posted in Kite Aerial Photography, Photography, Software

Nyquist Sampling and the Need for Unsharp-Masking

Posted by Tom Benedict on 31/05/2009

Every digital camera takes blurry pictures.  This is not because of any conspiracy between camera makers.  It’s simple physics and good design.  Here’s why:

Let’s say you have a really good lens with outstanding image quality, and you’re using that lens to photograph a pinpoint light source like a star on a really clear, still night.  No matter how good the lens is, it won’t be able to focus the star to an infinitesimally small point.  This is basic physics.  Instead it will focus the star to a circle of some very small diameter, on the order of only a few microns for a good lens.  For the sake of discussion let’s pick a number and say eight microns for our lens.  Now let’s stick a digital detector behind the lens and try to image the star.  If the detector has pixels that are twelve microns across, the star will under-fill a pixel and will show up as a single pixel in the images.  If the detector has pixels that are two microns across, the image of the star will fill multiple pixels, and it will be resolved as a small but blurry circle.

The first case is called under-sampling: you’re not taking full advantage of the optical quality of the lens, and the resulting images may look somewhat jaggy.  The second is called over-sampling: you’re trying to subdivide the light into too many pixels, and you wind up having to toss half your resolution away because the optical quality of the lens isn’t up to the task.

In an ideal situation a camera’s pixel size should be about half of the finest detail the lens can resolve.  It’s a balance between the two conditions described above.  It’s not so finely sampled that you run into the optical quality of the lens, and it’s not so coarsely sampled that details are lost inside a single pixel.  The result is a fully resolved, but slightly fuzzy looking image. This balance of how finely to sample an analog signal was formalized in the Nyquist-Shannon Sampling Theorem.  The theorem says that if your sampling frequency is twice the highest frequency in your analog data, you can fully reconstruct the original signal.  Put in photographic terms, you want your pixels to be about half the size of the smallest feature your optics can produce.
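The Nyquist criterion above boils down to a simple ratio of lens spot size to pixel pitch.  As a sketch, using the eight-micron spot from the example:

```python
def sampling_regime(spot_um, pixel_um):
    """Classify sampling by comparing the lens spot size to the pixel pitch.

    Nyquist wants roughly two pixels across the finest detail the lens
    can deliver (sampling frequency twice the highest signal frequency).
    """
    ratio = spot_um / pixel_um
    if ratio < 2.0:
        return "under-sampled (star fits in one pixel; images look jaggy)"
    if ratio > 2.0:
        return "over-sampled (lens is the limit; resolution is wasted)"
    return "Nyquist sampled"

print(sampling_regime(8.0, 12.0))   # the 12-micron-pixel case above
print(sampling_regime(8.0, 2.0))    # the 2-micron-pixel case above
print(sampling_regime(8.0, 4.0))    # the balanced design
```

The 4-micron pixel is the design point the next paragraph describes: fully resolved, but inherently a little soft.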

How this usually works out in practice is this:  A camera manufacturer will take a detector with a given pixel size, and want to build a camera around it.  The specifications are then handed to their optical designers: pixel size, desired focal length, desired maximum and minimum apertures, etc.  The optical designers then design a lens for that detector and hand it to the mechanical designers so they can build a camera body around the lens and detector that will bring the image the lens produces to a good focus on the detector.  A camera built this way produces fully resolved, but slightly fuzzy looking images.  From the standpoint of sampling theory, this is ideal.

But from the standpoint of graphic design, it’s not.  A Nyquist sampled image may have preserved as much of the original analog signal as possible, but it does so at the cost of not having any truly sharp edges in the image.  The images often lack that sharp snappy look that we associate with a really good picture.  When you zoom in to the pixel level they wind up looking a little soft.

Because of this, one of the first things people like to do when bringing an image fresh off a camera into a program like Photoshop is to sharpen it up a little.  Make it a little more snappy.  There’s nothing wrong with this, but it needs to be done with some care.  Over-doing the sharpening can result in an image that looks artificial, or just plain bad.  One of the better tools for this is the Unsharp Mask tool.

One question I hear frequently is, “If the tool sharpens the image, why is it called unsharp mask?”  The reason is that the tool doesn’t increase the sharpness of the image.  It decreases the unsharpness through the use of a slightly out of focus, or unsharp image.  Here’s the idea behind it:

Every image has some fuzziness to it.  In the film world it comes from basic physics: no lens is perfect, apertures cause diffraction, etc.  In the digital world you add Nyquist sampling to the equation.  The result, either way, is that every image has some fuzziness to it.  So if you can subtract the fuzziness from the image, the sharpest parts should be what’s left.  The trick is to make an unsharp image, or mask, to subtract.

In the digital world this is fairly straightforward.  You take the original image, blur it out to some degree, and then subtract some percentage of that blurry image from the original.  In the Unsharp Mask tool in Photoshop there are two sliders, Radius and Amount.  These set how blurry your unsharp mask is, and how much of that is subtracted from the original image.  From the previous description of Nyquist sampling, it should be apparent that the Radius needs to match the fuzziness of the image.  It’s not arbitrary.  In perfect Nyquist sampling, that radius should be close to 0.5 pixels.  Designs rarely work out perfectly, though, so your camera’s numbers may vary.  Likewise the amount to subtract is not arbitrary, and should be matched to the detector, lens, and aperture used.  With both sliders, some experimentation is required.
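The procedure described above is short enough to sketch directly.  This is a minimal one-dimensional illustration, not Photoshop’s actual implementation (which blurs with a Gaussian rather than the crude box average used here); note that subtracting a fraction of the blurred copy is the same thing as adding that fraction of the difference between the original and the blur:

```python
def box_blur(signal, radius):
    """Crude blur: average each sample over a +/- radius window (edges clamp)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, radius, amount):
    """Sharpen by subtracting the unsharp (blurred) copy from the original:
    out = original + amount * (original - blurred)."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A soft edge, like the slightly fuzzy edges a Nyquist-sampled camera makes.
# Sharpening steepens it, with the slight over- and undershoot at either
# side of the edge that over-done unsharp masking exaggerates into halos.
edge = [0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0]
print([round(v, 2) for v in unsharp_mask(edge, radius=1, amount=0.8)])
```

Pushing the amount too high makes that overshoot visible as bright and dark halos along edges, which is exactly the “artificial” look the text warns about.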

I’ve continued to mention film in this article because unsharp masking is not strictly a digital tool.  Like so many of the tools in Photoshop, unsharp mask has its origins in the film world.  It’s a technique I’ve never used in the darkroom myself, but I’ve known photographers who have.  By far the best description I’ve found is this article on unsharp masking by Alistair Inglis.  It’s worth reading through his article even if you never intend to set foot in a darkroom.  It will give you a better idea of why this technique works, and how it is being done in software.  It’s interesting to see how much of the article focuses on keeping the original negative and the unsharp mask in perfect registration.  This, of course, is not of great concern in the digital world where you can specify precisely where a given pixel will go.  But in the world of film the ability to keep two or more images in perfect registration can make or break any number of techniques that have been developed over the years.  It’s a fascinating article, and good food for thought.

— Tom

Posted in Photography