The View Up Here

Random scribblings about kites, photography, machining, and anything else


Tools For Measuring Camera Stability – Part 3

Posted by Tom Benedict on 09/10/2011

Whenever questions of rig stability come up, one of the first suggestions that’s made is to shoot some video and watch it. If the rig is doing something that’s causing image blur, this is a great way to see it. Picavet rigs like to yaw back and forth. Pendulum rigs like to pitch fore and aft around an axis perpendicular to the kite line. Both kinds of suspension will sway side to side about the kite line if the kites have any side to side stability issues. The video makes it easy to spot all of these motions. But it can’t tell you how fast the rig is actually moving, or how many degrees of sway the rig is undergoing.

Or can it?

As it turns out it can. One of the best solutions for unstable video is to run it through a video stabilizer. My favorite is still Video Deshaker by Gunnar Thalin. It’s a filter that runs inside of VirtualDub, a free video editing program. Video Deshaker does a good job of taking out rotation in all three axes. But more important for our purposes, it creates a log file that records what those rotations are.

The Video Deshaker log file saves, on a frame-by-frame basis, the pitch and yaw in pixels, and the roll in degrees. If you know the focal length of your lens and the physical size of your detector, it’s possible to calculate a pixels-per-degree coefficient and convert pitch and yaw to degrees as well. Even sampling at 30 frames per second, this provides a rich data set from which to analyze camera motion. 60 frames per second is even better. A thousand frames is enough to do some real statistics on, and it takes less than a minute to acquire all the data you need.
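A minimal sketch of that conversion, assuming a rectilinear lens (the function name and example numbers here are mine, purely illustrative):

```python
import math

def pixels_per_degree(focal_length_mm, sensor_width_mm, image_width_px):
    """Average horizontal pixels per degree for a rectilinear lens.

    The horizontal field of view is 2*atan(sensor_width / (2*f));
    dividing the image width by that angle gives a pixels-per-degree
    coefficient for converting Deshaker's pitch and yaw to degrees.
    """
    fov_deg = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))
    return image_width_px / fov_deg

# Example: a 6.2 mm wide sensor behind a 6 mm lens, 4000 px across
coeff = pixels_per_degree(6.0, 6.2, 4000)
pitch_deg = 12.5 / coeff  # convert a 12.5-pixel pitch shift to degrees
```

Strictly speaking the plate scale of a rectilinear lens varies across the frame, so this average coefficient is most accurate near the image center, which is fine for rig-motion work.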

The Video Deshaker web page describes the format of the log file. The log file is a comma delimited ASCII file, which can be read into just about any spreadsheet or mathematical analysis package. For this example I’m using Excel, but most of these statistics could be done with Open Office, Matlab, MathCAD, IDL, or even simple UNIX command-line tools like awk. (I know… I’ve done it.) The only real trick is getting the data into the program of your choice. Once it’s there, it’s time to play.
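For anyone working outside a spreadsheet, here is one possible way to pull the log in with Python. The column layout below is an assumption on my part; check the Deshaker page for the exact format of your version:

```python
import csv

def read_deshaker_log(path):
    """Read a Deshaker log into per-frame lists of motion values.

    Assumes one frame per row, with the global motion parameters in
    columns 1-3 after the frame number: pan x, pan y, rotation.
    Verify this ordering against the Deshaker documentation.
    """
    yaw_px, pitch_px, roll_deg = [], [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 5:
                continue  # skip blank or malformed lines
            yaw_px.append(float(row[1]))    # horizontal shift, pixels
            pitch_px.append(float(row[2]))  # vertical shift, pixels
            roll_deg.append(float(row[3]))  # rotation, degrees
    return yaw_px, pitch_px, roll_deg
```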

There are a couple of statistics of interest:

The first is the maximum absolute value of motion in each axis. This gives you your maximum rate of change during the entire session. Be careful, though. I found that the act of holding my camera and pushing the button to start the video often produces the highest velocities the camera sees. I chop off the first ten and last ten seconds of the video before processing to avoid this.
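That trim-then-take-the-max step is easy to script. A sketch, with illustrative data:

```python
def max_abs_rate(rates, trim_frames):
    """Maximum absolute per-frame rotation, after dropping the frames
    at the start and end of the clip where handling dominates."""
    core = rates[trim_frames:len(rates) - trim_frames]
    return max(abs(r) for r in core)

fps = 30
trim = 10 * fps  # ten seconds' worth of frames at each end
```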

The next is the mean absolute value of motion in each axis. This gives you an idea of what your typical rig motion looks like. With both of these it’s important to take the absolute value of your rotation rates before calculating the mean. If your rig points in one direction for the duration, the raw average will come out close to zero, since the rig swings away and then swings back toward its original heading. Take the absolute value first, though, and the average gives you your typical angular speed.
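The order of operations matters, as a toy example shows (the numbers are made up; think of a rig swinging away and back):

```python
def mean_abs_rate(rates):
    """Mean absolute per-frame rotation - the typical angular speed.
    abs() comes first: averaging the signed rates of a rig that swings
    away and back gives roughly zero instead."""
    return sum(abs(r) for r in rates) / len(rates)

swing = [1.0, -1.0, 2.0, -2.0]   # away, back, away, back (deg/frame)
naive = sum(swing) / len(swing)  # 0.0 - looks perfectly stable
typical = mean_abs_rate(swing)   # 1.5 - the real average speed
```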

The third statistic you want is the standard deviation of the motion in each axis. This tells you how much the rates vary about their mean, and it provides a really good metric for how choppy a flight is. If the camera is moving all over the place, the standard deviation of the velocities is large, too.
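Python’s standard library covers this one directly (again with illustrative data):

```python
import statistics

def rate_spread(rates):
    """Population standard deviation of the rotation rates: one number
    summarizing how choppy the flight was."""
    return statistics.pstdev(rates)

smooth = rate_spread([0.2, 0.2, 0.2])         # 0.0 - steady flight
choppy = rate_spread([3.0, -3.0, 3.0, -3.0])  # 3.0 - rough air
```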

The fourth statistic of interest comes from integrating the angular rates to get the angular position as a function of time. This sounds awful, but all it means is that you pick a starting point and keep adding on the rotations frame by frame. The result is a table of values telling you where your camera was pointing as a function of time. You can run the same sorts of statistics on this set as you ran on the velocities: mean direction, standard deviation, etc. One particularly interesting one is the maximum and minimum in each axis, which tells you how far the camera swung.
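That running sum is nearly a one-liner in Python. The starting angle is arbitrary, since the rates only tell you relative motion:

```python
from itertools import accumulate

def integrate_rates(rates, start=0.0):
    """Angular position per frame: a running sum of the per-frame
    rotations, relative to an arbitrary starting angle."""
    return list(accumulate(rates, initial=start))

def total_excursion(angles):
    """Peak-to-peak swing in one axis (max minus min)."""
    return max(angles) - min(angles)
```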

Before going further, let’s take a closer look at that last one. One common misconception about camera stabilization is that it’s sufficient to make the angular excursions small: the camera won’t swing too far off its subject, even if its motion around the subject is quite rapid. This is patently false. In terms of image blur, it’s more important to keep the rates small, even if the camera wanders significantly off the subject. In the first case the composition will be consistent, but the images will all be blurry. In the second the composition will vary from image to image, but every frame will be sharper than in the first case. The worst case, of course, is a camera swinging wildly and rapidly in all three axes. In that case both the composition and the blur are unacceptable.

There’s one last bit of information that can be extracted from all this data. It’s also one of the most useful measurements you can have when looking at the stability of a dynamic system: the power spectral density, or PSD.

When you have a time sequence, as we have for the motion in each of the three axes of our camera, it’s possible to convert the data from the time domain to the frequency domain using a Fourier transform. The easiest way to do this is with a Fast Fourier Transform, or FFT. Most analysis packages include FFT routines, including Excel and, I believe, Open Office. Tools like Matlab and MathCAD include them as well, but since those aren’t common tools I won’t go into them.

One of the best tutorials I’ve seen for doing an FFT in Excel is Larry Klingenberg’s 2005 paper, “Frequency Domain Using Excel”. The FFT component of the spreadsheet I built for doing camera motion analysis was built using his instructions.

The result of an FFT on our data set is a new set of numbers telling you how much of the camera’s motion comes from each frequency. The range of frequencies covered by the FFT depends on how fast you take the data: it runs from zero (constant, unvarying motion) up to half the frame rate. At 24 frames per second you can measure frequencies from 0 to 12 Hz. At 30 frames per second, that range is 0 to 15 Hz. At 60 fps you can measure from 0 to 30 Hz.
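For readers working outside Excel, here is a minimal power-spectrum sketch in Python. It uses a direct DFT for clarity, which is fine for a few thousand frames; `numpy.fft.rfft` computes the same thing far faster. The frequency axis runs from zero up to half the frame rate:

```python
import cmath

def power_spectrum(samples, fps):
    """Single-sided power spectrum of a rate sequence via a direct DFT.
    Returns (frequencies in Hz, power), from 0 Hz up to fps/2."""
    n = len(samples)
    freqs, power = [], []
    for k in range(n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(samples))
        freqs.append(k * fps / n)
        power.append(abs(coeff) ** 2 / n)
    return freqs, power
```

Feed it one axis at a time; a sharp spike in the spectrum points at a specific oscillation (say, a Picavet yawing at half a Hertz) that you can then try to engineer away.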

In the next installment I’ll go through some test videos I made that demonstrate various aspects of this method, and start to apply it to a real KAP rig.

– Tom


Posted in Engineering, Kite Aerial Photography | 1 Comment »

Tools For Measuring Camera Stability – Part 2

Posted by Tom Benedict on 09/10/2011

In Part 1 of this series I introduced the idea that camera motion on a KAP rig is a measurable quantity, and that in order to understand the effects of changes being made to a KAP rig, this quantity should be evaluated before and after each change. Specifically we are interested in the rates of rotation about each of the three axes of the camera: pitch, yaw, and roll. Pitch is how much the camera tilts up and down. Yaw is how much it turns side to side. Roll is how much it turns about its optical axis. Before launching into the tools to measure these, let’s take a look at some simple solutions that almost everyone uses:

It’s possible to minimize the effect of motion in all three axes by using a faster shutter speed. Let’s say your rig is rotating at a rate of ten degrees per second. With a one second exposure, the rig rotates ten degrees. At a tenth of a second it rotates by a degree. At a hundredth of a second it’s a tenth of a degree, and at a thousandth of a second it rotates by only one one-hundredth of a degree. To give you an idea of what this means in terms of image blur, my camera has about 5000 pixels across the horizontal direction, and its field of view is about 90 degrees. So each pixel represents 90/5000, or 0.018 degrees. A rotation of 0.01 degrees during an exposure means motion blur of just over half a pixel. So a 1/1000 second exposure will save the shot, whereas a 1/100 second exposure would be hopelessly blurred.
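The arithmetic above generalizes to a small helper (the numbers match the example; the function name is mine):

```python
def blur_pixels(rate_deg_per_s, exposure_s, fov_deg, width_px):
    """Approximate motion blur in pixels: degrees rotated during the
    exposure, times the pixels-per-degree scale of the frame."""
    return rate_deg_per_s * exposure_s * (width_px / fov_deg)

# 10 deg/s rig, 90-degree lens, 5000 px wide frame:
fast = blur_pixels(10, 1 / 1000, 90, 5000)  # ~0.56 px - shot saved
slow = blur_pixels(10, 1 / 100, 90, 5000)   # ~5.6 px - badly smeared
```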

It’s possible to minimize the effects of pitch and yaw by using a wider lens. This is obvious to most photographers, so if you find yourself nodding, feel free to skip this paragraph. If you find yourself scratching your head, I hope this will help: Let’s say your camera is again rotating at ten degrees per second. If the lens on the camera gives you a ten degree field of view, then in one second your scene has shifted by the full width of your frame, or 5000 pixels in my case. Switch to a lens with a 100 degree field of view, and a one second exposure smears features across only a tenth of the frame, or about 500 pixels. It’s impractical to go much wider than this, but you get the idea. Wider lenses suffer less from rotation in pitch and yaw.
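The same effect, expressed as a fraction of the frame (values from the example above):

```python
def frame_fraction_smeared(rate_deg_per_s, exposure_s, fov_deg):
    """Fraction of the frame width a feature smears across during an
    exposure - a narrower field of view gives a larger fraction."""
    return rate_deg_per_s * exposure_s / fov_deg

narrow = frame_fraction_smeared(10, 1.0, 10)  # 1.0 - the full frame width
wide = frame_fraction_smeared(10, 1.0, 100)   # 0.1 - a tenth of it
```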

Home free, right?

Not quite. That only applies to pitch and yaw. But what about roll? Higher shutter speeds still help, but wider lenses don’t. If your camera rolls by a degree while the shutter is open, the center of the image may look reasonably sharp, but the outer edges will be blurred. This happens regardless of focal length. I won’t go into the math, but blur from roll is invariant of focal length. The only easy fix is to increase shutter speed.
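Here is the math that paragraph skips, as a sketch: a pixel at radius r from the image center sweeps an arc of length r times the roll angle in radians, and that radius is set by the sensor’s pixel count, not by the lens.

```python
import math

def roll_blur_at_corner(roll_deg, width_px, height_px):
    """Blur in pixels at the frame corner for a given roll angle.
    Depends only on pixel geometry - there is no focal length term
    anywhere, which is why a wider lens doesn't help with roll."""
    corner_radius_px = math.hypot(width_px / 2, height_px / 2)
    return corner_radius_px * math.radians(roll_deg)
```

A one-degree roll on a 5000 × 3750 px frame blurs the corners by roughly 55 pixels, whatever lens is fitted.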

Which brings us to the harder fixes for rig motion: active gyro stabilization, passive stabilization, flywheels, etc. It’s possible to spend a lot of time and even more money pursuing any or all of these. But which ones will help, and by how much? The answers aren’t always intuitively obvious, and some solutions that should help don’t. But without tools for measuring camera stability, all you have to go on are anecdotes. It’s imperative to have metrics before starting the engineering.

One solution would be to get an inertial measurement unit (IMU) that logs rotation rates in pitch, roll, and yaw as a function of time. These are fairly inexpensive these days, and are readily available from places like SparkFun. But they have limitations in terms of their resolution, sampling rate, drift, and noise. Ask anyone who has spent time working with rate gyros and IMUs and they’ll tell you it’s not as easy as just plugging the thing in. It’s also one more thing to buy, install, and fly in order to gather data. Not everyone has an IMU on their rig, so not everyone can take advantage of the data it offers. Since it doesn’t offer an immediate improvement in the performance of a KAP rig, it can be difficult to justify buying and flying one.

There’s another solution, though, that most people doing kite aerial photography, and really any form of aerial photography, already have: their camera.

In the next installment I’ll discuss how you can use your KAP camera and some readily available software tools to evaluate the stability of your KAP rig.

– Tom

Posted in Engineering, Kite Aerial Photography | Leave a Comment »

Tools For Measuring Camera Stability – Part 1

Posted by Tom Benedict on 09/10/2011

One of the grails with almost any form of aerial photography is camera stability. For video it’s the difference between a choppy, nasty video that gives the viewer motion sickness, and a smooth, clean video that gives the viewer the feeling that they’re a bird. For still photography it’s the difference between a blurry picture and a sharp one. Photo editing software can correct for the occasional tilted horizon, but even the best deconvolution routines won’t fix a hopelessly blurry shot.

The most common solution to motion blur is to increase the shutter speed on the cameras. I’ve done this in the past, and within reason it works. But eventually most KAPers will take at least a cursory look at their rig and wonder if it’s possible to make it more stable than it is: a different spacing of the Picavet lines, a stiffer pivot on a pendulum suspension, or adding some sort of stabilizing vane. Some take it to greater extremes than others, but everyone does it to some extent even if it’s only subconscious. No one likes a blurry photograph.

Any time you take on a problem like this, though, it’s important to have metrics that you can use to tell if you’re doing better, worse, or making no change at all. Sometimes these metrics are subjective. I know my photographic composition has improved over time. It’s nothing I can point to and say, “See? I’ve improved by 29.35%!” But subjectively I know it to be true. Sometimes these metrics are quantitative. I can look at a histogram on an image and know if I nailed the exposure, and how far off I was in the cases where I didn’t. Sometimes these metrics are a mix. Most KAPers keep a mental tab on what percentage of their shots are blurry. The number is quantitative, but what constitutes “blurry” is subjective.

In the case of camera stability, though, there are clear quantifiable metrics that can and should be used when evaluating changes to a camera rig. They are the rate of change in position in X, Y, and Z, and the rate of change in orientation in pitch, roll, and yaw. In the past I’ve made the argument that changes in position only really matter if you’re making panoramas because the rates required to cause motion blur in a KAP shot are extremely high. Of more interest are the rates of change in orientation in pitch, roll, and yaw. This is what I plan to discuss in the next several posts.

Before going into the technique, I’d like to point out that this topic is under discussion on the KAP Forums under the thread: Measuring Camera Stability – A Quantitative Approach. My thoughts on this subject are not universally accepted, so by all means take a look at the dissenting opinions and form your own conclusions.

In the next installment, I’ll discuss the technique I’m using, and the tests I made to verify that it works.

– Tom

Posted in Engineering, Kite Aerial Photography | Leave a Comment »