Aerial Perspective: Close-range Aerial Photogrammetry Comes of Age
Professional Surveyor Magazine - July 2011
A client calls one morning, desperate to hire surveyors and photogrammetrists to prepare an orthophoto and digital terrain model on a square kilometer of land. The ground to be mapped includes an open pit mine, trees, hills, buildings, and wetlands. The mapping product required is an orthophoto with resolution of five cm per pixel, overlaid with one-foot contours. Also needed is a high-density 3D point cloud of the entire area, accurate to 10 cm tolerance in x, y, and z. The product must be delivered within a week, rain or shine … and money is no object.
What would you use? A half-dozen ground crews with differential GPS? Advanced aerial photography using the latest million-dollar digital sensor? The best airborne lidar that money can buy? Or a $300 digital camera that fits into your pocket?
Can I see the hands of all those who picked the pocket camera?
Non-metric Digital Cameras

At first glance it makes no sense whatsoever. How can a consumer-grade digital camera outperform dedicated million-dollar aerial image platforms or lidar? The principle, in fact, is not hard to visualize: it has to do with getting close to your subject. Traditional aerial photography takes place at high altitudes. But fly the same area at 500 feet above the ground, image with a quality 12-megapixel pocket camera, and you can easily achieve the same, if not better, ground resolution.
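The claim is easy to check with the standard ground-sample-distance relation: GSD = pixel pitch × altitude ÷ focal length. A minimal sketch, using assumed camera figures (a full-frame D3-class body and lens chosen for illustration, not the author's actual setup):

```python
# Back-of-the-envelope ground sample distance (GSD) at low altitude.
# Assumed values, for illustration only: a 36 mm-wide sensor with
# 4256 pixels across, a 35 mm lens, flown 500 ft (~152 m) above ground.
sensor_width_m = 0.036
image_width_px = 4256
focal_length_m = 0.035
altitude_m = 152.0

pixel_pitch_m = sensor_width_m / image_width_px      # ~8.5 micrometres
gsd_m = pixel_pitch_m * altitude_m / focal_length_m  # metres per ground pixel
# gsd_m comes out around 0.037 m, i.e. better than 5 cm per pixel
```

Halve the flying height and the GSD halves with it, which is why getting close to the subject matters more than the price of the sensor.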
Better yet, spend a few more dollars to use a high-end professional camera, such as a
Nikon D3 (about $5,000), and you will have a host of imagery advancements not found on million-dollar metric cameras. This is a function of physics (large-format cameras require more light) and economics. While cutting-edge aerial imagery systems are truly amazing, most available R&D money flows into the consumer market instead, where worldwide sales dwarf the specialty markets.
The D3, for example, comes with digital photography advancements the metric camera manufacturers can only dream about. These include high ISO settings (up to 6400) with very little noise, allowing for insanely fast shutter speeds that freeze moments in time regardless of vibration and movement. Then there is the ability to image with high dynamic range, allowing mid-range exposures to see into the shadows without blowing out the highlights. And let us not forget the nine-frame-per-second exposure rate, nearly four times faster than the most expensive metric cameras.
So why haven’t non-metric digital cameras been used for high-resolution photogrammetry in the past? Basically there are two reasons. First, without a precise camera calibration, images are useless for photogrammetric purposes under traditional methodology. Metric cameras are precisely calibrated. In other words, the lens distortions that are inherent in every camera have been carefully mapped by imaging dense arrays of control targets from different viewpoints. Metric lenses are precisely ground and radial distortions are predictable and consistent. It has always been assumed that a non-metric camera is not engineered to this tolerance. For example, while the lens and sensor can easily be calibrated, a disturbance, such as a hard bump, could affect the alignment, changing the calibration. Also, because the lenses are less carefully ground in consumer cameras, the radial distortion may have anomalies, like an astigmatism in the eye.
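The radial distortion being mapped is conventionally expressed with the Brown polynomial model, in which an ideal image coordinate is scaled by an even-powered polynomial of its radial distance from the principal point. A minimal sketch with made-up coefficients (no particular camera implied):

```python
def apply_radial_distortion(x, y, k1, k2, k3=0.0):
    """Map an ideal (undistorted) normalized image coordinate (x, y)
    to its distorted position using the Brown radial model.  The
    coefficients k1..k3 are what a camera calibration recovers."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

# A barrel-distortion example with an illustrative k1:
xd, yd = apply_radial_distortion(0.5, 0.0, k1=-0.1, k2=0.0)
# the point is pulled toward the image center (0.4875 instead of 0.5)
```

A metric lens keeps k1..k3 stable for years; the worry with consumer lenses was that these coefficients might drift after a hard bump, or vary in ways this symmetric model cannot capture.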

Second is the problem of area. A high-altitude flight can cover several square kilometers in a single image. Fly the same area at very low altitude, and the number of exposures needed multiplies dramatically: halve the flying height and the photo count roughly quadruples. A single square kilometer of coverage at 400 ft. above ground would require dozens of flight lines and hundreds of individual photos.
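The arithmetic is easy to sketch. With assumed figures (a 4000 × 3000 pixel image at 5 cm/pixel, 60% forward and 30% side overlap, chosen for illustration rather than taken from any actual flight plan):

```python
import math

# Rough exposure count to cover 1 km^2 at low altitude.
# Assumed values: 4000 x 3000 px images at 5 cm/pixel ground
# resolution, 60% forward overlap, 30% side overlap.
footprint_across_m = 4000 * 0.05  # 200 m across the flight line
footprint_along_m = 3000 * 0.05   # 150 m along the flight line

line_spacing_m = footprint_across_m * (1 - 0.30)   # 140 m between lines
photo_spacing_m = footprint_along_m * (1 - 0.60)   # 60 m between exposures

flight_lines = math.ceil(1000 / line_spacing_m)      # 8 flight lines
photos_per_line = math.ceil(1000 / photo_spacing_m)  # 17 exposures per line
total_photos = flight_lines * photos_per_line        # 136 photos
```

Push the overlap higher, as the multi-image techniques described later demand, and the count climbs well into the hundreds.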
With these two obstacles all too evident, is it any wonder that practitioners of the measurement sciences have dismissed inexpensive non-metric cameras, both as the coming wave in high-resolution, precision orthoimagery and as a viable alternative to airborne lidar?
The Mother of Invention
Our company was fortunate to stumble into a situation in 2007 that opened the window of possibility to this realm. Needing accurate orthophotos for a subdivision project at a remote Alaskan village and not having the funds to contract new imagery from traditional sources, we devised a simple workaround using a helicopter-borne Lumix camera and inexpensive 3D-modeling software. We used
Topcon ImageMaster software to calibrate the lens as well as to construct an orthophoto and DTM from the imagery.
It worked according to age-old photogrammetric principles, constructing oriented stereo pairs from registered, GPS-measured ground targets. We generated a DTM from each stereo pair, then mosaicked the results from all the pairs. Finally, we generated a serviceable orthophoto of the entire village, indexed to a local grid system, as an aid to the subdivision design.

It was only after the dust settled that we noticed what we had created. First, the resolution of the imagery, about three cm per pixel, was at the cutting edge of what is achievable in modern aerial photography. Second, the accuracy of the orthophoto was nothing short of astounding. The first indication of this came from the software's bundle adjustment of our aerial target-control points, which returned sub-pixel residuals.
Intrigued, we then sent the crew out to perform additional control checks, using differential GPS on discrete identifiable points across the imagery, such as a manhole or home plate on a softball field. The coordinates from these GPS measurements were downloaded into an AutoCAD drawing containing the orthophoto mosaic and were found to match the imagery at pixel level. How was this possible? Were we just lucky to have an exceptionally good camera calibration and a perfectly ground Leica lens?
We strongly suspected the near-perfect accuracy of the orthophoto and DTM was the result of the high-density control targeting used. For each stereo pair, we had established six, small, ground-control targets, totaling nearly 30 targets in an area less than a square kilometer. We reasoned that, like thumbtacks on a corkboard, the dense control targeting constrained the imagery to high tolerances.
An internet search provided additional insight. A series of papers by R. Wackrow and J.H. Chandler of Loughborough University in the UK described research into the photogrammetric accuracies of non-metric cameras in a rigorous testing environment. First, they observed that the internal geometry of these cameras, contrary to much speculation, can be maintained, allowing for repeatable calibration. Second, they discovered that inaccuracies in lens models can be greatly reduced by convergent image configurations, which opened the door for new imaging techniques.
Photogrammetric Survey
Spurred by these possibilities, we then improved our methodology, swapping out the expensive helicopter for a small, fixed-wing aircraft with a camera port, chartered from a local air service for about $350 per hour. Then we invested in a better camera and incorporated on-board GPS to control the flight lines and tag the images. Further testing that included unregistered control targeting showed that orthophoto accuracies remained comfortably in the sub-pixel range. Better yet, the resolution of the imagery improved to two cm per pixel. Thus equipped, we created a new survey product, christened the Photogrammetric Survey: a plat representing ground-surveyed boundary lines exactly where they fall on the image, depicting improvement locations as precisely as the best as-builts, and adding full-color orthoimagery that covers an entire village.
A useful byproduct of the imaging was the ability to create highly accurate surface models from the 3D imagery, allowing precise contour mapping, identification of drainage patterns, and accurate volume measurements. What began as a specific need to create high-resolution ortho imagery as an overlay for subdivision design soon ballooned into a multi-application technique useful for mining reclamation, erosion studies, geological assessments, utility design, and road construction as-builts. The possible applications are limited only by the imagination of the surveyor.
Two years passed, then we hit the wall. Everything worked beautifully for small, well-defined areas measuring a square kilometer or less. For larger areas, it fell apart. One problem was that the number of ground-control points needed increased beyond practical limits. Worse, the increased volume of images choked and crashed ImageMaster, which was never designed for aerial applications.

The breakthrough came not with better cameras, but rather with a recent revolution in imaging software. An entirely new paradigm in photogrammetric triangulation has improved upon the traditional stereo pair model. The new model is based on analysis of multiple images with higher overlap. For example, with high overlap, each discrete point on the ground may be visible in portions of a dozen different images, allowing the computer to sort out similarities from a large number of possibilities, as opposed to a single pair.
This is made possible by pixel-matching algorithms that allow for automatic tie-point recognition. Every image is automatically populated with thousands of unique pixel signatures that can be compared with those of all the other images. This reduces, or even eliminates, the need for ground control (that is, if real-world coordinates are not required). In other words, the ground itself becomes a vast target field for camera calibration. Among billions of possibilities, only a single mathematically precise solution merges the multiple images into a pattern that matches both the actual ground terrain and a unique set of camera locations and calibrations. Thus, precise external and reasonable internal camera orientations can now be derived "on the fly" from nothing more than the images themselves.
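The geometry behind multi-image matching can be illustrated with a toy nadir-camera model: one ground point, projected through the collinearity relation, lands inside the frame of several consecutive exposures along a flight line. Every number here is an assumption chosen for illustration, not a parameter from any real system:

```python
def project_nadir(cam, point, focal_px):
    """Project a ground point into a nadir-looking pinhole camera.
    Returns image coordinates in pixels, relative to the image center."""
    cx, cy, cz = cam
    px, py, pz = point
    height = cz - pz
    u = focal_px * (px - cx) / height
    v = focal_px * (py - cy) / height
    return u, v

# Illustrative flight line: an exposure every 30 m at 120 m altitude,
# a 4000 x 3000 px sensor, and an assumed focal length of 3000 px.
half_width, half_height = 2000, 1500
cameras = [(x, 0.0, 120.0) for x in range(0, 991, 30)]
ground_point = (500.0, 10.0, 0.0)

views = 0
for cam in cameras:
    u, v = project_nadir(cam, ground_point, focal_px=3000)
    if abs(u) <= half_width and abs(v) <= half_height:
        views += 1
# the single ground point falls inside several consecutive frames
```

Add the side overlap from adjacent flight lines and the same point shows up in a dozen or more images, which is what lets the matcher resolve ambiguities that a single stereo pair cannot.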
Improved Post-processing
The trend is toward fully automated, high-precision post-processing in the cloud, but even in its infancy, emerging software can produce a workflow that little resembles traditional photogrammetric procedures. For example, we have found that raw imagery can be processed automatically and quite nicely by an inexpensive software package, such as PhotoScan, or a web-based photomosaic service, such as Pix4D. While the returned orthophoto and DTM will be very good (sub-meter, usable for many applications), it is not the highest possible accuracy.
However, a useful byproduct of this inexpensive process is an accurate listing of the external camera orientations (x, y, z, yaw, pitch, and roll) for each image. These camera orientations can then be combined with an in-house-derived, precision, internal-lens calibration (dedicated software exists for this), then fed with the imagery into a more rigorous, aerial-triangulation regime such as EnsoMosaic.
Through successive iterations that refine the image data into ever-denser grids and ever-more-accurate DTMs, high-speed desktops can then fine-tune the mapping product to pixel-level accuracies in a newly generated, seamless, and color-balanced orthoimage mosaic. Better yet, dense x, y, z point clouds are also created by this process, which are represented as full-color image pixels, each with a 3D coordinated value. For a truly professional product, these point clouds can be classified and loaded into CAD software along with the DTM for design and analysis, similar to lidar data. The turnaround time, from image acquisition to final mapping product, is often three days or less.
Enter the UAV

With the software reaching this level of sophistication, it was only a matter of time before aerial photo platforms themselves evolved to simplify and reduce the cost of data acquisition. Enter the UAV: small, unmanned aircraft such as the Gatewing X100. These speedy little aircraft, easily launched and recovered, can capture up to a thousand low-altitude images in less than an hour, following a computer-generated flight plan that positions the aircraft more accurately and efficiently than any manned aircraft could.
Instead of waiting for good weather, it can fly under the clouds. Instead of burning barrels of expensive avgas, it consumes only a few watts of electricity from a rechargeable lithium battery pack. What’s the camera used in the X100? A pocket-sized digital with a fixed focal-length lens made by Ricoh. What are the point cloud accuracies? Five cm.
Close-range aerial photogrammetry, though far less expensive than traditional methodology, will never totally replace it. The method is currently best suited for mapping areas under a few square kilometers and is, for the moment, spectrally confined to RGB and near infrared. But the technology can deliver an exceptionally high-resolution and precise mapping product and, like the arrival of the total station and GNSS, could become a valuable and indispensable tool in the surveyor's kit.
Eric Stahlke is the survey manager of Tanana Chiefs Conference, an Alaskan tribal corporation comprised of 42 native villages and based in Fairbanks, Alaska. A surveyor for 41 years, he specializes in managing project-level surveys in the Alaska bush for both government and corporate clients.