How to Choose a UAV Mapping System

Author: Lewis Graham, July 29, 2021

Our solutions for Unmanned Aerial Vehicle (UAV) aerial (drone) mapping systems range from entry-level photogrammetry “guest” systems through GeoCue-designed, survey-grade, RIEGL-based 3D Imaging Systems (3DIS®). While some clients are very experienced in aerial mapping and know just what they want, others are just now augmenting ground-based techniques and have a number of open questions.

#1 What is Your Overall Goal?

The number one consideration is “what do you need to accomplish?” While I cannot address every scenario we have encountered, the problem statements might look similar to this list:

  1. We have a bare earth mine site with known control marks. We just need to perform periodic volumetric analysis.

  2. We are a services firm that is going to start out with non-vegetated mapping at sites where we have no a priori ground control.

  3. We are an industrial operator who needs to do topo and volumetrics on a variety of sites, many of which are vegetated.

  4. We are a land developer who needs to collect planning topographic data (“topos”). We have no other mapping needs.

  5. We are a mapping services firm that will do drone mapping in a wide variety of applications, ranging from topographic mapping to wire collection.

  6. We are a survey services firm that will be doing high network accuracy, high precision data collects for demanding customers such as Departments of Transportation.

The choices in technology range from a camera system with no augmented positioning to a full True View LIDAR/Camera (3D Imaging System). Of course, pricing ranges from sub-$10,000 up to $200,000 or more, depending on the system. Generally, there are three choices in drone sensor survey/mapping technology:

  • Photogrammetry – a three dimensional (3D) point cloud is derived from overlapping images

  • LIDAR – a 3D point cloud is directly collected using a ranging laser scanner

  • 3DIS® – a 3D point cloud is directly collected using a ranging laser scanner. Each point of the cloud is colorized using cameras calibrated and synchronized with the laser scanner

Photogrammetry Approach

Photogrammetry systems are the least expensive but not necessarily the easiest to post-process. No matter what you might hear in the popular hype, photogrammetry systems basically work only with bare earth collections. A photogrammetry system requires that the same “object point” (e.g. a spot on the ground) be seen from multiple camera positions. Due to this geometry constraint, these systems will not collect:

  • Ground beneath any sort of overhead vegetation canopy (so useless for operations such as estimating for grubbing, initial topo surveys and so forth)

  • Any sort of overhead or thin linear structure such as poles, wires, piping, conveyors, railroad tracks and so forth

  • Areas with “urban canyons” such as closely spaced buildings, containers and so forth.

LIDAR Approach

Laser scanners “image” object points with a single laser pulse. This pulse (especially from UAV altitudes) is rather small in diameter (several tens of centimeters) and thus can penetrate through gaps in tree canopy, spaces between buildings and so forth. By combining position and orientation information from the UAV-carried Position and Orientation System (POS) with range data from the laser scanner, a high accuracy 3D point can be discerned from a single return pulse. This is powerful stuff!

3DIS Approach

A 3DIS adds coincident imagery that is used to precisely colorize the point cloud. In addition, the images from a 3DIS can be used for any photogrammetric purpose, such as creating digital orthophotos, so a 3DIS provides all the capabilities of both a photogrammetry and a LIDAR solution. I have put together some examples in Table 1 that provide project-driven guidance on the type of system suitable for a particular type of work.

Table 1: Applicable Technology

Positioning Equipment

Once you have selected the type of system most appropriate for your projects, you need to consider peripheral equipment. The most important in this category is positioning equipment. There are two areas you will need to address: a positioning reference scheme for the sensor and a way to verify network accuracy. For both photogrammetric and LIDAR systems, it is necessary to know the exact position of the sensor (X, Y, Z) and the attitude (pitch, yaw, roll). In most cases, a satellite-based solution is used for determining X, Y, Z (Position). There are a number of Global Navigation Satellite System (GNSS) constellations that can provide this service, the most immediately familiar being the US Navstar Global Positioning System (GPS). The second most used system is the Russian GLONASS. Other systems that might be included are the European Union’s Galileo and China’s BeiDou. The general idea of GNSS positioning is:

  • The location of each satellite is known to some degree of accuracy

  • The satellites broadcast a precise time signal to which receivers can synchronize

  • By using a variety of sophisticated propagation models, a receiver can determine its range to each satellite

  • By combining range information, the receiver can determine its X, Y, Z location
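The steps above can be sketched in code. The following is an illustrative toy, not receiver firmware: satellite coordinates and ranges are made-up numbers, and a real receiver must also solve for a clock-bias term and apply the propagation models mentioned above. It shows the core idea of combining ranges to several satellites into an X, Y, Z fix via iterative least squares.

```python
import math

def solve_position(sats, ranges, guess=(0.0, 0.0, 0.0), iters=10):
    """Iterative least-squares trilateration (Gauss-Newton, no clock bias)."""
    x, y, z = guess
    for _ in range(iters):
        rows, resid = [], []
        for (sx, sy, sz), r in zip(sats, ranges):
            d = math.dist((x, y, z), (sx, sy, sz))
            # Partial derivatives of the predicted range w.r.t. receiver position
            rows.append([(x - sx) / d, (y - sy) / d, (z - sz) / d])
            # Difference between measured and predicted range
            resid.append(r - d)
        # Solve the 3x3 normal equations A^T A dx = A^T resid
        ata = [[sum(a[i] * a[j] for a in rows) for j in range(3)] for i in range(3)]
        atb = [sum(a[i] * r for a, r in zip(rows, resid)) for i in range(3)]
        dx = _solve3(ata, atb)
        x, y, z = x + dx[0], y + dx[1], z + dx[2]
    return x, y, z

def _solve3(m, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(m, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * c for a, c in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]
```

Given four or more satellites with reasonable geometry, the iteration converges quickly even from a poor initial guess, which is essentially why a receiver can cold-start anywhere on Earth.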

There are some tricks to improving the positional accuracy of the receiver located on the drone (termed the “Rover”). These include:

  • Doing nothing extra – this gives positional accuracy in the range of several meters. We call this NAV grade accuracy.

  • Using a differential method which essentially compares the roving receiver’s data to a second, stationary receiver positioned on a known ground location. The stationary receiver placed on a known location is termed a “base station.” This method is called differential GNSS. We call this RTK or PPK grade accuracy.

  • Using a differential scheme such as the above but with a network of “base stations” that are used to form a “virtual” base station near the Rover. We also term this RTK or PPK accuracy.

  • Using a service that provides a sophisticated model of propagation that does not rely on a local or virtual base station. These solutions are termed Precise Point Positioning (PPP) solutions. An example is Trimble’s PP-RTX service.

  • An additional method of improving knowledge of the position of the sensor is to post-process the images (if the sensor is a camera or a 3DIS) using what is called a photogrammetric block bundle adjustment. We term this level of positioning accuracy “BBA.”
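The differential idea behind the RTK/PPK items above can be illustrated with a toy sketch. All coordinates here are invented, and real differential processing works on raw carrier-phase observations rather than finished positions, but the principle is the same: the base station sits on a known point, so the difference between its known and GNSS-measured positions estimates the shared orbit/atmosphere error, which is then removed from the rover's measurement.

```python
def differential_fix(base_known, base_measured, rover_measured):
    """Apply the base station's observed error as a correction to the rover."""
    # Error observed at the base (known truth minus GNSS measurement)
    correction = tuple(k - m for k, m in zip(base_known, base_measured))
    # Assume the rover sees (nearly) the same error and remove it
    return tuple(r + c for r, c in zip(rover_measured, correction))
```

The assumption that base and rover see the same error is what makes baseline length matter; it holds well over short baselines and degrades as the rover gets farther from the base.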

There are two ways to do the computation of solving the position from the information collected by GNSS receivers used in differential mode (the recommended mode for survey grade accuracy):

  • Real Time Kinematic (RTK) – This scheme solves for a moving solution (the “kinematic” in RTK) within 1/10 of a second or so of collecting data; i.e. in near “Real Time.” This is very useful when you need precise position information for navigation or you are walking about on the ground with a rover pole and want an immediate positional result. Since RTK is solving in real time, a data link between the base and rover is required. This is usually accomplished by a UHF data radio built into the base and rover. For mapping, RTK is seldom needed since we are after a “product” solution, not a real time position.

  • Post-Processed Kinematic (PPK) – In PPK, rover and base “observations” are recorded but not processed until after all data are successfully collected.

In general, PPK is more robust and more accurate than RTK. This is due to several factors:

  • A data link (the UHF radios in the base and rover) drop-out corrupts RTK data. This is not a problem in PPK since we do not transmit information between the base and rover.

  • RTK solutions must use the best estimated satellite positions at the time of data collection. The satellite location information (as well as related data) is termed “ephemeris.” Since the RTK solution is solving in real time, it has to use the least accurate ephemeris data, the so-called “broadcast ephemeris.” Over time, the ephemeris data are improved by applying “post-pass” orbital mathematics, using ground-based observations of satellite positions. Levels that are computed include Broadcast, Ultra Rapid, Rapid and Final. All but Broadcast become available only some time after collection, so only PPK can use these more accurate solutions.

  • The position estimates of the flight can be improved via a process called digital filtering (specifically, a process called adaptive Kalman filtering). This improvement algorithm creates a series of positions going forward in time (a “trajectory”) and then computes the same in reverse time. These two trajectories are processed together (since, in theory, they should be identical) to optimize a final trajectory. This forward/reverse process cannot be done in RTK because I need the answer (“real time”) before I get to the end of the flight.
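The forward/reverse trajectory merge described in the last bullet can be sketched very simply. A real POS post-processor runs an adaptive Kalman filter; here each epoch's forward-pass and reverse-pass position estimates are just combined with weights from their (assumed) variances, which captures the core reason the merged trajectory beats either single pass.

```python
def combine_trajectories(forward, reverse):
    """Merge per-epoch (position, variance) pairs from forward and reverse passes."""
    merged = []
    for (pf, vf), (pr, vr) in zip(forward, reverse):
        w = vr / (vf + vr)              # weight each pass by the other's variance
        pos = w * pf + (1.0 - w) * pr   # inverse-variance weighted position
        var = (vf * vr) / (vf + vr)     # combined variance is smaller than either input
        merged.append((pos, var))
    return merged
```

Note that the combined variance is always smaller than either input variance, which is why the post-processed trajectory is tighter than anything achievable in real time.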

So, considering all of the above, the most accurate positions will be obtained by using a tripod-mounted multi-frequency, multi-constellation base station which records the session for later PPK processing. This is the standard configuration for all True View sensors.

Orientation

Orientation (also called attitude) is the pitch, yaw and roll of the sensor as a function of time. It is not needed for photogrammetry systems since it is part of the solution of the block bundle adjustment (BBA). For LIDAR systems, which fire many thousands of pulses per second, high frequency knowledge of orientation is needed. This is generally accomplished by an Inertial Measurement Unit (IMU) that forms part of the sensor system.

Inertial Measurement Unit

An IMU contains a number of sensors such as accelerometers and rate gyroscopes. These sensors produce the raw data that feed sophisticated attitude/position estimation algorithms. For nearly all sensors (including all GeoCue True View 3DIS), an integrated system that solves for both position (again, using a GNSS receiver) and orientation is used. These merged sensors are termed Position and Orientation Systems or POS. All GeoCue True View 3DIS use survey grade POS from Applanix, a division of Trimble. Now recall I said that a base station needs to be placed on a known position. Unless you are repeatedly mapping the same site, it is not practical to create a known base station location prior to showing up at a drone mapping site.

Fortunately, there is a service provided by most federal level governments that provides an on-line solution for a base station location if the base station sits on the site for a sufficiently long time. In the United States, this free service is provided by the National Geodetic Survey (NGS) via its Online Positioning User Service (OPUS). OPUS allows you to upload an observation file (a data file collected by your base station) to its web site for computation of the base station location. The longer this base observation, the more accurate the computed location. The current minimum observation time is 15 minutes. The service sends an email back to you with the solution when it is complete (usually just a few minutes from submission) and you can download the report from a link in the email. I think this is a fantastic service in that it not only gives the base location but also provides a very nice accuracy report.

Final Results

OK, so now you are good to go – you have a sensor system capable of RTK and/or PPK and a base station as a reference. One additional consideration is validation of your final result. For example, if you are collecting a base model for cut and fill computations (say, collecting a data set to create a pre-construction 3D model), you will want some way to validate that your model is accurate with respect to whatever spatial reference system (SRS) you are using; for example, State Plane. Testing the network accuracy of a project requires some ground samples, tied to the SRS network, that can be measured in the products you produce. These samples are called Ground Check Points (GKP). GKPs can be of several types:

  • Planimetric Only (horizontal) – these GKPs can be used for checking X, Y in your products but not Z (vertical)

  • Vertical only – these GKPs are used for checking vertical only

  • Full – A full GKP can be used for checking both planimetric and vertical at the same test location

Ground points can also be used for correcting a model, in which case the points used in the correction are called “Control” rather than “Check” points. As an example, GCPs can be introduced into a photogrammetry process to tie the model to the spatial reference system. Of course, for obvious reasons a GCP cannot be used for both control and check (hence within GeoCue we typically use GKP for a check point and GCP for a control point).

We tend to use white ceramic floor tiles (either 12” or 18”, depending on image resolution) for Check/Control targets. We create either a cross using black masking tape or a stick-on square pattern. We use tiles because they are harmless to equipment tires and cause no problem if they go through a mine site crusher. At just a few dollars each, they are also not a big loss if they are damaged or have to be abandoned at a job site. We strongly recommend against “smart” tiles since, in our experience, they offer a lot of negatives (expensive, easily damaged, complicated to use, …) and no real positives. For vertical-only shots, no marker is needed so long as you know the vertical probe planimetric point to within half a meter or so.

Obviously you will need some equipment to measure the center location of your check/control targets. This brings a ground-based GNSS Rover into the picture. Thus rather than just a base station, the professional drone mapper will own a full Base/Rover RTK kit. The same base used as a reference for the drone is also used for the rover employed in measuring GCPs.

At this point, you may be thinking “if I need to lay out check points, why bother with RTK/PPK on the drone?” This is a valid question, since laying out GKPs takes time and seems to defeat the whole point of having precise positioning on the sensor itself. Let’s discuss this a bit. First of all, for some types of projects, one or no GKPs will be sufficient. I do recommend that you place your base station pole on a GKP target and keep the base within the area that will be captured by the UAV mission. This allows the use of the base station location (which you can derive via a service such as OPUS) as a single check point. Examples of base station single check point projects include (note that all of my examples assume you have a precise positioning system on the UAV and you are using a calibrated sensor!):

  • Orthophoto coverage mosaic – all you are doing is collecting image data to orthorectify and mosaic into a coverage image. You can use the base target to ensure you have reasonable horizontal accuracy

  • Relative volumetrics – this is the scenario where you are doing stockpile volumetrics using a “toe” drawn into the same data set being used for computing volumes (for example, you are using the automatic toe generator in True View EVO). Here a vertical error with respect to the SRS does not matter – whatever vertical error is present is canceled when the toe-defined base of the stockpile is subtracted from the “hull” defined by the point cloud.

  • Any other project type that relies on relative rather than network accuracy. Examples include biomass surveys, most transmission line work, measuring characteristics of rails (rail to rail spacing, super elevation, etc.) and so forth.
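The cancellation argument in the relative volumetrics bullet above is easy to demonstrate numerically. The sketch below uses made-up grid heights: volume is the cell-by-cell difference between the point-cloud “hull” surface and the toe-defined base surface times the cell area, so a constant vertical bias applied to both surfaces drops out of the result.

```python
def stockpile_volume(hull, base, cell_area):
    """Sum per-cell height differences (hull minus base) times cell area."""
    return sum((h - b) * cell_area for h, b in zip(hull, base))

# Made-up 3-cell grid: any constant vertical bias affects hull and base equally
hull = [5.0, 6.0, 5.5]
base = [1.0, 1.0, 1.0]
unbiased = stockpile_volume(hull, base, 2.0)
biased = stockpile_volume([h + 0.3 for h in hull], [b + 0.3 for b in base], 2.0)
# unbiased and biased volumes agree: the systematic error cancels
```

This is why a network-accurate vertical datum is unnecessary for this class of project, provided the toe is drawn in the same data set used for the volume computation.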

In all the above scenarios, the single base check point will give you a “sanity” check on data validity. If you see some gross error in this check point, you know something went wrong. The size of “gross” depends on the sensor you are using. For example, with a DJI Zenmuse L1 LIDAR, perhaps 75 cm horizontal and 30 cm vertical; for a True View system, perhaps 15 cm horizontal and 5-10 cm vertical. Note that these are not the ranges of the accuracy of your data but rather the thresholds you may want to set to signal trouble.

Other situations require a more rigorous check of network accuracy and often a removal of systematic vertical bias. There are tools in True View EVO that allow vertical shifting of the 3D point cloud. This process is colloquially referred to as a “Z bump.” This is a completely valid process so long as you are sure the vertical bias is a systematic error, not a local phenomenon. I’ve not seen any fixed process for this determination, so we have developed some guidelines: if the standard deviation of the mean (SDOM – a measure of the uncertainty of the value of the mean) is much smaller than the absolute value of the mean, it is OK to do this Z adjustment. Since the SDOM is inversely proportional to the square root of the number of samples, the more GKPs you have, the more confident you are in the mean and hence the more justified you are in removing this systematic bias.

In many countries (including the USA), there is a publicly available set of permanent base stations from which you can obtain an observation file. These networks are typically called Continuously Operating Reference Stations (CORS). They make it tempting to fly a PPK sensor with no local base station at all, since a nearby CORS can be used as a hassle-free (and monetarily free!) base station. This will, in general, work, but let me warn you about this practice. First of all, you will have no local check to assess data correctness unless you are collecting data from a site with permanent, known control. Secondly, the accuracy of the UAV-based PPK solution degrades with distance to the CORS you select. This is particularly true for vertical accuracy. We have noticed a change in vertical bias of 7 cm using a base station that is only 7 miles from our office. This is a significant vertical error, and hence I recommend using CORS only as a backup to a local base station failure.
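The SDOM guideline for the “Z bump” can be sketched in a few lines. The vertical error values and the “much smaller” threshold ratio below are illustrative assumptions, not GeoCue-published numbers; the code simply shows the mean-versus-SDOM comparison used to decide whether a vertical bias is systematic enough to remove.

```python
import math

def z_bump_decision(vertical_errors, ratio=0.25):
    """Return (mean_error, sdom, ok_to_shift) for a list of GKP dz values (needs n >= 2)."""
    n = len(vertical_errors)
    mean = sum(vertical_errors) / n
    # Sample standard deviation, then SDOM = stdev / sqrt(n)
    stdev = math.sqrt(sum((e - mean) ** 2 for e in vertical_errors) / (n - 1))
    sdom = stdev / math.sqrt(n)
    # Shift only if the uncertainty of the mean is much smaller than the mean itself
    return mean, sdom, sdom < ratio * abs(mean)

# Hypothetical check-point errors: a consistent ~8 cm vertical bias
mean, sdom, ok = z_bump_decision([0.08, 0.09, 0.07, 0.10, 0.08])
```

Because the SDOM shrinks with the square root of the sample count, adding GKPs tightens the decision, matching the guidance above that more check points justify more confidence in removing the bias.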

Summary

What you need for drone mapping is:

  • Sensor – This is the first consideration. Always select a drone that will carry your needed sensor. Never buy a drone first and then see what sensors it might be able to carry. It is the sensor that delivers your payday, not the drone (hence the term “pay load”!)

  • On-board PPK positioning system for image data; onboard Position and Orientation System (POS) for a LIDAR or 3DIS. Note that for camera systems, the PPK GNSS receiver may be part of the drone rather than the sensor (examples are the DJI P4 RTK and the M300 RTK).

  • A drone (UAV, UAS, etc.) appropriate for the environment and type of sensor you intend to fly. For example, if you intend to fly photogrammetry, a DJI P4 RTK may be fine. If you will encounter a lot of strong winds, you may want to upgrade to a DJI M300 RTK drone with a Zenmuse P1 camera.

  • A multifrequency, multi-constellation base station that can perform in both RTK and PPK modes

  • An RTK rover compatible with your base for collecting GCP/GKPs

  • A collection of homemade ceramic tile targets

  • Various accessories such as tripods, poles, charging systems, spare batteries and so forth

Again, for certain scenarios you can get away with a subset of the above, but these really are the exception rather than the rule. For example, you might have a mine site with a local, permanent base station (these are very common at mine and industrial sites since they are needed for automatic machine control systems) and permanent ground check points (such as paint marks on paved roadways).

Don’t worry if you do not understand the minutiae of the above. When you engage with us for a solution, we will always do a fairly thorough interview and discuss your specific needs. We can address just about any UAV mapping need that requires a camera, a LIDAR or a 3DIS®. Our goal is always to ensure you end up with the kit you will need to do your planned work within a budget that is doable for you.
