In September of 2017, AirGon LLC released Loki, a direct geopositioning system for the DJI Inspire 2 drone as well as generic drones carrying a digital single lens reflex (DSLR) camera. Since that time, Loki has been adapted to the Phantom 4 Pro and the m2xx series of DJI drones. A direct geopositioning system monitors the position of a drone to a high level of accuracy, synchronizes this position to camera events and records information that can be used, in a post-processing step, to provide a priori estimates of the camera location for each acquired image.
The goal of a direct geopositioning system is to either reduce the amount of ground control needed to achieve a specific level of accuracy or, in some cases, to eliminate the need for ground control altogether. Thus, direct geopositioning can significantly improve both the accuracy and financial bottom line of projects.
This report provides a bit of a feel for the accuracy of using Loki in an actual data collection scenario. In mid-November of 2017 we flew a limestone mine site with the goal of generating 1 foot topographic contours (“topo”) to be used for planning operations. For comparison, we have the good fortune of having 1 foot contours available that were produced in January of 2017 from high accuracy airborne topographic LIDAR. These data give us the opportunity to explore the vertical conformance of the drone-derived point cloud in areas of undisturbed ground throughout the project.
The Accuracy Triangle – the Photogrammetric Triad:
The information in this section is meant to be a thought exercise, not a mathematically precise discussion of the elements that affect photogrammetric accuracy. I find it quite useful for making “rule of thumb” decisions regarding accuracy planning. If we consider the triangle of Figure 1, we recall that given two sides and the included angle, we can solve for the third side. Similarly, given any two angles, the third is defined.
We can think of the photogrammetric solution in this same way (see Figure 2). The elements that form the triad of the structure are:
Exterior Orientation (EO) – The location (X, Y, Z) and orientation (Pitch, Roll, Yaw) of the camera for each image
Interior Orientation (IO) – The intrinsic parameters of the camera(s) being used in the imaging operation such as the focal length and lens distortion; the camera calibration parameters
Object Space (OS) – Known positions in the object space. The object space is what we are imaging (e.g. the mine surface)
In general, the more accurately we know two of these three sets of contributors, the less we need to know about the third while still achieving the desired project accuracy. The situation is, of course, much more complex than this, but it does give you a good framework for discussing how to maintain or improve accuracy.
Elements of these factors are:
Exterior Orientation (EO) – We can get a first cut at the position (X, Y, Z) portion of the EO from the navigation grade Global Navigation Satellite System (GNSS) that forms part of the drone’s autopilot. A navigation grade GNSS will provide around 8 feet (2.5 m) of horizontal accuracy and perhaps 15 feet (5 m) vertical. An approximation of the orientation of the camera can be obtained from the autopilot orientation sensor and/or camera gimbal sensor although these are not really necessary in the Structure from Motion (SfM) algorithms used for creating 3D point clouds from overlapping images. We can dramatically improve our estimate of the EO position by adding a “survey grade” dual frequency, differential phase detection GNSS to the drone. This is a fancy way of saying a GNSS survey “rover” similar to those used in land surveying. This kit will improve our a priori estimates of camera position to an accuracy of about an inch (several centimeters) – this is the function of AirGon’s Loki system.
Interior Orientation (IO) – The IO is the model of the geometry of the sensor (in our case, the camera). We can model the sensor by a laboratory calibration procedure (preferred) or by deriving these elements using a technique called self-calibration or in situ calibration. AirGon provides laboratory calibration of drone cameras as a service.
Object Space (OS) – We usually add Object Space information to our solution by using image- or point cloud-identifiable markers whose locations are known to a high level of accuracy/precision. We usually call these Ground Control Points or Ground Check Points (GCPs). At AirGon, we typically use a white ceramic bathroom floor tile (obtainable from Lowe’s for about one dollar) with a cross or diamond pattern added using black duct tape. The locations of these targets are measured using a standard GNSS base-rover survey kit. An example of a GCP target being placed is shown in Figure 3.
Using our idea of the photogrammetric accuracy triangle, we see how to control project accuracy in a fairly straightforward (albeit heuristic) way.
A Discussion of the Photogrammetric Accuracy Triad:
The (photogrammetric accuracy) triad makes it easy to think about drone mapping accuracy in a generalized way. The highest possible accuracy will be achieved if you know all three elements of the triad (Exterior Orientation, Interior Orientation, Object Space reference points) to a high degree of accuracy. For example, if you are doing cut and fill computations where every inch matters, you should use direct geopositioning on the drone (high exterior orientation knowledge), a laboratory calibrated camera (well-known interior orientation) and dense ground control points (high object space reference point knowledge).
On the low side of the accuracy scale, you might be simply collecting data to form an orthophoto mosaic that will be used only for synoptic viewing. Here you might choose to use navigation grade EO, self-calibration for the camera Interior Orientation (e.g. no a priori IO at all) and no ground control (no knowledge of OS locations). Your mosaic will look fine but its absolute position will be wrong (meaning it will not correctly register with other data such as a map on Google Earth) and it could have incorrect scale (poor local accuracy).
We are often faced with situations where a certain level of network accuracy is required but we have limited access to the site for placing ground control points. For example, if the product to be produced from the collected data is a set of 1 foot contours, a network vertical accuracy of 4” (10.2 cm) is required of the source data. Reducing GCPs means we have a lower confidence in the Object Space (OS) part of the accuracy triad. This means we will have to increase our knowledge of the other two “sides” of the triangle: Exterior Orientation (EO) and Interior Orientation (IO). We can increase the accuracy of the IO by calibrating the camera. We can increase the accuracy of the camera exposure estimates (EO) by using a survey grade direct geopositioning system (e.g. Loki) on the drone.
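The 4” figure follows the long-standing rule of thumb that source data supporting contours should have a vertical RMSE of no more than one third of the contour interval. A minimal sketch of that budget calculation:

```python
# Required vertical RMSE for a given contour interval, using the
# common rule of thumb RMSEz <= contour interval / 3.
def required_rmse_z_inches(contour_interval_inches: float) -> float:
    return contour_interval_inches / 3.0

print(required_rmse_z_inches(12.0))  # 1 ft contours -> 4.0 inches (~10.2 cm)
```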
The Test Site and data collection:
The test site is the pit area of a limestone mine (quarry) in central Alabama (Figure 4). The test area is approximately 175 acres. The product deliverable was a set of 1 foot contours of the pit region. To support the product requirement, we need 4 inches or better (Root Mean Squared Error, RMSE) of vertical accuracy.
We flew this project for a customer who is evaluating the effectiveness of our Loki direct geopositioning system. We were fortunate in that the customer was able to supply 1 foot topographic contours of the area. These contours were derived from a high accuracy airborne topographic LIDAR project flown in January of 2017. These contours are valuable for testing conformance (how well our point cloud conforms to the actual object space) in areas that have not changed.
The technology used in data collection included:
An iGage (CHC) RTK survey grade GNSS kit comprising a base station and rover
A DJI Inspire 2 drone using Ground Station Pro as the mission planning and control software
A DJI X4S camera (this camera has a mechanical shutter, a requirement for high accuracy mapping)
An AirGon Loki GNSS Post-Process Kinematic (PPK) direct geopositioning system
“Homemade” 12-inch ceramic tile ground control points marked with either opposing triangles or crosses
A GNSS base station was situated at our launch area (Figure 5). While we do set our base using a plumb pole over a GCP tile, we typically do not use this position in analysis since the tripod can cause anomalies in the point cloud. The base station is placed on an unknown point and observes for the duration of the project. The coordinates of this point are derived in post-processing using the National Geodetic Survey (NGS) Online Positioning User Service (OPUS).
18 ceramic tile ground control points (GCPs) were laid out over the site and surveyed using a real time kinematic (RTK) rover communicating with the base station. These points are indicated by the red labels in Figure 4. Depending on the test, the points were used variously as either control points or check points. All points exhibited less than 0.5 cm error (horizontal and vertical) at a 95% confidence level relative to the base station. The mission was planned with a flying altitude of 100 m (328 ft) above the launch position (near the base station).
The base station was positioned at an altitude of 508 feet (NAVD88, Geoid 12B). The general mission parameters were:
Flying height = 836 feet (NAVD88, Geoid 12B) – 100 m above the launch site
End Lap = 85%
Side Lap = 65%
Flight Lines = 9
Total flight length = ~32,000 feet (~6 miles)
Flight Time = 22.5 minutes
Number of images = 530
Dominant flight direction = Northeast/Southwest
Highest GCP in the project = 507 feet
Lowest GCP in the project = 247 feet
Vertical terrain relief (between highest and lowest GCP) = 260 feet
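As a sanity check on these parameters, the ground sample distance (GSD) of a nadir frame camera scales linearly with height above terrain. The X4S values assumed below (8.8 mm focal length, 13.2 mm sensor width, 5472 pixels across) are nominal published specifications, not figures from this report:

```python
# Approximate GSD for a nadir frame camera:
#   GSD = pixel_pitch * height_above_terrain / focal_length
# Camera values are nominal DJI X4S specifications (an assumption).
FOCAL_LENGTH_M = 8.8e-3           # focal length, meters
PIXEL_PITCH_M = 13.2e-3 / 5472    # sensor width / pixels across

def gsd_m(height_above_terrain_m: float) -> float:
    return PIXEL_PITCH_M * height_above_terrain_m / FOCAL_LENGTH_M

print(round(gsd_m(100.0) * 100, 2))  # ~2.74 cm at 100 m above terrain
```

Because much of the pit floor lies well below the launch point, the mean height above terrain exceeds the planned 100 m, which is why the delivered mosaic GSD (3.62 cm) is coarser than the nominal 100 m figure.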
We conducted the mission using two separate flights of the DJI Inspire 2, primarily to demonstrate how to change batteries mid-project. The mission was actually just within the capabilities of a single set of batteries.
The post-processing workflow comprised:
AirGon Sensor Package Suite (ASPSuite) – GNSS Post-Processed Kinematic (PPK) workflow and image event tagging (image geocoding)
Agisoft PhotoScan Pro – photogrammetric bundle adjustment, point cloud creation, orthophoto mosaic creation
GeoCue’s LP360 point cloud software – accuracy analysis, data cleaning, product extraction
The general parameters of the raw data (point cloud, orthomosaic) produced by PhotoScan were:
Point Cloud density = 4.4 points per ft² (47.6 points per m²)
Point Cloud Nominal Point Spacing (NPS) = 5.7” (14.5 cm)
Orthophoto Mosaic ground sample distance = 1.43” (3.62 cm)
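The density and nominal point spacing quoted above are two views of the same quantity; assuming a roughly uniform point distribution, NPS ≈ sqrt(1 / density). A quick consistency check:

```python
import math

# Nominal point spacing from point density, assuming roughly uniform coverage.
def nps_from_density(points_per_unit_area: float) -> float:
    return math.sqrt(1.0 / points_per_unit_area)

print(round(nps_from_density(47.6) * 100, 1))  # ~14.5 cm from 47.6 pts/m^2
print(round(nps_from_density(4.4) * 12, 1))    # ~5.7 inches from 4.4 pts/ft^2
```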
Results – Loki Only (Direct Geopositioning, No Ground Control):
This first look at results is for direct geopositioning only. This is using our laboratory calibration parameters for the X4S camera with exterior orientation (EO) estimates provided by the Loki direct geopositioning system. However, no ground control points (GCPs) at all have been used in the solution. The results, using GeoCue’s LP360 point cloud software accuracy assessment tools, are shown in Figure 6.
The summary results are shown in Table 1. Obviously, we have come in way below our network accuracy requirement of 4” vertical RMSE for 1 foot contours! These are outstanding results that clearly indicate that, if proper procedures are used, direct geopositioning with no ground control can produce high network accuracy data models.
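The vertical RMSE reported in these tables is computed from the check point residuals (model elevation minus surveyed elevation). A minimal sketch of the statistic, with made-up residuals for illustration:

```python
import math

# Vertical RMSE from check point residuals (model z minus surveyed z).
def rmse(residuals):
    return math.sqrt(sum(dz * dz for dz in residuals) / len(residuals))

dz_ft = [0.09, -0.12, 0.05, -0.15, 0.11]  # hypothetical residuals, feet
print(round(rmse(dz_ft), 3))  # ~0.109 ft
```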
Results – Control Only:
In this second test, we are once again using the laboratory calibrated X4S camera, but this time we use the navigation grade image positions (no direct geopositioning) and 13 of our ground control points as project control. The remaining five are used as check points for testing accuracy. The accuracy results, using GeoCue’s LP360 point cloud software for analysis, are shown in Figure 7.
The summary results are shown in Table 2. Again, we have come in way below our network accuracy requirement of 4” vertical RMSE for 1 foot contours. Note that the planimetric accuracy is better than we achieved with Loki only. However, the vertical accuracy with control is actually slightly worse than with the Loki direct geopositioning system!
In conformance testing, we are interested in how well our 3D point model conforms to the actual surface in Object Space (OS). Since we generally do not have the true surface, we must do the comparison to some reference such as a preexisting model known to be “good,” specifically placed targets, survey “pogo” shots and so forth.
In our example, we have topographic contours derived from LIDAR data collected in January of 2017. If we treat the LIDAR contours as “truth” and look at areas that have not been disturbed between January 2017 and our flight, we can do a bit of analysis. Figure 8 depicts the contours derived from the January 2017 LIDAR data (green contours). We will analyze a segment of road that has been relatively undisturbed since the LIDAR flight. This is the cyan line indicated by the red arrow in Figure 8.
In LP360, we have a tool that can insert vertices in a line drawn over an existing line such that the newly sketched line will have a vertex at each crossing point with the elevation of the vertex copied from the existing line at the crossing point. Thus, the cyan line in Figure 8 (pointed by the red arrow) has a vertex at each point where it crosses a green contour line. Each vertex has an elevation value (Z coordinate) equal to the elevation of the crossed contour line.
Finally, LP360 has a different feature edit tool that can extract geometry from one type of feature into another. In our case, I extract the vertices of our test line into points. This gives me a point feature at each intersection between the test line and the existing LIDAR-derived contours. Each extracted point will have a Z value equal to the contour line that lies under the point. This final result is shown in Figure 9 as the magenta points.
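The same idea can be sketched outside LP360. Assuming we have a terrain profile sampled along the test line as (distance, elevation) pairs (the values below are hypothetical), linear interpolation finds where the profile crosses each 1 foot contour level, yielding check points whose elevations fall on exact integer feet:

```python
import math

# Find where a sampled terrain profile crosses each contour level.
# profile: list of (distance, elevation) pairs along the test line.
# Returns (distance, contour_elevation) pairs via linear interpolation.
def contour_crossings(profile, interval=1.0):
    crossings = []
    for (d0, z0), (d1, z1) in zip(profile, profile[1:]):
        if z1 == z0:
            continue  # flat segment crosses no new level
        lo, hi = sorted((z0, z1))
        level = interval * math.ceil(lo / interval)  # first level in span
        while level <= hi:
            t = (level - z0) / (z1 - z0)             # fraction along segment
            crossings.append((d0 + t * (d1 - d0), level))
            level += interval
    return crossings

# Hypothetical profile: a road segment climbing from 311.4 ft to 314.6 ft.
print(contour_crossings([(0.0, 311.4), (100.0, 314.6)]))
```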
I can now take these points into our LP360 accuracy tool and use them as Ground Check Points against the surface model we have created from the drone data. Using the data from the Loki only model (where no ground control points were included in the PhotoScan Pro point cloud generation process), we obtain the results of Figure 10. Examining the column of elevation values highlighted in Figure 10, it is clear that we did precisely pick up the contour elevation values since we see these elevations in integer increments (e.g. 312.00, 313.00, 314.00, …). The RMSE between these probe points and our model created from the direct geopositioned drone data is 0.132 feet (1.6”, 4.0 cm). This is surprisingly close to the overall RMSE of the control point data.
When you think about the results we are seeing, you will agree that they are quite remarkable. First of all, with no ground control whatsoever and a vertical ground (Object Space) extent of about 260 feet (79 m), we are achieving a network vertical accuracy of 0.112 feet (1.3”, 3.4 cm). Selecting a random section of the data and comparing the vertical to a LIDAR survey conducted 11 months prior to the drone flights, we are seeing a vertical conformance of 0.132 feet (1.6”, 4.0 cm). Considering that the complete drone system (including camera) with Loki direct geopositioning is less than US $10,000, this is quite a bargain!
This study of an actual mine site shows that even in the presence of fairly significant vertical relief over the project area (260 feet), high accuracy can be achieved using direct geopositioning with no ground control whatsoever. Not only do we observe high vertical network accuracy at surveyed check points but also high conformance with an existing model assumed as truth. Using the photogrammetric accuracy triad - Exterior Orientation (EO), Interior Orientation (IO), Object Space (OS) - you can make suitable decisions regarding achieving the accuracy needed to support specific product generation. For example, if you are using direct geopositioning (good a priori EO estimates) as well as a laboratory calibrated camera (good a priori IO), you can achieve good Object Space accuracy without the need for ground control points.