
Project Control

Updated: May 5, 2022

Author: Lewis Graham, August 12, 2021

Welcome to True View Bulletin, our periodic in-depth look at various aspects of our True View drone LIDAR/imaging ecosystem. As you can see from the title, in this issue I will cover a topic near and dear to anyone doing high accuracy mapping – project control. Before you get too excited (you are excited, right?), let me tell you that I won't be giving you specific, definitive answers about how much control to use and where to place it, but rather guidelines. The main reason is that a lot of work remains to be done by our UAV mapping industry in this area.

What is Project Control?

Project control generally refers to marks in the "object" space being imaged (e.g. the ground) that can be measured in the collected data. These control marks are typically measured precisely and independently of the airborne sensor system. The most common approach is to use a Global Navigation Satellite System (GNSS) rover referenced to a base station. For imagery and high resolution laser scanners such as the True View 515 and True View 6xx systems, marked points on the ground can often suffice for both planimetric and 3D control. Of course, for vertical-only control, no image-identifiable markings are needed. We often refer to deliberately placed marks that can be seen in sensor data as "signalized" control. Figure 1 illustrates two different types of control marking. It also illustrates (via the crushed tile marker on the left) why, for many projects, it is a bad idea to use expensive control markers!



Figure 1: Ground Control Point Examples

Ground Control Points vs. Ground Check Points

Unfortunately, folks who have spoken the language of photogrammetry for many years refer to both the points used to build a model and the points used to verify a model as Ground Control Points (GCPs). In fact, control points are brought into the modeling process to "tie" the sensor data to the ground, whereas "check" points are withheld from modeling and used only to verify the resultant models. Again, unfortunately, both go by the same acronym, "GCP!" We sometimes use the acronym "GKP" for check points (as opposed to the more common term "withheld points") to ensure folks know what we mean.
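To keep the two roles straight in your own processing scripts, it can help to tag each surveyed point explicitly. The Python sketch below is purely illustrative (the SurveyPoint and Role names are mine, not part of any True View or EVO API), but it shows the GCP/GKP distinction in code form, with example coordinates that are placeholders only.

from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    CONTROL = "GCP"   # tied into the model (constrains the adjustment)
    CHECK = "GKP"     # withheld from the model, used only to verify it

@dataclass
class SurveyPoint:
    point_id: str
    easting: float    # meters, project CRS
    northing: float
    elevation: float
    role: Role

# Illustrative project list: one true control point, the rest withheld as check points.
points = [
    SurveyPoint("CP01", 512_340.12, 3_845_110.55, 187.432, Role.CONTROL),
    SurveyPoint("CK01", 512_510.88, 3_845_290.10, 189.017, Role.CHECK),
    SurveyPoint("CK02", 512_180.40, 3_844_930.73, 186.220, Role.CHECK),
]

gcps = [p for p in points if p.role is Role.CONTROL]   # go into the adjustment
gkps = [p for p in points if p.role is Role.CHECK]     # reserved for accuracy testing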

Laying out GCPs on a project site has a negative impact on project logistics and timelines, and thus project designers attempt to minimize or even eliminate their use. The most common approach is to add so-called "direct geopositioning" to the drone sensor itself. This is effectively putting a GNSS L1/L2 rover on the drone, operating in either Real-Time Kinematic (RTK) or Post-Process Kinematic (PPK) mode (PPK being our recommended approach for a number of reasons).

Direct Geopositioning

Using direct geopositioning can, indeed, eliminate the need for Ground Control Points (GCPs) for controlling the model, but it does not eliminate the need to have some check points (GKPs) scattered about. We at GeoCue always tend to overdo this (see Figure 2) in the projects we design, since a GKP can be used as a control point (turning it into a GCP) should the need arise. The need can arise if the direct geopositioning system fails, if the base station battery goes dead mid-project and there is no nearby CORS, and so forth.



Figure 2: Project Check Points for Sensor Testing

You will not need as many check points for a LIDAR vertical data set (in a True View 3DIS, where we have both LIDAR and cameras, the LIDAR is invariably used for vertical) as you would for pure photogrammetry.

The reason is a bit subtle. A LIDAR system is a direct ranging device. Thus, if the system holds a good, steady range, any vertical error from control tends to follow the GNSS vertical error. Photogrammetry is a bit more difficult for direct positioning: you must use a calibrated camera, and that camera must hold calibration over the entire project. Photogrammetry derives Z by correlating the same object location in multiple images, so the vertical (Z) is derived, not directly measured. Anywhere correlation is poor (e.g. the vegetated areas in Figure 2), there is a high probability of vertical error (well, a lot more than just vertical – this is why we use LIDAR!).
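True View EVO takes care of the formal ASPRS-style reporting, but the underlying vertical check is conceptually simple: interpolate the LIDAR ground surface at each check point and difference it against the surveyed elevation. The Python sketch below assumes you have already classified ground returns and exported them as a NumPy array; the k-nearest-neighbor averaging is just one plausible interpolation choice, not the method EVO uses.

import numpy as np
from scipy.spatial import cKDTree

def vertical_errors(ground_points: np.ndarray, check_points: np.ndarray, k: int = 8):
    """dz = LIDAR surface elevation minus surveyed elevation at each check point.

    ground_points : (N, 3) array of classified ground returns (E, N, Z)
    check_points  : (M, 3) array of surveyed check points (E, N, Z)
    """
    tree = cKDTree(ground_points[:, :2])          # index on horizontal position only
    dist, idx = tree.query(check_points[:, :2], k=k)
    lidar_z = ground_points[idx, 2].mean(axis=1)  # simple average of the k nearest ground returns
    return lidar_z - check_points[:, 2]

def rmse_z(dz: np.ndarray) -> float:
    """Vertical RMSE over the check points."""
    return float(np.sqrt(np.mean(dz ** 2)))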

Use a Local Base Station

One bit of advice I find to be invariably good: use a local base station, set the antenna pole of the base station on an easy-to-see target, and make sure the target will be visible in the data set you collect with the drone. This will give you a few options for tying the project to the ground, should something go awry.

Example Cases of Control Strategies

If I were doing a flight of 40 hectares (100 acres) with a True View 3DIS (meaning I am using LIDAR for vertical), I would want five "signalized" check points: four near the project edges and one in the center. These would be in addition to the aforementioned base station positioning. In addition, I would collect a few vertical-only points. These are really easy – just "pogo" with the rover; no target needed. These vertical-only shots add a lot of confidence for vertically debiasing the data (discussed below).
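For what it is worth, the five-point layout described above (four points inset from the project edges plus one in the center) is easy to rough out from the project bounding box. The helper below is only a planning sketch with an assumed inset distance; in practice you will move points to spots with good sky view and stable, open ground.

def checkpoint_layout(min_e, min_n, max_e, max_n, inset=20.0):
    """Nominal 5-point check layout: four points inset from the corners plus one center point.

    inset : distance in meters to pull the corner points in from the project boundary
            (an assumed planning value; adjust to suit site access and surface type).
    """
    corners = [
        (min_e + inset, min_n + inset),
        (max_e - inset, min_n + inset),
        (max_e - inset, max_n - inset),
        (min_e + inset, max_n - inset),
    ]
    center = ((min_e + max_e) / 2.0, (min_n + max_n) / 2.0)
    return corners + [center]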

If, on the other hand, I were flying a P4 RTK or an M300 RTK with a Zenmuse P1, I would bump my control/check up to around 8 to 10 points since photogrammetry tends to give a weaker “uncontrolled” vertical network accuracy.

I always process with all points treated as check points, meaning they are not brought into the model (because I am lazy and don't want to measure them!). We typically use Agisoft Metashape for the photogrammetry portions of projects, so for this stage we use our own camera calibration and the direct geopositioning results from our PPK processing in EVO. I then do an accuracy test using GeoCue's True View EVO American Society for Photogrammetry and Remote Sensing (ASPRS)-compliant accuracy testing. If an unacceptable vertical bias is observed, I test whether it is systematic or "noise" by using a Standard Deviation of the Mean (SDOM) test. This SDOM testing is included in True View EVO, so you won't be required to dust off your old statistics book! If the SDOM test indicates systematic bias, I use another tool in EVO to shift the point cloud data.
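EVO packages the SDOM test for you, but the statistic itself is simple: the standard deviation of the mean is the sample standard deviation of the check point errors divided by the square root of the number of check points, and a mean error that is large relative to its SDOM indicates a systematic bias rather than noise. The sketch below is my own rendering of that logic, not the EVO implementation; the 1.96 multiplier (roughly 95% confidence) is an assumed threshold.

import numpy as np

def debias_if_systematic(cloud_z: np.ndarray, dz: np.ndarray, z_crit: float = 1.96):
    """Shift the point cloud vertically only if the check point bias looks systematic.

    cloud_z : Z values of the point cloud
    dz      : vertical errors at the check points (LIDAR minus surveyed)
    z_crit  : assumed significance multiplier (about 95% confidence)
    """
    mean_dz = dz.mean()
    sdom = dz.std(ddof=1) / np.sqrt(len(dz))   # standard deviation of the mean
    if abs(mean_dz) > z_crit * sdom:
        # Bias is large relative to its own uncertainty: treat it as systematic and remove it.
        return cloud_z - mean_dz, mean_dz
    return cloud_z, 0.0                        # looks like noise; leave the data alone

If the test does not flag a systematic bias, leaving the data alone is the right call; shifting by a noisy mean just moves the error around.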

If we are running a photogrammetry-only project (e.g. a P4 or P1) and everything goes to hell in a handbasket, we use 75% of the GCPs for true control (using the rather tedious workflow of measuring GCPs in Metashape) and use the remaining 25% to, once again, validate in EVO. In reality, we very seldom get into this sort of trouble. It can happen if the base station fails and there is no suitable alternative. Fortunately, when you are running an EVO post-processing PPK flow, you can often buy Trimble PP-RTX by the flight minute (right from within EVO) and be happy you have a project solution, albeit with accuracy a bit degraded as compared to a local base station.
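If you do end up promoting points to true control, the 75/25 split described above is easy to script. The helper below is only a sketch; a seeded random split keeps the withheld 25% reasonably unbiased with respect to location.

import random

def split_control_check(points, control_fraction=0.75, seed=42):
    """Randomly assign ~75% of surveyed points as control (GCP) and withhold the rest as check (GKP)."""
    shuffled = points[:]                      # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    cut = round(len(shuffled) * control_fraction)
    return shuffled[:cut], shuffled[cut:]     # (control, check)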

Please don't worry if you find the above a bit confusing. We provide the details in our training, regardless of the type of sensor you are going to deploy. We have flown thousands of projects and assist True View customers on a daily basis. We will work with you, as partners, to ensure you get this right!

