6.1.4. Considerations for Interpreting InSAR Datasets

When interacting with and using the SCEC CGM InSAR product (or any InSAR dataset), several considerations affect how we interpret the data: the direction from which surface displacement is viewed, the spatial reference against which it is measured, and how many dimensions of displacement are actually being estimated at any given time.

Look Direction and Range Change

When looking at an interferogram, the direction of measured displacement depends on the satellite's viewing geometry, so accurately interpreting surface motions requires knowing which direction the satellite was looking when it collected its data. In other words, the change in distance between the satellite and a point on the surface, or range change, will appear different depending on whether the satellite is ascending or descending, and whether it is looking to the right or to the left of its flight path. Usually, this information is conveyed to the viewer by plotting the “look direction” and/or the flight direction of the satellite as an arrow on an interferogram figure. The look direction is the direction in which the satellite looks down at the surface, and therefore the direction in which it is able to observe ground deformation.
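The dependence of the measurement on viewing geometry can be made concrete by projecting a 3D ground displacement onto the radar line of sight. The sketch below is illustrative, not tied to any particular mission: the function name and the sample heading/incidence angles are assumptions, and it uses the common convention that heading is the flight azimuth clockwise from north and that a positive result means motion toward the satellite.

```python
import numpy as np

def project_to_los(d_enu, heading_deg, incidence_deg, right_looking=True):
    """Project an (east, north, up) displacement vector onto the radar
    line of sight.  Positive result = motion toward the satellite
    (range decrease).  Heading is the flight azimuth in degrees,
    clockwise from north; incidence is measured from vertical."""
    # Look azimuth: 90 deg to the right (or left) of the flight direction.
    look_az = heading_deg + (90.0 if right_looking else -90.0)
    phi = np.radians(look_az)
    theta = np.radians(incidence_deg)
    # Unit vector pointing from the ground toward the satellite:
    # horizontal part points opposite the look direction, vertical part up.
    u = np.array([-np.sin(theta) * np.sin(phi),   # east
                  -np.sin(theta) * np.cos(phi),   # north
                   np.cos(theta)])                # up
    return float(np.dot(d_enu, u))

# Example: 10 mm of pure uplift seen at 35 deg incidence appears as
# only ~8.2 mm of LOS motion toward the satellite.
print(project_to_los([0.0, 0.0, 10.0], heading_deg=190.0, incidence_deg=35.0))
```

Note that the same physical uplift would produce different LOS values on ascending and descending tracks whenever horizontal motion is also present, which is why the look-direction arrow matters.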

Additionally, range change may be represented in different ways depending on who produces the interferogram figure. Some authors define a positive range change as surface motion toward the satellite (often consistent with uplift), while others define positive as a range increase, meaning the surface moved away from the satellite (often consistent with subsidence). These opposite conventions can cause confusion, so all InSAR interferograms and displacement maps should clearly state their range-change convention. It is critical to know which sign convention is in use when viewing an interferogram or any figure created with InSAR data.
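The two conventions differ only by a sign, so converting a map from one to the other is a single negation. The array values below are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical LOS displacement map (mm) where positive = range INCREASE,
# i.e. the surface moved AWAY from the satellite.
los_range_increase = np.array([[ 2.0, -1.5],
                               [ 0.0,  3.2]])

# Same data expressed in the opposite convention:
# positive = motion TOWARD the satellite (range decrease).
los_toward_satellite = -los_range_increase
```

Checking a figure's stated convention before interpreting uplift versus subsidence is cheaper than re-deriving it from context.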

One-dimensional Deformation Measurement

InSAR measurements fundamentally represent only one dimension of displacement information: motion along the Line-Of-Sight (LOS) from the surface to the satellite. In areas where we have both an ascending and a descending scene, we technically have two different LOS directions, which can be combined to produce a two-dimensional product. We usually do not have enough distinct look angles to produce a three-dimensional product from InSAR data alone, so decomposing the current InSAR data products into 3D requires the user to make assumptions about where the third component of deformation data is coming from. This type of decomposition is an active area of current research in the InSAR community (e.g. Wright et al., 2004; Shen & Liu, 2020; Xu et al., 2021).
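The ascending-plus-descending combination can be sketched as a small linear inversion. This example makes the simplifying assumptions (not stated in the text above) that both tracks are right-looking, that ascending looks roughly east and descending roughly west, and that the north component of motion is negligible because near-polar SAR orbits are nearly insensitive to it; the function name and angles are illustrative.

```python
import numpy as np

def decompose_asc_desc(los_asc, los_desc, inc_asc_deg, inc_desc_deg):
    """Recover (east, up) displacement from ascending and descending
    LOS measurements.  Positive LOS = motion toward the satellite.
    Assumes negligible north motion and simplified east/west-looking
    geometry, so the east sensitivities have opposite signs."""
    th_a = np.radians(inc_asc_deg)
    th_d = np.radians(inc_desc_deg)
    # Design matrix mapping (d_east, d_up) -> (los_asc, los_desc).
    G = np.array([[-np.sin(th_a), np.cos(th_a)],
                  [ np.sin(th_d), np.cos(th_d)]])
    d_east, d_up = np.linalg.solve(G, [los_asc, los_desc])
    return d_east, d_up

# Pure uplift: both tracks see the same positive LOS motion, and the
# inversion returns zero east displacement.
print(decompose_asc_desc(8.19, 8.19, 35.0, 35.0))
```

With only two look directions the system is exactly determined for two unknowns; recovering a third (north) component is what requires the external assumptions or data discussed above.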

Relative Measurement: No Built-in Reference Frame

InSAR measurements are not absolute measurements of displacement: they are made relative to the satellite and relative to a SAR image from a particular date, and are not placed in an absolute reference frame during processing. This is one major difference between InSAR and other geodetic methods; Global Navigation Satellite System (GNSS) measurements, for example, are processed within a precisely defined global reference frame. Interferograms and other InSAR products can be defined relative to a chosen point, such that all measured displacements are relative to that point. InSAR measurements can also be tied to GNSS data solutions, which provide control points for the images (e.g. Neely et al., 2020). Inherently, however, the displacements in InSAR products are relative, not absolute, and accurate interpretation requires applying one or more reference points.
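The simplest form of re-referencing described above is to subtract the value at a chosen reference pixel, so every pixel then reads as displacement relative to that point. This is a minimal sketch of that step; the function name and array values are assumptions for illustration:

```python
import numpy as np

def reference_to_point(los_map, ref_row, ref_col):
    """Re-reference an InSAR LOS displacement map so the chosen pixel
    reads zero.  All other values then represent displacement relative
    to that reference point."""
    return los_map - los_map[ref_row, ref_col]

# Hypothetical 2x2 LOS map (mm), referenced to the top-left pixel:
los = np.array([[3.0, 4.0],
                [5.0, 6.0]])
ref = reference_to_point(los, 0, 0)
```

Tying to GNSS works similarly in spirit: instead of forcing a single pixel to zero, the InSAR map is shifted (or fit with a ramp) so it agrees with GNSS-derived LOS displacements at the control points.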

InSAR Measurement Uncertainties Difficult to Constrain

InSAR includes many possible sources of error, both in individual processing steps and in the measurement collection itself. Because so many noise sources are present, it is difficult to separate out the uncertainty introduced by each one. Additionally, because these measurements are collected as images of millions of pixels, calculating an uncertainty value for each pixel becomes computationally intensive, and even prohibitive depending on the resolution of the interferograms (Agram & Simons, 2015). Some promising approaches have been developed to tackle this problem (e.g. Agram & Simons, 2015; Tong et al., 2013), but it remains an area of active research.