Difference between revisions of "Satellite-derived nearshore bathymetry"
Dronkers J (talk | contribs)
We then have

<math>r_{water} = K \, r_{in} R_W \int_0^h e^{-2Kz} dz = \large\frac{1}{2}\normalsize r_{in} R_W (1 - e^{-2Kh}) , \quad r_{\infty} = \large\frac{1}{2}\normalsize R_W , \quad r_{bed} = r_{in} R_B e^{-2Kh} , \qquad (A2)</math>

where <math>h</math> is the water depth and <math>R_W, \, R_B</math> are backscattering and reflection coefficients for water and for the seabed. It then follows that

<math>r_{rs} = r_{\infty} \, (1 - e^{-2Kh}) + R_B e^{-2Kh} . \qquad (A3)</math>

Considering that downwelling and upwelling attenuation can be different, the formula for the remote sensing reflectance reads<ref name=AM>Albert, A. and Mobley, C.D. 2003. An analytical model for subsurface irradiance and remote sensing reflectance in deep and shallow case-2 waters. Opt. Express 11: 2873–2890</ref>:

<math>r_{rs} = r_{\infty} \, \big(1 - e^{- (K_D +K_U)h} \big) + R_B e^{-(K_D +K_U)h} . \qquad (A4)</math>
Derivation of the remote sensing signal observed by satellite <math>R_{rs}</math> from the below-surface reflectance <math>r_{rs}</math> requires several correction factors: the refractive index of seawater, the transmission of light across the water-air interface and the internal reflection of upwelling radiance. A commonly used approximation is

<math>R_{rs} \approx \Large\frac{0.5 r_{rs}}{1 - 1.5 r_{rs}}\normalsize . \qquad (A5)</math>
If the attenuation coefficients <math>K_D, \, K_U</math> and the seabed reflection coefficient <math>R_B</math> are known for a specific wavelength <math>\lambda</math>, then the water depth <math>h</math> can be derived by comparing the formulas (A4, A5) with the reflectance <math>R_{sat}</math> recorded by the satellite sensor for the corresponding wavelength band.

Using empirical expressions for <math>K_D, \, K_U, \, R_B</math>, Lee et al. (1999)<ref name=L99>Lee, Z., Carder, K. L., Mobley, C. D., Steward, R. G. and Patch, J. S. 1999. Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization. Appl. Opt. 38: 3831–3843</ref> determined the bathymetry of Florida Bay from remote sensing reflectance data with an accuracy better than 10% for these shallow waters with a uniform, sand-type bottom.
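As an illustration, the depth inversion implied by Eqs. (A4) and (A5) can be written in closed form: (A5) is solved for the below-surface reflectance <math>r_{rs}</math>, and (A4) for the depth <math>h</math>. The following minimal Python sketch demonstrates this round trip; all numerical values of <math>K_D, K_U, R_B, r_{\infty}</math> are invented placeholders, not calibrated coefficients.

```python
import math

def invert_depth(R_sat, K_D, K_U, R_B, r_inf):
    """Estimate water depth h from satellite reflectance R_sat
    by inverting Eqs. (A4) and (A5)."""
    # Eq. (A5) solved for the below-surface reflectance r_rs
    r_rs = R_sat / (0.5 + 1.5 * R_sat)
    # Eq. (A4) solved for the attenuation factor E = exp(-(K_D + K_U) h)
    E = (r_inf - r_rs) / (r_inf - R_B)
    if not 0.0 < E <= 1.0:
        raise ValueError("reflectance outside the range of the model")
    return -math.log(E) / (K_D + K_U)

# Round-trip check with hypothetical coefficients:
K_D, K_U, R_B, r_inf = 0.15, 0.20, 0.25, 0.02  # placeholder values
h_true = 8.0
E = math.exp(-(K_D + K_U) * h_true)
r_rs = r_inf * (1.0 - E) + R_B * E           # Eq. (A4)
R_sat = 0.5 * r_rs / (1.0 - 1.5 * r_rs)      # Eq. (A5)
print(round(invert_depth(R_sat, K_D, K_U, R_B, r_inf), 6))  # → 8.0
```

In practice the inversion is applied per wavelength band, and the coefficients have to come from optical closure experiments or empirical expressions such as those of Lee et al. (1999).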
==Appendix B: Semi-Empirical Model==
The first model is the one proposed by Lyzenga et al. (2006)<ref name=L6>Lyzenga, D.R., Malinas, N.P. and Tanis, F.J. 2006. Multispectral bathymetry using a simple physically based algorithm. IEEE Trans. Geosci. Remote Sens. 44: 2251–2259</ref>, which assumes a linear relationship between the log-transformed wavelength bands and depth <math>h</math>,

<math>h = h_0 + \sum_{j=1}^N h_j \, X_j , \quad X_j = \ln \big( R_{rs}(\lambda_j) - R_{\infty} (\lambda_j) \big) , \qquad (B1)</math>

where <math>N</math> is the number of bands considered and <math>h_0, h_1, …, h_N</math> are coefficients that can be estimated through linear (multiple) [[Linear regression analysis of coastal processes|regression]] from known depth values <math>\hat{h}</math> in a number of points of the coastal area where the bathymetry is to be determined.
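Estimating the coefficients in Eq. (B1) is an ordinary multiple linear regression. A minimal sketch with synthetic data (the number of bands, the coefficient values and the ranges of the log-transformed band values are all invented for illustration; <math>R_{\infty}</math> is assumed already subtracted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: N = 2 bands, invented true coefficients h0, h1, h2
h_true = np.array([1.0, -4.0, -7.0])
X = rng.uniform(-3.0, -1.0, size=(50, 2))   # log-transformed band values X_j
depths = h_true[0] + X @ h_true[1:]          # Eq. (B1), noise-free

# Multiple linear regression with design matrix [1, X_1, X_2]
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, depths, rcond=None)
print(np.allclose(coef, h_true))  # → True
```

With real imagery the known depths <math>\hat{h}</math> come from sounding data or ICESat-2 profiles, and the regression residuals give a first indication of the attainable accuracy.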
The second model by Stumpf et al. (2003)<ref name=S3>Stumpf, R.P., Holderied, K. and Sinclair, M. 2003. Determination of water depth with high-resolution satellite imagery over variable bottom types. Limnol. Oceanogr. 48: 547–556</ref> proposed an empirical linear relationship between the water depth <math>h</math> and the ratio of the log-transformed green or red band (<math>\lambda_1</math>) to the log-transformed blue band (<math>\lambda_2</math>),

<math>h = m_0 + m_1 \Large\frac{\ln(1000 \, R_{rs}(\lambda_1))}{\ln(1000 \, R_{rs}(\lambda_2))}\normalsize . \qquad (B2)</math>

The values of the parameters <math>m_0, \, m_1</math> have to be adjusted by linear regression to predetermined depths in a number of points for estimating the bathymetry in other points of the coastal area<ref>Caballero, I. and Stumpf, R.P. 2020. Towards Routine Mapping of Shallow Bathymetry in Environments with Variable Turbidity: Contribution of Sentinel-2A/B Satellites Mission. Remote Sens. 12, 451</ref>.
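The calibration of Eq. (B2) is a one-dimensional linear fit of depth against the band ratio. A minimal sketch (reflectance ranges and the values of <math>m_0, m_1</math> are invented; real calibration uses surveyed depths):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic reflectances for two bands (invented values)
Rrs_green = rng.uniform(0.005, 0.05, size=40)   # lambda_1
Rrs_blue = rng.uniform(0.01, 0.06, size=40)     # lambda_2

# Ratio of log-transformed bands, Eq. (B2)
ratio = np.log(1000.0 * Rrs_green) / np.log(1000.0 * Rrs_blue)

m_true = (-30.0, 25.0)                           # invented m0, m1
depths = m_true[0] + m_true[1] * ratio           # known calibration depths

# Fit m0, m1 by linear regression
m1, m0 = np.polyfit(ratio, depths, 1)
print(np.allclose([m0, m1], m_true))  # → True
```

The factor 1000 in Eq. (B2) keeps both logarithms positive for typical reflectance values, so that the ratio varies smoothly with depth.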
==Appendix E: Analysis of wave characteristics from satellite images==
According to linear wave theory, the water depth <math>h</math> can be computed from the wave dispersion relation,

<math>h = \Large\frac{\lambda}{2 \pi }\normalsize \, \tanh^{-1} \Big( \Large\frac{2 \pi c^2}{g \lambda}\normalsize \Big), \qquad (E1)</math>

if the wavelength <math>\lambda</math> and the wave celerity <math>c</math> are known.
However, in deep water, <math>2 \pi h / \lambda \gg 1</math>, which implies that the wave celerity does not depend on the depth and consequently the water depth cannot be computed from Eq. (E1).
In shallow water, <math>h \ll \lambda / 2 \pi</math>, the relationship (E1) between water depth and wave celerity becomes <math>h \approx c^2 / g</math>. The validity of this expression is questionable for the highly distorted bore-type waves on the upper shoreface. A more accurate relationship can be derived from cnoidal wave theory, <math>c^2 \approx g (h + H)</math>, where <math>H</math> is the wave height<ref>Thornton, E.B. and Guza, R.T. 1982. Energy saturation and phase speeds measured on a natural beach. J. Geophys. Res. 87, 9499</ref>.
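Equation (E1) can be evaluated directly once <math>\lambda</math> and <math>c</math> have been estimated from imagery. A minimal sketch (the wavelength and depth values are illustrative), including a guard for the deep-water limit where the inversion breaks down:

```python
import math

g = 9.81  # gravitational acceleration [m/s^2]

def celerity(wavelength, depth):
    """Wave celerity from the linear dispersion relation."""
    k = 2.0 * math.pi / wavelength
    return math.sqrt(g / k * math.tanh(k * depth))

def depth_from_waves(wavelength, c):
    """Invert Eq. (E1): h = (lambda / 2 pi) * atanh(2 pi c^2 / (g lambda))."""
    x = 2.0 * math.pi * c**2 / (g * wavelength)
    if x >= 1.0:
        raise ValueError("deep-water wave: celerity carries no depth signal")
    return wavelength / (2.0 * math.pi) * math.atanh(x)

# Round trip: a 100 m wave over 10 m depth
c = celerity(100.0, 10.0)
print(round(depth_from_waves(100.0, c), 6))  # → 10.0
```

The `ValueError` branch corresponds to the deep-water limit discussed above: as <math>2 \pi c^2 / (g \lambda) \rightarrow 1</math>, the inverse hyperbolic tangent diverges and small celerity errors produce arbitrarily large depth errors.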
Wavelength and wave celerity can be extracted from the pixel intensity pattern <math>I(x,y)</math>, where <math>x, y</math> are the spatial coordinates of the remote sensing image. As the pixel intensity pattern may contain a great amount of noise, a common technique to enhance linear patterns is the so-called Radon Transform (RT)<ref name=BAM>Bergsma, E.W.J., Almar, R. and Maisongrande, P. 2019. Radon-augmented sentinel-2 satellite imagery to derive wave-patterns and regional bathymetry. Remote Sens. 11, 16</ref>. Even if wave fronts are not obviously apparent, they normally will appear through the Radon Transform. Linear features coinciding with wave fronts are represented by lines <math>r=x \cos \theta + y \sin \theta</math>, where <math>\theta</math> is the wave incidence angle and <math>r</math> the distance of the wave front to <math>x=0, \, y=0</math> along the wave ray. The Radon Transform is then given by the integral over the image area <math>A</math> along possible wave fronts (depending on <math>\theta</math>),

<math>RT(r, \theta) = \int\int_A I(x,y) \, \delta(r - x \cos \theta - y \sin \theta) \, dx \, dy .</math>
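A discrete approximation of the Radon Transform can be obtained by rotating the image and summing pixel columns. The sketch below uses a synthetic wave field (wavelength, image size and the threshold factor are all invented) to show that the projection has the largest contrast when the integration direction is aligned with the wave crests:

```python
import numpy as np
from scipy.ndimage import rotate

# Synthetic image: straight wave crests, wavelength 20 pixels, crests along y
x = np.arange(128)
I = np.tile(np.sin(2.0 * np.pi * x / 20.0), (128, 1))

def radon_projection(image, theta_deg):
    """Sum the image along lines at angle theta (one slice of a discrete RT)."""
    rotated = rotate(image, theta_deg, reshape=False, order=1)
    return rotated.sum(axis=0)

# Projection variance peaks when the summation direction follows the crests
var_aligned = radon_projection(I, 0.0).var()   # integrate along crests
var_crossed = radon_projection(I, 90.0).var()  # integrate across crests
print(var_aligned > 10.0 * var_crossed)  # → True
```

Scanning <math>\theta</math> for the maximum projection variance yields the wave incidence angle; the dominant wavelength then follows from the spectrum of the projection, and the celerity from the phase shift between two successive images.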
Revision as of 16:49, 31 December 2023
The traditional way to determine the bathymetry of the near-shore zone is by means of in-situ sonar observations, nowadays mainly with multibeam echosounders. In this way the depth can be measured with decimeter-scale accuracy. However, these measurements are labor-intensive, require a lot of shipping time and are therefore expensive. This places important limitations on the frequency with which surveys can be carried out. Major parts of the world's nearshore waters have never been surveyed with in-situ sonar equipment.

Remote sensing therefore offers an attractive alternative for bathymetric measurements. Various methods have been developed: methods using fixed installations on the coast and methods using airborne sensors. Remote sensing from the coast uses radar or video cameras. Depth estimates are obtained indirectly through the observable influence of water depth on wave propagation in shallow water. The use of video cameras is described in the article Argus applications and the use of radar in the article Use of X-band and HF radar in marine hydrography, both with references to other related articles. However, the nearshore zone covered by fixed observation installations is of limited size. Airborne remote sensing can cover larger areas. Accurate bathymetric measurements are possible with airborne LIDAR, see for example Data processing and output of Lidar, but only in very clear water. LIDAR observations are also expensive. Airborne remote sensing with hyperspectral sensors has wide applications in nearshore coastal waters, not only for bathymetric surveys but also for mapping submarine habitats. Examples are discussed in the article HyMap: Hyperspectral seafloor mapping and direct bathymetry calculation in littoral zones. All the methods described above require investments in equipment and means for installation and/or transportation. In contrast, a huge amount of remote sensing data is collected by satellites every day globally for all coastal zones. 
These remote sensing images are freely available for investigating physical, chemical and biological processes and characteristics of land and ocean, including coastal areas.
We discuss in this article three main techniques to extract nearshore bathymetry from satellite remote sensing: two so-called 'passive' techniques, based on (1) the interpretation of water color and (2) the detection of wave propagation characteristics, and (3) an 'active' technique, space-borne LIDAR (Laser Imaging, Detection And Ranging). LIDAR does not need daylight, unlike color remote sensing. LIDAR and color remote sensing both require calm, clear water. Wave remote sensing also needs daylight and works best with energetic waves, especially high swell waves. The three techniques are therefore complementary[1].
Satellite color-based bathymetry
Sunlight reflected from the seabed in shallow water contains information on the water depth. A simple example is the shallow and deep ends of a swimming pool, which appear as different colors to the human eye.
Several satellites, such as Landsat 8, Sentinel 2, WorldView-2, QuickBird, IKONOS and SPOT, are equipped with optical sensors that measure sunlight reflected from the sea surface in different wavelength bands. Google Earth Engine is a cloud-based geospatial computing platform which offers a petabyte-scale archive of freely available optical satellite imagery. Among other datasets, it features the whole archive of Landsat, the first three Sentinel missions, and full Moderate Resolution Imaging Spectroradiometer (MODIS) data and products. Spectra recorded by these optical sensors contain information on the bathymetry of nearshore coastal waters. However, a plethora of factors perturbs the recorded signal, related to the state of the atmosphere (e.g. haze, aerosols and clouds), the sea surface (e.g. sunglint, sky glint and white caps) and the water column (e.g. sedimentation, turbidity and variable optical properties). For instance, the top-of-atmosphere signal for blue-to-red spectral bands can consist of up to 90% scattering due to ozone and Rayleigh effects[2]. The influence of these factors has to be removed by available preprocessing algorithms to obtain the remote sensing reflectance [math]R_{rs}[/math].
Several methods have been developed to extract the seabed bathymetry from the remote sensing reflectance. Most methods are based on models that require bathymetric data from other sources for calibration. One of the possible sources is the LIDAR instrument mounted on the ICESat-2 satellite.
Satellite-derived bathymetry is restricted to shallow coastal areas. In deep water, the light in the water column is too strongly attenuated for producing a significant reflected signal from the seabed. In clear water the maximum depth is about 25 m and in turbid water less than about 5 m[3]. The accuracy of satellite-derived bathymetry is usually not much better than about 10% of the water depth.
Remote sensing reflectance
Optical remote sensing provides a measure of reflectance [math]R_{rs}(\lambda)[/math] in different wavelength bands [math]\Delta \lambda[/math]. Reflectance is defined as the ratio of upwelling radiance and downwelling irradiance, [math]R_{rs}=R_{up}/R_{in}[/math]. Upwelling radiance [math]R_{up}(\lambda)[/math] is the radiance emerging from below the water surface, i.e. the total light energy in a wavelength band [math]\Delta \lambda[/math] emitted per unit time in all directions from a unit area of the sea surface. Downwelling irradiance [math]R_{in}(\lambda)[/math] is the incoming light energy in a wavelength band [math]\Delta \lambda[/math] received directly and indirectly from the sun per unit time and unit sea surface.
Upwelling radiance is the sum of the light emitted from the water column, [math]R_{water}[/math], and from the seabed, [math]R_{bed}[/math]. It is also the difference between the total emitted sea surface radiance and the reflection of the sky-radiance originating from the upper hemisphere, including both the direct (sunglint) and the diffuse (sky glint) components. Sunglint occurs in imagery when the water surface orientation is such that the sun is directly reflected towards the sensor, and hence is a function of sea surface state, sun position and viewing angle. The upwelling radiance depends on water constituents (e.g. suspended sediment, chlorophyll, gelbstoff), bottom reflectance and water depth.
Downwelling surface irradiance is the sum of the direct and diffuse components of sunlight attenuated by reflection at the air-sea interface. Reflectance of the direct sun beam depends on the solar zenith angle and the index of refraction of seawater. Reflectance of the diffuse irradiance is related to the roughness of the sea surface. Reflectance due to foam can be related to the wind speed, and it affects both the direct and the diffuse components. Downwelling irradiance can be estimated from analytical expressions[4] or from measurements[5].
Retrieving water depth
Several methods have been developed to retrieve water depth from remote sensing reflectance spectra. A few popular methods are briefly described below.
Semi-analytical methods
Semi-analytical methods were developed by Lee et al. (1999)[6] who analyzed water’s absorption and backscatter properties (including the influence of suspended matter, phytoplankton and gelbstoff). From these properties they derived semi-empirical formulas for the contributions of absorption and backscatter to radiative transfer in seawater and a corresponding formula for the remote sensing reflectance (see Appendix Semi-Analytical Model). In shallow waters, the remote sensing reflectance depends not only on the absorption and scattering properties of dissolved and suspended material in the water column, but also on the bottom depth and the reflectivity of the bottom, or the bottom albedo. For the bottom depth to be retrieved, the water-column contributions to the reflectance have to be removed. The bathymetry of a shallow-water coastal field site can then be obtained by matching the semi-empirical formula with observed remote sensing reflectance values for a particular wavelength, without using any bathymetric data. The accuracy is on the order of ± 1 m for depths less than about 10 m.
Semi-empirical methods
Semi-empirical methods use relationships between reflected radiation and water depth without considering light transmission in water. In addition to depth values measured in a number of locations, semi-empirical methods require certain bands in the visible wavelength range, with blue and green being the most widely used, as inputs in simple or multiple linear regressions. Two well-known methods exist to estimate bathymetry in a given area: the linear transform (Lyzenga et al. 2006[7]) and the ratio transform (Stumpf et al. 2003[8]), see Appendix Semi-Empirical Model.
The physical concept underlying the ability to estimate bathymetry from multi-spectral imagery is the wavelength-dependent attenuation of light in the water column. The ratio transform method with the blue and red bands may perform better than with the blue and green bands in very shallow water, because the red band (and other long-wavelength bands) is more strongly absorbed than the green band (and other short-wavelength bands)[9]. As the water depth decreases, the wavelength band of the water-leaving reflectance that is most sensitive to depth shifts from shorter to longer wavelengths (from the green band to the red-edge band).
Empirical method
If the water depth is known at a sufficiently large number of locations, machine learning (ML) techniques can be used to predict bathymetry directly from the remote sensing reflectance [math]R_{rs}(\lambda)[/math] in different wavelength bands. Compared with the classic methods, the machine learning approach does not require any empirical knowledge of attenuation, water quality, or bottom type, and has wider applicability. The core assumption is that the bathymetry and the seabed have spectral signatures that can be differentiated within the remote sensing data. The empirical approach is easy to apply and tools are readily available to process and analyze data, which are major advantages. Recent developments in ML techniques make it possible to process huge amounts of in-situ data efficiently. The limitations are the requirement of in-situ data and the adaptation to a specific site; results usually cannot be transferred to other sites.
Machine learning techniques have also been applied to remote sensing reflectance without atmospheric correction by using the near-infrared (NIR) wavelength band. The NIR band is widely used for atmospheric correction; when training the model using the remote sensing images without atmospheric correction, including the NIR band helps to improve the training accuracy[10]. The bathymetric data obtained with atmospheric correction are more accurate than those without, but in some cases avoiding the complex atmospheric correction processes may yield acceptable results.
Popular machine learning techniques for retrieving the bathymetry directly from remote sensing reflectance are Principal Component Analysis[11], Artificial Neural Networks (ANN)[12], Random Forest regression networks (RF)[13] and Support Vector Regression (SVR) algorithms[14]. RF and SVR are briefly described in Appendix C and D.
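The empirical approach amounts to standard supervised regression. The following sketch trains a Random Forest on synthetic multiband reflectances generated with a toy exponential attenuation model; the band attenuation coefficients, noise level and dataset sizes are all invented for illustration and do not represent a calibrated retrieval.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic training set: 4 reflectance bands with a depth-dependent signal
n = 500
depths = rng.uniform(0.5, 20.0, size=n)
K = np.array([0.06, 0.09, 0.25, 0.60])         # invented band attenuations
R = 0.05 * np.exp(-2.0 * np.outer(depths, K))  # toy reflectance model
R += rng.normal(0.0, 1e-4, size=R.shape)       # small sensor noise

# Train on known depths, evaluate on held-out pixels
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(R[:400], depths[:400])
errors = np.abs(model.predict(R[400:]) - depths[400:])
print(errors.mean() < 1.0)  # → True
```

Because the model only learns the reflectance-depth relation present in the training data, it inherits the site-specificity mentioned above: water quality or bottom type differing from the training site invalidates the fit.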
Satellite-borne LIDAR
Airborne lidar bathymetry provides an efficient alternative to vessel-based echo-sounding techniques, particularly in transparent shallow waters.
ICESat-2 ATLAS is a space-based laser altimeter launched in September 2018. It is a photon-counting lidar with a revisit period of 91 days. ATLAS uses a green laser (532 nm) with 10,000 pulses per second, a vertical resolution of 4 mm, a footprint of 13 m in diameter, and an along-track sampling interval of 0.7 m. It has three laser beams along the track, and the distance between adjacent beams is approximately 3.3 km. Each beam is divided into strong and weak sub-beams. The energy of the strong sub-beam is approximately four times that of the weak sub-beam, and the distance between them is 90 m. The detector is very sensitive, so the raw photon data in the ATL03 dataset are noisy, especially during the day due to solar background radiation[15].
The ICESat-2 geolocation photon data are available in the ATL03 product, which is disseminated through the National Snow and Ice Data Center (NSIDC). ICESat-2 was not originally designed for marine applications and its trajectories have a limited global distribution. However, the use of a 532-nm laser and the high accuracy of the altimetry give it great potential in nearshore bathymetry for depths up to about 40 m in optically clear waters[3]. Refraction and tide corrections are essential for nearshore bathymetry when using the ATLAS remote sensing data. Even if the spacing between the beam pairs is too wide to generate high-resolution bathymetric results, water depth profiles derived from ICESat-2 can be used to feed the (semi)empirical methods for satellite-derived bathymetry.
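The refraction correction mentioned above accounts for the slower speed of light in water, which makes raw photon depths overestimate the true depth. For near-nadir geometry the first-order correction is a simple scaling by the ratio of refractive indices; the sketch below illustrates only this scaling (full processing also corrects the horizontal photon position and uses the exact pointing angle, and the refractive index values are approximate).

```python
n_air = 1.00029    # refractive index of air (approximate)
n_water = 1.34116  # refractive index of seawater at 532 nm (approximate)

def refraction_corrected_depth(raw_depth):
    """First-order depth correction for near-nadir lidar returns."""
    return raw_depth * n_air / n_water

# A raw (uncorrected) depth of 10 m corresponds to about 7.46 m true depth
print(round(refraction_corrected_depth(10.0), 2))  # → 7.46
```

The roughly 25% bias shows why uncorrected ATL03 photon depths cannot be used directly to calibrate the (semi)empirical color-based methods.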
Satellite wave-based bathymetry
Depth information can also be retrieved by analyzing wave propagation characteristics detected by satellite remote sensing. Satellite wave-based bathymetry does not require in situ data for calibration or training. The technique is based on a formula that expresses the wave propagation speed (wave celerity) in shallow water as a function of wavelength and water depth, see Shallow-water wave theory. This formula holds in principle for uniform depth, but is generally a reasonable approximation in situations where the depth is smoothly varying. Values of the wave celerity and the wavelength can be derived from images of the sea surface at successive times. The water depth can then be computed from the analytical formula of the wave celerity. This technique is commonly applied to sea surface images obtained from shore-based video cameras such as ARGUS[16][17] or from drones[18][19].
Wave propagation from satellite images
The wave celerity can be derived from a time series of satellite images of the sea surface if certain requirements are met[20]:
- the spatial coverage comprises at least one wavelength,
- the pixel diameter is much smaller than the wavelength (to enable wavelength estimation),
- the timelapse between two successive images is smaller than the wave period (to enable identification of individual waves),
- the ratio of pixel diameter to timelapse is smaller than the wave celerity (to enable estimation of the shift in wave phase),
- the overlap of two successive images contains the same nearshore area.
These requirements restrict the use of satellite images, because the pixel diameter is large, O(10 m), and the timelapse between two images is often longer than the ratio of area width and satellite speed, meaning that successive images do not have sufficient overlap. In the best case, two successive images are available, if there is a small timelapse of O(1 s) between images taken for different wavelength bands. For example, for Sentinel-2’s bands with best resolution (10 m) and a timelapse of 1 s, phase shift estimation requires about 7-8 pixels per wavelength, so wavelengths smaller than 70–80 m are not sufficiently resolved[21].
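The sampling requirements listed above can be checked numerically for a given sensor configuration. A small sketch, where the function name and the 7-pixel threshold (taken from the estimate cited above) are illustrative assumptions:

```python
def pair_is_usable(pixel_m, dt_s, wavelength_m, period_s,
                   swath_m, min_pixels_per_wavelength=7):
    """Check the sampling requirements for wave-phase-shift estimation
    from two successive satellite images (illustrative thresholds)."""
    celerity = wavelength_m / period_s          # c = L / T
    return {
        "coverage >= one wavelength": swath_m >= wavelength_m,
        "pixel much smaller than wavelength":
            wavelength_m / pixel_m >= min_pixels_per_wavelength,
        "timelapse shorter than wave period": dt_s < period_s,
        "pixel/timelapse smaller than celerity": pixel_m / dt_s < celerity,
    }

# Sentinel-2-like configuration: 10 m pixels, ~1 s band-to-band timelapse.
# A 60 m wave fails (too few pixels per wavelength, celerity too low),
# while a 150 m swell passes all checks.
print(pair_is_usable(pixel_m=10, dt_s=1, wavelength_m=60,
                     period_s=10, swath_m=10000))
print(pair_is_usable(pixel_m=10, dt_s=1, wavelength_m=150,
                     period_s=12, swath_m=10000))
```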
The identification of waves in the remote sensing images is based on the analysis of variations in pixel intensity, see Appendix E. A complicating factor is the irregular character of natural waves, with varying wave height, wave period and wave incidence direction. The uncertainty in the results is largely due to the uncertainty in the estimation of the wave celerity. Currently, the accuracy margins of depths obtained from satellite-based techniques are substantially larger than those of operational shore-based video cameras, and an order of magnitude larger than those of in-situ echo-soundings[22].
Jurisdictional issues
In situ bathymetric surveys in territorial waters typically require permission of the host country. Often, this permission comes with the request that survey findings are not made publicly available and the host country receives a copy of the data. Many countries are reluctant to grant foreign entities access to territorial waters, due to the risk that bathymetric survey information could be used to facilitate undersea navigation by military vessels[23]. Bathymetry from space that makes no use of in-situ data circumvents host country permissions. Satellite-derived bathymetry can also be used to determine areas such as reefs, atolls and shoals that could be built upon to create artificial islands and lay claim to territorial waters and exclusive economic zones. Although currently hypothetical, the buildup of shallow water by states seeking to expand their influence is a worry in some regions of the world and satellite-derived bathymetry could play a role in both expanding or refuting this practice[23].
Appendices
Appendix A: Semi-Analytical Model
The below-surface remote sensing reflectance [math]r_{rs}[/math] is given by[6]
[math]r_{rs} = \Large\frac{r_{up}}{r_{in}}\normalsize = \Large\frac{r_{water}}{r_{in}} + \frac{r_{bed}}{r_{in}}\normalsize , \qquad (A1)[/math]
where [math] r_{in}(\lambda)[/math] is the downwelling irradiance below the sea surface. The upwelling radiance below the sea surface, [math]r_{up}(\lambda)[/math], has contributions from water column backscattering, [math]r_{water}[/math], and from seabed reflection, [math]r_{bed}[/math]. In deep water, the back radiation from the seabed is almost nil. The deep water reflectance [math]r_{\infty}[/math] is thus equal to [math]r_{\infty} = \Large\frac{r_{deep water}}{r_{in}} .[/math]
We now make 2 assumptions:
- The incoming light is exponentially attenuated with depth [math]z[/math] measured from the water surface, with attenuation coefficient [math]K[/math] that represents the sum of absorption and backscattering,
- The attenuation coefficient is the same in shallow water and deep water.
We then have
[math]r_{water} = K \, r_{in} R_W \int_0^h e^{-2Kz} dz = \large\frac{1}{2}\normalsize r_{in} R_W (1 - e^{-2Kh}) , \quad r_{\infty} = \large\frac{1}{2}\normalsize R_W , \quad r_{bed} = r_{in} R_B e^{-2Kh} , \qquad (A2)[/math]
where [math]h[/math] is the water depth and [math]R_W, \, R_B[/math] are backscattering and reflection coefficients for water and for the seabed. It then follows that
[math]r_{rs} = r_{\infty} \, (1 - e^{-2Kh}) + R_B e^{-2Kh} . \qquad (A3)[/math]
Considering that downwelling and upwelling attenuation can be different, the formula for the remote sensing reflectance reads[24]:
[math]r_{rs} = r_{\infty} \, \big(1 - e^{- (K_D +K_U)h} \big) + R_B e^{-(K_D +K_U)h} . \qquad (A4)[/math]
Derivation of the remote sensing signal observed by satellite [math]R_{rs}[/math] from the below-surface reflectance [math]r_{rs}[/math] requires several correction factors: the refractive index of water, the water-to-air internal reflection, the radiance transmittance from below to above the surface, the irradiance transmittance from above to below the surface and the ratio of the upwelling radiance to the upwelling irradiance. An approximate expression for the relationship between [math]r_{rs}[/math] and [math]R_{rs}[/math] is[25]
[math]R_{rs} \approx \Large\frac{0.5 r_{rs}}{1 - 1.5 r_{rs}}\normalsize . \qquad (A5)[/math]
If the attenuation coefficients [math]K_D, \, K_U[/math] and the seabed reflection coefficient [math]R_B[/math] are known for a specific wavelength [math]\lambda[/math], then the water depth [math]h[/math] can be derived by comparing the formulas (A4, A5) with the reflectance [math]R_{sat}[/math] recorded by the satellite sensor for the corresponding wavelength band.
Using empirical expressions for [math]K_D, \, K_U, \, R_B[/math], Lee et al. (1999)[6] determined the bathymetry of Florida Bay from remote sensing reflectance data with an accuracy better than 10% for these shallow waters with a uniform, sand-type bottom.
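The inversion implied by Eqs. (A4) and (A5) can be written out explicitly: first invert (A5) for the below-surface reflectance, then solve (A4) for the depth. A minimal numpy sketch; the coefficient values are illustrative, not taken from [6]:

```python
import numpy as np

def depth_from_reflectance(R_rs, r_inf, R_B, K_D, K_U):
    """Invert Eqs. (A4)-(A5) for the water depth h.

    R_rs     : above-surface remote sensing reflectance (satellite band)
    r_inf    : deep-water reflectance for the same band
    R_B      : seabed reflection coefficient
    K_D, K_U : downwelling / upwelling attenuation coefficients [1/m]
    """
    r_rs = R_rs / (0.5 + 1.5 * R_rs)          # invert Eq. (A5)
    E = (r_rs - r_inf) / (R_B - r_inf)        # e^{-(K_D+K_U) h} from Eq. (A4)
    return -np.log(E) / (K_D + K_U)

# Round-trip check: forward model at h = 5 m, then invert.
h, r_inf, R_B, K_D, K_U = 5.0, 0.02, 0.10, 0.10, 0.12
E = np.exp(-(K_D + K_U) * h)
r_rs = r_inf * (1 - E) + R_B * E              # Eq. (A4)
R_rs = 0.5 * r_rs / (1 - 1.5 * r_rs)          # Eq. (A5)
print(depth_from_reflectance(R_rs, r_inf, R_B, K_D, K_U))  # recovers ~5.0
```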
Appendix B: Semi-Empirical Model
The first model, proposed by Lyzenga et al. (2006)[7], assumes a linear relationship between the log-transformed wavelength bands and depth [math]h[/math],
[math]h = h_0 + \sum_{j=1}^N h_j \, X_j , \quad X_j = \ln \big( R_{rs}(\lambda_j) - R_{\infty} (\lambda_j) \big) , \qquad (B1)[/math]
where [math]N[/math] is the number of bands considered and [math]h_0, h_1, …, h_N[/math] are coefficients that can be estimated through linear (multiple) regression from known depth values [math]\hat{h}[/math] in a number of points of the coastal area where the bathymetry is to be determined.
The second model, proposed by Stumpf et al. (2003)[8], assumes an empirical linear relationship between the water depth [math]h[/math] and the ratio of the log-transformed green or red band ([math]\lambda_1[/math]) to the log-transformed blue band ([math]\lambda_2[/math]),
[math]h = m_0 + m_1 \Large\frac{\ln(1000 \, R_{rs}(\lambda_1))}{\ln(1000 \, R_{rs}(\lambda_2))}\normalsize . \qquad (B2)[/math]
The values of the parameters [math]m_0, \, m_1[/math] have to be adjusted by linear regression to predetermined depths in a number of points for estimating the bathymetry in other points of the coastal area[26].
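Calibrating either model reduces to a linear least-squares fit. A sketch for the band-ratio model (B2); the synthetic reflectances and decay coefficients below are illustrative stand-ins for real imagery and echo-sounding data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: known depths and two reflectance bands
# whose intensity decays with depth (illustrative attenuation values).
h_known = rng.uniform(1.0, 15.0, size=50)      # calibration depths [m]
R1 = 0.02 * np.exp(-0.08 * h_known)            # green band (decays faster)
R2 = 0.03 * np.exp(-0.03 * h_known)            # blue band (decays more slowly)

# Predictor of Eq. (B2): ratio of the log-transformed bands.
X = np.log(1000 * R1) / np.log(1000 * R2)

# Least-squares estimate of m0, m1.
A = np.column_stack([np.ones_like(X), X])
(m0, m1), *_ = np.linalg.lstsq(A, h_known, rcond=None)

h_model = m0 + m1 * X
print(np.abs(h_model - h_known).mean())        # mean calibration error [m]
```

Note that with these synthetic bands the ratio predictor decreases with depth, so the fitted slope [math]m_1[/math] is negative; the linear model is only an approximation, so a small residual calibration error remains.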
Appendix C: Random Forest Regression short introduction
A regression tree (a decision tree with a numerical target variable) is a machine learning algorithm in which the target variable is discretized into a number of classes and the output is assigned to one of the classes based on a sequence of decisions about the input variables. In the present case, the target variable is the depth distribution of the bathymetry. Input variables (also called features or predictors) are the remote sensing reflectance for different wavelength bands (corrected for sunglint and atmospheric influences), turbidity, seabed characteristics, etc. In a simple regression tree, a split value learned from the training data separates the input data into two subsets of output classes. In each following step the subsets are split again, until at the end of the tree a single class (a so-called leaf) is left. The training of the regression tree on known depth values is based on finding the most efficient split for each input variable at each level of the tree: the split that minimizes the sum of the squared residuals of the known depths with respect to the average depth of each subset. A single regression tree tends to overfit the training data; the noise in the training data is reflected in the tree. This noise can be reduced by considering a multitude of regression trees, each trained on a random subset of the training data and a random subset of the variables; some of the training data may appear more than once in a subset. This multitude of regression trees is called a random forest, and the final output is the mean of the outputs of all the trees. A related ensemble approach uses a boosting algorithm (e.g. AdaBoost), which enhances the performance of weak learners by iteratively attributing different weights to the training samples.
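The split criterion described above, minimizing the sum of squared residuals over the two subsets, can be made concrete with a single-variable split search, the elementary building block of a regression tree. A minimal sketch with illustrative reflectance-depth data:

```python
def best_split(x, y):
    """Find the threshold on feature x that minimizes the summed squared
    residuals of y about the subset means (one node of a regression tree)."""
    def ssr(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    best_s, best_cost = None, float("inf")
    for s in sorted(set(x))[:-1]:               # candidate thresholds
        left = [yi for xi, yi in zip(x, y) if xi <= s]
        right = [yi for xi, yi in zip(x, y) if xi > s]
        cost = ssr(left) + ssr(right)
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s, best_cost

# Reflectance-like feature vs. depth: the split separates the two groups.
x = [0.10, 0.12, 0.14, 0.40, 0.42, 0.44]
y = [2.0, 2.1, 1.9, 8.0, 8.2, 7.8]
print(best_split(x, y))   # splits at x <= 0.14
```

A random forest repeats this search over many bootstrapped subsets of samples and features and averages the resulting trees.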
Appendix D: Support Vector Regression short introduction
Support vector regression (SVR) is a machine learning algorithm that learns the nonlinear relationship between observed surface reflectance and water depth without empirical knowledge of the processes that affect surface reflectance, such as attenuation, turbidity or bottom type. The training and test data are assumed to be independent and drawn from identical distributions; preprocessing may be needed to satisfy this assumption. No explicit model is required for the dependence of reflectance on water depth and other physical quantities. If this dependence is nonlinear, a linear classifier will fail. By performing a kernel transformation, the low-dimensional input space is implicitly mapped into a high-dimensional space in which a linear hyperplane can separate the data points, thus making the support vector machine a de facto nonlinear classifier. A number of kernels exist, such as polynomial functions and Radial Basis Functions (RBF), that enable nonlinear classification. Support vectors correspond to the data points that lie near the hyperplane and determine its orientation. In comparative studies of satellite-derived bathymetry, SVR is often among the most accurate machine learning algorithms[14].
Appendix E: Analysis of wave characteristics from satellite images
According to linear wave theory, the water depth [math]h[/math] can be computed from the wave dispersion relation,
[math]h = \Large\frac{\lambda}{2 \pi }\normalsize \, \tanh^{-1} \Big( \Large\frac{2 \pi c^2}{g \lambda}\normalsize \Big), \qquad (E1)[/math]
if the wavelength [math]\lambda[/math] and the wave celerity [math]c[/math] are known. However, in deep water, [math]2 \pi h / \lambda \gg 1[/math], which implies that the wave celerity does not depend on the depth and consequently the water depth cannot be computed from Eq. (E1).
In shallow water, [math]h \ll \lambda / 2 \pi[/math], the relationship (E1) between water depth and wave celerity becomes [math]h \approx c^2 / g[/math]. The validity of this expression is questionable for the highly distorted bore-type waves on the upper shoreface. A more accurate relationship can be derived from cnoidal wave theory, [math]c^2 \approx g (h + H)[/math], where [math]H[/math] is the wave height[27].
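Given estimates of [math]\lambda[/math] and [math]c[/math], Eq. (E1) and its shallow-water limit can be evaluated directly. A minimal numpy sketch with illustrative values:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def depth_from_dispersion(wavelength, celerity):
    """Invert the linear dispersion relation, Eq. (E1), for the depth h."""
    return wavelength / (2 * np.pi) * np.arctanh(
        2 * np.pi * celerity**2 / (G * wavelength))

def depth_shallow(celerity):
    """Shallow-water limit h ~ c^2 / g (valid for h << lambda / 2 pi)."""
    return celerity**2 / G

# Round trip: celerity for h = 5 m, lambda = 50 m, then invert Eq. (E1).
h, lam = 5.0, 50.0
c = np.sqrt(G * lam / (2 * np.pi) * np.tanh(2 * np.pi * h / lam))
print(depth_from_dispersion(lam, c))   # recovers ~5.0 m
```

Note that the inversion only works where the argument of [math]\tanh^{-1}[/math] stays below 1, i.e. outside the deep-water regime discussed above.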
Wavelength and wave celerity can be extracted from the pixel intensity pattern [math]I(x,y)[/math], where [math]x, y[/math] are the spatial coordinates of the remote sensing image. As the pixel intensity pattern may contain a large amount of noise, a common technique to enhance linear patterns is the so-called Radon Transform (RT)[21]. Even when wave fronts are not readily apparent, they will normally emerge through the Radon Transform. Linear features coinciding with wave fronts are represented by lines [math]r=x \cos \theta + y \sin \theta[/math], where [math]\theta[/math] is the wave incidence angle and [math]r[/math] the distance of the wave front to [math]x=0, \, y=0[/math] along the wave ray. The Radon Transform is then given by the integral over the image area [math]A[/math] along possible wave fronts (depending on [math]\theta[/math]),
[math]R_I (\theta, r) = \int \int_A \, I(x,y) \, \delta(r-x \cos \theta -y \sin \theta) \, dx dy , \qquad (E2)[/math]
where [math]\delta[/math] is the Dirac delta distribution ([math]\delta (x) = 0[/math] for [math]x \ne 0[/math] and [math]\int f(x) \delta (x-y) dx = f(y)[/math]). Wave fronts correspond to values of [math]\theta, r[/math] where [math]R_I[/math] is maximum.
The pixel intensity pattern is discretized ([math]r \rightarrow r_0, r_1, …, r_N[/math]) by assuming periodicity ([math]R_I(\theta, r_0) = R_I(\theta, r_N)[/math]) for successive wave fronts. The discrete complex Fourier transform of [math] R_I (\theta, r)[/math], [math]\; \tilde{ R_I }(\theta, k) = \sum_{n=0}^{N-1} R_I (\theta,r_n) \exp(-i 2 \pi k n/N), \; [/math] gives the intensity and the phase of the pixel pattern in wavenumber space [math]k = 2 \pi / \lambda[/math]. From the phase difference [math]\Delta \Phi[/math] between two successive images ([math]\Delta t[/math]) the wave celerity [math]c[/math] can be derived:
[math]\Phi = \tan^{-1} \Big( \Large\frac{\Im \tilde{ R_I }(\theta, k)}{\Re \tilde{ R_I }(\theta, k)}\normalsize \Big) , \quad c = \Large\frac{\lambda}{2 \pi}\frac{\Delta \Phi}{\Delta t}\normalsize . \qquad (E3)[/math]
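The phase-shift step of Eq. (E3) can be demonstrated on synthetic transects: two 1-D intensity profiles separated by [math]\Delta t[/math] are Fourier-transformed and the phase difference is read off at the dominant wavenumber. A minimal sketch for a single monochromatic wave (real imagery would first need the Radon-transform preprocessing described above; all values are illustrative):

```python
import numpy as np

dx, dt = 1.0, 1.0            # pixel size [m] and timelapse [s]
lam, c_true = 32.0, 5.0      # wavelength [m] and celerity [m/s]

x = np.arange(256) * dx
I1 = np.cos(2 * np.pi * x / lam)                   # first image transect
I2 = np.cos(2 * np.pi * (x - c_true * dt) / lam)   # wave advanced by c*dt

F1, F2 = np.fft.rfft(I1), np.fft.rfft(I2)
k0 = np.argmax(np.abs(F1[1:])) + 1                 # dominant wavenumber bin
lam_est = len(x) * dx / k0                         # estimated wavelength

dphi = np.angle(F1[k0] * np.conj(F2[k0]))          # phase shift Delta Phi
c_est = lam_est * dphi / (2 * np.pi * dt)          # Eq. (E3)
print(lam_est, round(c_est, 3))                    # 32.0 5.0
```

With irregular natural waves the phase must be estimated per wavenumber band and direction, which is the main source of the celerity uncertainty discussed above.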
Other techniques can also be used for analyzing the remote sensing images, for instance wavelet analysis [20].
Related articles
- HyMap: Hyperspectral seafloor mapping and direct bathymetry calculation in littoral zones
- Use of X-band and HF radar in marine hydrography
- Use of Lidar for coastal habitat mapping
- Data processing and output of Lidar
- Tidal flats from space
- Optical measurements in coastal waters
- Instruments for bed level detection
References
- ↑ Almar, R., Bergsma, E. W., Thoumyre, G., Baba, M. W., Cesbron, G., Daly, C., Garlan, T. and Lifermann, A. 2021. Global satellite-based coastal bathymetry from waves. Remote Sensing 13, 4628
- ↑ Mishra, D. R., Narumalani, S., Rundquist, D. and Lawson, M. 2005. Characterizing the Vertical Diffuse Attenuation Coefficient for Downwelling Irradiance in Coastal Waters: Implications for Water Penetration by High Resolution Satellite Data. ISPRS Journal of Photogrammetry and Remote Sensing 60: 48–64
- ↑ 3.0 3.1 Cesbron, G., Melet, A., Almar, R., Lifermann, A., Tullot, D. and Crosnier, L. 2021. Pan-European Satellite-Derived Coastal Bathymetry—Review, User Needs and Future Services. Frontiers in Marine Science 8, 740830
- ↑ Gege, P. 2012. Analytic model for the direct and diffuse components of downwelling spectral irradiance in water. Applied Optics 51: 1407-1419
- ↑ Ruddick, K.G., Voss, K., Boss, E., Castagna, A., Frouin, R., Gilerson, A., Hieronymi, M., Johnson, B.C., Kuusk, J., Lee, Z., Ondrusek, M., Vabson, V. and Vendt, R. 2019. A Review of Protocols for Fiducial Reference Measurements of Downwelling Irradiance for the Validation of Satellite Remote Sensing Data over Water. Remote Sens. 11, 1742
- ↑ 6.0 6.1 6.2 Lee, Z., Carder, K. L., Mobley, C. D., Steward, R. G. and Patch, J. S. 1999. Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization. Appl. Opt. 38: 3831–3843
- ↑ 7.0 7.1 Lyzenga, D.R., Malinas, N.P. and Tanis, F.J. 2006. Multispectral bathymetry using a simple physically based algorithm. IEEE Trans. Geosci. Remote Sens. 44: 2251–2259
- ↑ 8.0 8.1 Stumpf, R.P., Holderied, K. and Sinclair, M. 2003. Determination of water depth with high-resolution satellite imagery over variable bottom types. Limnol. Oceanogr. 48: 547–556
- ↑ Liu, Y., Tang, D., Deng, R., Cao, B., Chen, Q., Zhang, R., Qin, Y. and Zhang, S. 2020. An adaptive blended algorithm approach for deriving bathymetry from multispectral imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 14: 801–817
- ↑ Xie, C., Chen, P., Zhang, Z. and Pan, D. 2023. Satellite-derived bathymetry combined with Sentinel-2 and ICESat-2 datasets using machine learning. Front. Earth Sci. 11, 1111817
- ↑ Sanchez-Carnero, N., Ojeda-Zujar, J., Rodriguez-Perez, D. and Marquez-Perez, J. 2014. Assessment of different models for bathymetry calculation using SPOT multispectral images in a high-turbidity area: The mouth of the Guadiana Estuary. International Journal of Remote Sensing 35: 493–514
- ↑ Lumban-Gaol, Y. A., Ohori, K. A. and Peters, R. Y. 2021. Satellite-derived bathymetry using convolutional neural networks and multispectral sentinel-2 images. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives 43(B3-2021): 201-207
- ↑ Mudiyanselage, S.S.J.D., Abd-Elrahman, A., Wilkinson, B. and Lecours, V. 2022. Satellite-derived bathymetry using machine learning and optimal Sentinel-2 imagery in South-West Florida coastal waters. GIScience & Remote Sensing 59: 1143-1158
- ↑ Misra, A., Vojinovic, Z., Ramakrishnan, B., Luijendijk, A. and Ranasinghe, R. 2018. Shallow water bathymetry mapping using Support Vector Machine (SVM) technique and multispectral imagery. International journal of remote sensing 39: 4431-4450
- ↑ Zhang, X., Chen, Y., Le, Y., Zhang, D., Yan, Q., Dong, Y., Han, W. and Wang, L. 2022. Nearshore Bathymetry Based on ICESat-2 and Multispectral Images: Comparison Between Sentinel-2, Landsat-8, and Testing Gaofen-2. In IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15: 2449-2462
- ↑ Almar, R., Bonneton, P., Senechal, N. and Roelvink, D. 2009. Wave celerity from video imaging: A new method. In Proceedings of the 31st international conference coastal engineering. Vol. 1 :661–673
- ↑ Holman, R. A., Plant, N. and Holland, T. 2013. Cbathy: A robust algorithm for estimating nearshore bathymetry. Journal of Geophysical Research: Oceans 118: 2595–2609
- ↑ Bergsma, E. W. J., Almar, R., de Almeida, L. P. M. and Sall, M. 2019. On the operational use of uavs for video-derived bathymetry. Coastal Engineering 152, 103527
- ↑ Lange, A.M.Z., Fiedler, J.W., Merrifield, M.A. and Guza, R.T. 2023. UAV video-based estimates of nearshore bathymetry. Coastal Engineering 185, 104375
- ↑ 20.0 20.1 Poupardin, A., Idier, D., de Michele, M. and Raucoules, D. 2016. Water depth inversion from a single spot-5 dataset. IEEE Trans. Geosci. Remote Sens. 119 : 2329–2342
- ↑ 21.0 21.1 Bergsma, E.W.J., Almar, R. and Maisongrande, P. 2019. Radon-augmented sentinel-2 satellite imagery to derive wave-patterns and regional bathymetry. Remote Sens. 11, 16
- ↑ Bergsma, E.W.J., Almar, F., Rolland, A., Binet, R., Brodie, K.L. and Bak, A.S. 2021. Coastal morphology from space: A showcase of monitoring the topography-bathymetry continuum. Remote Sensing of Environment 261, 112469
- ↑ 23.0 23.1 Dickens, K. and Armstrong, A. 2019. Application of Machine Learning in Satellite Derived Bathymetry and Coastline Detection. SMU Data Science Review 2, Article 4
- ↑ Albert, A. and Mobley, C.D. 2003. An analytical model for subsurface irradiance and remote sensing reflectance in deep and shallow case-2 waters. Opt. Express 11: 2873–2890
- ↑ Lee, Z., Carder, K. L., Mobley, C. D., Steward, R. G. and Patch, J. S. 1998. Hyperspectral remote sensing for shallow waters: 1. A semi-analytical model. Appl. Opt. 37: 6329–6338
- ↑ Caballero, I. and Stumpf, R.P. 2020. Towards Routine Mapping of Shallow Bathymetry in Environments with Variable Turbidity: Contribution of Sentinel-2A/B Satellites Mission. Remote Sens. 12, 451
- ↑ Thornton, E.B. and Guza, R.T. 1982. Energy saturation and phase speeds measured on a natural beach. J. Geophys. Res. 87, 9499