This document provides background information for the GitHub project prhuppertz/Burned_Area_Detection.

Motivation

The increasing intensity and frequency of wildfires under climate change create a growing demand for informed decision-making to mitigate their ecological and socioeconomic costs. Burned area (BA) detections provide post-fire information on the extent, frequency, and characteristics of wildfires, allowing fire managers, policy-makers, and climate scientists to improve decision-making with better estimates of costs, emissions, and regional trends.

Most global burned area models operate on the low spatial resolution remote sensing data of MODIS (500 m-1000 m), which limits their accuracy and can miss up to 80% of wildfires in regions such as Sub-Saharan Africa (Roteta et al. 2019). Remote sensing datasets with higher spatial resolution, on the other hand, come with large data volumes, high computational costs, and lower temporal resolution, and are therefore typically deployed only at regional scales.

To tackle the inaccuracies that result from coarse spatial resolution data and to create a model that is both efficient and scalable, this project explores a Deep Learning approach to BA detection with medium-high spatial resolution (10-60 m) Sentinel-2 remote sensing data.

Task

The task is to create semantic segmentations of BAs from remote sensing data. This is achieved by building a Deep Learning model that trains on Sentinel-2 remote sensing data and validates its accuracy against burned area ground truth data. The effectiveness and accuracy of the Deep Learning approach are then compared to a simple baseline model.

Data

We use two main data sources for training and validating the Deep Learning and baseline models:

  1. Sentinel-2 Level 1-C data, atmospherically corrected with SIAC, focusing on Band 03 (Green), Band 8A (NIR), and Band 11 (SWIR) at 20-60 m spatial resolution and 5-10 day temporal resolution for 2016 - in TIFF format at 100 km x 100 km scale
  2. Burned area ground truth data from the ICNF for the whole of Portugal for 2016 - in shapefile format with additional information, including: fire start date, fire end date, area burned, type of fire, and the land use type of the area (e.g. forest, agriculture, ...)

Notes on data choices:

Figure 1. Spectral reflectance of healthy vegetation and burned vegetation in the different spectral bands (Wasser and Cattau 2020)
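To make the reflectance behaviour in Figure 1 concrete, the widely used Normalized Burn Ratio (NBR) contrasts the NIR and SWIR bands, where healthy and burned vegetation diverge most. This is a hedged illustration only; NBR and the example reflectance values below are standard remote sensing conventions, not taken from this project's pipeline:

```python
import numpy as np

def nbr(nir, swir, eps=1e-8):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR).

    Healthy vegetation (high NIR, low SWIR) gives high NBR;
    burned vegetation (low NIR, high SWIR) gives low or negative NBR,
    matching the spectral curves sketched in Figure 1.
    """
    return (nir - swir) / (nir + swir + eps)

# Illustrative reflectance values (hypothetical, not from the dataset):
healthy = nbr(np.array([0.45]), np.array([0.15]))  # high NIR, low SWIR
burned = nbr(np.array([0.10]), np.array([0.30]))   # low NIR, high SWIR
print(healthy, burned)
```

With these hypothetical reflectances, the healthy pixel yields a clearly positive NBR and the burned pixel a negative one, which is the separation the chosen bands are meant to exploit.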

Figure 2a. False-color images with a combination of SWIR-short, SWIR-long and NIR make burned areas visible but difficult to distinguish from their environment

Figure 2b. Combination of SWIR, Green and NIR spectral bands emphasise burned areas as dark brown patches in a bright green environment
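A composite like the one in Figure 2b can be sketched by stacking the three chosen bands into an RGB-like array. The function names and the percentile stretch below are illustrative assumptions, not the repository's implementation, and synthetic arrays stand in for real Sentinel-2 tiles:

```python
import numpy as np

def stretch(band, lo=2, hi=98):
    """Percentile-stretch a single reflectance band into [0, 1] for display."""
    p_lo, p_hi = np.percentile(band, [lo, hi])
    return np.clip((band - p_lo) / (p_hi - p_lo), 0.0, 1.0)

def false_color_composite(swir, nir, green):
    """Stack SWIR (B11), NIR (B8A), and Green (B03) into a false-color
    composite, in the spirit of Figure 2b: burned areas appear as dark
    patches against brightly rendered vegetation."""
    return np.dstack([stretch(swir), stretch(nir), stretch(green)])

# Synthetic 4x4 reflectance patches stand in for real 100 km x 100 km tiles.
rng = np.random.default_rng(0)
swir, nir, green = (rng.uniform(0.0, 0.5, (4, 4)) for _ in range(3))
comp = false_color_composite(swir, nir, green)
print(comp.shape)  # (4, 4, 3)
```

The same stacking applies unchanged to full-size tiles, since the operations are element-wise over the band arrays.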

Data Processing

The data pipeline performs the following steps on the Sentinel-2 and ground truth data to create a processed, trainable dataset for our deep learning and baseline models: