Regular talk (10 min) in Workshop: AI for Earth and Space Science
Testing Interpretability Techniques for Deep Statistical Climate Downscaling
Jose González-Abad · Jorge Baño-Medina · José Manuel Gutiérrez
Deep Learning (DL) has recently emerged as a promising perfect-prognosis Empirical Statistical Downscaling (ESD-PP) technique for generating high-resolution fields from large-scale climate variables. Here, we analyze two state-of-the-art DL topologies of different complexity for ESD-PP over North America. Beyond classical validation based on accuracy metrics such as the Root Mean Squared Error (RMSE), we apply several interpretability techniques to gain insight into the inner workings of the deployed DL models. In terms of RMSE, both topologies show similar values. Nonetheless, analyzing the resulting interpretability maps reveals that the simpler model fails to capture a realistic, physics-based input-output link, whereas the more complex one reproduces a local pattern characteristic of downscaling. In climate change scenarios, where weather extremes are exacerbated, such erroneous patterns can lead to highly biased projections. Therefore, including interpretability techniques as a diagnostic of model behavior in the evaluation process can help us better select and design downscaling models.
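The abstract does not specify which interpretability techniques were used, but gradient-based saliency maps (the sensitivity of each high-resolution output grid point to each large-scale input) are a common choice in this setting. The sketch below illustrates the idea on a hypothetical toy linear "downscaling" model using NumPy; real ESD-PP models are deep CNNs, and the model, sizes, and function names here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical stand-in for a downscaling model: a fixed linear map from a
# coarse-resolution input vector to a high-resolution output vector.
rng = np.random.default_rng(0)
n_coarse, n_fine = 8, 32
W = rng.normal(size=(n_fine, n_coarse))

def model(x):
    # Maps coarse predictors to high-resolution predictands.
    return W @ x

def saliency_map(f, x, out_idx, eps=1e-5):
    # Sensitivity of one output grid point to each input, estimated with
    # central finite differences (deep-learning frameworks would use autodiff).
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (f(xp)[out_idx] - f(xm)[out_idx]) / (2 * eps)
    return grad

x = rng.normal(size=n_coarse)
sal = saliency_map(model, x, out_idx=0)

# For a linear model the saliency of output 0 equals the first weight row,
# which verifies the estimator before applying it to a real network.
assert np.allclose(sal, W[0], atol=1e-4)
```

Inspecting such maps is what lets one check whether a model attends to physically plausible, localized input regions, which is the diagnostic the abstract argues for.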