Poster
DAM: A Foundation Model for Forecasting
Luke Darlow · Qiwen Deng · Ahmed Hassan · Martin Asenov · Rajkarn Singh · Artjom Joosen · Adam Barker · Amos Storkey
Halle B
It is challenging to scale time series forecasting models such that they forecast accurately for multiple distinct domains and datasets, all with potentially different underlying collection procedures (e.g., sample resolution), patterns (e.g., periodicity), and prediction requirements (e.g., reconstruction vs. forecasting). We call this general task universal forecasting. Existing methods usually assume that input data is regularly sampled, and they forecast to pre-determined horizons, resulting in failure to generalise outside the scope of their training. We propose the DAM: a neural model that takes randomly sampled histories and outputs an adjustable basis composition, as a continuous function of time, for forecasting to non-fixed horizons. It involves three key components: (1) a flexible approach that uses histories sampled randomly from a long-tail distribution, enabling an efficient global perspective of the underlying temporal dynamics while retaining focus on the recent history; (2) a transformer backbone that is trained on these actively sampled histories to produce, as representational output, (3) the basis coefficients of a continuous function of time. We show that a single DAM, trained on 10 common time series datasets, either outperformed or closely matched existing state-of-the-art (SoTA) models, even though those models were trained to specialise on specific dataset and horizon combinations. The DAM also performs well at imputation, transfers well to held-out datasets, is interpretable via its basis composition and its attention mechanism, can be tuned for different inference-cost requirements, is easy to deploy, is robust to missing and irregularly sampled data by design, and can forecast effectively to distant horizons without adaptation or repeated autoregressive prediction.
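The two mechanisms named in the abstract, long-tail history sampling and a forecast expressed as a basis composition over continuous time, can be sketched compactly. The Python snippet below is a minimal illustration under stated assumptions, not the paper's implementation: the 1/(1 + rate * age) sampling weights, the sinusoidal basis, and the names (sample_history, basis_forecast, rate) are invented here for exposition. In the DAM itself, the basis coefficients are produced by the transformer backbone rather than supplied by hand.

    import numpy as np

    def sample_history(times, values, n_samples=256, rate=0.05, seed=0):
        """Draw past (time, value) pairs with a long-tail preference for
        recent history: recent points are sampled most often, but distant
        ones remain reachable, giving a cheap global view of the signal.
        The 1/(1 + rate * age) weighting is an illustrative assumption."""
        rng = np.random.default_rng(seed)
        age = times.max() - times                    # 0 at the most recent point
        weights = 1.0 / (1.0 + rate * age)           # heavy-tailed decay with age
        weights /= weights.sum()
        idx = rng.choice(len(times), size=min(n_samples, len(times)),
                         replace=False, p=weights)
        return times[idx], values[idx]

    def basis_forecast(coeffs, freqs, t):
        """Evaluate a forecast as a continuous function of time: a weighted
        sum of sine/cosine basis functions at arbitrary query times t.
        In the DAM the coefficients come from the model; here they are
        placeholders."""
        t = np.asarray(t, dtype=float)[:, None]      # (T, 1) query times
        feats = np.concatenate([np.sin(2 * np.pi * freqs * t),
                                np.cos(2 * np.pi * freqs * t)], axis=1)
        return feats @ coeffs                        # (T,) forecast values

    # Usage: the same composition is queried at any horizon, with no
    # retraining and no autoregressive rollout.
    times = np.arange(0.0, 500.0)
    values = np.sin(2 * np.pi * times / 24) + 0.1 * np.random.randn(500)
    hist_t, hist_v = sample_history(times, values, n_samples=128)

    freqs = np.array([1 / 24, 1 / 168])              # e.g. daily and weekly periods
    coeffs = np.array([1.0, 0.0, 0.0, 0.0])          # 2 * len(freqs) coefficients
    future = basis_forecast(coeffs, freqs, np.arange(500.0, 560.0))

Because the forecast is a continuous function of time, irregular sampling and distant horizons need no special handling: any set of query times can be passed to basis_forecast directly.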