Abstract:
There are many uncertainties in radiation fog forecasting, and continuous effort is being made to improve it. A supplementary and increasingly popular approach alongside numerical weather prediction is forecasting with machine learning (ML) algorithms. While numerical weather prediction is based on mathematical models built on partial differential equations, ML algorithms take a more heuristic approach. The latter strategy calls for three steps.
Careful data preprocessing is the first step. After preprocessing, the dataset must contain the forecast-relevant information in a form the algorithm can learn from. This is not a trivial step, because it requires a thorough understanding of the physical processes underlying radiation fog. Even when the relevant information is contained in the data, it is not always evident, especially in severely imbalanced fog datasets. Simply feeding the algorithm all available data and variables is therefore not necessarily the best strategy for achieving a satisfactory result. Instead, the data should be prepared so that the algorithm can detect the relevant dynamics.
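As an illustration of this preprocessing step, the sketch below labels fog events and derives one simple thermodynamic feature. The field names (`visibility_m`, `temperature_c`, `dewpoint_c`) are hypothetical, and the 1 km visibility threshold is only the common convention for fog; the abstract does not specify which variables or thresholds are actually used.

```python
# Illustrative preprocessing for an imbalanced fog dataset.
# All field names are hypothetical; adapt them to the actual station data.

FOG_VIS_THRESHOLD_M = 1000  # common convention: fog = visibility below 1 km

def preprocess(records):
    """Attach a binary fog label and a dew-point depression feature."""
    out = []
    for r in records:
        fog = 1 if r["visibility_m"] < FOG_VIS_THRESHOLD_M else 0
        # A small temperature/dew-point spread indicates near-saturated air.
        spread = r["temperature_c"] - r["dewpoint_c"]
        out.append({**r, "fog": fog, "spread_k": spread})
    return out

def fog_fraction(records):
    """Share of fog cases -- typically small, i.e. a severely imbalanced target."""
    return sum(r["fog"] for r in records) / len(records)
```

The fog fraction makes the class imbalance explicit before any model is trained, which helps decide whether resampling or class weighting is needed.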
The second step is splitting the data into training, validation, and test datasets. The ability to predict fog rests on a temporally linked process, the ongoing change in atmospheric state. To guarantee independence between the training, validation, and test datasets, the splitting method must account for this temporal linkage between individual data points. Otherwise, the algorithm's apparent forecast accuracy can rest on temporally correlated information shared between neighbouring data points.
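The principle can be sketched with a simple expanding-window splitter: the training window only ever grows forward in time, and each test window follows it chronologically, so no test point precedes any training point. This is a minimal illustration of the general idea, not the exact procedure of Vorndran et al. (2022); the `gap` parameter is an assumed extra that leaves out samples between training and test to weaken autocorrelation leakage.

```python
def expanding_window_splits(n_samples, n_folds, test_size, gap=0):
    """Yield (train_indices, test_indices) pairs in strict temporal order.

    The training window expands with every fold; `gap` samples can be left
    out between training and test to reduce autocorrelation leakage.
    """
    first_test_start = n_samples - n_folds * test_size
    for k in range(n_folds):
        test_start = first_test_start + k * test_size
        train = list(range(0, test_start - gap))
        test = list(range(test_start, test_start + test_size))
        yield train, test
```

Because the temporal order is never shuffled, every evaluation mimics the operational situation: the model is always scored on data that lies strictly in its future.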
The third step is the interpretation of the model scores. Taken alone, a forecast score is an abstract number that does not directly allow a statement about the forecast performance of the model. To evaluate the model performance, two baselines are relevant: one for algorithm complexity and one for dataset complexity. A baseline for algorithm complexity justifies the chosen algorithm and puts the model performance into context. A baseline for dataset complexity likewise contextualises the model performance and enables better comparability across datasets.
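A natural dataset-complexity baseline for fog is persistence: forecast for time t + lead whatever was observed at time t. The sketch below scores such a reference with the critical success index (CSI); a trained model is only useful if it beats this number. The choice of persistence and of the CSI here is an assumption for illustration; the abstract does not state which baseline forecasts or scores are used.

```python
def csi(y_true, y_pred):
    """Critical success index: hits / (hits + misses + false alarms)."""
    hits = sum(o == 1 and p == 1 for o, p in zip(y_true, y_pred))
    misses = sum(o == 1 and p == 0 for o, p in zip(y_true, y_pred))
    false_alarms = sum(o == 0 and p == 1 for o, p in zip(y_true, y_pred))
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

def persistence_csi(fog_obs, lead):
    """Score a persistence forecast: predict at t+lead what was observed at t."""
    return csi(fog_obs[lead:], fog_obs[:-lead])
```

Comparing the model's CSI against the persistence CSI on the same test period turns the abstract score into a statement about added forecast value.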
Following these principles, our current objective is to improve the ML-based fog forecast with XGBoost for forecasting periods of up to four hours for the station in Linden-Leihgestern (Germany). Training and evaluation are based on the Expanding Window Approach (Vorndran et al. 2022), which accounts for the autocorrelation of a fog time series and maintains the temporal order during both training and evaluation. The evaluation uses one score for each of the following categories: overall performance, fog formation, and fog dissipation. The results are set in relation to different baselines to assess the model performance and the dataset complexity. Building on this scheme, newly preprocessed data led to an improvement in the prediction of radiation fog for the station in Linden-Leihgestern. We will present the most recent findings from our research.