Incremental complexity: A forecaster’s odyssey

1 Introduction

Welcome! Our adventures in time-series analysis continue, but this time with a focus on forecasting. Forecasting is an incredibly important tool for many researchers and analysts, but it is also one that is easily abused, obfuscated, or just outright done poorly. In this article, we are going to explore forecasting from a ‘bottom-up’ approach to complexity. That is, we want to try simple models first, before jumping into something more complex and, therefore, harder to interpret. This is something I call incremental complexity, and it is an idea I introduced in this recent paper. Now, in some applications you might not need to care about interpretability, such as process engineering, where maximising accuracy is what matters. But for most applied settings in which I typically work, people need to know how conclusions were reached, or how numbers were generated. Using some off-the-shelf convoluted neural network just does not cut it. Besides, that stuff is boring anyway; we care about statistical uncertainty and distributions in this house!

Before we get started, we will load the R packages we need:

library(dplyr)
library(ggplot2)
library(lubridate)
library(tsibble)
library(feasts)
library(fable)

2 Downloading the dataset

We are going to analyse monthly laboratory confirmed influenza cases in Australia for no reason other than it seemed interesting. There is a bit of data wrangling to get it in the format we need, so stay with me here. We are just going to take the data up to and including 2017 to avoid the weirdness of COVID-19 (I will save that for a future post!) and the strange dip exhibited in 2018. Since data wrangling is not the focus of this article, feel free to just run this and move on:

library(httr)
library(readxl)
library(janitor)

url <- "https://www.health.gov.au/sites/default/files/2023-09/national-notifiable-diseases-surveillance-system-nndss-public-dataset-influenza-laboratory-confirmed.xlsx"

# Download the spreadsheet to a temporary file and read both sheets
GET(url, write_disk(tf <- tempfile(fileext = ".xlsx")))
flu_2008_2015 <- read_excel(tf, sheet = 1, skip = 1)
flu_2016_2018 <- read_excel(tf, sheet = 2, skip = 1)

# Each row of the raw data is a single notification, so counting rows
# per month gives us monthly case counts
flu <- bind_rows(flu_2008_2015, flu_2016_2018) %>%
  clean_names() %>%
  mutate(week_ending_friday = as.Date(week_ending_friday, format = "%Y-%m-%d"),
         month = yearmonth(week_ending_friday)) %>%
  reframe(cases = n(), .by = "month") %>%
  mutate(id = 1) %>%
  filter(month <= yearmonth(as.Date("2017-12-31", format = "%Y-%m-%d")))

Here is a quick visualisation of what we are dealing with:

flu %>%
  ggplot(aes(x = month, y = cases)) +
  geom_line(aes(group = 1)) +
  labs(x = "Month",
       y = "Lab confirmed influenza cases",
       colour = NULL) +
  theme_bw() +
  theme(legend.position = "bottom")

However, the temporal dynamics are quite difficult to observe on this scale. Let’s log-scale it and try again:

flu %>%
  ggplot(aes(x = month, y = cases)) +
  geom_line(aes(group = 1)) +
  labs(x = "Month",
       y = "log-scaled lab confirmed influenza cases",
       colour = NULL) +
  scale_y_log10() +
  theme_bw() +
  theme(legend.position = "bottom")

Beautiful! We can now make out a seasonal pattern and an upwards trend. But more on these things later! It's time to try some basic models.

3 Fitting forecast models

In R, forecasting should be done using the tidyverts – a collection of packages developed by the incredible Rob Hyndman and colleagues for handling forecasting and time-series data more generally using tidy data principles. Specifically, we are interested in the following packages:

  • tsibble – package for the tsibble data type, which is a time series version of the regular tibble data frame object
  • feasts – package for computing time-series features and useful associated quantities
  • fable – the actual forecasting package

To start with, we will convert our data frame to a tsibble:

tsbl <- as_tsibble(flu, key = id, index = month)
head(tsbl)
## # A tsibble: 6 x 3 [1M]
## # Key:       id [1]
##      month cases    id
##      <mth> <int> <dbl>
## 1 2008 Jan   121     1
## 2 2008 Feb   155     1
## 3 2008 Mar   149     1
## 4 2008 Apr   163     1
## 5 2008 May   270     1
## 6 2008 Jun   250     1

3.1 Some simple approaches

Okay, so where should we start? Well, if you open the highly-recommended and exceptional Forecasting: Principles and Practice (3rd Edition), you would find a short list of so-called ‘simple’ forecasting methods. These include:

  • Mean method (MEAN function in fable) – forecasts of all future values are equal to the average (i.e., mean) of the historical data. With historical data denoted by \(y_{1}, \dots , y_{T}\), forecasts can be written as:

\[ \hat{y}_{T+h|T} = \bar{y} = \frac{y_{1} + \dots + y_{T}}{T} \] where \(\hat{y}_{T+h|T}\) basically says ‘the estimate of the value of \(y_{T+h}\) based on the data \(y_{1}, \dots , y_{T}\)’.

  • Naive method (NAIVE function in fable) – forecasts are the value of the last observation. Mathematically, this approach is expressed by:

\[ \hat{y}_{T+h|T} = y_{T} \]

  • Seasonal naive method (SNAIVE function in fable) – forecasts are equal to the last observed value from the same season. Mathematically, this is:

\[ \hat{y}_{T+h|T} = y_{T+h-m(k+1)} \] where \(m\) is the seasonal period and \(k\) is the integer part of \((h-1)/m\) (i.e., the number of complete years in the forecast period prior to time \(T+h\)).

  • Drift method (RW function with the drift() special in fable) – forecasts are similar to the naive method, but they are allowed to increase or decrease over time, where the amount of change over time (i.e., ‘drift’) is the average change observed in the historical data. Mathematically, this is:

\[ \hat{y}_{T+h|T} = y_{T} + \frac{h}{T-1} \sum_{t=2}^{T}(y_{t}-y_{t-1}) = y_{T}+h \left( \frac{y_{T}-y_{1}}{T-1} \right) \] In fpp3, Rob Hyndman cleverly describes this as being equivalent to drawing a straight line between the first and last observations, and extrapolating it forward. A short base-R sketch of all four formulas follows below.
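Here is that sketch, using a hypothetical 12-month toy series (the values are made up purely for illustration):

# toy monthly series (hypothetical values for illustration only)
y <- c(120, 150, 160, 210, 260, 240, 180, 140, 130, 125, 170, 220)
T_len <- length(y)   # number of historical observations
h     <- 1:3         # forecast horizons
m     <- 12          # seasonal period for monthly data

mean_fc   <- rep(mean(y), length(h))                         # mean method
naive_fc  <- rep(y[T_len], length(h))                        # naive method
k         <- floor((h - 1) / m)                              # complete seasonal cycles
snaive_fc <- y[T_len + h - m * (k + 1)]                      # seasonal naive method
drift_fc  <- y[T_len] + h * (y[T_len] - y[1]) / (T_len - 1)  # drift method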

3.1.1 Fitting simple models in R

As highlighted above, there are simple one-line model functions in fable that enable us to fit these models. But there is another reason why fable is so good – we can fit them all at the same time using the model function! This makes our code more compact and also makes it easier to compare models. Here is an example using our sample data, remembering that we want to log-scale cases1:

fit <- tsbl %>%
  model(
    mean = MEAN(log(cases)),
    naive = NAIVE(log(cases)),
    snaive = SNAIVE(log(cases) ~ lag("year")),
    drift = RW(log(cases) ~ drift())
  )

fit
## # A mable: 1 x 5
## # Key:     id [1]
##      id    mean   naive   snaive         drift
##   <dbl> <model> <model>  <model>       <model>
## 1     1  <MEAN> <NAIVE> <SNAIVE> <RW w/ drift>

This returns a mable object (a ‘model fable’) where each row corresponds to a unique time series (in our case we only have one) and each column contains an object for each model that was fit. Very neat.

What can we do with this? Well, a lot!
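For example, before forecasting we can peek inside the fitted models using fabletools helpers such as tidy() (coefficient-level information) and glance() (one row of summary statistics per model). This is just a quick sketch; the simple models have very few parameters to report:

# coefficient-level information for each fitted model
fit %>% tidy()

# one-row-per-model summaries (e.g., residual variance, information criteria)
fit %>% glance()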

We might want to produce some quick forecasts for an entire year for all models:

first_fc <- fit %>% 
  forecast(h = 12)

head(first_fc)
## # A fable: 6 x 5 [1M]
## # Key:     id, .model [1]
##      id .model    month          cases .mean
##   <dbl> <chr>     <mth>         <dist> <dbl>
## 1     1 mean   2018 Jan t(N(7.4, 2.3)) 3638.
## 2     1 mean   2018 Feb t(N(7.4, 2.3)) 3638.
## 3     1 mean   2018 Mar t(N(7.4, 2.3)) 3638.
## 4     1 mean   2018 Apr t(N(7.4, 2.3)) 3638.
## 5     1 mean   2018 May t(N(7.4, 2.3)) 3638.
## 6     1 mean   2018 Jun t(N(7.4, 2.3)) 3638.

The forecast function knows how to handle mable objects, which makes our lives easy. We now have a data frame containing full forecast distributions for our forecast period. We can easily plot them:

first_fc %>%
  autoplot(tsbl) +
  scale_y_log10()
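If we want the prediction intervals as numbers rather than just a plot, the hilo() function converts the forecast distributions into interval columns (a quick sketch at the 80% and 95% levels):

first_fc %>%
  hilo(level = c(80, 95))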

We can also disable the prediction intervals to just observe the means for visual clarity:

first_fc %>%
  autoplot(tsbl, level = NULL) +
  scale_y_log10()

This is great, and it appears that the seasonal naive model performs the best, but how can we quantify that assertion? Let’s take a look at some ways to evaluate performance.

3.1.2 Evaluating forecast accuracy

Typically in forecasting work we would start with analysis of residuals and other diagnostics. But demonstrating all of those for every model here would take a long time. I encourage readers to check out the relevant chapters of fpp3. Instead, we are going to focus on evaluating point forecast and forecast distribution accuracy.

3.1.2.1 Point forecast accuracy

Proper evaluation of accuracy on held-out data, rather than just inspecting residuals, is critical. This is because residuals are not a good indicator of how large genuine forecast errors might be. As in other areas of statistics and machine learning, we need to split our data into training and test sets so that we evaluate forecasts of time points the model has never seen. Typical guidance is that the test set should be at least as large as the longest forecast horizon.

Let’s filter our data to contain everything up to and including 2016, and then use 2017 as our test set2:

train <- tsbl %>%
  filter(lubridate::year(month) < 2017)

test <- tsbl %>%
  filter(lubridate::year(month) >= 2017)

Now we can re-fit the models using only the training data and generate forecasts for one year:

fit_train <- train %>%
  model(
    mean = MEAN(log(cases)),
    naive = NAIVE(log(cases)),
    snaive = SNAIVE(log(cases) ~ lag("year")),
    drift = RW(log(cases) ~ drift())
  )

test_fc <- fit_train %>%
  forecast(h = 12)

Here is a plot of our forecasts against the actuals:

test_fc %>%
  autoplot(tsbl, level = NULL) +
  labs(x = "Month",
       y = "log-scaled lab confirmed influenza cases") +
  scale_y_log10() +
  guides(colour = guide_legend(title = "Forecast"))

Clearly, these are pretty poor forecasts (aside from maybe the seasonal naive model), which suggests that a more sophisticated approach is warranted (i.e., one that captures, at a minimum, the temporal dynamics our eyes can see in the data). Indeed, this is the point of this entire article – we need to start with basic methods that are completely transparent and interpretable, and build the case for additional complexity. In our circumstances here, we have a pretty strong case for moving to more complex methods. We can further this case by computing a range of error metrics for the forecasts3:

accuracy(test_fc, test)
## # A tibble: 4 × 11
##   .model    id .type     ME   RMSE    MAE    MPE  MAPE  MASE RMSSE  ACF1
##   <chr>  <dbl> <chr>  <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 drift      1 Test   8486. 31413. 18776. -164.  201.    NaN   NaN 0.516
## 2 mean       1 Test  17745. 35926. 18164.   27.9  47.4   NaN   NaN 0.560
## 3 naive      1 Test  11597. 32528. 17889.  -99.3 143.    NaN   NaN 0.532
## 4 snaive     1 Test   7982. 18229.  9741.  -17.2  53.9   NaN   NaN 0.542
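You may also notice that MASE and RMSSE come back as NaN. These metrics scale the errors by in-sample errors from the training data, and because we only passed the test set to accuracy(), there is nothing to scale by. Passing the full series instead should populate them (a quick sketch; output omitted):

# supplying the full series (training + test) gives accuracy() the
# historical observations it needs to compute the scaled metrics
test_fc %>%
  accuracy(tsbl) %>%
  arrange(RMSE)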

3.1.2.2 Forecast distribution accuracy

So far, we have only evaluated point forecast accuracy. But we would be terrible forecasters if that is all we cared about! Why interpret a single value when you can make use of an entire distribution which encapsulates your uncertainty about the future? A useful metric for this scenario is the Continuous Ranked Probability Score (CRPS), which is essentially a weighted absolute error computed from the entire forecast distribution, where the weighting explicitly accounts for the probabilities. Here is how we can use the CRPS to evaluate forecast distributions for all time points in the prediction set:

test_fc %>%
  accuracy(test, list(crps = CRPS))
## # A tibble: 4 × 4
##   .model    id .type   crps
##   <chr>  <dbl> <chr>  <dbl>
## 1 drift      1 Test  14487.
## 2 mean       1 Test  17314.
## 3 naive      1 Test  14348.
## 4 snaive     1 Test   8222.

For the CRPS, lower values indicate better distributional forecasts. We see that the snaive method is producing better forecasts than the other simple approaches. However, as noted earlier, the other methods are quite poor, which suggests we may need to move to something that better captures the relevant temporal properties.
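If a more formal definition helps, the CRPS for a forecast distribution \(F\) and an observed value \(y\) is commonly written as:

\[ \text{CRPS}(F, y) = \int_{-\infty}^{\infty} \left( F(x) - \mathbb{1}\{x \geq y\} \right)^{2} \, dx \]

That is, it measures the squared distance between the forecast cumulative distribution function and the ‘step’ function that jumps from 0 to 1 at the observed value, integrated over all possible values.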

3.2 More complex methods

Okay, so we have built up a case for moving to more complex methods. Remember, complexity often comes at the cost of interpretability. However, since we are trying to be truly incremental about all of this, we are going to progress to methods that are indeed more complex than a mean or naive forecast, but are still incredibly interpretable and well-studied. Best of all, since we are using fable, we can easily compare the performance of all of these approaches. Let’s fit the same forecasts as before, but with the addition of an autoregressive integrated moving average (ARIMA) model, an exponential smoothing (ETS) model, a Seasonal and Trend decomposition using Loess (STL) model (with a naive model applied to the seasonally adjusted series), and a time-series linear model (TSLM) which just captures trend and annual seasonality (where seasonality is modelled with simple seasonal dummy variables):

fit_train_complex <- train %>%
  model(
    mean = MEAN(log(cases)),
    naive = NAIVE(log(cases)),
    snaive = SNAIVE(log(cases) ~ lag("year")),
    drift = RW(log(cases) ~ drift()),
    stepwise = ARIMA(log(cases)),
    search = ARIMA(log(cases), stepwise = FALSE, approx = FALSE),
    ets = ETS(log(cases)),
    stlf = decomposition_model(
      STL(log(cases) ~ trend(window = 21), robust = TRUE), 
      NAIVE(season_adjust)
      ), 
    tslm = TSLM(log(cases) ~ trend() +  season())
  )

test_fc_complex <- fit_train_complex %>%
  forecast(h = 12)

Let’s unpack the new models a bit further. There are two ARIMA specifications. They both use fable’s amazing auto-ARIMA functionality (meaning we don’t have to manually specify the \(p\), \(d\), and \(q\) parameters), but the stepwise model uses the default stepwise automation procedure, while search does a much larger model space search to find the optimal model4. Similarly, we are letting fable do the automated search for the optimal parameters of the ETS model too.
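If you are curious about exactly which specifications the automated searches settled on, you can pull a single model out of the mable and call report() (a quick sketch; the selected orders and parameters will depend on your data):

# inspect the ARIMA specification chosen by the full search
fit_train_complex %>%
  select(search) %>%
  report()

# and the automatically selected ETS specification
fit_train_complex %>%
  select(ets) %>%
  report()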

But why are ARIMA, ETS, and STL models ‘interpretable’ despite the substantial jump in complexity? Well, an ARIMA model is a generative process. That is, we can simulate time series from an ARIMA process because it describes the temporal mechanics: an autoregressive lag order (\(p\); i.e., each time point is modelled as a linear combination of the \(p\) points before it), a degree of differencing (\(d\); i.e., the number of times the series is differenced to remove trends and make it stationary), and a moving average lag order (\(q\); i.e., we use the past \(q\) forecast errors rather than the past values, as in the autoregressive part). This leads us to a formulation that looks very much like a nice extension of linear regression, where \(y_{t}^{'}\) is the differenced time series:

\[ y_{t}^{'} = c + \phi_{1} y_{t-1}^{'} + \dots + \phi_{p} y_{t-p}^{'} + \theta_{1} \epsilon_{t-1} + \dots + \theta_{q} \epsilon_{t-q} + \epsilon_{t} \]

The \(\phi\) components describe the autoregressive part of the model, while the \(\theta\) components describe the moving average part of the model. \(\epsilon\) is the error term.
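To make the ‘generative process’ point concrete, here is a small sketch that simulates from an ARIMA(1,0,1) process using base R’s arima.sim() with arbitrarily chosen parameters:

set.seed(123)  # for reproducibility

# simulate 120 observations with hypothetical phi_1 = 0.7 and theta_1 = 0.3
sim <- arima.sim(model = list(order = c(1, 0, 1), ar = 0.7, ma = 0.3),
                 n = 120)

plot.ts(sim)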

With regards to ETS, put simply, an ETS model employs weighted averages of past observations, with the weights decaying exponentially as the observations get older. This approach can then be extended to account for both trends and seasonality. Mathematically, we can represent the weighted average form of an ETS model as:

\[ \hat{y}_{T+1|T} = \sum_{j=0}^{T-1} \alpha(1 - \alpha)^{j} y_{T-j} + (1 - \alpha)^{T} \ell_{0} \] where \(0 \leq \alpha \leq 1\) is the smoothing parameter and \(\ell_{0}\) is the initial level of the series.
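To see the exponential decay of the weights in action, here is a tiny sketch using an arbitrary smoothing parameter of \(\alpha = 0.3\):

alpha <- 0.3                      # hypothetical smoothing parameter
j <- 0:9                          # lag (0 = most recent observation)
weights <- alpha * (1 - alpha)^j  # weights decay geometrically with age
round(weights, 3)

The most recent observation gets the largest weight, and each older observation contributes progressively less.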

Finally, STL is the most intuitive of the three new additions. Its usefulness is best seen graphically, since it decomposes the time series into trend, seasonality, and the remainder of the signal. This leads to an additive component interpretation of time series. Here is the STL decomposition for the train set, where the upwards trend and annual seasonality are made very clear:

train %>%
  model(stl = STL(log(cases))) %>%
  components() %>%
  autoplot()

We can now directly compare the forecast distributions of all of the models fit thus far:

test_fc_complex %>%
  accuracy(test, list(crps = CRPS)) %>%
  arrange(crps)
## # A tibble: 9 × 4
##   .model      id .type   crps
##   <chr>    <dbl> <chr>  <dbl>
## 1 search       1 Test   5160.
## 2 stepwise     1 Test   5226.
## 3 tslm         1 Test   6749.
## 4 stlf         1 Test   8134.
## 5 snaive       1 Test   8222.
## 6 ets          1 Test  11185.
## 7 naive        1 Test  14348.
## 8 drift        1 Test  14487.
## 9 mean         1 Test  17314.

We see that the search ARIMA model produces the best forecast distributions, and four of the five newer complex models are the top performers. Let’s visualise the search ARIMA’s forecasts independently:

test_fc_complex %>%
  filter(.model == "search") %>%
  autoplot(tsbl) +
  labs(x = "Month",
       y = "log-scaled lab confirmed influenza cases") +
  scale_y_log10() +
  guides(colour = guide_legend(title = "Forecast"))

The model appears to capture 2017’s data reasonably well in terms of both the prediction interval coverage and the mean forecast against the actuals. However, there are other methods for evaluating forecast accuracy. One of these is the Winkler score. If the \(100(1- \alpha) \%\) prediction interval at time \(t\) is given by \([\ell_{\alpha,t}, u_{\alpha,t}]\), then the Winkler score is defined as the length of the interval plus a penalty if the observation falls outside the interval. As Rob Hyndman succinctly states in fpp3:

For observations that fall within the interval, the Winkler score is simply the length of the interval. Thus, low scores are associated with narrow intervals. However, if the observation falls outside the interval, the penalty applies, with the penalty proportional to how far the observation is outside the interval.
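In mathematical terms (following fpp3), the Winkler score for an observation \(y_{t}\) is:

\[ W_{\alpha,t} = \begin{cases} (u_{\alpha,t} - \ell_{\alpha,t}) & \text{if } \ell_{\alpha,t} \leq y_{t} \leq u_{\alpha,t} \\ (u_{\alpha,t} - \ell_{\alpha,t}) + \frac{2}{\alpha}(\ell_{\alpha,t} - y_{t}) & \text{if } y_{t} < \ell_{\alpha,t} \\ (u_{\alpha,t} - \ell_{\alpha,t}) + \frac{2}{\alpha}(y_{t} - u_{\alpha,t}) & \text{if } y_{t} > u_{\alpha,t} \end{cases} \]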

We can compare Winkler scores for forecasts easily in fable, but the choice of interval width is important (we use \(95\%\) here for demonstration purposes):

test_fc_complex %>%
  accuracy(tsbl, measures = interval_accuracy_measures,
    level = 95) %>%
  arrange(winkler)
## # A tibble: 9 × 4
##   .model      id .type winkler
##   <chr>    <dbl> <chr>   <dbl>
## 1 tslm         1 Test   43905.
## 2 stepwise     1 Test   50738.
## 3 search       1 Test   52982.
## 4 snaive       1 Test   75465.
## 5 naive        1 Test  230301.
## 6 stlf         1 Test  231972.
## 7 drift        1 Test  389376.
## 8 mean         1 Test  466614.
## 9 ets          1 Test  590724.

Winkler scores suggest that the time-series linear model (i.e., a model consisting of a linear trend component added to a seasonal periodicity component) is the best-performing method. The two ARIMA specifications are the second and third-best options, followed by the seasonal naive method. Clearly, this finding contrasts with the result obtained from the CRPS. Statistical decision making is hard!

The above accuracy comparisons were computed on the prediction intervals. But we can also just compare pointwise forecasts across a wide range of metrics:

test_fc_complex %>%
  accuracy(test) %>%
  arrange(RMSE)
## # A tibble: 9 × 11
##   .model      id .type      ME   RMSE    MAE    MPE  MAPE  MASE RMSSE   ACF1
##   <chr>    <dbl> <chr>   <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl>  <dbl>
## 1 stlf         1 Test    -906.  7719.  5183.  -75.6  83.0   NaN   NaN -0.206
## 2 search       1 Test    4338. 11623.  5371.  -13.5  31.0   NaN   NaN  0.220
## 3 stepwise     1 Test    4562. 12019.  5913.  -18.4  37.6   NaN   NaN  0.302
## 4 tslm         1 Test    6821. 17958.  8050.  -12.3  32.9   NaN   NaN  0.378
## 5 snaive       1 Test    7982. 18229.  9741.  -17.2  53.9   NaN   NaN  0.542
## 6 ets          1 Test  -15075. 20602. 15075. -160.  160.    NaN   NaN  0.660
## 7 drift        1 Test    8486. 31413. 18776. -164.  201.    NaN   NaN  0.516
## 8 naive        1 Test   11597. 32528. 17889.  -99.3 143.    NaN   NaN  0.532
## 9 mean         1 Test   17745. 35926. 18164.   27.9  47.4   NaN   NaN  0.560

Now we see that the decomposition model is regarded as the best. However, as the table shows, interpretation of different error metrics can lead to different decisions regarding the best-performing model. Regardless, the overarching finding here is that the more complex models appear to perform better than the simple models – even from a pointwise perspective.

3.3 Forecast combinations and method comparisons

So far, we have directly compared multiple individual forecasting methods. But what if we combined them? Could a composite model outperform any individual forecast method? As it turns out, this 1989 review showed that combining multiple forecasts often leads to increased forecast accuracy, and that for most applications, averaging the different forecast methods can provide substantial performance improvement. Now, this is again a step up in complexity, but if the performance gains truly are that meaningful, the increase may be worth it. Further, if the underlying methods used in the composite forecast are themselves interpretable, then producing an average of them should not harm overall interpretability. Let’s try a combination of the search ARIMA and the time-series linear model and see how well it does:

fit_train_composite <- train %>%
  model(
    mean = MEAN(log(cases)),
    naive = NAIVE(log(cases)),
    snaive = SNAIVE(log(cases) ~ lag("year")),
    drift = RW(log(cases) ~ drift()),
    stepwise = ARIMA(log(cases)),
    search = ARIMA(log(cases), stepwise = FALSE, approx = FALSE),
    ets = ETS(log(cases)),
    stlf = decomposition_model(
      STL(log(cases) ~ trend(window = 21), robust = TRUE), 
      NAIVE(season_adjust)
      ), 
    tslm = TSLM(log(cases) ~ trend() +  season())
  ) %>%
    mutate(arima_tslm = (search + tslm) / 2)

Now, you may note that I haven’t just piped straight into the forecast function like we did in the previous sections. This is because fable is not currently able to combine different distribution types (e.g., a normal distribution for one component model and a Student’s \(t\) distribution for another), so we need to compute an empirical forecast distribution by simulating many futures from the model. We do this for all the models to ensure fair comparisons:

futures <- fit_train_composite %>%
  generate(h = nrow(test), new_data = test, times = 1000) %>%
  as_tibble() %>%
  group_by(month, .model) %>%
  summarise(dist = distributional::dist_sample(list(.sim))) %>%
  ungroup() %>%
  as_fable(index = month, key = .model,
           distribution = dist, response = "cases")

We can then compare the empirical forecast distributions using the same methods as before:

futures %>%
  accuracy(tsbl, measures = interval_accuracy_measures,
    level = 95) %>%
  arrange(winkler)
## # A tibble: 10 × 3
##    .model     .type winkler
##    <chr>      <chr>   <dbl>
##  1 arima_tslm Test   35734.
##  2 tslm       Test   37146.
##  3 search     Test   50872.
##  4 stepwise   Test   53267.
##  5 snaive     Test   63864.
##  6 naive      Test  205846.
##  7 stlf       Test  265227.
##  8 drift      Test  446284.
##  9 mean       Test  463255.
## 10 ets        Test  931365.

We see that the combination forecast of the search ARIMA and the time-series linear model performs the best, followed by the component models of the TSLM and both ARIMA specifications. The highest-ranked ‘simple’ forecasting method – seasonal naive – is ranked fifth best. This provides us with further evidence that the increase in complexity is warranted.

Finally, with a bit more code, we can produce a visualisation of the distribution of futures for the top composite model:

fit_train_composite %>%
  generate(h = nrow(test), new_data = test, times = 1000) %>%
  as_tibble() %>%
  group_by(month, .model) %>%
  summarise(dist = distributional::dist_sample(list(.sim))) %>%
  ungroup() %>%
  filter(.model == "arima_tslm") %>%
  mutate(id = 1) %>%
  dplyr::select(-c(.model)) %>%
  as_fable(index = month, key = id,
           distribution = dist, response = "cases") %>%
  autoplot(tsbl) +
  labs(x = "Month",
       y = "log-scaled lab confirmed influenza cases") +
  scale_y_log10() +
  guides(colour = guide_legend(title = "Forecast"))

Looks decent!

4 Final thoughts

Thank you for making it to the end! If you have any questions, please feel free to reach out to me via email. I intend to write a follow-up piece to this post exploring the magnitude of the impact of COVID-19 on influenza rates, so stay tuned for that!


  1. Doing this within fable::model is great because it will automatically back-transform forecasts and prediction intervals to the original scale for us.↩︎

  2. 2018 saw a strange decrease in average cases, so we will park this complexity for now.↩︎

  3. These become far more meaningful once we directly compare the simple and complex methods. This is coming up shortly!↩︎

  4. Note that this substantially increases computation time.↩︎
