Guided tour on ARIMA estimation and forecasting


ARIMA stands for AutoRegressive Integrated Moving Average. The ARIMA modeling and forecasting approach is also known as the Box-Jenkins approach.

ARMA(p,q) processes

I will discuss the "I" in ARIMA later. For the time being it suffices to note that an ARIMA(p,0,q) process is the same as an ARMA(p,q) process.

As is well known (if not to you, stop here and don't use module ARIMA!), the general form of an ARMA(p,q) process y(t) is:

y(t) = α_1 y(t-1) + .... + α_p y(t-p) + μ + ε(t) - β_1 ε(t-1) - .... - β_q ε(t-q)

where the ε(t)'s are independently distributed with zero expectation and variance σ², and μ is a constant. Thus, the p in "ARMA(p,q)" is the maximum lag of the AR part, and q is the maximum lag of the MA part.

This model can be written more compactly in terms of lag polynomials and lag operators. Define the lag operator L as:

L y(t) = y(t-1)
L^2 y(t) = y(t-2)
....
L^p y(t) = y(t-p)
....

and so on. Then we can write:

y(t) - α_1 y(t-1) - .... - α_p y(t-p) = y(t) - α_1 L y(t) - .... - α_p L^p y(t) = α_p(L) y(t)

say, where

α_p(L) = 1 - α_1 L - .... - α_p L^p

and similarly

ε(t) - β_1 ε(t-1) - .... - β_q ε(t-q) = ε(t) - β_1 L ε(t) - .... - β_q L^q ε(t) = β_q(L) ε(t)

say, where

β_q(L) = 1 - β_1 L - .... - β_q L^q

Thus, the ARMA(p,q) model involved can now be written as:

α_p(L) y(t) = μ + β_q(L) ε(t)

If the roots of α_p(L) are all outside the (complex) unit circle, i.e., if

α_p(z) = 0 ⇒ |z| > 1,

then [α_p(L)]^{-1} exists and is an infinite-order lag polynomial with exponentially vanishing coefficients:

[α_p(L)]^{-1} = 1 + Σ_{j=1}^∞ ρ_j L^j,

where |ρ_j| < c·r^j for some constant c and an r ∈ (0,1). If so, we can write the ARMA model as a stationary MA(∞) process:

y(t) = μ/α_p(1) + [α_p(L)]^{-1} β_q(L) ε(t)
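
To make this inversion concrete, here is a minimal Python sketch (my own illustration; it is not EasyReg code, and the function name ma_infinity_weights is just a label) that computes the leading MA(∞) coefficients of [α_p(L)]^{-1} β_q(L) by the standard recursion, under the sign convention of the lag polynomials above:

    import numpy as np

    def ma_infinity_weights(ar, ma, n_weights=20):
        """MA(infinity) weights psi_j of [alpha_p(L)]^{-1} beta_q(L).

        ar = [a1, ..., ap], ma = [b1, ..., bq] in the convention
        alpha_p(L) = 1 - a1*L - ... - ap*L^p, beta_q(L) = 1 - b1*L - ... - bq*L^q.
        """
        psi = np.zeros(n_weights)
        psi[0] = 1.0
        for j in range(1, n_weights):
            acc = -ma[j - 1] if j <= len(ma) else 0.0   # L^j coefficient of beta_q(L)
            for i in range(1, min(j, len(ar)) + 1):
                acc += ar[i - 1] * psi[j - i]
            psi[j] = acc
        return psi

    # Example: (1 - 0.7L)y(t) = (1 + 0.5L)e(t), i.e. a1 = 0.7 and b1 = -0.5
    print(ma_infinity_weights([0.7], [-0.5], 8))   # weights decay like 0.7^j

The exponential decay of the printed weights is exactly the "exponentially vanishing coefficients" property stated above.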

Similarly, if the roots of β_q(L) are all outside the (complex) unit circle, then we can write the ARMA model as an AR(∞) process:

[β_q(L)]^{-1} α_p(L) y(t) = μ/β_q(1) + ε(t)

where [β_q(L)]^{-1} α_p(L) can be written as:

[β_q(L)]^{-1} α_p(L) = 1 - Σ_{j=1}^∞ γ_j L^j,

so that

y(t) = Σ_{j=1}^∞ γ_j y(t-j) + δ + ε(t),

with δ = μ/β_q(1). Thus

E_{t-1}[y(t)] = Σ_{j=1}^∞ γ_j y(t-j) + δ,

which is the best one-step-ahead forecast of y(t).
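
As a complement, the following Python sketch (again only an illustration under the same sign conventions, not EasyReg code) computes the AR(∞) weights γ_j from given ARMA coefficients and uses them to form the one-step-ahead forecast Σ_j γ_j y(t-j) + δ. Truncating the sum is harmless because the γ_j vanish exponentially.

    import numpy as np

    def ar_infinity_weights(ar, ma, n_weights=50):
        """Coefficients gamma_j in [beta_q(L)]^{-1} alpha_p(L) = 1 - sum_{j>=1} gamma_j L^j.

        ar = [a1, ..., ap], ma = [b1, ..., bq] in the convention
        alpha_p(L) = 1 - a1*L - ... - ap*L^p, beta_q(L) = 1 - b1*L - ... - bq*L^q.
        """
        pi = np.zeros(n_weights + 1)
        pi[0] = 1.0
        for j in range(1, n_weights + 1):
            s = -ar[j - 1] if j <= len(ar) else 0.0
            for k in range(1, min(j, len(ma)) + 1):
                s -= (-ma[k - 1]) * pi[j - k]      # the L^k coefficient of beta_q(L) is -b_k
            pi[j] = s
        return -pi[1:]                             # gamma_1, gamma_2, ...

    def one_step_forecast(y_past, gamma, delta=0.0):
        """E_{t-1}[y(t)] = sum_j gamma_j * y(t-j) + delta; y_past ends with y(t-1)."""
        k = min(len(y_past), len(gamma))
        return delta + float(np.dot(gamma[:k], np.asarray(y_past)[::-1][:k]))

    # Example: the ARMA(1,1) process (1 - 0.7L)y(t) = (1 + 0.5L)e(t), so b1 = -0.5
    gamma = ar_infinity_weights([0.7], [-0.5])
    print(gamma[:4])                               # approx [ 1.2  -0.6   0.3  -0.15]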

I(d) processes

A time series process is called I(d) if the first difference operator

Δ = 1 - L

has to be applied at least d times to make the process stationary. A time series process x(t) is then called an ARIMA(p,d,q) process if

y(t) = Δ^d x(t) = (1 - L)^d x(t),

where y(t) is a stationary ARMA(p,q) process.
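
In practice the differencing, and the later "integration" of forecasts back to levels, are simple operations. A minimal Python sketch (illustration only) for the case d = 1:

    import numpy as np

    x = np.cumsum(np.random.default_rng(0).standard_normal(200))  # a toy I(1) series
    y = np.diff(x, n=1)                   # y(t) = (1 - L) x(t): model this as ARMA(p,q)
    x_back = np.concatenate(([x[0]], x[0] + np.cumsum(y)))        # undo the single difference
    assert np.allclose(x_back, x)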

ARIMA estimation and forecasting in practice

Model and data

The time series y(t) that I shall use has been artificially generated, as follows:

(1 - 0.7L)(1 - L)y(t) = (1 - 0.7L)Δy(t) = (1 + 0.5L)(1 - 0.25L^4)ε(t),

where ε(t) is i.i.d. standard normally distributed. This is an ARIMA(1,1,5) process:

Δy(t) = 0.7Δy(t-1) + ε(t) + 0.5ε(t-1) - 0.25ε(t-4) - 0.125ε(t-5).
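
For readers who want to experiment outside EasyReg, a hypothetical Python sketch that generates a series of this type is given below. It will not reproduce ARIMADATA.CSV exactly, because the random draws (and the seed) differ from those used to create the file.

    import numpy as np

    rng = np.random.default_rng(12345)   # arbitrary seed; ARIMADATA.CSV used its own draws
    n, burn = 200, 100                   # burn-in to let the ARMA part settle down
    e = rng.standard_normal(n + burn + 5)
    du = np.zeros(n + burn + 5)
    for t in range(5, n + burn + 5):
        du[t] = (0.7 * du[t - 1] + e[t] + 0.5 * e[t - 1]
                 - 0.25 * e[t - 4] - 0.125 * e[t - 5])
    dy = du[burn + 5:]                   # the stationary ARMA(1,5) part, i.e. Delta y(t)
    y = np.cumsum(dy)                    # integrate once: an ARIMA(1,1,5) level series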

The data involved are available as the CSV file ARIMADATA.CSV, with y(t) = "ARIMA test data". The data should be interpreted as a quarterly time series, starting from quarter 1950.1. Note that this data file has been created under the US number setting. Hence, if your Windows uses a comma as decimal delimiter, you have to convert it to your local number setting. See the guided tour on importing Excel files in CSV format.

Note that the MA lag polynomial β_5(L) = (1 + 0.5L)(1 - 0.25L^4) is specified as the product of a non-seasonal lag polynomial b_{ns,1}(L) = 1 + 0.5L and a seasonal lag polynomial c_{s,1}(L^4), where c_{s,1}(L) = 1 - 0.25L. In general, a seasonal lag polynomial is a polynomial in L^s, where s is the number of observations per year (the number of seasons). For example, for monthly data s = 12, and for weekly data s = 52.
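
Expanding such a product into a single lag polynomial is just a polynomial multiplication (a convolution of the coefficient vectors). A short Python check, purely for illustration:

    import numpy as np

    # Coefficients ordered as [L^0, L^1, ...]: (1 + 0.5L) and (1 - 0.25L^4)
    nonseasonal = [1.0, 0.5]
    seasonal = [1.0, 0.0, 0.0, 0.0, -0.25]
    full_ma = np.convolve(nonseasonal, seasonal)
    print(full_ma)   # [1.  0.5  0.  0.  -0.25  -0.125]  -> MA order 5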

ARIMA estimation and forecasting in EasyReg

Import the data file ARIMADATA.CSV in EasyReg, and declare it quarterly time series, with first year 1950 and first quarter 1.

Next, open "Menu > Single equation models > ARIMA estimation and forecasting". Then the following window appears.

ARIMA Window 1

Click "Continue":

ARIMA Window 2

In order to conduct out-of-sample forecasting, either append the data with missing values (via "Menu > Input > Prepare time series for forecasting"), or select a subset of observations. I will choose the latter. Thus, click "Yes":

ARIMA Window 3

Choose the subsample 1950.1 through 1997.1. The ARIMA model will then be fitted to this subsample, and the observations after 1997.1 will be used to compare forecasts with realizations.

Click "Bounds OK", and then "Confirm" and "Continue" (in the next window). Then the following window appears:

ARIMA Window 4

You now have to tell EasyReg what the order of integration ("d") is. Usually you do not know this in advance. If so, test the unit root hypothesis, via "Menu > Data analysis > Unit root tests (root 1)". If you don't know what a unit root is, please read my lecture notes on unit roots. If after reading these lecture notes you still don't understand what a unit root is and how to test for it, click "Don't know".

In our case d = 1, as indicated. Thus click "1 times OK":

ARIMA Window 5

Although in our case the process Δy(t) has zero expectation, so that there is no need for an intercept, in practice this is rare. Therefore, include an intercept in your model at first, and test afterwards whether the corresponding parameter is zero. Thus, click "Continue":

ARIMA Window 6

Now you have to specify the ARMA process for u(t) = Δy(t) - E[Δy(t)]. The coefficients a(1,i) are the non-zero coefficients of the non-seasonal AR lag polynomial, and the coefficients c(1,i) are the non-zero coefficients of the seasonal AR lag polynomial. Similarly, the coefficients a(2,i) are the non-zero coefficients of the non-seasonal MA lag polynomial, and the coefficients c(2,i) are the non-zero coefficients of the seasonal MA lag polynomial. If your data consist of annual time series the option of specifying seasonal lag polynomials is not available.

In our case (1 - 0.7L)u(t) = (1 + 0.5L)(1 - 0.25L^4)ε(t), hence

  • a(1,1) = 0.7
  • a(2,1) = -0.5
  • c(2,1) = 0.25
Thus you have to click a(1,1), a(2,1) and c(2,1). Of course, in practice you do not know this in advance. To determine the lags involved, first read my lecture notes on forecasting, and then use the option "ARIMA model selection via information criteria". This module also comes with a guided tour.

For more advanced time series analysis, see for example:

  • James D. Hamilton: Time Series Analysis, Princeton University Press, 1994.

Now click "Specification OK":

ARIMA Window 7

The only action required is to click "Continue":

ARIMA Window 8

The model parameters will be estimated by minimizing Σ_t ε(t)², using the simplex method of Nelder and Mead. Click first "Simplex method: How it works, and stopping rules".
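
As background, the sketch below shows what minimizing such a sum of squared errors with Nelder-Mead could look like in Python, using a conditional sum of squares with pre-sample values set to zero. This is only an illustration of the idea; EasyReg's actual objective function, startup treatment, and handling of seasonal lags may differ, and the placeholder series u merely stands in for the demeaned Δy(t).

    import numpy as np
    from scipy.optimize import minimize

    def css(params, u, p, q):
        """Conditional sum of squared errors of an ARMA(p,q) for u(t), with
        alpha_p(L) = 1 - a1*L - ... and beta_q(L) = 1 - b1*L - ...;
        pre-sample values of u and e are set to zero."""
        ar, ma = params[:p], params[p:p + q]
        e = np.zeros_like(u)
        for t in range(len(u)):
            e[t] = u[t]
            for i in range(1, min(t, p) + 1):
                e[t] -= ar[i - 1] * u[t - i]
            for j in range(1, min(t, q) + 1):
                e[t] += ma[j - 1] * e[t - j]
        return float(np.sum(e ** 2))

    # Placeholder series standing in for the demeaned Delta y(t); p = q = 1 to keep it small
    u = np.random.default_rng(0).standard_normal(120)
    res = minimize(css, x0=np.zeros(2), args=(u, 1, 1), method="Nelder-Mead")
    print(res.x)   # the values of a1 and b1 that minimize the sum of squared errors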

ARIMA Window 9

I recommend using the default stopping rules at first. After completing the first iteration round you may wish to decrease the value of "r". Thus click "Stopping rules OK". Then the previous window reappears. Click "Start SIMPLEX iteration":

ARIMA Simplex Window

In the current version of the ARIMA module the simplex method is restarted until the parameters no longer change. As a double check, check "Auto restart" and restart the simplex iteration. Then click "Simplex method: How it works, and stopping rules" again, and decrease the value of "r":

ARIMA Window 10

Click "Stopping rules OK":

ARIMA Simplex Window

Check "Auto restart" and click "Restart SIMPLEX iteration", and then click "Done with SIMPLEX iteration":

ARIMA Window 11

Click "Continue":

ARIMA Window 12

This window is similar to the "What to do next" window when you run an OLS regression. See the guided tour on OLS estimation. It contains the estimation results, and an options menu.

Click the "Options" button:

ARIMA Options

Test parameter restrictions

Recall that the true values of the parameters are:

  • b(1) = 0
  • a(1,1) = 0.7
  • a(2,1) = -0.5
  • c(2,1) = 0.25

In order to see whether the estimates are significantly different from the true values, test the null hypothesis involved using the "Test parameter restrictions" option. The procedure is the same as for OLS (see the guided tour on OLS estimation), so I will not demonstrate again how to conduct this test, but only show the results:

ARIMA Window 13

The test result is as expected: The parameter estimates are not significantly different from the true values, at any reasonable significance level.

Plot the fit

ARIMA Window 16

This window does not need explanation.

One-step ahead forecasts

Recall that the best one-step-ahead forecast of Δy(t) takes the form

E_{t-1}[Δy(t)] = Σ_{j=1}^∞ γ_j Δy(t-j) + δ,

where the parameters γ_j and δ can be derived from the parameters of the ARMA model for Δy(t). See my lecture notes on forecasting. Therefore, the best one-step-ahead forecast of y(t) itself takes the form

E_{t-1}[y(t)] = y(t-1) + E_{t-1}[Δy(t)].

Thus both forecast schemes use all the data up to time t-1. The option "One-step ahead forecasts" generates these forecasts:

ARIMA Window 17

Click "Continue":

ARIMA Window 18

This picture displays E_{t-1}[Δy(t)] on the vertical axis and its realization Δy(t) on the horizontal axis, for t = 1997.2 to 1999.4. The closer the points (Δy(t), E_{t-1}[Δy(t)]) are to the 45-degree line, the better the forecasts.

Click "Continue":

ARIMA Window 19

The top panel displays the plots of Δy(t) (solid line) and its forecasts E_{t-1}[Δy(t)] (dotted red line) for t = 1997.2 to 1999.4. The bottom panel plots the forecast errors Δy(t) - E_{t-1}[Δy(t)], together with the one- and two-times standard error bands.

Click "Continue":

ARIMA Window 20

This picture displays E_{t-1}[y(t)] on the vertical axis and its realization y(t) on the horizontal axis, for t = 1997.2 to 1999.4. The closer the points (y(t), E_{t-1}[y(t)]) are to the 45-degree line, the better the forecasts.

Click "Continue":

ARIMA Window 21

The top panel displays the plots of y(t) (solid line) and its forecasts E_{t-1}[y(t)] (dotted red line) for t = 1997.2 to 1999.4. The bottom panel plots the forecast errors y(t) - E_{t-1}[y(t)].

Click "Continue". Then you will return to the "What to do next?" window.

Recursive forecasts

In recursive forecasting, the unknown Δy(t+h-j)'s in the one-step-ahead forecast scheme

E_{t+h-1}[Δy(t+h)] = Σ_{j=1}^∞ γ_j Δy(t+h-j) + δ

are replaced recursively by forecasts, which then yields the best h-step-ahead forecast:

E_t[Δy(t+h)] = Σ_{j=0}^∞ γ_{h,j} Δy(t-j) + δ_h

See my lecture notes on forecasting. Thus, these forecasts only use the information up to time t = 1997.1. The corresponding recursive level forecast of y(t+h) is then

E_t[y(t+h)] = y(t) + E_t[Δy(t+1)] + ... + E_t[Δy(t+h)]
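
The recursion is easy to mimic outside EasyReg. The Python sketch below (illustration only, reusing the γ_j weights and δ from the AR(∞) representation discussed above) produces the h-step-ahead forecasts of Δy and cumulates them onto y(t) to obtain the level forecasts.

    import numpy as np

    def recursive_forecasts(dy_hist, gamma, delta, h):
        """h-step-ahead forecasts of Delta y: unknown future values are replaced
        recursively by their own forecasts."""
        hist = list(dy_hist)                       # ..., Delta y(t-1), Delta y(t)
        fc = []
        for _ in range(h):
            k = min(len(hist), len(gamma))
            f = delta + float(np.dot(gamma[:k], np.asarray(hist)[::-1][:k]))
            fc.append(f)
            hist.append(f)                         # the forecast stands in for the realization
        return np.array(fc)

    def level_forecasts(y_t, dy_forecasts):
        """E_t[y(t+h)] = y(t) + E_t[Delta y(t+1)] + ... + E_t[Delta y(t+h)]."""
        return y_t + np.cumsum(dy_forecasts)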

ARIMA Window 22

ARIMA Window 23

ARIMA Window 24

ARIMA Window 25

ARIMA Window 26

The results in the above windows indicate that recursive h-step-ahead forecasting only yields reasonable forecasts for modest values of h. This corresponds to the fact that

E_t[Δy(t+h)] → E[Δy(t)] as h → ∞.

This is what you see happening in the last window.

Plot one-step ahead forecast coefficients

Recall that the best one-step-ahead forecast of Δy(t+1) takes the form

E_t[Δy(t+1)] = Σ_{j=0}^∞ γ_{j+1} Δy(t-j) + δ.

In the next window the coefficients a(j) = γ_{j+1} are plotted, and their values displayed.

ARIMA Window 27

Kernel estimate of the error density

ARIMA Window 28

This picture compares the nonparametric kernel estimator of the density of ε(t) with the corresponding normal density. Note that nonparametric kernel density estimation lacks a "parametric backbone" and therefore needs much more data than parametric density estimation. The effective sample size in this case is too small for reliable nonparametric estimation.
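
For reference, a comparable comparison can be sketched in Python with a Gaussian kernel density estimate; this is only an illustration and not EasyReg's estimator, whose kernel and bandwidth choice may differ.

    import numpy as np
    from scipy.stats import gaussian_kde, norm

    resid = np.random.default_rng(1).standard_normal(150)  # stand-in for the ARIMA residuals
    grid = np.linspace(-4.0, 4.0, 200)
    kde_density = gaussian_kde(resid)(grid)                 # nonparametric kernel estimate
    normal_density = norm.pdf(grid, loc=resid.mean(), scale=resid.std())
    # Plotting kde_density against normal_density over grid mimics the EasyReg picture.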

Comparison with the OLS module

An ARMA model can also be estimated via the linear regression (OLS) module, using the option "Re-estimate the model with ARMA errors". See the guided tour on OLS estimation. This option produces the same parameter estimates as the ARIMA module. However, if you estimate an AR model directly via the linear regression (OLS) module, without using the option "Re-estimate the model with ARMA errors", the estimation results for the intercept, and possibly the time trend and seasonal dummy parameters, will differ from the corresponding results obtained via the ARIMA module under review. The reason is the following. The ARIMA module estimates an AR(p) model with intercept in the form

y(t) = μ + u(t),

u(t) = α_1 u(t-1) + .... + α_p u(t-p) + ε(t),

where μ = E[y(t)] and ε(t) is white noise, whereas the linear regression module estimates this model in the form

y(t) = α_0 + α_1 y(t-1) + .... + α_p y(t-p) + ε(t).

The intercept α_0 will be different from μ, because taking expectations in the latter case yields μ = α_0 + α_1 μ + .... + α_p μ, hence:

μ = (1 - α_1 - .... - α_p)^{-1} α_0.
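
A quick numerical check of this conversion (illustration only, with arbitrary parameter values):

    # AR(2) example with arbitrary values a0 = 1.5, a1 = 0.4, a2 = 0.3
    a0, a1, a2 = 1.5, 0.4, 0.3
    mu = a0 / (1.0 - a1 - a2)                      # mu = (1 - a1 - a2)^(-1) * a0 = 5.0
    assert abs(a0 + (a1 + a2) * mu - mu) < 1e-12   # taking expectations gives mu back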

This is the end of the guided tour on ARIMA estimation and forecasting