Johnny Deng's Column

Hi all, this blog is an English archive of my PhD experience at Imperial College London, mainly logging my research and work in progress, as well as some visual records.

Thursday, 15 May 2008

Autoregressive Moving-Average Process

An n-dimensional autoregressive moving-average process of orders p and q, ARMA(p,q), is a stochastic process of the form

X_t = a + Φ_1 X_{t-1} + ... + Φ_p X_{t-p} + W_t + Θ_1 W_{t-1} + ... + Θ_q W_{t-q}   [1]

where a is an n-dimensional vector, the Φ_i and Θ_j are n×n matrices, and W is n-dimensional white noise (see the notation conventions documentation). As the name suggests, this combines an AR(p) model with an MA(q) model of the same dimension n. In applications, ARMA(1,1) processes are common.
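To make the recursion concrete, here is a minimal Python sketch that simulates a univariate ARMA(1,1) process directly from its defining equation. The coefficients a, phi and theta are illustrative assumptions of mine, not values from the text.

```python
import numpy as np

def simulate_arma11(n_steps, a=0.0, phi=0.5, theta=0.3, seed=0):
    """Simulate X_t = a + phi*X_{t-1} + W_t + theta*W_{t-1}.

    a, phi, theta are illustrative placeholder coefficients.
    W is a variance-1 Gaussian white noise.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_steps + 1)  # W_0, W_1, ..., W_n
    x = np.zeros(n_steps + 1)             # X_0 initialized to 0
    for t in range(1, n_steps + 1):
        x[t] = a + phi * x[t - 1] + w[t] + theta * w[t - 1]
    return x[1:]

series = simulate_arma11(100)
```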

Exhibit 1 indicates a realization of the univariate ARMA(1,1) process

X_t = a + φ X_{t-1} + W_t + θ W_{t-1}   [2]

where W is a variance 1 Gaussian white noise.

Exhibit 1. A realization of the ARMA(1,1) process [2].

Exercises

Below is indicated a realization of 50 consecutive terms of a variance 1 Gaussian white noise.

0.293 0.317 0.047 -0.286 -1.237
-0.554 0.535 -1.640 -0.899 -0.704
-1.886 0.271 0.418 1.651 0.078
0.528 1.013 2.296 0.086 1.471
-0.580 -1.776 -2.217 0.502 -1.104
-1.211 0.205 0.110 0.011 0.778
-1.036 1.195 -1.169 -0.162 -0.504
-0.679 -1.366 0.885 -0.476 1.644
-1.665 0.129 2.882 0.978 0.054
-0.396 0.685 1.403 -0.009 0.918

Realization of 50 consecutive terms of a variance 1 Gaussian white noise.

Use this to generate a corresponding realization of the ARMA(1,1) process

X_t = φ X_{t-1} + W_t + θ W_{t-1}   [e1]

where W_t is a variance 1 Gaussian white noise. Initialize the realization with X_0 = 0. [spreadsheet solution]
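For those who prefer code to a spreadsheet, here is a sketch of the recursion in Python. The coefficients phi and theta are placeholders for the values given in [e1], and W_0 is taken to be 0 before the tabulated terms; both assumptions are flagged in the comments.

```python
import numpy as np

# The 50 tabulated white-noise terms, read row by row.
w = np.array([
     0.293,  0.317,  0.047, -0.286, -1.237,
    -0.554,  0.535, -1.640, -0.899, -0.704,
    -1.886,  0.271,  0.418,  1.651,  0.078,
     0.528,  1.013,  2.296,  0.086,  1.471,
    -0.580, -1.776, -2.217,  0.502, -1.104,
    -1.211,  0.205,  0.110,  0.011,  0.778,
    -1.036,  1.195, -1.169, -0.162, -0.504,
    -0.679, -1.366,  0.885, -0.476,  1.644,
    -1.665,  0.129,  2.882,  0.978,  0.054,
    -0.396,  0.685,  1.403, -0.009,  0.918,
])

phi, theta = 0.5, 0.3      # placeholder coefficients; substitute those of [e1]
x_prev, w_prev = 0.0, 0.0  # X_0 = 0 as instructed; W_0 = 0 is an assumed convention
x = []
for w_t in w:
    x_t = phi * x_prev + w_t + theta * w_prev  # X_t = phi*X_{t-1} + W_t + theta*W_{t-1}
    x.append(x_t)
    x_prev, w_prev = x_t, w_t

x = np.array(x)  # the 50 terms of the ARMA(1,1) realization
```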

White Noise

A white noise is a simple type of stochastic process. Precise definitions vary. One simple definition is that a white noise is a (univariate or multivariate) discrete-time stochastic process whose terms are independent and identically distributed (IID), all with zero mean. While this definition captures the spirit of what constitutes a white noise, the IID requirement is often too restrictive for applications. Typically the IID requirement is replaced with a requirement that terms have constant second moments, zero autocorrelations and zero means. Let's formalize this.

If you have not already done so, see the notation conventions documentation. A one-dimensional stochastic process

..., W_{t-2}, W_{t-1}, W_t, W_{t+1}, ...   [1]

is said to be white noise if unconditional means, standard deviations and autocorrelations satisfy

E(W_t) = 0   [2]

std(W_t) = σ   [3]

cor(W_t, W_{t+n}) = 0   [4]

for some constant σ and any integer n ≠ 0. To distinguish this definition of white noise from that which requires IID terms, we call the latter an independent white noise or strong white noise. Note that the definition of white noise is more restrictive than that of independent white noise in just one respect. With a white noise, means, standard deviations and autocorrelations must exist. For independent white noise, they need not.

While the definition of independent white noise is otherwise more restrictive than that of white noise, it is also simpler. An independent white noise is necessarily a very simple process. Conditions [2] through [4], which define a white noise, can accommodate more complicated processes. For example, conditions [2] and [3] apply only to unconditional moments. There is nothing to stop a white noise from being conditionally heteroskedastic. That is impossible with an independent white noise.

An independent white noise whose terms are all normally distributed is called a Gaussian white noise. A realization of a univariate Gaussian white noise with variance 1 is graphed in Exhibit 1.

Exhibit 1. A realization of a univariate Gaussian white noise with variance 1.
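As a quick illustration of conditions [2] through [4] (my own sketch, not part of the original definitions), the following Python snippet draws a variance 1 Gaussian white noise and checks its sample mean, standard deviation and lag-n autocorrelations.

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.standard_normal(10_000)  # variance-1 Gaussian white noise

print(w.mean())                  # close to 0, as condition [2] requires
print(w.std())                   # close to 1, the constant sigma of condition [3]
for n in (1, 2, 5):              # sample autocorrelations close to 0, condition [4]
    print(n, np.corrcoef(w[:-n], w[n:])[0, 1])
```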

All these concepts generalize to multivariate processes. An n-dimensional stochastic process

..., W_{t-2}, W_{t-1}, W_t, W_{t+1}, ...   [5]

is said to be white noise if unconditional expectations satisfy

E(W_t) = 0   [6]

cov(W_t, W_{t+n}) = Σ if n = 0, and 0 otherwise   [7]

for some constant covariance matrix Σ. Condition [7] does not require that the W_t be independent. If we make this stronger assumption, the process is called an independent white noise. If we further assume the W_t are joint normal, it is called a Gaussian white noise.
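Sampling such a multivariate white noise is equally direct. This sketch (mine, with an assumed 2×2 covariance matrix) draws an n = 2 Gaussian white noise and verifies conditions [6] and [7] at lag 0.

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.3],
                [0.3, 2.0]])  # an assumed constant covariance matrix (Sigma)
w = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=5_000)

print(w.mean(axis=0))  # close to the zero vector, condition [6]
print(np.cov(w.T))     # close to Sigma, the n = 0 case of condition [7]
```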

White noises are important in time series analysis because more complicated stochastic processes are generally defined in terms of white noises.

ARCH and GARCH Processes

Autoregressive conditional heteroskedastic (ARCH) processes are a form of stochastic process widely used in finance and economics for modeling conditional heteroskedasticity and volatility clustering. First proposed by Engle (1982), ARCH processes are univariate conditionally heteroskedastic white noises. An ARCH(q) process X is defined by two interrelated formulas (see the notation conventions documentation):

X_t = σ_{t|t-1} W_t   [1]

σ²_{t|t-1} = ω + α_1 X²_{t-1} + ... + α_q X²_{t-q}   [2]

where W is a standard normal Gaussian white noise. This means that the time t distribution of X, conditional on information available at time t–1, is normal, with constant mean 0 and a conditional variance σ²_{t|t-1} that changes with time. Our notation indicates that σ²_{t|t-1} is a variance at time t, but conditional on information available at time t–1. Formula [2] defines σ²_{t|t-1} as a function of preceding values of X. Together, formulas [1] and [2] ensure that, if X takes on large positive or negative values at some point in time, its conditional variance will be elevated for subsequent points in time, making it likely that X will take on large positive or negative values at those times as well. In this manner, an ARCH process models volatility clustering: periods of high or low volatility.
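To see formulas [1] and [2] in action, here is a minimal Python sketch of an ARCH(1) simulation. The parameters omega and alpha are illustrative assumptions, not values from the text.

```python
import numpy as np

def simulate_arch1(n_steps, omega=0.2, alpha=0.7, seed=0):
    """Simulate ARCH(1): X_t = sigma_t * W_t with sigma_t^2 = omega + alpha * X_{t-1}^2.

    omega and alpha are illustrative placeholders; alpha < 1 is assumed
    so the unconditional variance omega / (1 - alpha) exists.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    var = omega / (1 - alpha)  # start from the unconditional variance
    for t in range(n_steps):
        x[t] = np.sqrt(var) * rng.standard_normal()  # formula [1]
        var = omega + alpha * x[t] ** 2              # formula [2], for time t + 1
    return x

series = simulate_arch1(500)
```

A large shock at time t raises var, so the next few terms are drawn with an elevated standard deviation; that is the volatility clustering described above.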

Bollerslev (1986) extended the model by allowing σ²_{t|t-1} to also depend on its own past values. His generalized ARCH, or GARCH(p,q), process has the form

X_t = σ_{t|t-1} W_t   [3]

σ²_{t|t-1} = ω + α_1 X²_{t-1} + ... + α_q X²_{t-q} + β_1 σ²_{t-1|t-2} + ... + β_p σ²_{t-p|t-p-1}   [4]

See Hamilton (1994) for stationarity conditions. In applications, GARCH(1,1) processes are common. Exhibit 1 indicates a realization of the GARCH(1,1) process

X_t = σ_{t|t-1} W_t   [5]

σ²_{t|t-1} = ω + α X²_{t-1} + β σ²_{t-1|t-2}   [6]

Exhibit 1. A realization of the GARCH(1,1) process defined by [5] and [6].
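A companion sketch for GARCH(1,1), again with assumed parameters: the only change from the ARCH(1) sketch above is that the conditional variance recursion [6] also feeds back on its own previous value.

```python
import numpy as np

def simulate_garch11(n_steps, omega=0.1, alpha=0.1, beta=0.85, seed=0):
    """Simulate GARCH(1,1): sigma_t^2 = omega + alpha * X_{t-1}^2 + beta * sigma_{t-1}^2.

    omega, alpha, beta are illustrative placeholders; alpha + beta < 1 is
    assumed so the unconditional variance omega / (1 - alpha - beta) exists.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    var = omega / (1 - alpha - beta)  # start from the unconditional variance
    for t in range(n_steps):
        x[t] = np.sqrt(var) * rng.standard_normal()   # formula [5]
        var = omega + alpha * x[t] ** 2 + beta * var  # formula [6]
    return x

series = simulate_garch11(500)
```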

There have been many attempts to generalize GARCH models to multiple dimensions. Attempts include:

the vech and BEKK models of Engle and Kroner (1995),

the CCC-GARCH of Bollerslev (1990),

the orthogonal GARCH of Ding (1994), Alexander and Chibumba (1997), and Klaassen (2000), and

the DCC-GARCH of Engle (2000), and Engle and Sheppard (2001).

With some of these approaches, the number of parameters that must be specified becomes unmanageable as dimensionality n increases. With some, estimation requires considerable user intervention or entails other challenges. Some require assumptions that are difficult to reconcile with phenomena to be modeled. This is an area of ongoing research.