Research on the causal relationship between equity prices and exchange rates has been conducted with various econometric methods. In this study, I employ the vector autoregressive model and the dynamic Granger (1969) causality test to examine the relationship between the variables under study. Empirical studies premised on time series data assume that the underlying series are stationary. However, many empirical studies have shown that this assumption is not always true and that a significant number of time series variables are non-stationary (Engle and Granger, 1987). Employing non-stationary time series in a regression analysis may therefore produce spurious results (Granger and Newbold, 1974). Accordingly, studies involving time series data require that a stationarity test be conducted to establish the underlying process of each data series.

3.1 Stationarity Test

A data generating process is considered stationary if its first and second moments are time-invariant and the covariance between values at any two time periods depends only on the distance between those periods, not on the actual time at which the covariance is computed (Gujarati, 1995). A process satisfying these conditions is described as weakly or covariance stationary; strict stationarity additionally requires the entire distribution to be invariant over time. If the process is stationary around a deterministic trend, it is said to be trend-stationary. A variety of unit root tests are used in the econometric literature, principally the Dickey-Fuller, Augmented Dickey-Fuller (ADF), Phillips-Perron and Ng-Perron tests, to investigate whether the time series data used in a study are stationary. I employ the Augmented Dickey-Fuller test to examine the stationarity of the variables.

3.1.1 Augmented Dickey-Fuller (ADF) Test

The ADF test evaluates the null hypothesis that the series contains a unit root against the alternative hypothesis that it does not. The regression for the ADF test is estimated as follows:

Δy_t = α + βt + γy_{t−1} + Σ_{i=1}^{p} δ_i Δy_{t−i} + ε_t   (1)

where y_t represents the variable whose properties are being examined, Δ is the difference operator, α, β, γ and δ_i are the coefficients to be estimated, p is the chosen lag length, t is the time trend, and ε_t is the white-noise error term. Under the null hypothesis (γ = 0), y_t has a stochastic trend; under the alternative hypothesis (γ < 0), y_t is stationary. Generally, the lag length for conducting the ADF test is unknown but can be estimated using information criteria such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) applied to regressions of the form in equation (1). If the data generating process (DGP) is stationary in the series at levels, the series is concluded to be integrated of order zero, I(0). This is not always the case, however, and the underlying process may be non-stationary, in which case the original series must be transformed to stationarity by differencing d times. If, after taking first differences, the series are all found to be stationary, we conclude that the DGP is integrated of order one, I(1). Moreover, if the original series used in the study are found to be integrated of the same order, it is useful to test for a cointegration relationship between the integrated variables.

3.2 Cointegration Test

It is generally accepted that a regression involving non-stationary time series will lead to spurious results. However, Engle and Granger (1987) showed that a linear combination of such non-stationary series may itself be stationary, in which case the series are said to be cointegrated. To motivate the Engle-Granger test, let the vector y_t = (y_{1t}, …, y_{Nt})′ denote the tth observation on N time series, each of which is known to be I(1). If these series are cointegrated, there exists a vector η such that the stochastic process with observations η′y_t is I(0). If they are not cointegrated, no vector with this property exists, and any linear combination of y_1 through y_N and a constant will still be I(1). In the bivariate case, the cointegrating regression is estimated as follows:

y_{1t} = α + βy_{2t} + ε_t   (2)

With respect to this regression, all the variables are assumed to be I(1) and may cointegrate to form a stationary relationship, which would yield a stationary residual term. The null hypothesis of non-cointegration is that the residual term is non-stationary. A unit root test is therefore conducted on the estimated residuals, again using the ADF test; note that because the residuals are estimated rather than observed, the appropriate critical values differ from those of the standard ADF test. If the residuals are stationary, the null hypothesis of non-cointegration is rejected; if they are non-stationary, one fails to reject it.

3.3 Vector Autoregressive (VAR) Model

A vector autoregression is a set of k regressions in which the regressors are lagged values of all k series. The underlying assumption of the model is that all variables are endogenous a priori, and allowance is made for rich dynamics. VAR models offer flexibility and are therefore easy to use for analysing multiple time series, since one need not specify in advance which variables are exogenous and which are endogenous. However, some difficulties remain. First, it is not easy to identify which variables have a significant effect on the dependent variable. Second, there is a strict requirement that all the series in the VAR be stationary, whereas most financial time series are non-stationary. If the variables are found not to be stationary at levels, then, following Granger (1969), it is more appropriate to estimate a VAR in first differences or a vector error correction model, depending on whether the series are cointegrated; the vector error correction model is discussed in a subsequent section. The simplest form of the VAR is the bivariate model, which can generally be estimated as follows:

y_{1t} = c_1 + a_{11}y_{1,t−1} + a_{12}y_{2,t−1} + u_{1t}
y_{2t} = c_2 + a_{21}y_{1,t−1} + a_{22}y_{2,t−1} + u_{2t}   (3)

where u_{it} is a white-noise term with E(u_{it}) = 0 and E(u_{1t}u_{2t}) = 0.

3.4 The Granger Causality Test

According to Granger (1969), a variable X may be defined as causal to a time series variable Y if the former helps to improve the forecast of the latter. Thus, X does not Granger-cause Y if

Pr(Y_{t+1} | I_t) = Pr(Y_{t+1} | J_t)   (4)

where Pr(·) is the conditional probability, J_t is the information set at time t containing past values of Y only, and I_t is the information set containing values of both X and Y up to time t. If the variables are found not to be cointegrated, the following VAR is estimated and the Granger causality test is consequently conducted:

ΔlnSI_t = α_1 + Σ_{i=1}^{m} β_{1i} ΔlnSI_{t−i} + Σ_{j=1}^{n} γ_{1j} ΔlnER_{t−j} + u_{1t}   (5)
ΔlnER_t = α_2 + Σ_{i=1}^{m} β_{2i} ΔlnER_{t−i} + Σ_{j=1}^{n} γ_{2j} ΔlnSI_{t−j} + u_{2t}   (6)

where SI is the stock price, ER is the exchange rate of the Ghana cedi to the US dollar, u_{1t} and u_{2t} are uncorrelated white-noise terms, ln denotes the natural logarithm, Δ is the difference operator and t denotes the time period. If the lagged coefficient vector γ_1 in equation (5) is significant but the lagged coefficient vector γ_2 in equation (6) is not, there is unidirectional causality from exchange rate returns to stock price returns. Conversely, if γ_2 in equation (6) is statistically significant but γ_1 in equation (5) is not, there is unidirectional causality from stock price returns to exchange rate returns. Moreover, if the lagged coefficient vectors of both equations (5) and (6) are statistically significant, there is bidirectional causality between stock returns and exchange rate returns. Finally, if both lagged coefficient vectors are statistically insignificant, there is no causality between these variables.
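The significance of a lagged coefficient vector is assessed with an F-test comparing the unrestricted regression against one that drops the other variable's lags. A self-contained sketch (simulated returns in which the "exchange rate" shocks feed into the "stock" series by construction; the function name `granger_f` is illustrative):

```python
import numpy as np
from scipy import stats

def granger_f(y, x, p):
    """F-statistic and p-value for H0: the p lags of x do not Granger-cause y."""
    T = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - i:T - i] for i in range(1, p + 1)])
    cross = np.column_stack([x[p - i:T - i] for i in range(1, p + 1)])
    Xr = np.column_stack([np.ones(T - p), own])   # restricted: own lags only
    Xu = np.column_stack([Xr, cross])             # unrestricted: add lags of x
    ssr_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    ssr_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    df = T - p - Xu.shape[1]
    F = ((ssr_r - ssr_u) / p) / (ssr_u / df)
    return F, stats.f.sf(F, p, df)

rng = np.random.default_rng(3)
T = 800
er = rng.normal(size=T)                 # "exchange rate returns", exogenous here
si = np.zeros(T)
for t in range(1, T):                   # ER shocks feed into "stock returns"
    si[t] = 0.3 * si[t - 1] + 0.5 * er[t - 1] + rng.normal()

F1, p1 = granger_f(si, er, p=2)         # ER -> SI: should be significant
F2, p2 = granger_f(er, si, p=2)         # SI -> ER: should not be
print(f"ER->SI: F={F1:.1f}, p={p1:.4f};  SI->ER: F={F2:.1f}, p={p2:.4f}")
```

On this DGP the test correctly flags unidirectional causality from ER to SI, mirroring the decision rule on equations (5) and (6).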

3.5 Vector Error Correction Model (VECM)

According to Engle and Granger (1987), the VECM is preferable to the VAR in equations (5) and (6) if a cointegration relation is found between or among the data series. The VECM discriminates between dynamic short-run and long-run Granger causality. The VECM equations are written as follows:

ΔlnSI_t = α_1 + Σ_{i=1}^{m} β_{1i} ΔlnSI_{t−i} + Σ_{j=1}^{n} γ_{1j} ΔlnER_{t−j} + λ_1 ECT_{t−1} + u_{1t}   (7)
ΔlnER_t = α_2 + Σ_{i=1}^{m} β_{2i} ΔlnER_{t−i} + Σ_{j=1}^{n} γ_{2j} ΔlnSI_{t−j} + λ_2 ECT_{t−1} + u_{2t}   (8)

where SI is the stock price, ER is the exchange rate, ECT_{t−1} is the error correction term lagged one period, and u_{1t} and u_{2t} are uncorrelated white-noise terms. The error correction term is derived from the long-run cointegration relationship between the variables, and its estimated coefficient (λ) shows how much of the deviation from the equilibrium state is corrected in each short period. To establish the presence of long-run causality between the two series, one tests the significance of the coefficient of the error correction term in equations (7) and (8) using the t-test. Finally, the Wald or F-statistic is used to test the joint significance of the error correction term and the lagged interactive terms in equations (7) and (8). If the lagged coefficient vector of equation (7) is statistically significant but that of equation (8) is not, there is unidirectional causality from exchange rate returns to stock price returns; if the reverse holds, there is unidirectional causality from stock price returns to exchange rate returns. Moreover, if the lagged coefficient vectors of both equations are statistically significant, there is bidirectional causality between stock returns and exchange rate returns; if both are statistically insignificant, there is no causality between these variables.
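The Engle-Granger two-step logic behind equation (7) can be sketched in numpy: estimate the cointegrating regression, take its residuals as ECT, then regress the first difference on ECT_{t−1} and lagged differences. The data are simulated (cointegrated by construction), and a stable system should yield a negative speed-of-adjustment coefficient λ:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000
trend = np.cumsum(rng.normal(size=T))          # shared I(1) stochastic trend
er = trend + rng.normal(scale=0.3, size=T)
si = 1.5 * trend + rng.normal(scale=0.3, size=T)

# Step 1: cointegrating regression; residuals are the error correction term.
X1 = np.column_stack([np.ones(T), er])
b = np.linalg.lstsq(X1, si, rcond=None)[0]
ect = si - X1 @ b

# Step 2: regress the difference on ECT_{t-1} plus one lag of each difference.
dsi, der = np.diff(si), np.diff(er)
Y = dsi[1:]
X2 = np.column_stack([np.ones(T - 2), ect[1:-1], dsi[:-1], der[:-1]])
coefs = np.linalg.lstsq(X2, Y, rcond=None)[0]
print(f"speed of adjustment lambda = {coefs[1]:.2f}")  # expected negative
```

A significantly negative λ means deviations from the long-run equilibrium are partly corrected each period, which is the long-run causality channel tested with the t-test in the text.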

3.6 Lag Length selection Criteria

Estimating the VAR/VECM model requires choosing a lag length that minimises information loss. Choosing the lag length involves balancing the trade-off between the information gained by adding more lags and the additional estimation uncertainty they introduce: too many lags lead to additional estimation error, while too few may leave out potentially valuable information. Several criteria are available to select the lag order, notably the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). I use the BIC to determine the lag order of the estimated model and therefore discuss it briefly. The BIC and AIC are expressed as follows:

BIC(p) = ln[SSR(p)/T] + (p + 1) ln(T)/T   (9)
AIC(p) = ln[SSR(p)/T] + (p + 1) 2/T   (10)

where SSR(p) is the sum of squared residuals of the estimated AR(p) and T is the sample size. The BIC estimator of p, denoted p̂, is the value that minimises BIC(p) over the range of lags considered. The SSR decreases as more lags are introduced, whereas the second (penalty) term increases. The penalty in the second term of the AIC is smaller than that of the BIC, so the BIC imposes a heavier penalty per additional lag. As a result, the BIC gives a consistent estimate of the true lag length, whereas the AIC tends to overestimate the lag order with positive probability; this makes the BIC preferable to the AIC.

3.7 Test for Structural Breaks

To test for structural breaks in the regression coefficients, I estimate an autoregressive distributed lag (ADL) model with dummy variables representing the periods before and after the redenomination of the cedi. To choose the appropriate lag lengths for both the dependent and independent variables in the ADL, I estimate the regression with different lag lengths and compare the resulting BICs; the lag length that yields the lowest BIC is used to estimate the ADL, after which the structural break test is conducted. The ADL is estimated in the following form:

ΔSI_t = β_0 + Σ_{i=1}^{p} β_i ΔSI_{t−i} + Σ_{j=0}^{q} γ_j ΔER_{t−j} + δ_0 D_t + Σ_{i=1}^{p} δ_i (D_t × ΔSI_{t−i}) + Σ_{j=0}^{q} θ_j (D_t × ΔER_{t−j}) + ε_t   (11)

where ΔSI_t denotes stock price returns, ΔER_t denotes exchange rate returns, D_t is a dummy variable with D_t = 1 if t ≥ 3 July 2007 and D_t = 0 if t < 3 July 2007, Δ is the difference operator, t denotes the time period, and β, γ, δ and θ are the coefficients to be estimated. The Chow (1960) test for structural break requires the break date to be known a priori; the decision is based on the F-statistic testing the null hypothesis of no break (all δ and θ equal to zero) against the alternative that at least one of them is nonzero. In the Chow (1960) test, the investigator must therefore pick an arbitrary break date or a known date based on a feature of the data series; the results can be highly sensitive to this choice, and the true break date can be missed. In this study, however, the break date is identified by the redenomination of the cedi.
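The dummy-interaction version of the Chow test amounts to an F-test of the restricted (no-break) regression against the unrestricted one. A hedged sketch on simulated data with a known break at a hypothetical observation 200 (a single regressor is used to keep it short; equation (11) simply adds more interaction columns):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
T, tb = 400, 200                        # hypothetical break at observation 200
x = rng.normal(size=T)
beta = np.where(np.arange(T) >= tb, 1.0, 0.3)    # slope shifts at the break
y = 0.1 + beta * x + rng.normal(scale=0.5, size=T)

D = (np.arange(T) >= tb).astype(float)  # dummy: 1 on and after the break date
Xr = np.column_stack([np.ones(T), x])              # restricted: no break terms
Xu = np.column_stack([Xr, D, D * x])               # add dummy and interaction
ssr_r = np.sum((y - Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]) ** 2)
ssr_u = np.sum((y - Xu @ np.linalg.lstsq(Xu, y, rcond=None)[0]) ** 2)

q, df = 2, T - Xu.shape[1]              # q restrictions: dummy terms jointly zero
F = ((ssr_r - ssr_u) / q) / (ssr_u / df)
p_chow = stats.f.sf(F, q, df)
print(f"Chow F = {F:.1f}, p = {p_chow:.4f}")
```

A large F rejects parameter constancy across the two regimes, the same decision rule applied to the redenomination dummy in equation (11).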

3.8 The CUSUM of Squares Test

To test for the constancy of the variance, I employ the CUSUM of squares test of Brown et al. (1975). The test is based on the plot of the cumulative sums of the squared recursive residuals. A pair of critical lines is drawn on the diagram parallel to the mean value line, such that the probability that the sample path crosses one or both lines equals the significance level. If the sample path stays between the pair of critical lines without crossing either of them, one can conclude that the variance is constant over the period; movement outside the critical lines implies parameter or variance instability. The CUSUM of squares test is based on the statistic:

S_t = (Σ_{r=k+1}^{t} w_r²) / (Σ_{r=k+1}^{T} w_r²),   t = k+1, …, T   (12)

where w_r denotes the recursive residuals and k the number of estimated coefficients. Under parameter constancy, the mean value of S_t is given by:

E[S_t] = (t − k) / (T − k)   (13)

which rises from zero at t = k to unity at t = T.
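A numpy sketch of equations (12) and (13): recursive residuals are the standardized one-step-ahead prediction errors from expanding-window OLS, and on a stable simulated DGP the CUSUM-of-squares path should hug the mean value line (critical lines are omitted here; their width depends on the chosen significance level):

```python
import numpy as np

rng = np.random.default_rng(7)
T, k = 200, 2                            # k = number of estimated coefficients
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])
y = 1.0 + 0.5 * x + rng.normal(size=T)   # stable DGP: no parameter change

# Recursive residuals: standardized one-step-ahead prediction errors.
w = np.zeros(T)
for t in range(k, T):
    b = np.linalg.lstsq(X[:t], y[:t], rcond=None)[0]
    xt = X[t]
    h = xt @ np.linalg.inv(X[:t].T @ X[:t]) @ xt
    w[t] = (y[t] - xt @ b) / np.sqrt(1.0 + h)

# CUSUM of squares path and its expected value under stability.
s = np.cumsum(w[k:] ** 2) / np.sum(w[k:] ** 2)        # equation (12)
mean_line = (np.arange(k + 1, T + 1) - k) / (T - k)   # equation (13)
print(f"max deviation from mean line = {np.max(np.abs(s - mean_line)):.3f}")
```

By construction the path rises monotonically from near zero to exactly one; variance instability would show up as a sustained departure from the mean value line large enough to cross a critical line.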