# Arbitrage Pricing Theory and UK Stock Exchange Finance Essay

Published: 2021-06-26 08:40:03

Category: Finance

Type of paper: Essay


To estimate the Arbitrage Pricing Theory (APT) model empirically, we focus on the UK stock exchange market. Our study employs monthly time series data spanning the period 2000:9 to 2010:9 (121 observations). The sample of our analysis is dictated solely by the demands of the coursework. The variables involved are: the closing share prices of 25 UK companies listed on the London Stock Exchange, the FTSE 100 stock index, the UK Libor as a proxy for the short-run risk-free rate, the 20-year government bond yield as a proxy for the long-run risk-free rate, the exchange rate between the British Pound and the US Dollar, and finally the Brent crude oil price. [1]

The abbreviated notation for these variables is as follows:

- Share i = S_i, with i = 1 to 25
- FTSE 100 stock index = index_t
- UK Libor = free_s_rate_t
- 20-year government bond yield = free_l_rate_t
- Exchange rate series = fx_t
- Brent crude oil prices = brent_t

Given the availability of the S_i series, it is trivial to calculate the return series for each of the 25 selected shares (r_S_i) by taking the first logarithmic differences of the share prices (growth rates). In EViews the relevant commands are:

```
for !i=1 to 25
  series r_S!i = dlog(S!i)
next
```

To construct the equally weighted portfolio return series (portfolio_t), we simply take the average of the 25 share returns in each time period of the sample:

```
series sum_stock_returns = r_S1 + r_S2 + r_S3 + r_S4 + r_S5 + r_S6 + r_S7 + r_S8 + r_S9 + r_S10 + r_S11 + r_S12 + r_S13 + r_S14 + r_S15 + r_S16 + r_S17 + r_S18 + r_S19 + r_S20 + r_S21 + r_S22 + r_S23 + r_S24 + r_S25
series portfolio = sum_stock_returns / 25
```

Figure 1 below presents the five main variables used in this study as well as the equally weighted portfolio return series constructed from the returns of the 25 shares.

Figure 1. The variables of the study (shown in two panels)
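The two construction steps above, `dlog` returns and the equally weighted average, can also be sketched outside EViews. The following Python fragment uses a hypothetical simulated price panel in place of the study's actual 25 share price series:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical price panel: 121 monthly closes for 25 shares (S1..S25),
# standing in for the actual London Stock Exchange data.
prices = pd.DataFrame(
    np.exp(np.cumsum(rng.normal(0.005, 0.04, size=(121, 25)), axis=0)) * 100.0,
    columns=[f"S{i}" for i in range(1, 26)],
)

# r_Si = dlog(Si): first difference of the natural log of each price series.
returns = np.log(prices).diff().dropna()

# Equally weighted portfolio return: simple average across the 25 shares.
portfolio = returns.mean(axis=1)
```

Since `dlog` differences the log series, one observation is lost and 120 return observations remain from the 121 price observations.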

## II. Empirical Results

### Question 1

### Question 2

### Question 3
In this section we conduct diagnostic tests to assess the statistical adequacy of our model. Specifically, we test for the presence of multicollinearity, serial correlation, heteroskedasticity and non-normality in the residuals.

Before the diagnostic testing, it is important to stress that all the regressors used in equation 1 are stationary, so we can rule out the possibility of a spurious regression. It is well known that when a regression is spurious the estimated results are meaningless and statistical inference is worthless. Clearly, this is not the case for the model presented in Table 1.

Turning to the diagnostic tests, our first concern is multicollinearity. Multicollinearity inflates the standard errors, making it hard to assess the significance of the regressors; the estimated parameters themselves, however, remain unbiased. Since no formal test for multicollinearity is available, we rely on a practical rule: high correlation among the regressors, with a correlation coefficient above 0.8 conventionally considered high, is taken as evidence of multicollinearity. We therefore estimate the correlation coefficients for all the regressors in equation 1; these are reported in Table 2. All of the correlation coefficients are well below the threshold of 0.8, so there is no evidence of multicollinearity for this set of regressors.

Table 2. Correlation matrix for the regressors of the APT model

| Regressor | [Rm−Rf]_t | GTS_t | GFX_t | GBP_t |
| --- | --- | --- | --- | --- |
| [Rm−Rf]_t | 1.000 | | | |
| GTS_t | 0.124 | 1.000 | | |
| GFX_t | 0.066 | −0.048 | 1.000 | |
| GBP_t | 0.216** | −0.029 | 0.379*** | 1.000 |

Note: ** and *** denote significance at the 0.05 and 0.01 levels, respectively.

We continue by testing for serial correlation. The presence of serial correlation leads to underestimated standard errors (the coefficient estimates remain unbiased but are no longer efficient), so hypothesis testing can lead to incorrect conclusions. A widely used statistic for first-order serial correlation is the Durbin-Watson statistic: values close to 2 indicate no serial correlation. In Table 1 the Durbin-Watson statistic equals 1.92, which supports the absence of first-order serial correlation. To ensure that higher-order serial correlation is also absent, we implemented the Breusch-Godfrey serial correlation LM test with two and eight lags; the LM statistics and associated p-values are reported in Table 1. In each case we fail to reject the null hypothesis of no serial correlation, so serial correlation, even at higher orders, is not a problem in our model.

Another important diagnostic concerns heteroskedasticity. Heteroskedasticity makes the OLS estimators inefficient and the conventional standard errors biased, resulting in unreliable t-statistics and confidence intervals; the estimators themselves, however, remain unbiased. To test formally for heteroskedasticity we implemented White's test, with results again reported in Table 1. Based on the p-value of White's test (0.53), we fail to reject the null hypothesis of homoskedasticity. Our model therefore appears to satisfy the homoskedasticity assumption, implying that the statistical inference performed is valid.
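The two screens described above, the pairwise-correlation rule for multicollinearity and the Durbin-Watson statistic, can be sketched as follows. The regressors and residuals here are hypothetical random draws, not the study's data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Hypothetical stand-ins for the four regressors [Rm-Rf]_t, GTS_t, GFX_t, GBP_t.
X = pd.DataFrame(rng.normal(size=(120, 4)),
                 columns=["mkt_excess", "gts", "gfx", "gbp"])

# Multicollinearity screen: flag any pairwise correlation above 0.8 in
# absolute value (diagonal of ones excluded).
corr = X.corr()
max_offdiag = corr.where(~np.eye(4, dtype=bool)).abs().max().max()
collinear = bool(max_offdiag > 0.8)

# Durbin-Watson statistic from hypothetical OLS residuals: the sum of
# squared first differences divided by the residual sum of squares.
# Values near 2 indicate no first-order serial correlation.
resid = rng.normal(size=120)
dw = float(np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2))
```

By construction the Durbin-Watson statistic lies between 0 and 4, with values below 2 pointing towards positive and values above 2 towards negative first-order serial correlation.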
Our final concern is that the residuals be normally distributed, one of the basic assumptions of the classical linear regression model; normality of the errors is essential for conducting statistical inference correctly. We test for normality using the Anderson-Darling statistic, whose null hypothesis is normality. The estimated Anderson-Darling statistic for the residuals is 0.85, with an associated p-value of 0.44. We clearly fail to reject the null hypothesis of normality, which provides one more indication that our model is well specified.
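As a sketch of this normality check, applied here to hypothetical residuals rather than the estimated ones, SciPy's Anderson-Darling implementation reports the test statistic together with critical values, instead of the p-value that EViews prints:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
resid = rng.normal(size=120)  # hypothetical, approximately normal residuals

# Anderson-Darling test for normality: reject normality when the
# statistic exceeds the critical value at the chosen significance level.
result = stats.anderson(resid, dist="norm")
crit_5pct = result.critical_values[list(result.significance_level).index(5.0)]
normal_at_5pct = bool(result.statistic < crit_5pct)
```

For truly normal residuals the statistic will typically fall well below the 5% critical value, matching the fail-to-reject conclusion reached above.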
### Question 4
As shown in question 3, the diagnostic testing performed to establish the statistical validity of the estimated model revealed the following: (a) the regressors are stationary; (b) multicollinearity is not a threat; (c) there is no serial correlation in the residuals; (d) the residuals are homoskedastic; and (e) the residuals are normally distributed. We therefore conclude that all the basic assumptions of the classical linear regression model hold and no further action is required.
### Question 5

### Question 6
The Chow breakpoint test is implemented for equation (3). The test assesses the stability of the estimated coefficients around a pre-specified breakpoint, so it depends heavily on selecting that breakpoint correctly. Once the breakpoint is chosen, the sample is split into two sub-samples: the first from the beginning of the sample up to the breakpoint, and the second from the breakpoint to the end. The intuition of the test is to compare the sum of squared residuals from the whole-sample regression with the sum of squared residuals from equations fitted separately to each sub-sample. A significant difference indicates a structural change in the coefficients of the whole-sample regression.

The selection of the breakpoint therefore requires care. Based on the results presented above, especially in question 1, all the variables show a spike (extreme value) in their graphs during the last quarter of 2008. This period coincides with the onset of the global economic crisis, conventionally dated to the collapse of the investment bank Lehman Brothers in September 2008 (2008m09). The choice of 2008m09 as the break date for our application therefore seems fully justified, both theoretically and empirically.

The results of the Chow breakpoint test, implemented for a break in all the estimated coefficients of the regression, are presented in Table 6. The p-values of the three reported statistics show that we clearly reject the null hypothesis of no break in all cases at the 0.01 significance level.

Table 6. Chow breakpoint test (equation 3)

Null hypothesis: no breaks at the specified breakpoint. Breakpoint: 2008m09. Varying regressors: all equation variables. Equation sample: 2000m10 to 2010m09.

| Statistic | Value | Distribution | p-value |
| --- | --- | --- | --- |
| F-statistic | 3.226107 | F(7, 106) | 0.0039 |
| Log likelihood ratio | 23.17603 | Chi-square(7) | 0.0016 |
| Wald statistic | 22.58275 | Chi-square(7) | 0.0020 |

The confirmation of a structural change in the coefficients reveals that our specification needs to be revised to take the break into account. One such re-specification is the inclusion of a dummy variable for the period after the break date, or of cross products between the dummy and the independent variables, in order to measure the change in the initially estimated slopes.
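The mechanics of the Chow test described above, pooled versus sub-sample residual sums of squares, can be sketched as follows. The data, coefficients and break index here are hypothetical, with the break index standing in for the 2008m09 split:

```python
import numpy as np

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

def chow_f(y, X, break_idx):
    """Chow breakpoint F-statistic: compares the pooled RSS with the sum
    of the two sub-sample RSS values; k restrictions, T - 2k residual df."""
    T, k = X.shape
    rss_pooled = rss(y, X)
    rss_1 = rss(y[:break_idx], X[:break_idx])
    rss_2 = rss(y[break_idx:], X[break_idx:])
    return ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (T - 2 * k))

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(120), rng.normal(size=(120, 3))])
y = X @ np.array([0.01, 1.0, 0.2, -0.1]) + rng.normal(0, 0.05, 120)
f_stat = chow_f(y, X, break_idx=96)  # hypothetical break position
```

Because the two sub-sample fits can only lower the combined residual sum of squares relative to the pooled fit, the statistic is non-negative; under the null of no break it follows an F(k, T−2k) distribution.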
### Question 7
In this section we compare the three alternative specifications presented (equations 1, 2 and 3) and estimated (Tables 1, 3 and 5) in the previous sections. For this purpose we use four statistics that are appropriate for the task: the adjusted R-square, the Akaike information criterion, the Schwarz criterion and the Hannan-Quinn criterion. The adjusted R-square takes values up to 1, with higher values indicating that a larger share of the dependent variable's variability is explained by the regressors, so higher is better for the corresponding model. For the three information criteria, lower values indicate a better model. Table 7 below presents these statistics for the purpose of selecting the final model.
Table 7. Model selection criteria
| Statistic | Model 1 | Model 2 | Model 3 |
| --- | --- | --- | --- |
| Adjusted R-square | 0.839409 | 0.848280 | 0.849374 |
| Akaike | -4.653130 | -4.686387 | -4.701399 |
| Schwarz | -4.536984 | -4.500554 | -4.538796 |
| Hannan-Quinn | -4.605962 | -4.610919 | -4.635365 |

Based on the results in Table 7, model 2 is preferred to model 1 on the adjusted R-square, Akaike and Hannan-Quinn criteria (the Schwarz criterion, which penalizes additional parameters more heavily, marginally favours model 1), while model 3 is preferred to model 2 on all four statistics and also posts the lowest Schwarz value of the three models. Model 3's fit to the data is more than satisfactory, as almost 85% of the variability of the dependent variable is explained by the selected regressors.
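For reference, the three information criteria reported by EViews are simple transformations of the Gaussian log-likelihood. A sketch of their computation, with hypothetical inputs in place of the estimated models' residual sums of squares, is:

```python
import numpy as np

def info_criteria(rss_val, T, k):
    """EViews-style criteria from the Gaussian log-likelihood l:
    AIC = -2l/T + 2k/T, SC = -2l/T + k*ln(T)/T,
    HQ = -2l/T + 2k*ln(ln(T))/T."""
    loglik = -0.5 * T * (1 + np.log(2 * np.pi) + np.log(rss_val / T))
    aic = -2 * loglik / T + 2 * k / T
    sc = -2 * loglik / T + k * np.log(T) / T
    hq = -2 * loglik / T + 2 * k * np.log(np.log(T)) / T
    return aic, sc, hq

# Hypothetical inputs: RSS of 0.05 over 120 observations with 7 parameters.
aic, sc, hq = info_criteria(rss_val=0.05, T=120, k=7)
```

The three criteria share the same goodness-of-fit term and differ only in the penalty per parameter (2, ln T and 2 ln ln T respectively), which is why the Schwarz criterion is the strictest of the three for samples of this size.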
### Question 8
Based on the model presented in Table 5, we now assess the results from a financial perspective, focusing on the interpretation and significance of the estimated coefficients. We have already established that model 3 is well specified, so we may proceed to the analysis of the results. Regarding the expected theoretical signs of the regressors, the estimated signs do not deviate from those expected; the justification for the sign of each variable was presented analytically in question 2, and the same rationale applies to the finally selected specification.

The most notable result is that the excess return on the market index, the FTSE 100, is significant at the 0.01 level, implying that this factor is the single most important one in explaining our portfolio's excess returns. The estimated coefficient of 1.035900 can also be read as a measure of the portfolio's risk: if the market's excess return increases by one unit, the portfolio's excess return increases by the value of the coefficient. Since the coefficient exceeds one, our constructed portfolio is slightly riskier than the market, which is consistent with the fact that it is only a subset of the market portfolio: the market portfolio is more diversified and carries less individual risk.

The constant can be interpreted as follows: if all the independent variables are simultaneously equal to zero, the portfolio's excess return equals 0.011796. The fact that the constant term is statistically different from zero suggests that our choice of the APT model is correct, given that the CAPM is a special case of the APT; a non-significant constant term would have favoured the CAPM.
Finally, the three remaining factors included in the specification are statistically insignificant, implying that they do not contribute considerably to the explanation of our portfolio's excess returns. Other factors may well play a significant role, among them industrial production, the money supply, inflation and market capitalization; their effect is left for further investigation.
