As noted before, rules that work well for most developed countries may not work in the case of Russia. Therefore, not only the popular Taylor rule will be estimated, but also the McCallum rule. In this section the methodology of the rules used will be described, as well as the expectations about the results. The section then discusses a few tests that have to be performed in order to assess the quality of the regression.

## Taylor rule

Taylor estimated different monetary policy rules and reached the conclusion that the model that fits reality best is the one that explains changes in the interest rate by deviations of output and inflation from their desired levels. The Taylor rule that he used to describe the Central Bank's behaviour was as follows:

i_t = r* + π_t + 0.5(π_t - π*) + 0.5y_t + ε_t

where i_t is the short-term interest rate (refinancing rate),

π_t is the inflation rate,

y_t is the output gap, the percentage deviation of actual output from its potential level, given by the formula y_t = 100·(Y_t - Y*_t)/Y*_t, where Y_t is actual output and Y*_t is potential (target) output,

π* and r* are the long-run equilibrium values of inflation and the interest rate, respectively,

ε_t is the error term,

the subscript t denotes the current period.

The coefficients of 0.5 indicate that the Central Bank cares equally about the output gap and the deviation of inflation from its target value.

However, the Taylor rule in this original version cannot be applied to Russia, because there is no stable inflation rate[ 2 ]. For this reason a version of the Taylor rule for an open economy was estimated:

i_t = c + β1·π_t + β2·y_t + β3·Δreer_t + β4·Δreer_{t-1} + β5·i_{t-1} + ε_t

where i_t, π_t and y_t are the same as before,

Δreer_t is the real effective exchange rate (REER) growth,

β1, ..., β5 are the coefficient parameters,

c is the constant intercept,

ε_t is the error term,

the subscripts t and t-1 indicate the current and the previous period.

The signs of the parameters are expected to be positive for β1, β2 and β5, and negative for β3 and β4. The value of potential output was calculated using the Hodrick-Prescott (HP) filter. To ensure that the series are stationary, growth rates are used in the rule, which is roughly equivalent to using differences. The growth rates were calculated as differences of logarithms:

Δx_t = (ln X_t - ln X_{t-1}) · 100
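These two transformations can be sketched in pure Python. This is a minimal illustration rather than the estimation code actually used: the function names are our own, the smoothing parameter λ = 14400 is the conventional value for monthly data (the text does not state which value was used), and the HP trend is obtained with a naive dense solver rather than the usual banded one.

```python
import math

def log_diff_growth(series):
    """Growth rates as differences of logarithms, in percent:
    dx_t = (ln X_t - ln X_{t-1}) * 100."""
    return [(math.log(b) - math.log(a)) * 100.0
            for a, b in zip(series, series[1:])]

def hp_filter(y, lam=14400.0):
    """Hodrick-Prescott trend: solve (I + lam*K'K) tau = y, where K is the
    (n-2) x n second-difference matrix, by Gaussian elimination."""
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for r in range(n - 2):                      # add lam * K'K
        idx = (r, r + 1, r + 2)
        w = (1.0, -2.0, 1.0)
        for i, wi in zip(idx, w):
            for j, wj in zip(idx, w):
                A[i][j] += lam * wi * wj
    b = [float(v) for v in y]
    for col in range(n):                        # elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    tau = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * tau[c] for c in range(r + 1, n))
        tau[r] = (b[r] - s) / A[r][r]
    return tau

# The output gap is then the percentage deviation from the HP trend:
# y_gap_t = 100 * (Y_t - tau_t) / tau_t
```

A convenient sanity check on the filter is that a linear series is left unchanged, since both penalty terms of the HP objective are zero at τ = y.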

## McCallum rule

According to Vyugin[ 3 ], former deputy chairman of the Central Bank of Russia, the choices of monetary policy in Russia were far too complicated to be explained by one single rule. However, the fact that officially the Central Bank uses the growth rate of the monetary aggregate as an intermediate target suggests that the McCallum rule could be a better model for capturing policymaking behaviour. The equation of the McCallum rule is:

Δm_t = c + β1·Δx*_t + β2·Δv_t + β3·(Δx*_t - Δx_{t-1}) + ε_t

where Δm_t is the monetary aggregate growth rate,

Δx*_t is the target nominal output growth rate,

Δv_t is the money velocity growth rate,

Δx_t is the nominal output growth rate,

β1, β2 and β3 are the coefficient parameters,

c is the constant intercept,

ε_t is the error term,

the subscripts t and t-1 indicate the current and the previous period.

The signs of the parameters β1 and β3 are expected to be positive and the sign of β2 is expected to be negative. All growth rates in this rule are calculated in percentages. The target nominal output growth rate is given by the equation:

Δx*_t = π*_t + Δy*_t

where π*_t is the target inflation rate,

Δy*_t is the target real output growth.

In order to construct series for the target values of output and inflation, the traditional Hodrick-Prescott (HP) filter was employed. The value of money velocity is calculated as follows:

v_t = X_t / M_t

where X_t is the nominal value of aggregate transactions, measured by nominal output,

M_t is the monetary aggregate.
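Given these definitions, constructing the velocity series, its growth rate and the target nominal output growth can be sketched as follows (a minimal sketch; the function names and example numbers are illustrative, not taken from the data set):

```python
import math

def velocity(nominal_output, m1):
    """v_t = X_t / M_t: nominal output divided by the monetary aggregate."""
    return [x / m for x, m in zip(nominal_output, m1)]

def log_diff_growth(series):
    """Growth in percent as differences of logarithms."""
    return [(math.log(b) - math.log(a)) * 100.0
            for a, b in zip(series, series[1:])]

def target_nominal_growth(target_inflation, target_real_growth):
    """dx*_t = pi*_t + dy*_t: target inflation plus target real growth."""
    return [p + g for p, g in zip(target_inflation, target_real_growth)]

# Example: velocity growth dvel from nominal output X and money stock M1
X = [100.0, 110.0, 121.0]
M1 = [50.0, 52.0, 55.0]
dvel = log_diff_growth(velocity(X, M1))
```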

## Residual tests

## Autocorrelation tests

After the initial estimation several tests will be carried out. The first step is to test for serial correlation among the error terms. If the presence of autocorrelation is ignored, the consequences are incorrect standard errors, which lead to misleading confidence intervals and hypothesis tests, and least squares estimators that do not have minimum variance. The computed R-squared is therefore also unreliable. As a consequence, we have to test whether autocorrelation is present, and if it is, we have to discuss which techniques could be applied to correct the problem.

First, the Breusch-Godfrey serial correlation LM test is performed. The null hypothesis is that there is no serial correlation in the residuals; the alternative is that there is serial correlation. To decide whether to reject the null hypothesis, the value of the LM statistic should be examined. When the null hypothesis is true, the LM test statistic, which in EViews is reported as Obs*R-squared, asymptotically follows the χ² distribution. If the LM statistic is greater than the χ² critical value at a specified significance level, we reject the null hypothesis of no autocorrelation. Alternatively, the p-value of this statistic can be examined: the null hypothesis should be rejected when the probability is less than or equal to a specified significance level α of 0.01, 0.05 or 0.1.
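The decision rule can be checked without statistical tables: for an even number of degrees of freedom the χ² upper-tail probability has a simple closed form. The sketch below is illustrative (the LM value shown is an example, not part of the test's definition):

```python
import math

def chi2_sf_even(x, df):
    """Upper-tail probability P(X > x) for a chi-squared variable with an
    even number of degrees of freedom df:
    P(X > x) = exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!."""
    if df % 2 != 0:
        raise ValueError("closed form requires even df")
    h = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= h / k          # accumulate (x/2)^k / k!
        total += term
    return math.exp(-h) * total

# Decision rule for a Breusch-Godfrey LM test with 12 lags:
lm_stat = 44.76869                 # illustrative Obs*R-squared value
p_value = chi2_sf_even(lm_stat, 12)
reject_null = p_value <= 0.01      # reject "no serial correlation" at 1%
```

Note that chi2_sf_even(26.217, 12) is approximately 0.01, which recovers the familiar 1% critical value of 26.217 for χ²(12) used later in the text.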

Second, we consider the Durbin-Watson test. The d statistic is reported in the estimation output along with the summary statistics. The null and the alternative hypotheses are identical to those in the LM test described above. If the errors are not correlated, the d statistic should lie between the critical upper bound d_U and 4 - d_U. Other values indicate either that there is evidence of autocorrelation or that no decision can be made.
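The d statistic itself is straightforward to compute from a residual series (a minimal sketch):

```python
def durbin_watson(resid):
    """Durbin-Watson statistic:
    d = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2.
    Values near 2 indicate no first-order serial correlation, values near 0
    positive correlation, and values near 4 negative correlation."""
    num = sum((b - a) ** 2 for a, b in zip(resid, resid[1:]))
    den = sum(e ** 2 for e in resid)
    return num / den

# Identical residuals push d toward 0, alternating residuals toward 4:
d_pos = durbin_watson([1.0, 1.0, 1.0, 1.0])    # -> 0.0
d_neg = durbin_watson([1.0, -1.0, 1.0, -1.0])  # -> 3.0
```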

Third, we use the correlogram Q-statistic test to detect autocorrelation. The null and the alternative hypotheses are the same as before. The p-value represents the probability of observing a Q-statistic at least this large if the null hypothesis of no autocorrelation is true.
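The Q-statistic reported with the correlogram is the Ljung-Box statistic, built from the sample autocorrelations of the residuals; a minimal sketch:

```python
def autocorrelations(x, max_lag):
    """Sample autocorrelations r_k = gamma_k / gamma_0, k = 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    gamma0 = sum(d * d for d in dev) / n
    acf = []
    for k in range(1, max_lag + 1):
        gamma_k = sum(dev[t] * dev[t - k] for t in range(k, n)) / n
        acf.append(gamma_k / gamma0)
    return acf

def ljung_box_q(x, max_lag):
    """Ljung-Box Q = n(n+2) * sum_{k=1}^{m} r_k^2 / (n - k).
    Under the null of no autocorrelation up to lag m, Q is asymptotically
    chi-squared with m degrees of freedom."""
    n = len(x)
    acf = autocorrelations(x, max_lag)
    return n * (n + 2) * sum(r ** 2 / (n - k)
                             for k, r in enumerate(acf, start=1))
```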

If the tests indicate that serial correlation is present, we will try to modify the equation in order to resolve the problem. One way is to compute heteroskedasticity and autocorrelation consistent (HAC), or Newey-West, standard errors. Using this methodology will not change the coefficients, but the standard errors will be different, as will the t-statistics and the probabilities. Furthermore, lags can be added according to the significance of their coefficients and the information criteria. These changes have to be taken into account: a new model should be estimated and then tested for autocorrelation. If a model is found that does not have serial correlation in the residuals, it will be considered a better model.
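The kernel of the Newey-West correction is a long-run variance estimate with Bartlett weights. For a single mean-zero series it can be sketched as follows; the full HAC covariance matrix applies the same weighting to the regression score vectors, which is omitted here:

```python
def newey_west_variance(e, max_lag):
    """Long-run variance with Bartlett-kernel weights:
    S = gamma_0 + 2 * sum_{j=1}^{L} (1 - j/(L+1)) * gamma_j,
    where gamma_j = (1/n) * sum_t e_t * e_{t-j} (series assumed mean zero).
    With max_lag = 0 this reduces to the ordinary variance gamma_0."""
    n = len(e)
    gamma0 = sum(v * v for v in e) / n
    s = gamma0
    for j in range(1, max_lag + 1):
        gamma_j = sum(e[t] * e[t - j] for t in range(j, n)) / n
        s += 2.0 * (1.0 - j / (max_lag + 1.0)) * gamma_j
    return s
```

Negative autocorrelation shrinks the long-run variance below γ₀, positive autocorrelation inflates it, which is exactly why uncorrected OLS standard errors are misleading.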

## Heteroskedasticity tests

The second step is to test whether the variance of the error term is constant (homoskedastic). The consequences of a non-constant (heteroskedastic) error variance are similar to those of autocorrelation: the estimators do not have the lowest variance, and the hypothesis tests and confidence intervals are incorrect, which increases the probability of misleading conclusions. Testing for heteroskedasticity is equivalent to testing for serial correlation in the squared residuals.

We consider several alternative tests for the presence of heteroskedasticity, namely the Breusch-Pagan-Godfrey, White's general heteroskedasticity, ARCH and Glejser tests. Since the sample size is quite large, the results of the tests are reliable. The null hypothesis in each of these tests is that the variance of the error terms is constant, so there is no serial correlation among the squared residuals, meaning no heteroskedasticity. The alternative hypothesis states that heteroskedasticity is present. If the null hypothesis is true, the LM statistic in these tests asymptotically follows the χ² distribution with degrees of freedom equal to the number of independent variables, excluding the constant. The null hypothesis of no heteroskedasticity is rejected when the value of the test statistic is larger than the χ² critical value at a specified significance level. Alternatively, the probability of the LM statistic can be examined: if the p-value is less than the significance level α, where α is 0.01, 0.05 or 0.1, the null hypothesis should be rejected.
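The intuition that heteroskedasticity shows up as serial correlation in the squared residuals can be illustrated directly. The sketch below computes only the first-order autocorrelation of the squares (the formal tests run auxiliary LM regressions instead):

```python
def squared_resid_autocorr(resid):
    """First-order sample autocorrelation of the squared residuals.
    Values far from zero suggest ARCH-type heteroskedasticity:
    periods of high volatility follow periods of high volatility."""
    sq = [e * e for e in resid]
    n = len(sq)
    mean = sum(sq) / n
    dev = [v - mean for v in sq]
    gamma0 = sum(d * d for d in dev) / n
    gamma1 = sum(dev[t] * dev[t - 1] for t in range(1, n)) / n
    return gamma1 / gamma0
```

Clustered volatility (large residuals next to large residuals) gives a positive value, while a regularly alternating variance gives a negative one.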

Another way of detecting heteroskedasticity will also be used to verify the results. The correlogram of squared residuals shows whether there is serial correlation in the squared residuals. The smaller the p-value, the stronger the evidence for rejecting the null hypothesis.

## Normality tests

The last test to be carried out in order to check the reliability of the results obtained from the full model is a normality test. First, it is useful to check whether the histogram of the residuals is bell-shaped or not. Second, the formal Jarque-Bera test will be presented. The null hypothesis is that the residuals are normally distributed; the alternative is that the errors do not follow a normal distribution.

The foundations of the Jarque-Bera statistic are the values of skewness (the symmetry of the errors) and kurtosis (the peakedness of the distribution). For a normal distribution the measure of skewness is zero and the kurtosis is three. If the residuals are normally distributed, the Jarque-Bera statistic follows a chi-squared distribution with 2 degrees of freedom, χ²(2). If the computed test statistic is greater than the critical value of this distribution at the chosen significance level, the null hypothesis of normally distributed residuals should be rejected. Alternatively, the probability of the Jarque-Bera statistic can be examined: we reject the null hypothesis when the p-value is lower than the chosen significance level α.
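The statistic can be sketched directly from the moment definitions (a minimal sketch):

```python
def jarque_bera(resid):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4), where S is the sample skewness and
    K the sample kurtosis (S = 0 and K = 3 for a normal distribution)."""
    n = len(resid)
    mean = sum(resid) / n
    m2 = sum((e - mean) ** 2 for e in resid) / n
    m3 = sum((e - mean) ** 3 for e in resid) / n
    m4 = sum((e - mean) ** 4 for e in resid) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

The result is then compared with the χ²(2) critical values, 5.99 at the 5% level and 9.21 at the 1% level.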

If the residuals are not normally distributed, they should not be employed in hypothesis tests, as this may lead to misleading conclusions. Possible causes of non-normally distributed residuals are the omission of an important variable, a wrong functional form for some variable, or a general misspecification of the model.

Word count: 1638

## Empirical Analysis

## 3.1. Data and Preliminary tests

In the empirical analysis, time series data with a monthly frequency from January 1995 to December 2009 was used, i.e. 180 observations. This time range was chosen because of the availability of the necessary data, as well as the fact that there is no such research on this topic for this particular period. All data have been obtained from the databases of the International Monetary Fund's International Financial Statistics, the Organisation for Economic Co-operation and Development (OECD), the Vienna Institute for International Economic Studies (WIIW) and the Bank of Russia (CBR).

For the purpose of the empirical estimation, data on the refinancing rate (short-run interest rate), the consumer price index for the calculation of inflation, the industrial production index as a proxy for output, the monetary aggregate M1, and the real effective exchange rate were used.

Furthermore, instead of levels, the growth rates of the real effective exchange rate, the monetary aggregate M1 and the industrial production index were used in the models.

Before running any regressions, it is important to check whether the time series data is stationary or not. For this purpose both informal (graphical) inspection and formal unit root tests (augmented Dickey-Fuller, Phillips-Perron and Kwiatkowski-Phillips-Schmidt-Shin tests) were performed. From the graphs it was decided which form of each test had to be used (with intercept, with intercept and trend, or without intercept and trend). The null hypothesis in the augmented Dickey-Fuller and Phillips-Perron tests is that there is a unit root and the series is non-stationary; the alternative hypothesis is that there is no unit root and the series is stationary. In the Kwiatkowski-Phillips-Schmidt-Shin test the hypotheses are reversed: the null states that there is no unit root and the series is stationary, and the alternative is that there is a unit root and the series is non-stationary.
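The regression underlying the Dickey-Fuller family of tests, Δy_t = α + ρ·y_{t-1} + ε_t with the unit-root null H0: ρ = 0, can be sketched with a bivariate OLS fit. This is a simplification: it omits the augmentation lags and the trend term, and the resulting t-statistic must be compared with Dickey-Fuller critical values, not with the usual t distribution.

```python
def dickey_fuller_t(y):
    """OLS of dy_t on a constant and y_{t-1}; returns (rho_hat, t_stat).
    A large negative t_stat is evidence against a unit root."""
    x = y[:-1]                              # y_{t-1}
    d = [b - a for a, b in zip(y, y[1:])]   # dy_t
    n = len(d)
    xbar = sum(x) / n
    dbar = sum(d) / n
    sxx = sum((v - xbar) ** 2 for v in x)
    sxd = sum((v - xbar) * (w - dbar) for v, w in zip(x, d))
    rho = sxd / sxx                         # OLS slope
    alpha = dbar - rho * xbar               # OLS intercept
    resid = [w - alpha - rho * v for v, w in zip(x, d)]
    s2 = sum(e ** 2 for e in resid) / (n - 2)
    se_rho = (s2 / sxx) ** 0.5
    return rho, rho / se_rho
```

For a strongly mean-reverting series the slope estimate is far below zero and the t-statistic is large and negative, which is the pattern the ADF results below exhibit for most variables.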

We first consider the augmented Dickey-Fuller test. Based on the low p-values (the p-values for all variables, except that of monetary aggregate growth, are less than 0.05) and the large negative t-statistics (the t-statistic for all variables, except that of monetary aggregate growth, is below the critical value at the 5% level of significance), we reject the null hypothesis that each of the variables individually is non-stationary and accept the alternative that it is stationary at the 5% significance level. The p-value for monetary aggregate growth is 0.0862, and its t-statistic of about -2.6444 is below the critical value at the 10% level of significance, indicating that the variable is stationary at the 10% significance level.

Then the Phillips-Perron test was performed. The p-values of all the variables are small numbers close to zero and the adjusted t-statistics are below the critical value at the 5% level of significance. This leads to a rejection of the null hypothesis of the presence of a unit root.

Finally, the Kwiatkowski-Phillips-Schmidt-Shin test was carried out. Since the values of the LM statistic for all variables are less than the asymptotic critical value at the 1% significance level, the null hypothesis of no unit root cannot be rejected. Therefore, all variables are stationary at the 1% level of significance.

Since all tests agree that all the variables are stationary, we can proceed to the estimation of the equations. Table 1 summarises the results of the unit root tests for all variables; the results of each individual test are included in the Appendix.

Table 1 – Unit root tests

| Variable | ADF t-statistic | ADF CV 1% | PP adj. t-statistic | PP CV 1% | KPSS LM-statistic | KPSS ACV 1% |
| --- | --- | --- | --- | --- | --- | --- |
| i | -4.238802* | -3.467418 | -3.75484* | -2.877544 | 0.388224* | 0.739 |
| dcpi | -4.951145* | -3.467633 | -9.453058* | -2.877636 | 0.683064* | 0.739 |
| y | -5.286933* | -3.466994 | -5.49832* | -2.877544 | 0.031573* | 0.739 |
| dreer | -9.333866* | -2.578018 | -9.33095* | -1.942624 | 0.056378* | 0.739 |
| dm1 | -2.913863** | -3.470427 | -16.0718* | -2.877729 | 0.400899* | 0.739 |
| dipi_n_t | -3.846743** | -4.011352 | -4.67083* | -3.435269 | 0.367708* | 0.739 |
| dvel | -13.8849* | -3.467851 | -18.2474* | -2.877823 | 0.552196* | 0.739 |
| dipi_n | -15.66309* | -2.578167 | -21.5585* | -1.942634 | 0.316907* | 0.739 |
| dm1_d | -2.745183* | -2.579052 | -14.5723* | -1.942634 | 0.252658* | 0.739 |

Notes: In the ADF tests, the lag length is selected automatically based on the Schwarz information criterion. In the Phillips-Perron and Kwiatkowski-Phillips-Schmidt-Shin tests, the Newey-West bandwidth based on the Bartlett kernel is used.

ADF = augmented Dickey-Fuller test

PP = Phillips-Perron test

KPSS = Kwiatkowski-Phillips-Schmidt-Shin test

i = short-term interest rate

dcpi = inflation

y = output gap

dreer = real effective exchange rate growth

dm1 = monetary aggregate growth

dipi_n_t = target nominal output growth

dvel = money velocity growth

dipi_n = nominal output growth

dm1_d = deflated monetary aggregate growth

*Rejection of the unit root null at the 0.01 level of significance.

**Rejection of the unit root null at the 0.05 level of significance.

## Results from the Taylor rule for an open economy

The results of the estimation of the open-economy version of the Taylor rule indicate that only the coefficient of inflation is significant and shows the expected positive sign. The p-value of 0.0610 means that the coefficient of inflation is significant only at significance levels of 6.1% or higher. The coefficient of the output gap shows the expected sign, but its p-value of 0.8219 shows that it is highly insignificant. This could possibly be caused by the fact that the data on the output gap was overestimated for the first periods of the transition phase, or by the fact that the output gap was not one of the objectives of the Central Bank of Russia. The overall effect of the real effective exchange rate is positive (0.401234 - 0.080888 = 0.320346 > 0). This combined coefficient does not show the expected sign, and the high p-values indicate that it is also highly insignificant.

Table 2 – Taylor rule for an open economy

Dependent variable: i

| Explanatory variable | Coefficient |
| --- | --- |
| c | 0.806451 (0.4922) |
| dcpi | 0.910862 (0.0610) |
| y | 0.059372 (0.8219) |
| dreer | 0.401234 (0.2118) |
| dreer(-1) | -0.080888 (0.7143) |
| i(-1) | 0.90969 (0.0000) |
| R-squared | 0.937622 |

Notes: p-values are in parentheses.

i = short-term interest rate

dcpi = inflation

y = output gap

dreer = real effective exchange rate growth

(-1) indicates a first lag

The interpretation of the coefficient of the lagged short-run interest rate of about 0.91 is that the interest rate in the current period equals about 91% of the interest rate in the previous period plus the effect of the other explanatory variables; there is a lot of persistence in the interest rate. The high R-squared of about 0.938 means that 93.8% of the variation in the refinancing rate is explained by the variation in inflation, the output gap and the exchange rate. However, the fact that most of the explanatory variables are insignificant, and that some of them show wrong signs, suggests that the Taylor rule for an open economy does not explain the behaviour of the Central Bank of Russia well. The results of the estimation are shown in Table 2 and the full results are in Appendix B.

## Results from the McCallum rule

The results of the estimation of the original version of the McCallum rule show that the coefficient of target nominal output growth has the expected sign, but is insignificant, since its p-value of 0.574 is clearly higher than 0.05. However, both the coefficient of money velocity growth and that of the change between target nominal output growth and nominal output growth in the previous period show the expected signs. The p-values of these coefficients, 0.0000 and 0.0002 respectively, indicate that they are significant at the 5% significance level. The results of the estimation are shown in Appendix B.

The interpretation of the R-squared of about 0.599 is that 59.9% of the variation in the growth of the monetary aggregate M1 is explained by the variation in the explanatory variables. On the other hand, the insignificance of the coefficient of target nominal output growth may be caused by a model misspecification or an inappropriate use of some of the variables.

Furthermore, in order to remove the effect of price changes from the monetary aggregate data, the M1 variable was deflated with the consumer price index. This adjustment for inflation generates a new series, M1_d, which was then used to calculate the real growth of the monetary aggregate, dm1_d.

A new model was estimated with this deflated monetary aggregate growth as the dependent variable, leaving all the independent variables the same. The results of the estimation are shown in Table 3 and they indicate a better performance of the model; the full results are in the Appendix. The p-values of all coefficients are very small numbers close to zero, which indicates that all explanatory variables are highly significant. Furthermore, the signs of all coefficients are consistent with our expectations.

Table 3 – McCallum rule with deflated monetary aggregate

Dependent variable: dm1_d

| Explanatory variable | Coefficient |
| --- | --- |
| c | 1.757251 (0.0008) |
| dipi_n_t | -1.77451 (0.0000) |
| dvel | -0.77559 (0.0000) |
| dipi_n_t - dipi_n(-1) | 0.329024 (0.0000) |
| R-squared | 0.741536 |

Notes: p-values are in parentheses.

dipi_n_t = target nominal output growth

dvel = money velocity growth

dipi_n = nominal output growth

dm1_d = deflated monetary aggregate growth

(-1) indicates a first lag

The high R-squared of about 0.742 shows that approximately 74.2% of the variation in the monetary aggregate is explained by the model. In addition, compared to the previous estimation the information criteria are now lower, indicating that this model is preferred: the Akaike information criterion in the McCallum rule with the deflated monetary aggregate is about 5.028, compared to 5.28 in the McCallum rule with the original monetary aggregate, and the Schwarz and Hannan-Quinn criteria are 5.1 and 5.058 respectively in the model with the deflated value of M1, against 5.353 and 5.31 in the original version. All this illustrates that the modified version of the McCallum rule is the better model and that it explains the behaviour of the Bank of Russia quite well. However, in order to check whether these results are reliable, a few tests have to be carried out.

## Residual tests

## Testing for serial correlation

Breusch-Godfrey Serial Correlation LM Test

This test is performed in order to see whether there is serial correlation in the residuals. The null hypothesis in this test is that there is no autocorrelation; the alternative is that there is autocorrelation. The number of lags is chosen based on the frequency of the data; since monthly data was used in this estimation, the number of lags is set to 12. The LM test statistic of 44.76869 is clearly higher than the χ²(12) critical value at the 1% significance level of 26.217, so we reject the null hypothesis of no serial correlation up to lag 12 and conclude that serial correlation in the residuals is present. The p-value of the χ²(12) statistic of 0.0000 gives the probability of obtaining such a large test statistic if the null of no serial correlation up to lag 12 were true, which confirms the rejection.

Durbin-Watson test

The null and the alternative hypotheses are the same as in the LM test for serial correlation above. From the estimation output we can see that the Durbin-Watson statistic is 1.484501, which is below the lower bound value of 1.738. Following the decision rules, we reject the null hypothesis of no serial correlation and conclude that there is autocorrelation in the residuals.

Correlogram – Q-statistics

The p-values of the Q-statistic for all 12 lags are approximately equal to zero, which leads to a rejection of the null hypothesis of no autocorrelation and to the conclusion that autocorrelation is present.

All three tests indicate the same result: there is serial correlation in the residuals. This unsatisfactory result is often present in time series data, and if the serial correlation is not corrected, the estimates may not be reliable.

## Testing for heteroskedasticity

The Breusch-Pagan-Godfrey, White, ARCH and Glejser tests were performed in order to check whether the errors are homoskedastic. The LM statistics from all tests are lower than the χ² critical values with the relevant degrees of freedom at the 1% significance level, which indicates that the null hypothesis of no heteroskedasticity cannot be rejected at the 1% level. Therefore heteroskedasticity is not present in this model.

## Testing for normality

Since the probability of the Jarque-Bera statistic is close to zero, we reject the null hypothesis of normally distributed residuals and conclude that the errors do not follow a normal distribution. The full results of all the tests for the McCallum rule are shown in the Appendix.

## Results from the full model

Since we have found that the model suffers from autocorrelation, a new model was estimated in which Newey-West standard errors were computed and lagged values of the growth rates of the monetary aggregate, the money velocity and the change between target nominal output and nominal output were included. This version of the McCallum rule with the deflated monetary aggregate, which also accounts for autocorrelation, is called the full model. The results of the estimation of the full model are shown in Table 4 and the full results are in the Appendix.

Table 4 – Full model

Dependent variable: dm1_d

| Explanatory variable | Coefficient |
| --- | --- |
| c | 0.781932 (0.2768) |
| dipi_n_t | -1.389999 (0.0011) |
| dvel | -0.869925 (0.0000) |
| dipi_n_t - dipi_n(-1) | 0.953875 (0.0000) |
| dm1_d(-1) | 0.773368 (0.0000) |
| dvel(-1) | 0.792873 (0.0000) |
| dipi_n_t(-1) - dipi_n(-2) | 0.116527 (0.0589) |
| R-squared | 0.839044 |

Notes: p-values are in parentheses.

dipi_n_t = target nominal output growth

dvel = money velocity growth

dipi_n = nominal output growth

dm1_d = deflated monetary aggregate growth

(-1) indicates a first lag and (-2) indicates a second lag

The results of the estimation demonstrate that all the coefficients show the expected signs. Although the coefficient of the lagged money velocity growth has a positive sign, the overall effect of velocity growth is still negative, since -0.869925 + 0.792873 = -0.077052 < 0.

In addition, since the p-values of all coefficients, except that of the lagged change between target nominal output growth and nominal output growth, are very small numbers close to zero, these coefficients are statistically significant at the 1% level of significance. The coefficient of the lagged change between target nominal output growth and nominal output growth is significant at the 10% level. The high R-squared of about 0.839 indicates that 83.9% of the variation in the growth of the monetary aggregate is explained by the model. Compared to the previous estimation of the McCallum rule with the deflated monetary aggregate, where the effects of autocorrelation were not taken into account, the information criteria are lower in the full model: the Akaike information criterion is now 4.591, the Schwarz criterion 4.717 and the Hannan-Quinn criterion 4.642, compared to 5.028, 5.1 and 5.058 respectively.

## Residual tests

## Testing for serial correlation

The results of the Breusch-Godfrey serial correlation LM test show that the LM test statistic of 16.9432 is lower than the χ²(12) critical value at the 1% significance level of 26.217, so the null hypothesis of no serial correlation up to lag 12 cannot be rejected. The p-value of the χ²(12) statistic of 0.1517 also indicates that the null hypothesis cannot be rejected at the 1% level of significance. Alternatively, we can look at the Durbin-Watson statistic of 1.997112 and note that it lies between the upper bound value of 1.735 and 4 - 1.735 = 2.265, which leads to the conclusion that the null hypothesis of no autocorrelation cannot be rejected and that the errors are not correlated. Thus, autocorrelation is not present in the full model. The full output of the Breusch-Godfrey serial correlation LM test is shown in the Appendix.

## Testing for heteroskedasticity

The full model also has to be tested for the presence of heteroskedasticity. The null hypothesis in the Breusch-Pagan-Godfrey test is that there is no heteroskedasticity; the alternative hypothesis is that heteroskedasticity is present. Since the LM test statistic of 3.859366 is lower than the χ²(6) critical value at the 1% significance level of 16.812, the null hypothesis of no heteroskedasticity cannot be rejected, leading to the conclusion that heteroskedasticity is not present. The p-value of the LM statistic of 0.6957 also indicates that the null hypothesis of no heteroskedasticity cannot be rejected at the 1% level of significance.

Alternatively, we can apply the White, ARCH and Glejser tests. First, in the White test the LM statistic of 45.57788 is lower than the χ²(27) critical value at the 1% significance level of 46.963, so the null hypothesis of homoskedasticity cannot be rejected. Second, in the ARCH test the LM statistic of 10.61272 is lower than the χ²(12) critical value at the 1% significance level of 26.217, indicating that the null hypothesis of no heteroskedasticity cannot be rejected. Third, in the Glejser test the LM statistic of 2.172672 is lower than the χ²(6) critical value at the 1% significance level of 16.812, showing that the null hypothesis of homoskedasticity cannot be rejected.

As we can see, all the tests performed demonstrate that heteroskedasticity is not present in the full model and no further action is required. The full output of the tests is shown in Appendix F.

The correlogram of squared residuals was also examined. The null and the alternative hypotheses are the same as in the heteroskedasticity tests above. The p-values show that for all 12 lags the null hypothesis cannot be rejected at the 1% significance level, so heteroskedasticity is not present. This result is the same as the one obtained from the Breusch-Pagan-Godfrey test, so we can conclude that heteroskedasticity is not present and no further action is required.

## Testing for normality

The histogram demonstrates that the residuals are not symmetric around zero and that the distribution is strongly peaked. In addition, there are some large gaps and the histogram is not exactly bell-shaped. Yet examining the histogram is not a formal test.

The Jarque-Bera test is used for this purpose. The null hypothesis is that the residuals are normally distributed and the alternative is that they are not. The large Jarque-Bera statistic of 646.0628 is much higher than the χ²(2) critical value of 9.21 at the 1% level, which leads to a rejection of the null hypothesis of normally distributed residuals. The same conclusion can be drawn from the p-value of zero, which also indicates that the residuals are not normally distributed. The full output of the test is shown in the Appendix.

This unsatisfactory result indicates that any hypothesis test could lead to misleading conclusions. On the other hand, we must point out that because a rather large sample was used in the estimation, confidence intervals and tests remain applicable even though the residuals do not follow a normal distribution[ 4 ]. Usually a rejection of the null hypothesis of normally distributed errors is caused by a few very extreme residuals. In such cases one way to improve the normality assumption is to introduce dummy variables for these outliers into the model. The residuals were plotted against time, and several extreme values were observed for December 1999, April 2000, November 2008 and January 2009. However, we must point out that by using dummy variables for the outliers we lose valuable information.
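The impulse dummies for the four outlier months can be constructed as follows (a sketch; the dumYYmMM names follow the labels used in the estimation, while the 'YYYY-MM' date strings are an assumption about how the monthly sample is indexed):

```python
def impulse_dummy(dates, outlier):
    """1 in the outlier month, 0 otherwise; dates are 'YYYY-MM' strings."""
    return [1 if dt == outlier else 0 for dt in dates]

# Monthly sample labels for January 1995 - December 2009 (180 observations)
dates = ['%d-%02d' % (y, m) for y in range(1995, 2010) for m in range(1, 13)]

dum99m12 = impulse_dummy(dates, '1999-12')
dum00m04 = impulse_dummy(dates, '2000-04')
dum08m11 = impulse_dummy(dates, '2008-11')
dum09m01 = impulse_dummy(dates, '2009-01')
```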

## Results from the full model with dummy variables for outliers

Even though we discard some useful information, we include the dummies for the outliers and estimate a new model. The extreme values are one-time events caused either by changes in government policy or by a financial crisis.

The results are presented in Table 5. They demonstrate that all coefficients have signs in line with prior expectations. The p-values of all coefficients are very small numbers close to zero, and thus lower than 0.05, so we can reject the individual null hypotheses that each partial regression coefficient is zero and conclude that all independent variables individually have an impact on the growth of the monetary aggregate. The probability of the F-statistic is also close to zero, which leads to a rejection of the null hypothesis that all coefficients are jointly insignificant. Therefore, the coefficients together also influence the dependent variable.

Table 3 – Full model with dummy outliers

Dependent variable – dm1_d

| Explanatory variable | Coefficient |
| --- | --- |
| c | 1.519053 (0.0010) |
| dipi_n_t | -1.898059 (0.0000) |
| dvel | -0.878733 (0.0000) |
| dipi_n_t-dipi_n(-1) | 0.921643 (0.0000) |
| dm1_d(-1) | 0.649590 (0.0000) |
| dvel(-1) | 0.680483 (0.0000) |
| dipi_n_t(-1)-dipi_n(-2) | 0.148092 (0.0000) |
| dum99m12 | 15.03787 |
| dum00m04 | 8.411983 |
| dum08m11 | -8.293455 |
| dum09m01 | -9.562341 |
| R-squared | 0.912867 |

Notes: p-values are in parentheses.

dipi_n_t = target nominal output growth

dvel = money velocity growth

dipi_n = nominal output growth

dm1_d = deflated monetary aggregate growth

dum99m12 = 1 during December 1999 and 0 otherwise

dum00m04 = 1 during April 2000 and 0 otherwise

dum08m11 = 1 during November 2008 and 0 otherwise

dum09m01 = 1 during January 2009 and 0 otherwise

(-1) indicates a first lag and (-2) indicates a second lag

Compared to the previous full model without outlier dummies, this model shows a better performance. First, the adjusted R-squared has increased from 83.3 % to 90.76 %. Second, the values of the information criteria have decreased from 4.591, 4.717 and 4.642 to 4.023, 4.222 and 4.104 for the Akaike, Schwarz and Hannan-Quinn information criteria respectively. Finally, the standard error of the regression has dropped from 2.356 to 1.754. All of this indicates a better performance of the full model with dummy outliers.

## Residual tests

## Testing for serial correlation

The output from the Breusch-Godfrey serial correlation LM test shows that the LM test statistic of 21.25038 is lower than the χ²(12) critical value at the 1 % significance level of 26.217, so the null hypothesis of no autocorrelation cannot be rejected. The same conclusion can be reached by examining the probability: the χ²(12) p-value of 0.0468 indicates that the null hypothesis cannot be rejected at the 1 % level of significance. Alternatively, we can examine the Durbin-Watson statistic of 2.015513 and observe that it lies between the upper bound value of 1.779 and 4 − 1.779 = 2.221. This implies that the null hypothesis of no serial correlation cannot be rejected and that the residuals are not correlated. The full output from the Breusch-Godfrey serial correlation LM test is shown in the Appendix. Table 4 presents a summary of the autocorrelation tests on all three estimated versions of the McCallum rule.


Table 4 – Autocorrelation tests

| Model | BG LM stat | BG CV 1% | DW stat | dL | dU | Result |
| --- | --- | --- | --- | --- | --- | --- |
| mccallum_6 | 44.76869 (0.0000) | 26.217 | 1.484501 | 1.643 | 1.704 | Autocorrelation is present |
| ful_model | 16.9432 (0.1517) | 26.217 | 1.997112 | 1.613 | 1.735 | No autocorrelation |
| ful_model_dum | 21.25038 (0.0468) | 26.217 | 2.015513 | 1.571 | 1.779 | No autocorrelation |

Notes: p-values are in parentheses.

BG = Breusch-Godfrey serial correlation LM test

DW = Durbin-Watson d test

mccallum_6 = McCallum rule with deflated monetary aggregate

ful_model = McCallum rule with deflated monetary aggregate, accounting for autocorrelation

ful_model_dum = McCallum rule with deflated monetary aggregate, accounting for autocorrelation, with dummies for outliers

## Testing for heteroskedasticity

The result from the Breusch-Pagan-Godfrey test shows an LM test statistic of 5.785371, which is lower than the χ²(10) critical value at the 1 % significance level of 23.209. Therefore, the null hypothesis of homoskedastic errors cannot be rejected. The p-value of the LM statistic of 0.833 confirms that the null hypothesis of no heteroskedasticity cannot be rejected at the 1 % significance level.

Alternatively, we can use the White, ARCH and Glejser tests. First, in the White test the LM statistic of 42.03407 is lower than the χ²(31) critical value at 1 % of 50.892. Second, in the ARCH test the LM statistic of 15.87044 is lower than the χ²(12) critical value at 1 % of 26.217. Third, in the Glejser test the LM statistic of 7.293319 is lower than the χ²(10) critical value at 1 % of 23.209. In each case the null hypothesis of no heteroskedasticity cannot be rejected. All the tests performed show that heteroskedasticity is not present in the model. The full output from the tests is shown in the Appendix. A summary of the heteroskedasticity tests carried out on all three versions of the McCallum rule is presented in Table 5.

Table 5 – Heteroskedasticity tests

| Model | BPG LM stat | BPG CV 1% | White LM stat | White CV 1% | ARCH LM stat | ARCH CV 1% | Glejser LM stat | Glejser CV 1% | Result |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mccallum_6 | 7.417217 (0.0597) | 11.345 | 58.55919 (0.0000) | 21.666 | 16.39903 (0.1736) | 26.217 | 2.239718 (0.5242) | 11.345 | No heteroskedasticity |
| ful_model | 3.859366 (0.6957) | 16.812 | 45.57788 (0.0141) | 46.963 | 10.61272 (0.5624) | 26.217 | 2.172672 (0.9032) | 16.812 | No heteroskedasticity |
| ful_model_dum | 5.785371 (0.8330) | 23.209 | 42.03407 (0.0893) | 50.892 | 15.87044 (0.1972) | 26.217 | 7.293319 (0.6975) | 23.209 | No heteroskedasticity |

Notes: p-values are in parentheses.

BPG = Breusch-Pagan-Godfrey heteroskedasticity test

White = White's general heteroskedasticity test

ARCH = Autoregressive conditional heteroskedasticity test

Glejser = Glejser heteroskedasticity test

mccallum_6 = McCallum rule with deflated monetary aggregate

ful_model = McCallum rule with deflated monetary aggregate, accounting for autocorrelation

ful_model_dum = McCallum rule with deflated monetary aggregate, accounting for autocorrelation, with dummies for outliers

## Testing for normality

The Jarque-Bera statistic has decreased drastically compared to the previous model; it is now 20.38393. The errors are still not exactly normally distributed, but the normality assumption has improved substantially thanks to the introduction of the outlier dummies into the model. A summary of the normality tests carried out on all three versions of the McCallum rule is presented in Table 6.

Table 6 – Normality tests

| Model | JB stat | CV 1% |
| --- | --- | --- |
| mccallum_6 | 514.0552 (0.0000) | 9.21 |
| ful_model | 646.0628 (0.0000) | 9.21 |
| ful_model_dum | 20.38393 (0.000037) | 9.21 |

Notes: p-values are in parentheses.

mccallum_6 = McCallum rule with deflated monetary aggregate

ful_model = McCallum rule with deflated monetary aggregate, accounting for autocorrelation

ful_model_dum = McCallum rule with deflated monetary aggregate, accounting for autocorrelation, with dummies for outliers

Word count: 3364