### How do you interpret results from unit root tests?

I have to do some unit root tests for a project, and I'm unsure how to interpret the results (which is what I have been asked to do).

Here is one of my results:

```
. dfuller Demand

Dickey-Fuller test for unit root                   Number of obs   =        50

                               ---------- Interpolated Dickey-Fuller ---------
                  Test         1% Critical       5% Critical      10% Critical
               Statistic           Value             Value             Value
------------------------------------------------------------------------------
 Z(t)             -1.987            -3.580            -2.930            -2.600
------------------------------------------------------------------------------
MacKinnon approximate p-value for Z(t) = 0.2924
```

What do I say about the critical values and the p-value results?

Hmm, how can I interpret a unit root test? Can you explain the relationship between level with intercept, first difference with intercept, level with intercept + trend, and first difference with intercept + trend? I am very confused about how to interpret the output of a unit root test.

Welcome to the site, @fathin. This isn't an answer to the OP's question. Please only use the "Your Answer" field to provide answers. If you have your own question, click the Ask Question button.

If you have a new question, please ask it by clicking the Ask Question button. Include a link to this question if it helps provide context.

This tests the null hypothesis that Demand follows a unit root process. You usually reject the null when the p-value is less than or equal to a specified significance level, often 0.05 (5%), 0.01 (1%), or even 0.1 (10%). Your approximate p-value is 0.2924, so you would fail to reject the null in all these cases, but that does not imply that the null hypothesis is true. The data are merely consistent with it.

The other way to see this is that your test statistic is smaller (**in absolute value**) than the 10% critical value. If you observed a test statistic like -4, then you could reject the null and claim that your variable is stationary. This may be the more familiar way if you remember that you reject when the test statistic is "extreme". I find the absolute-value framing a bit confusing, so I prefer to look at the p-value. But you aren't done yet. Some things to worry about and try:

- You don't have any lags here. There are three schools of thought on how to choose the right number. One is to use the frequency of the data to decide (4 lags for quarterly, 12 for monthly). Two is to choose some number of lags that you are confident is larger than needed, and trim away the longest lag, one by one, as long as it is insignificant. This is a stepwise approach and can lead you astray. Three is to use the modified DF test (`dfgls` in Stata), which includes estimates of the optimal number of lags to use. This test is also more powerful, in the statistical sense of that word.
- You also don't have drift or trend terms. If a graph of the data shows an upward trend over time, add the `trend` option. If there's no trend but you have a nonzero mean, the default option you have is fine. It might help if you post a graph of the data.

If you see this, that would be great. In testing, do you simply convert everything into absolute values and then check whether your t-value is less than your critical value?

@JackArmstrong Unfortunately, I have no idea what you are asking.

I am talking about the Dickey-Fuller test. Take the t-statistic you computed and convert it to absolute value. Then take your critical value, based on the number of observations and your level of significance, and put that in absolute value too. Then compare the two and hope that the t-statistic…

@JackArmstrong I think the details depend on the options you specified for the test. I would just look at the p-value.

I agree. I just flipped through a couple of books I have, and they say nothing about absolute values when using the Dickey-Fuller test. In testing with t-values, though, you want abs(t-stat) > t-critical, and the coefficient to have the expected sign, in order to reject the null.

You accept the null of a unit root when the test statistic is larger algebraically (closer to zero) than the displayed critical values. If taking the absolute value does not help you determine that, by all means, there's no need to use it.

Since this test generally produces negative t-statistics, that works. If for some reason you end up with a positive t-statistic that is lower than your critical value, are you saying you accept the null? Where did you find what you said? Also, I think I am arguing that you should not take the absolute value, especially when the t-statistic is negative. You can always compare which one is larger to begin with. When you take absolute values, the t-statistic is always larger because it is now positive, but it could also be further from zero than the critical value. You lose information, I think.

@JackArmstrong For instance, from Wiki entry for ADF, "The augmented Dickey–Fuller (ADF) statistic, used in the test, is a negative number."
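The whole absolute-value debate above reduces to a single one-sided comparison. A minimal sketch (the function name is my own, purely illustrative): reject the unit-root null only when the statistic lies to the *left* of the critical value, with no absolute values taken anywhere.

```python
def df_reject(stat: float, crit: float) -> bool:
    """Left-tailed Dickey-Fuller decision rule: reject the unit-root
    null when the test statistic is more negative than the critical value."""
    return stat < crit

# Values from the dfuller output above, at the 5% level:
print(df_reject(-1.987, -2.930))  # False: fail to reject the unit-root null
# A hypothetical statistic of -4 would reject:
print(df_reject(-4.0, -2.930))    # True
```

Comparing `abs(stat) > abs(crit)` happens to give the same answer when both numbers are negative, but as the comment notes, it goes wrong for a positive statistic; the signed comparison never does.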

Addition to @Dimitriy's answer:

Stata runs the OLS regression for the ADF test in first-difference form. So the null is that the coefficient on the lagged level of the dependent variable (`Demand` here) on the right-hand side is zero (you can use the `regress` option to confirm that it is running the regression in first-difference form). The alternative is that the coefficient is less than zero (a one-tailed test). So, when you compare the computed test statistic with the critical value, you reject the null if the computed value is smaller than the critical value (note that this is a one-tailed, left-tailed test). In your case, -1.987 is not smaller than -3.580 (the 1% critical value). [Try not to use absolute values, because those usually apply to two-tailed tests.] So we do not reject the null at 1%. If you go on like that, you will see that the null is also not rejected at 5% or 10%. This is confirmed by `MacKinnon approximate p-value for Z(t) = 0.2924`, which says that the null would be rejected only at a significance level of about 30%, which is quite high compared with the traditional levels (1, 5, and 10%).

**More theoretical:** Under the null, Demand follows a unit root process, so we can't apply the usual central limit theorem. We instead need the functional central limit theorem. In other words, the test statistic does not follow a `t` distribution but the Dickey-Fuller (tau) distribution, so we can't use critical values from the `t` distribution.

STATA

*(Translated from the Spanish original.)*

If $z > z_{0.05}$, where $z_{0.05}$ is the 5% critical value of the test, we "accept" $H_0$ that the series has a unit root. If there are unit roots, the series is not stationary. Accordingly, if the $p$-value of $z(t)$ is not significant, the series is not stationary.

If $z \leq z_{0.05}$, we reject the null hypothesis $H_0$ that the series has a unit root. If there are no unit roots, we conclude the series is stationary. Likewise, a significant $p$-value for $z(t)$ leads us to conclude that the series is stationary.

License under CC-BY-SA with attribution

Content dated before 6/26/2020 9:53 AM

usεr11852 8 years ago

In case this helps: http://stats.stackexchange.com/questions/29121/intuitive-explanation-of-unit-root The whole thread is pretty epic.