Visualise panel data regression with ExPanDaR package in R

The ExPanDaR package is an example of a shiny app.

What is a shiny app, you ask? Click to look at a quick YouTube explainer. It’s basically a handy GUI for R.

When we feed a panel data.frame into the ExPanD() function, a new window pops up from the R IDE (in my case, RStudio), and we can interactively toggle various options and settings to run a range of statistical and visualisation analyses.

Click here to see how to convert your data.frame to a pdata.frame object with the plm package.
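As a minimal sketch of that conversion (assuming your raw data.frame is called mil_df and has country and year identifier columns):

library(plm)

# index the panel by the unit and time identifiers
mil_pdf <- pdata.frame(mil_df, index = c("country", "year"))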

Be careful that your pdata.frame is not too large, with too many variables in the mix. This will make ExPanD upset enough to crash. Which, of course, I learned the hard way.

Also I don’t know why there are random capitalizations in the PaCkaGe name. Whenever I read it, I think of that SpongeBob meme.

If anyone knows why they capitalised the package this way, please let me know!

So to open up the new window, we just need to feed the pdata.frame into the function:

ExPanD(mil_pdf)

On my computer, I got error messages in the graphing sections because I had an old version of the Cairo package. To rectify this, I had to first install a source version of Cairo and restart my R session. Then the error message gods were placated and they went away.

install.packages("Cairo", type="source")

Then press command + shift + F10 to restart the R session.

library(Cairo)

You may not have this problem, so just ignore this step if you have an up-to-date version of the necessary packages.

When the new window opens up, the first section allows you to filter subsections of the panel data.frame, similar to the filter() function in the dplyr package.

For example, I can look at just the year 1989:
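For reference, the equivalent operation outside the GUI would be something like this dplyr sketch (assuming the raw data.frame mil_df from above has a year column):

library(dplyr)

# subset the panel to a single year
mil_1989 <- mil_df %>% filter(year == 1989)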

But let’s look at the full sample.

We can toggle between variables to look at mean scores for certain variables across different groups. For example, I look at physical integrity scores across regime types:

  • Purple plot: closed autocracy
  • Turquoise plot: electoral autocracy
  • Khaki plot: electoral democracy
  • Peach plot: liberal democracy

The plots show that liberal democracies have a high mean physical integrity score and little variance. With the closed and electoral autocracies, however, the variance is much greater.

We can look at a visualisation of the correlation matrix between the variables in the dataset.

Next we can look at a scatter plot, with the option of a loess smoother line, to graph the relationship between democracy scores and physical integrity scores. Bigger dots indicate a larger GDP level.
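Roughly speaking, this is the kind of plot ExPanD builds. A hand-rolled ggplot2 sketch of the same idea, where the column names democracy_score, physical_integrity and gdp are my assumptions, would be:

library(ggplot2)

# scatter plot with a loess smoother; point size maps to GDP
ggplot(mil_df, aes(democracy_score, physical_integrity, size = gdp)) +
  geom_point(alpha = 0.5) +
  geom_smooth(method = "loess", show.legend = FALSE)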

Last we can run regression analysis, and add different independent variables to the model.

We can add fixed effects.

And we can subset the model by groups.

The first column, the full sample, covers all regions in the dataset. The remaining columns subset the model by region:

  • Column 1:
  • Column 2: Post-Soviet countries
  • Column 3: Latin America
  • Column 4: Africa
  • Column 5: Europe and North America
  • Column 6: Asia

Choose model variables by AIC in a stepwise algorithm with the MASS package in R

Running a regression model with too many variables – especially irrelevant ones – will lead to a needlessly complex model. Stepwise selection can help choose the best variables to include.

Packages you need:

library(MASS)
library(stargazer)

First, choose a model and throw in every variable you think has an impact on your dependent variable!

I hear the voice of my undergrad professor in my ear: "DO NOT go for the 'throw spaghetti at the wall and just see what STICKS' approach." A cardinal sin.

We must choose variables because we have some theoretical rationale for any potential relationship. Or else we could end up stumbling on spurious relationships.

Like the one between Nick Cage movies and incidence of pool drowning.


However …

… beyond just using our sound theoretical understanding of the complex phenomena we study in order to choose our model variables …

… one additional way to supplement and gauge which variables add to – or more importantly omit from – the model is to choose the one with the smallest amount of error.

We can operationalise this as the model with the lowest Akaike information criterion (AIC).

AIC is an estimator of prediction error and is similar in spirit to the adjusted R-squared measures we see in our regression output summaries.

It effectively penalises us for adding more variables to the model.

Lower scores can indicate a more parsimonious model, relative to a model fit with a higher AIC. It can therefore give an indication of the relative quality of statistical models for a given set of data.

As a caveat, we can only compare AIC scores with models that are fit to explain variance of the same dependent / response variable.
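For reference, AIC = 2k − 2·ln(L), where k is the number of estimated parameters and L is the model’s maximised likelihood. In R, we can pull the score for any fitted model with the base AIC() function. A quick sketch, using mtcars as a stand-in:

# two candidate models of the same response variable
small_model <- lm(mpg ~ wt, data = mtcars)
big_model <- lm(mpg ~ wt + hp + disp + drat, data = mtcars)

AIC(small_model, big_model) # the model with the lower AIC is preferred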

data(mtcars)
summary(car_model <- lm(mpg ~., data = mtcars))

With our model, we can now feed it into the stepwise function. For the direction argument, you can choose between backward and forward stepwise selection:

  • Forward steps: start the model with no predictors (just an intercept) and search through all the single-variable models, adding variables one by one, until we find the best one (the one that results in the lowest residual sum of squares).
  • Backward steps: start with all the predictors and remove the least statistically significant variable (the one with the largest p-value) one by one until we find the lowest AIC.

Backward stepwise is generally better because starting with the full model has the advantage of considering the effects of all variables simultaneously.

Unlike backward elimination, forward stepwise selection is more suitable in settings where the number of variables is bigger than the sample size.

So, tl;dr: unless the number of candidate variables is greater than the sample size (such as when dealing with genes), a backward stepwise approach is the default choice.
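A minimal backward sketch with the same function looks like this:

# backward elimination: start from the full model and drop variables
step_back <- stepAIC(car_model, direction = "backward", trace = FALSE)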

You can also choose direction = "both":

step_car <- stepAIC(car_model, trace = TRUE, direction = "both")

If you set trace = TRUE, R prints out all the steps.

I’ll show just the last step of the output.

The goal is to have the combination of variables that has the lowest AIC or lowest residual sum of squares (RSS).

The last line is the final model that we assign to step_car object.

stargazer(car_model, step_car, type = "text")

We can see that the stepwise model has only three variables compared to the ten variables in my original model.

And even with far fewer variables, the R2 has decreased by only an insignificant amount. In fact, the adjusted R2 increased, because we are no longer being penalised for throwing in so many unnecessary variables.

So we can quickly find a model that loses no explanatory power but is far more parsimonious.

Plus, in the original model only one variable is significant, but in the stepwise model all three of the variables are significant.

Alternatively, we can run and visualise the stepwise selection with the olsrr package:

library(olsrr)

step_plot <- ols_step_both_aic(car_model)
plot(step_plot)

Check linear regression residuals are normally distributed with the olsrr package in R

Packages we will need:

library(olsrr)
library(stargazer)

One core assumption of linear regression analysis is that the residuals of the regression are normally distributed.

When the normality assumption is violated, interpretation and inferences may not be reliable or not at all valid.

So it is important we check this assumption is not violated.

As well as the residuals being normally distributed, we must also check that the residuals have the same variance (i.e. homoskedasticity). Click here to find out how to check for homoskedasticity, and then, if there is a problem with the variance, click here to find out how to fix heteroskedasticity (which means the residuals have a non-random pattern in their variance) with the sandwich package in R.

There are three ways to check that the error in our linear regression has a normal distribution (checking for the normality assumption):

  • plots or graphs such as histograms, boxplots or Q-Q plots,
  • examining skewness and kurtosis indices, and
  • formal normality tests.
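As a quick sketch of the first two options (using mtcars as a stand-in, and assuming the moments package for the skewness and kurtosis indices):

library(moments) # assumed package for skewness/kurtosis

m <- lm(mpg ~ wt, data = mtcars) # any fitted lm object

hist(resid(m)) # histogram of the residuals
qqnorm(resid(m)); qqline(resid(m)) # Q-Q plot

skewness(resid(m)) # close to 0 for normal residuals
kurtosis(resid(m)) # close to 3 for normal residuals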

So let’s start with a model. I will try to model what factors determine a country’s propensity to engage in war in 1995. The factors I throw in are the number of conflicts occurring in bordering states around the country (bordering_mid), the democracy score of the country and the military expenditure budget of the country, logged (exp_log).

summary(war_model <- lm(mid_propensity ~ bordering_mid + democracy_score + exp_log, data = military))
stargazer(war_model, type = "text")

So now we have our simple model, and we can check whether its residuals are normally distributed. Insert the model into the following function. This will print out four formal tests that run all the complicated statistical tests for us in one step!

ols_test_normality(war_model)

Luckily, in this model, the p-value for all the tests (except for the Kolmogorov-Smirnov, which is juuust on the border) is greater than 0.05, so we cannot reject the null hypothesis that the errors are normally distributed. Good to see.

Which of the normality tests is the best?

A paper by Razali and Wah (2011) tested all these formal normality tests with 10,000 Monte Carlo simulations of sample data generated from alternative distributions that follow symmetric and asymmetric distributions.

Their results showed that the Shapiro-Wilk test is the most powerful normality test, followed by the Anderson-Darling test and the Kolmogorov-Smirnov test. Their study did not look at the Cramer-Von Mises test.

The results of this study echo the previous findings of Mendes and Pala (2003) and Keskin (2006) in support of Shapiro-Wilk test as the most powerful normality test.

However, they emphasised that the power of all four tests is still low for small sample sizes. The common threshold is any sample below thirty observations.

We can visually check the residuals with a Residual vs Fitted Values plot.

plot(war_model)

To interpret, we look to see how straight the red line is. With our war model, it deviates quite a bit but it is not too extreme.

The Q-Q plot shows the residuals falling mostly along the diagonal line, though they deviate a little near the top.

So our model has relatively normally distributed residuals, and we can trust the regression model results without much concern!

References

Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21-33.

Check for multicollinearity with the car package in R

Packages we will need:

install.packages("car")
library(car)

When one independent variable is highly correlated with another independent variable (or with a combination of independent variables), the marginal contribution of that independent variable is influenced by other predictor variables in the model.

And so, as a result:

  • Estimates for regression coefficients of the independent variables can be unreliable.
  • Tests of significance for regression coefficients can be misleading.

To check for multicollinearity problems in our model, we need the vif() function from the car package in R. VIF stands for variance inflation factor. It measures how much the variance of any one of the coefficients is inflated due to multicollinearity in the overall model.
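Under the hood, the VIF for predictor j is 1 / (1 - R2_j), where R2_j is the R-squared from regressing predictor j on all the other predictors. A minimal hand-rolled illustration (using mtcars as a stand-in):

# VIF for wt, computed by hand
r2_wt <- summary(lm(wt ~ hp + disp, data = mtcars))$r.squared
1 / (1 - r2_wt)

# matches the wt entry from the car package
car::vif(lm(mpg ~ wt + hp + disp, data = mtcars))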

As a rule of thumb, a VIF score over 5 is a problem. A score over 10 should be remedied, and you should consider dropping the problematic variable from the regression model or creating an index of all the closely related variables.

This blog post will look only at the VIF score. Click here to look at how to interpret various other multicollinearity tests in the mctest package in addition to the VIF score.

Back to our model: I want to know whether high levels of clientelism, high levels of vote buying and low democracy scores lead to executive embezzlement.

So I fit a simple linear regression model (and look at the output with the stargazer package):

summary(embezzlement_model_1 <- lm(executive_embezzlement ~ clientelism_index + vote_buying_score + democracy_score, data = data_2010))

stargazer(embezzlement_model_1, type = "text")

I suspect that the clientelism and vote buying variables will be highly correlated. So let’s run a test of multicollinearity to see if there are any problems.

car::vif(embezzlement_model_1)

The VIF scores for the three independent variables are:

Both the clientelism index and the vote buying variables are very high, and the best remedy is to remove one of them from the regression. Since vote buying is considered one aspect of a clientelist regime, it is probably overlapping with some of the variance in the embezzlement score that the clientelism index is already explaining in the model.

So we re-run the regression without the vote buying variable.

summary(embezzlement_model_2 <- lm(executive_embezzlement ~ clientelism_index + democracy_score, data = data_2010))
stargazer(embezzlement_model_1, embezzlement_model_2, type = "text")

Comparing the two regressions:

And running a VIF test on the second model without the vote buying variable:

car::vif(embezzlement_model_2)

These scores are far below 5 so there is no longer any big problem of multicollinearity in the second model.

Click here to quickly add VIF scores to our regression output table in R with jtools package.

Plus, looking at the adjusted R2, which lets us compare the two models, we see that the difference is very small, so we did not lose much predictive power in dropping a variable. Rather, we have minimised the issue of highly correlated independent variables and the resulting inability to tease out the real relationships with our dependent variable of interest.

tl;dr: As a rule of thumb, a VIF score over 5 is a problem. A score over 10 should be remedied (and you should consider dropping the problematic variable from the regression model or creating an index of all the closely related variables).

Click here to run stepwise regression analysis to help decide which problematic variables we can drop from our model (based on AIC scores)

Correct for heteroskedasticity in OLS with sandwich package in R

Packages we will need:

library(sandwich)
library(stargazer)
library(lmtest)

If our OLS model demonstrates a high level of heteroskedasticity (i.e. when the error term of our model is not randomly distributed across observations and there is a discernible pattern in the error variance), we run into problems.

Why? Because this means OLS will use sub-optimal estimators based on incorrect assumptions, and the standard errors computed using these flawed least squares estimators are more likely to be under-valued.

Since standard errors are necessary to compute our t-statistic and arrive at our p-value, these inaccurate standard errors are a problem.

Click here to check for heteroskedasticity in your model with the lmtest package.

To correct for heteroskedasticity in your model, you need the sandwich package and the lmtest package to employ the vcovHC() function.


First, let’s fit a simple OLS regression.

summary(free_exp_model <- lm(freedom_expression ~ free_elections + deliberative_index, data = data_1990))

We can see that there is a small star beside the main independent variable of interest! Success!


We have significance.

Thus, we could say that the more free and fair the elections a country has, the higher its mean freedom of expression index score.

This ties in with a very minimalist understanding of democracy. If a country has elections and the populace can voice their choice of leadership, this will help set the scene for a more open society.

However, it is naive to look only at the p-value of any given coefficient in a regression output. If we run some diagnostic analyses and look at the relationship graphically, we may need to re-examine this seemingly significant relationship.

Can we trust the 0.087 standard error score that our OLS regression calculated? Is it based on sound assumptions?


First let’s look at the residuals. Can we assume that the variance of error is equal across all observations?
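We can draw the diagnostic plots for the model with base R; the first of the four is the Residuals vs Fitted plot:

plot(free_exp_model)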

If we examine the residuals (the first graph), we see that there is actually a tapered fan-like pattern in the error variance. As we move across the x axis, the variance along the y axis gets continually smaller and smaller.

The error does not look random.


Let’s run a Breusch-Pagan test (from the lmtest package) to check our suspicion of heteroskedasticity.

lmtest::bptest(free_exp_model)

We can reject the null hypothesis that the error variance is homoskedastic.

So the model does suffer from heteroskedasticity. We cannot trust those stars in the regression output!


In order to fix this and make our p-values more accurate, we need to install the sandwich package to feed the vcovHC adjustment into the coeftest() function.

vcovHC stands for variance covariance Heteroskedasticity Consistent.
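So the minimal corrected output is simply the model wrapped in coeftest() with an HC-adjusted covariance matrix, for example:

coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC3"))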

With the stargazer package (which prints out all the models in one table), we can compare the free_exp_model alone with no adjustment, then four different variations of the vcovHC adjustment using different formulae (as indicated in the type argument below).

stargazer(free_exp_model,
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC0")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC1")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC2")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC3")),
          type = "text")

Looking at the standard errors in the (brackets) across the OLS and the coeftest models, we can see that the standard errors are almost double those from the original OLS regression.

There is only a tiny difference between the different Heteroskedasticity Consistent (HC) types.

The significant p-value disappears from the free and fair elections variable when we correct with the vcovHC adjustment.


The actual coefficient stays the same regardless of whether we use no correction or any one of the correction arguments.

Which HC estimator should I use in my vcovHC() function?

The default in the sandwich package is HC3.

STATA users will be familiar with HC1, as it is the default robust standard error correction when you add robust at the end of the regression command.

The difference between them is not very large.

The estimator HC0 was suggested in the econometrics literature by White in 1980 and is justified by asymptotic arguments.

For small sample sizes, the standard errors from HC0 are quite biased, usually downward, and this results in overly liberal inferences in regression models (Bera, Suprayitno & Premaratne, 2002). But with HC0, the bias shrinks as your sample size increases.

The estimator types HC1, HC2 and HC3 were put forward by MacKinnon and White (1985) to improve the performance in small samples.

Long and Ervin (2000) furthermore argue that HC3 provides the best performance in small samples, as it gives less weight to influential observations in the model.

In our freedom of expression regression, the HC3 estimate was the most conservative in its standard error calculations. However, the difference between the approaches did not change the conclusion: ultimately, the claim that the main independent variable of interest in this analysis – free and fair elections – explains variance in the dependent variable – freedom of expression – does not find evidence in the model.

Click here to read an article by Hayes and Cai (2007) which discusses the matrix formulae and empirical differences between the different calculation approaches taken by the different types. Unfortunately it is all ancient Greek to me.

References

Bera, A. K., Suprayitno, T., & Premaratne, G. (2002). On some heteroskedasticity-robust estimators of variance-covariance matrix of the least-squares estimators. Journal of Statistical Planning and Inference, 108(1-2), 121-136.

Hayes, A. F., & Cai, L. (2007). Using heteroskedasticity-consistent standard error estimators in OLS regression: An introduction and software implementation. Behavior Research Methods, 39(4), 709-722.

Long, J. S., & Ervin, L. H. (2000). Using heteroscedasticity consistent standard errors in the linear regression model. The American Statistician, 54(3), 217-224.

MacKinnon, J. G., & White, H. (1985). Some heteroskedasticity-consistent covariance matrix estimators with improved finite sample properties. Journal of Econometrics, 29(3), 305-325.

Check for heteroskedasticity in OLS with lmtest package in R

One core assumption when calculating ordinary least squares regressions is that all the random variables in the model have equal variance around the best fitting line.

Essentially, when we run an OLS, we expect that the error terms have no fan pattern.

Example of homoskedasticity

So let’s look at an example of this assumption being satisfied. I run a simple regression to see whether there is a relationship between media censorship and civil society repression in 178 countries in 2010.

ggplot(data_2010, aes(media_censorship, civil_society_repression)) +
  geom_point() +
  geom_smooth(method = "lm") +
  geom_text(size = 3, nudge_y = 0.1, aes(label = country))

If we run a simple regression:

summary(repression_model <- lm(media_censorship ~ civil_society_repression, data = data_2010))
stargazer(repression_model, type = "text")

This is pretty common sense; a country that represses its citizens in one sphere is more likely to repress in other areas. In this case repressing the media correlates with repressing civil society.

We can plot the residuals of the above model with the autoplot() function from the ggfortify package.

library(ggfortify)
autoplot(repression_model)

Nothing unusual appears to jump out at us with regard to evidence for heteroskedasticity!

In the first Residuals vs Fitted plot, we can see that the blue line does not drastically diverge from the dotted line (which indicates a residual value of 0).

The third plot, Scale-Location, shows again that there are no drastic instances of heteroskedasticity. We want to see the blue line stay relatively horizontal, and here there is no clear pattern in the distribution of the residual points.

In the Residual vs. Leverage plot (plot number 4), the high leverage observation 19257 is North Korea! A usual suspect when we examine model outliers.

While it is helpful to visually see the residuals plotted out, a more objective test can help us find whether the model is indeed free from heteroskedasticity problems.

For this we need the Breusch-Pagan test for heteroskedasticity from the lmtest package.

install.packages("lmtest)
library(lmtest)
bptest(repression_model)

The default in R is the studentized Breusch-Pagan test. However, if you add the studentize = FALSE argument, you get the non-studentized version.
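For example:

bptest(repression_model, studentize = FALSE) # non-studentized version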

The null hypothesis of the Breusch-Pagan test is that the variance in the model is homoskedastic.

With our repression_model, we cannot reject the null, so we can say with confidence that there is no major heteroskedasticity issue in our model.

The non-studentized Breusch-Pagan test makes a very big assumption that the error term comes from a Gaussian distribution. Since this assumption is usually hard to verify, the default bptest() in R “studentizes” the test statistic and provides asymptotically correct significance levels for a wider class of error distributions.

Why do we care about heteroskedasticity?

If our model demonstrates a high level of heteroskedasticity (i.e. the random variables have non-random variation across observations), we run into problems.

Why?

  • OLS uses sub-optimal estimators based on incorrect assumptions, and
  • the standard errors computed using these flawed least squares estimators are more likely to be under-valued. Since standard errors are necessary to compute our t-statistics and arrive at our p-values, these inaccurate standard errors are a problem.

Example of heteroskedasticity

Let’s look at an example of this homoskedasticity assumption NOT being satisfied.

I run a simple regression to see whether there is a relationship between democracy score and respect for individuals’ private property rights in 178 countries in 2010.

When you are examining democracy as the main explanatory variable, heteroskedasticity is a common complaint. This is because highly democratic countries are all usually quite similar. However, when we look at autocracies, they are all quite different and seem to march to the beat of their own despotic drum. We cannot assume that the error variance is equal across the different regime types.

First, let’s have a look at the relationship.

prop_graph <- ggplot(data_2010, aes(democracy_score, property_score)) +
  geom_point(size = 3, aes(color = factor(regime_type))) +
  geom_smooth(method = "lm")
prop_graph + scale_colour_manual(values = c("#D55E00", "#E69F00", "#009E73", "#56B4E9"))

Next, let’s fit the model to examine the relationship.

summary(property_model <- lm(property_score ~ democracy_score, data = data_2010))
stargazer(property_model, type = "text")

To plot the residuals (and other diagnostic graphs) of the model, we can use the autoplot() function to look at the relationship in the model graphically.

autoplot(property_model)

Graph number 1 plots the residuals against the fitted values, and we can see that lower values on the x-axis (fitted values) correspond with greater spread on the y-axis. Lower democracy scores relate to greater error on property rights index scores. Plus, the blue line does not lie horizontally near the dotted line. It appears we have non-random error patterns.

Examining the Scale-Location graph (number 3), we can see that the graph is not horizontal.

Again, interpreting the graph can be an imprecise art. So a more objective approach may be to run the bptest().

bptest(property_model)

Since the p-value is far smaller than 0.05, we can reject the null of homoskedasticity.

Rather, we have evidence that our model suffers from heteroskedasticity. The standard errors in the regression output appear smaller than they actually are. This inflates our t-statistic and we cannot trust our p-value.

In the next blog post, we can look at ways to rectify this violation of homoskedasticity and to ensure that our regression output has more accurate standard errors and therefore more accurate p – values.

Click here to use the sandwich package to fix heteroskedasticity in the OLS regression.

Add Correlates of War codes with countrycode package in R

One problem with merging two datasets by country is that the same countries can have different names. Take for example, America. It can be entered into a dataset as any of the following:

  • USA
  • U.S.A.
  • America
  • United States of America
  • United States
  • US
  • U.S.

This can create a big problem because datasets will merge incorrectly if they think that US and America are different countries.

Correlates of War (COW) is a project founded by J. David Singer in 1963 that catalogues inter-state wars. This project uses a unique numeric code for each country.

For example, America is 2.

When merging two datasets, there is a helpful R package that can convert the various names for a country into the COW code:

install.packages("countrycode")
library(countrycode)

To read more about the countrycode package in the CRAN PDF, click here.

First create a new name for the variable I want to make; I’ll call it COWcode in the dataset.

Then use the countrycode() function. First type in the brackets the name of the original variable that contains the list of countries in the dataset, then add "country.name", "cown". This turns the word name for each country into the numeric COW code.

dataset$COWcode <- countrycode(dataset$countryname, "country.name", "cown")

If you want to turn a COW code back into a country name, swap the "country.name" and "cown" arguments:

dataset$countryname <- countrycode(dataset$COWcode, "cown", "country.name")

Now the dataset is ready to merge more easily with my other dataset on the identical country variable type!

There are many other types of codes that you can add to your dataset.

A very popular one is the ISO-2 and ISO-3 codes. For example, if you want to add flags to your graph, you will need a two-letter code for each country (for example, Ireland is IE).
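The pattern is the same as with the COW codes; for example, to add ISO-2 codes to the same hypothetical dataset as above:

dataset$iso2 <- countrycode(dataset$countryname, "country.name", "iso2c")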

To see the list of all the COW codes, click here.

To check out the COW database website, click here.

Alternative codes to the country.name and cown options include:

• ccTLD: IANA country code top-level domain
• country.name: country name (English)
• country.name.de: country name (German)
• cowc: Correlates of War character
• cown: Correlates of War numeric
• dhs: Demographic and Health Surveys Program
• ecb: European Central Bank
• eurostat: Eurostat
• fao: Food and Agriculture Organization of the United Nations numerical code
• fips: FIPS 10-4 (Federal Information Processing Standard)
• gaul: Global Administrative Unit Layers
• genc2c: GENC 2-letter code
• genc3c: GENC 3-letter code
• genc3n: GENC numeric code
• gwc: Gleditsch & Ward character
• gwn: Gleditsch & Ward numeric
• imf: International Monetary Fund
• ioc: International Olympic Committee
• iso2c: ISO-2 character
• iso3c: ISO-3 character
• iso3n: ISO-3 numeric
• p4n: Polity IV numeric country code
• p4c: Polity IV character country code
• un: United Nations M49 numeric codes
• unicode.symbol: Region subtag (often displayed as emoji flag)
• unpd: United Nations Procurement Division
• vdem: Varieties of Democracy (V-Dem version 8, April 2018)
• wb: World Bank (very similar but not identical to iso3c)
• wvs: World Values Survey numeric code