Compare Irish census years with compareBars and csodata package in R

Packages we will need:

library(csodata)
library(janitor)
library(ggcharts)
library(compareBars)
library(tidyverse)
library(magrittr) # for the %<>% two-way pipe used below

First, let’s download population data from the Irish census with the Central Statistics Office (CSO) API package, developed by Conor Crowley.

You can search for the data you want to analyse via R or you can go to the CSO website and browse around the site.

I prefer looking through the site because sometimes I stumble across a dataset I didn’t even think to look for!

Keep note of the code beside the red dot star symbol if you’re looking around for datasets.

Click here to check out the CRAN PDF for the CSO package.

You can search for keywords with cso_search_toc(). I want total population counts for the whole country.

cso_search_toc("total population")

We can download the variables we want by entering the code into the cso_get_data() function.

irish_pop <- cso_get_data("EY007")
View(irish_pop)

The EY007 code downloads population census data for both 2011 and 2016 at every age.

It needs a little bit of tidying to get it ready for graphing.

irish_pop %<>%  
  clean_names()

First, we can be lazy and use the clean_names() function from the janitor package.


Next we can get rid of the rows we don’t want with filter() and drop the columns we don’t need with select().

Then we use the pivot_longer() function to turn the data.frame from wide to long and to turn the x2011 and x2016 variables into one year variable.

irish_pop %>% 
  filter(at_each_year_of_age == "Population") %>% 
  filter(sex == 'Both sexes') %>% 
  filter(age_last_birthday != "All ages") %>% 
  select(!statistic) %>% 
  select(!sex) %>% 
  select(!at_each_year_of_age) -> irish_wide

irish_wide %>% 
  pivot_longer(!age_last_birthday,
    names_to = "year", 
    values_to = "pop_count",
    values_drop_na = TRUE) %>% 
    mutate(year = as.factor(year)) -> irish_long

Now we can create our pyramid chart with the pyramid_chart() function from the ggcharts package. The first argument is the age category for both the 2011 and 2016 data. The second is the actual population count for each year. Last, enter the grouping variable that indicates the year.

irish_long %>%   
  pyramid_chart(age_last_birthday, pop_count, year)

One problem with the pyramid chart is that it is difficult to discern any differences between the two years without examining each age bracket very closely.

One way to more easily see the differences is with the compareBars function.

The compareBars package created by David Ranzolin can help to simplify comparative bar charts! It’s a super simple function to use that does a lot of visualisation leg work under the hood!

First we need to pivot the data.frame back to wide format and then input the age variable and the two groups – x2011 and x2016 – into the compareBars() function.

We can also add labels and colours to customise the graph!

irish_long %>% 
  pivot_wider(names_from = year, values_from = pop_count) %>% 
  compareBars(age_last_birthday, x2011, x2016, orientation = "horizontal",
              xLabel = "Population",
              yLabel = "Year",
              titleLabel = "Irish Populations",
              subtitleLabel = "Comparing 2011 and 2016",
              fontFamily = "Arial",
              compareVarFill1 = "#FE6D73",
              compareVarFill2 = "#17C3B2") 

We can see that, under the age of four or so, there were more people in 2011. Likewise, there were more people in their twenties in 2011 than in 2016.

However, there are more older people in 2016 than in 2011.

Similar to the pyramid chart above, this is a bit busy! So we can group the ages into five-year categories and examine the broader trends with fewer horizontal bars.

First we want to remove the word “years” from the age variable and convert it to a numeric class variable. We can easily do this with the parse_number() function from the readr package.

irish_wide %<>% 
  mutate(age_num = readr::parse_number(as.character(age_last_birthday)))

Next we can group the ages together into five-year categories: zero to five years, six to ten years, et cetera.

We use the cut() function to divide the numeric age_num variable into equal groups. We use the seq() function and input age 0 to 100, in increments of 5.

irish_wide$age_group = cut(irish_wide$age_num, seq(0, 100, 5), include.lowest = TRUE)
# include.lowest = TRUE keeps age 0 in the first bracket instead of dropping it as NA
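
To see exactly what cut() returns, here is a quick toy illustration with a few hypothetical ages (not from the census data):

cut(c(0, 3, 7, 12), seq(0, 100, 5), include.lowest = TRUE)
# [1] [0,5]  [0,5]  (5,10] (10,15]   (plus the full list of levels)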

Next, we can use group_by() to calculate the sum of each population number in each five year category.

And finally, we use the distinct() function to remove the duplicated rows (i.e. we only want to keep the first row that gives us the five-year category’s population count for each category).

irish_wide %<>% 
  group_by(age_group) %>% 
  mutate(five_year_2011 = sum(x2011)) %>% 
  mutate(five_year_2016 = sum(x2016)) %>% 
  distinct(five_year_2011, five_year_2016, .keep_all = TRUE)

Next, plot the bar chart with the five-year categories:

compareBars(irish_wide, age_group, five_year_2011, five_year_2016, orientation = "horizontal",
              xLabel = "Population",
              yLabel = "Year",
              titleLabel = "Irish Populations",
              subtitleLabel = "Comparing 2011 and 2016",
              fontFamily = "Arial",
              compareVarFill1 = "#FE6D73",
              compareVarFill2 = "#17C3B2") 

We can also redraw the pyramid chart with these five-year categories. First, pivot the summed counts back to long format:

irish_wide %>% 
  select(age_group, five_year_2011, five_year_2016) %>% 
  pivot_longer(!age_group,
             names_to = "year", 
             values_to = "pop_count",
             values_drop_na = TRUE) %>% 
  mutate(year = as.factor(year)) -> irish_long2

irish_long2 %>%   
  pyramid_chart(age_group, pop_count, year)

Choose model variables by AIC in a stepwise algorithm with the MASS package in R

Running a regression model with too many variables – especially irrelevant ones – will lead to a needlessly complex model. Stepwise selection can help choose the best variables to include.

Packages you need:

library(MASS)
library(stargazer) # to compare model outputs below
library(olsrr)     # for the stepwise plot at the end

First, choose a model and throw in every variable you think has an impact on your dependent variable!

I hear the voice of my undergrad professor in my ear: “DO NOT go for the ‘throw spaghetti at the wall and just see what STICKS’ approach.” A cardinal sin.

We must choose variables because we have some theoretical rationale for any potential relationship. Or else we could end up stumbling on spurious relationships.

Like the one between Nicolas Cage movies and the incidence of pool drownings.


However …

… beyond just using our sound theoretical understanding of the complex phenomena we study in order to choose our model variables …

… one additional way to supplement and gauge which variables add to – or more importantly omit from – the model is to choose the one with the smallest amount of error.

We can operationalise this as the model with the lowest Akaike information criterion (AIC).

AIC is an estimator of in-sample prediction error and is similar to the adjusted R-squared measures we see in our regression output summaries.

It effectively penalises us for adding more variables to the model.

Lower scores can indicate a more parsimonious model, relative to a model fit with a higher AIC. It can therefore give an indication of the relative quality of statistical models for a given set of data.

As a caveat, we can only compare AIC scores with models that are fit to explain variance of the same dependent / response variable.
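
As a quick illustrative sketch with the built-in mtcars data (loaded properly below): AIC = 2k - 2*(log-likelihood), so every extra parameter k pushes the score up unless it improves the likelihood enough.

# Two toy nested models: the one with the lower AIC is preferred
AIC(lm(mpg ~ wt, data = mtcars))
AIC(lm(mpg ~ wt + hp, data = mtcars))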

data(mtcars)
summary(car_model <- lm(mpg ~., data = mtcars))

With our model, we can now feed it into the stepwise function. For the direction argument, you can choose between backward and forward stepwise selection:

  • Forward steps: start the model with no predictors (just an intercept) and search through all the single-variable models, adding variables one by one, until we find the best one (the one that results in the lowest residual sum of squares).
  • Backward steps: start with all the predictors and remove the least statistically significant variable (the one with the largest p-value), one by one, until we reach the lowest AIC.

Backward stepwise is generally better because starting with the full model has the advantage of considering the effects of all variables simultaneously.

Unlike backward elimination, forward stepwise selection is more suitable in settings where the number of variables is bigger than the sample size.

So, tl;dr: unless the number of candidate variables is greater than the sample size (such as when dealing with genetic data), the backward stepwise approach is the default choice.

You can also choose direction = "both":

step_car <- stepAIC(car_model, trace = TRUE, direction= "both")

If you add trace = TRUE, R prints out all the steps.

I’ll show the last step to show you the output.

The goal is to have the combination of variables that has the lowest AIC or lowest residual sum of squares (RSS).

The last line is the final model that we assign to the step_car object.
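
If you just want the final pruned model rather than the full trace, you can pull it straight from the fitted object:

formula(step_car)  # the formula chosen by stepAIC()
summary(step_car)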

stargazer(car_model, step_car, type = "text")

We can see that the stepwise model has only three variables compared to the ten variables in my original model.

And even with far fewer variables, the R2 has decreased by only an insignificant amount. In fact, the adjusted R2 increased, because we are no longer being penalised for throwing in so many unnecessary variables.

So we can quickly find a model that loses no explanatory power but is far more parsimonious.

Plus, in the original model only one variable is significant, but in the stepwise model all three of the variables are significant.

We can also visualise each step of the selection with the ols_step_both_aic() function from the olsrr package:

step_plot <- ols_step_both_aic(car_model)
plot(step_plot)

Recode variables with car package in R

There is one caveat with this function that we are using from the car package:

recode is also in the dplyr package so R gets confused if you just type in recode on its own; it doesn’t know which package you’re using.

So, you must write car::recode(). This placates the R gods and they are clear which package to use.

It is useful for all other times you want to explicitly tell R which package you want it to use to avoid any confusion. Just type the package name followed by two :: colons and a list of all the functions in the package drops down. So really, it can also be useful for exploring new packages you’ve installed and loaded!

install.packages("car")
library(car)

First, subset the dataframe, so we are only looking at countries in the year 1990.

data_90 <- data[which(data$year==1990),]

Next, look at the frequency of each way that regimes around the world ended.

plyr::count(data_90$regime_end)

To understand these numbers, we look at the codebook.

We want to make a new binary variable to indicate whether a coup occurred in a country in 1990 or not.

To do this we use the car::recode() function.

First we can make a numeric variable. So, in the brackets, we indicate our dataframe and variable at the start.

The next bit is important: we put all the original and new values inside “double quotation marks”.

It is also important that we separate each recoding rule with a ; semicolon.

The punctuation marks in this function are a bit fussy and difficult, but they are important.

data_90$coup_numeric <- car::recode(data_90$regime_end, "0:2 = 1; 3:13=0; NA=0")

Alternatively, we can recode the variable as a string output when we put the new variable values inside ‘apostrophe marks’.

data_90$coup_string <- car::recode(data_90$regime_end, "0:2 = 'coup'; 3:13= 'no coup'; NA='no coup'")
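
A quick cross-tabulation is a handy sanity check that the numeric and string recodes line up:

table(data_90$coup_numeric, data_90$coup_string)
# every 1 should fall in the 'coup' column and every 0 in the 'no coup' column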

If you want to convert a continuous variable to discrete factors, we can go to our trusty mutate() function in the dplyr package. And within mutate() we use another function: cut()

So instead of recoding binary or factor variables, we can turn a numeric variable into a discrete variable with cut().

We specify with the breaks argument to indicate where we want to divide the variable and then we can label the factors with the labels argument:

data_90 <- data_90 %>% 
  dplyr::mutate(instability_discrete = cut(instability_continuous,
                                           breaks = c(-Inf, 0.3, 0.7, Inf),
                                           labels = c("low_instability", "mid_instability", "high_instability")))
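
We can then sanity-check the new factor with a quick frequency count (using the same hypothetical variable):

table(data_90$instability_discrete, useNA = "ifany")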

Move year variable to first column in dataframe with dplyr package in R

A quick hack to create a year variable from a string variable and place it as column number one in your dataframe.

Initial dataset

The first problem with my initial dataset is that the date is a string of numbers, and I only want the first four characters (the year) of that string.

data$year <- substr(data$date, 1, 4)
data$year <- as.numeric(data$year)

Now I want to place it at the beginning to keep things more organised:

data <- data %>% 
  select(year, everything())
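
As an aside, if you have dplyr 1.0.0 or later, the relocate() function does the same job more explicitly (a quick sketch):

data <- data %>% 
  relocate(year)  # moves year to the front by default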

And we are done!

Much better.

Plot marginal effects with sjPlot package in R

Without examining interaction effects in your model, you can sometimes be wrong about the real relationship between your variables.

This is particularly evident in political science when we consider, for example, the impact of regime type on the relationship between our dependent and independent variables. The nature of the government can really impact our analysis.

For example, say I were to look at the relationship between anti-government protests and executive bribery.

I would expect to see that the higher the bribery score in a country’s government, the higher the prevalence of people protesting against this corrupt authority. Basically, people are angry when their government is corrupt. And they make this very clear to their leaders by protesting on the streets.

First, I will describe the variables I use and their data type.

The dependent variable democracy_protest is an interval score, based upon the question: In this year, how frequent and large have events of mass mobilization for pro-democratic aims been?

The main independent variable, executive_bribery, is another interval score, based upon the question: How clean is the executive (the head of government, and cabinet ministers), and their agents, from bribery (granting favors in exchange for bribes, kickbacks, or other material inducements)?

Higher scores indicate cleaner governing executives.

So, let’s run a quick regression to examine this relationship:

summary(protest_model <- lm(democracy_protest ~ executive_bribery, data = data_2010))

Examining the results of the regression model:

We see that there is indeed a negative relationship. The cleaner the government, the less likely people in the country will protest in the year under examination. This confirms our above mentioned hypothesis.

However, examining the R2, we see that less than 1% of the variance in protest prevalence is explained by executive bribery scores.

Not very promising.

Is there an interaction effect with regime type? We can look at a scatterplot and see if the different regime type categories cluster in distinct patterns.

The four regime type categories are

  • purple: liberal democracy (such as Sweden or Canada)
  • teal: electoral democracy (such as Turkey or Mongolia)
  • khaki green: electoral autocracy (such as Georgia or Ethiopia)
  • red: closed autocracy (such as Cuba or China)

The color clusters show that the regime type categories do indeed cluster:

  • Liberal democracies (purple) cluster in the top left-hand corner: higher scores on the clean executive index and lower prevalence of pro-democracy protesting.
  • Electoral democracies (teal) cluster in the middle.
  • Electoral autocracies (khaki green) cluster at the bottom of the graph.
  • The closed autocracy countries (red) seem to follow an upward trend, opposite to the overall line of best fit.

So let’s examine the interaction effect between regime types and executive corruption with mass pro-democracy protests.

Fit the model and add the * interaction effect:

summary(protest_model_2 <- lm(democracy_protest ~ executive_bribery * regime_type, data = data_2010))

Adding the regime type variable, the R2 shoots up to 27%.

The interaction effect appears to be significant only between clean executive scores and liberal democracies. In liberal democracies, as the country’s executive gets cleaner, the prevalence of mass mobilization and protests decreases (a coefficient of -0.98), and this is a statistically significant relationship.

The initial relationship we saw in the first model, the simple relationship between clean executive scores and protests, has disappeared. There appears to be no relationship between bribery and protests in the semi-autocratic countries (those that are not quite democratic, but not quite fully despotic).

Let’s graph out these interactions.

In the plot_model() function, first type the name of the interaction model we fitted above, protest_model_2.

Next, choose the type. For the different type arguments, scroll to the bottom of this blog post. We use the type = "pred" argument, which plots the marginal effects.

Marginal effects tell us how the dependent variable changes when a specific independent variable changes, holding the other covariates constant. The two terms typed here are the two variables we added to the model with the * interaction term.

install.packages("sjPlot")
library(sjPlot)

plot_model(protest_model, type = "pred", terms = c("executive_bribery", "regime_type"), title = 'Predicted values of Mass Mobilization Index',

 legend.title = "Regime type")

Looking at the graph, we can see that the relationship changes across regime type. For liberal democracies (purple), there is a negative relationship. Low scores on the clean executive index are related to high prevalence of protests. So, we could say that when people in democracies see corrupt actions, they are more likely to protest against them.

However with closed autocracies (red) there is the opposite trend. Very corrupt countries in closed autocracies appear to not have high levels of protests.

This would make sense from a theoretical perspective: even if you want to protest in a very corrupt country, the risk to your safety or livelihood is often too high and you don’t bother. Also the media is probably not free so you may not even be aware of the extent of government corruption.

It seems that when there are no democratic features available to the people (free media, freedom of assembly, active civil societies, strong civil rights protections, freedom of expression, et cetera), the barriers to protesting are too high. However, as the corruption index improves and executives are seen as “cleaner”, these democratic features may become more accessible to them.

If we only looked at the relationship between the two variables and ignored this important interaction effect, we would incorrectly conclude that cleaner executives mean fewer protests across all regime types.

Of course, panel data would be better to help separate any potential causation from the correlations we can see in the above graphs.

The line for electoral autocracies is almost flat. This matches the regression model, which found that the coefficient for electoral autocracy is 0.001. Virtually non-existent.

Different Plot Types

type = "std" – Plots standardized estimates.

type = "std2" – Plots standardized estimates, however, standardization follows Gelman’s (2008) suggestion, rescaling the estimates by dividing them by two standard deviations instead of just one. Resulting coefficients are then directly comparable for untransformed binary predictors.

type = "pred" – Plots estimated marginal means (or marginal effects). Simply wraps ggpredict.

type = "eff"– Plots estimated marginal means (or marginal effects). Simply wraps ggeffect.

type = "slope" and type = "resid" – Simple diagnostic-plots, where a linear model for each single predictor is plotted against the response variable, or the model’s residuals. Additionally, a loess-smoothed line is added to the plot. The main purpose of these plots is to check whether the relationship between outcome (or residuals) and a predictor is roughly linear or not. Since the plots are based on a simple linear regression with only one model predictor at the moment, the slopes (i.e. coefficients) may differ from the coefficients of the complete model.

type = "diag" – For Stan-models, plots the prior versus posterior samples. For linear (mixed) models, plots for multicollinearity-check (Variance Inflation Factors), QQ-plots, checks for normal distribution of residuals and homoscedasticity (constant variance of residuals) are shown. For generalized linear mixed models, returns the QQ-plot for random effects.

Check for multicollinearity with the car package in R

Packages we will need:

install.packages("car")
library(car)

When one independent variable is highly correlated with another independent variable (or with a combination of independent variables), the marginal contribution of that independent variable is influenced by other predictor variables in the model.

And so, as a result:

  • Estimates for regression coefficients of the independent variables can be unreliable.
  • Tests of significance for regression coefficients can be misleading.

To check for multicollinearity problems in our model, we need the vif() function from the car package in R. VIF stands for variance inflation factor. It measures how much the variance of any one of the coefficients is inflated due to multicollinearity in the overall model.

As a rule of thumb, a vif score over 5 is a problem. A score over 10 should be remedied and you should consider dropping the problematic variable from the regression model or creating an index of all the closely related variables.
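
Under the hood, the VIF for a given predictor is 1 / (1 - R2), where R2 comes from an auxiliary regression of that predictor on all the other predictors. A hand-rolled sketch, using the variable names from the embezzlement model fitted below:

aux_model <- lm(clientelism_index ~ vote_buying_score + democracy_score, data = data_2010)
1 / (1 - summary(aux_model)$r.squared)  # should match car::vif() for clientelism_index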

This blog post will look only at the VIF score. Click here to look at how to interpret various other multicollinearity tests in the mctest package in addition to the VIF score.

Back to our model: I want to know whether high levels of clientelism, high levels of vote buying and low democracy scores lead to executive embezzlement.

So I fit a simple linear regression model (and look at the output with the stargazer package):

summary(embezzlement_model_1 <- lm(executive_embezzlement ~ clientelism_index + vote_buying_score + democracy_score, data = data_2010))

stargazer(embezzlement_model_1, type = "text")

I suspect that the clientelism and vote buying variables will be highly correlated. So let’s run a test of multicollinearity to see if there are any problems.

car::vif(embezzlement_model_1)

The VIF scores for the three independent variables are:

The clientelism index and vote buying variables are both very high, and the best remedy is to remove one of them from the regression. Since vote buying is considered one aspect of a clientelist regime, it is probably overlapping with some of the variance in the embezzlement score that the clientelism index already explains in the model.

So re-run the regression without the vote buying variable.

summary(embezzlement_model_2 <- lm(executive_embezzlement ~ clientelism_index + democracy_score, data = data_2010))
stargazer(embezzlement_model_1, embezzlement_model_2, type = "text")

Comparing the two regressions:

And running a VIF test on the second model without the vote buying variable:

car::vif(embezzlement_model_2)

These scores are far below 5 so there is no longer any big problem of multicollinearity in the second model.

Click here to quickly add VIF scores to our regression output table in R with jtools package.

Plus, comparing the adjusted R2 across the two models, we see that the difference is very small, so we did not lose much predictive power by dropping a variable. Rather, we have minimised the issue of highly correlated independent variables, and thus an inability to tease out the real relationships with our dependent variable of interest.

tl;dr: As a rule of thumb, a vif score over 5 is a problem. A score over 10 should be remedied (and you should consider dropping the problematic variable from the regression model or creating an index of all the closely related variables).

Click here to run stepwise regression analysis to help decide which problematic variables we can drop from our model (based on AIC scores)

Correct for heteroskedasticity in OLS with sandwich package in R

Packages we will need:

library(sandwich)
library(stargazer)
library(lmtest)

If our OLS model demonstrates a high level of heteroskedasticity (i.e. when the error term of our model is not randomly distributed across observations and there is a discernible pattern in the error variance), we run into problems.

Why? Because this means OLS will use sub-optimal estimators based on incorrect assumptions, and the standard errors computed using these flawed least squares estimators are more likely to be under-valued.

Since standard errors are necessary to compute our t-statistic and arrive at our p-value, these inaccurate standard errors are a problem.

Click here to check for heteroskedasticity in your model with the lmtest package.

To correct for heteroskedasticity in your model, you need the sandwich package and the lmtest package to employ the vcovHC adjustment.


First, let’s fit a simple OLS regression.

summary(free_exp_model <- lm(freedom_expression ~ free_elections + deliberative_index, data = data_1990))

We can see that there is a small star beside the main independent variable of interest! Success!


We have significance.

Thus, we could say that the more free and fair the elections a country has, the higher the mean freedom of expression index score for that country.

This ties in with a very minimalist understanding of democracy. If a country has elections and the populace can voice their choice of leadership, this will help set the scene for a more open society.

However, it is naive to look only at the p-value of any given coefficient in a regression output. If we run some diagnostic analyses and look at the relationship graphically, we may need to re-examine this seemingly significant relationship.

Can we trust the 0.087 standard error that our OLS regression calculated? Is it based on sound assumptions?


First let’s look at the residuals. Can we assume that the variance of error is equal across all observations?

If we examine the residuals (the first graph), we see that there is actually a tapered fan-like pattern in the error variance. As we move across the x axis, the variance along the y axis gets continually smaller and smaller.

The error does not look random.


Let’s run a Breusch-Pagan test (from the lmtest package) to check our suspicion of heteroskedasticity.

lmtest::bptest(free_exp_model)

We can reject the null hypothesis that the error variance is homoskedastic.

So the model does suffer from heteroskedasticity. We cannot trust those stars in the regression output!


In order to fix this and make our p-values more accurate, we need the sandwich package to feed the vcovHC adjustment into the coeftest() function.

vcovHC stands for a variance-covariance matrix that is Heteroskedasticity-Consistent.
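
Before comparing several variants, the most basic usage is simply to feed the adjusted variance-covariance matrix into coeftest() (a minimal sketch using the model above):

coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC3"))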

With the stargazer package (which prints out all the models in one table), we can compare the free_exp_model alone with no adjustment, then four different variations of the vcovHC adjustment using different formulae (as indicated in the type argument below).

stargazer(free_exp_model,
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC0")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC1")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC2")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC3")),
          type = "text")

Looking at the standard errors in the (brackets) across the OLS and coeftest models, we can see that the corrected standard errors are all almost double the standard errors from the original OLS regression.

There is only a tiny difference between the different Heteroskedasticity-Consistent (HC) types.

The significant p-value disappears from the free and fair elections variable when we apply the vcovHC correction.


The actual coefficient stays the same regardless of whether we use no correction or any one of the correction arguments.

Which HC estimator should I use in my vcovHC() function?

The default in the sandwich package is HC3.

STATA users will be familiar with HC1, as it is the default robust standard error correction when you add robust at the end of the regression command.

The difference between them is not very large.

The estimator HC0 was suggested in the econometrics literature by White in 1980 and is justified by asymptotic arguments.

For small sample sizes, the standard errors from HC0 are quite biased, usually downward, and this results in overly liberal inferences in regression models (Bera, Suprayitno & Premaratne, 2002). But with HC0, the bias shrinks as your sample size increases.

The estimator types HC1, HC2 and HC3 were put forward by MacKinnon and White (1985) to improve the performance in small samples.

Long and Ervin (2000) furthermore argue that HC3 provides the best performance in small samples, as it gives less weight to influential observations in the model.

In our freedom of expression regression, the HC3 estimator was the most conservative in its standard error calculations. However, the difference between the approaches did not change the conclusion: ultimately, the claim that the main independent variable of interest in this analysis – free and fair elections – explains variance in the dependent variable – freedom of expression – does not find evidence in the model.

Click here to read an article by Hayes and Cai (2007) which discusses the matrix formulae and empirical differences between the different calculation approaches taken by the different types. Unfortunately it is all ancient Greek to me.

References

Bera, A. K., Suprayitno, T., & Premaratne, G. (2002). On some heteroskedasticity-robust estimators of variance–covariance matrix of the least-squares estimators. Journal of Statistical Planning and Inference, 108(1-2), 121-136.

Hayes, A. F., & Cai, L. (2007). Using heteroskedasticity-consistent standard error estimators in OLS regression: An introduction and software implementation. Behavior Research Methods, 39(4), 709-722.

Long, J. S., & Ervin, L. H. (2000). Using heteroscedasticity consistent standard errors in the linear regression model. The American Statistician, 54(3), 217-224.

MacKinnon, J. G., & White, H. (1985). Some heteroskedasticity-consistent covariance matrix estimators with improved finite sample properties. Journal of Econometrics, 29(3), 305-325.

Check for heteroskedasticity in OLS with lmtest package in R

One core assumption when calculating ordinary least squares regressions is that all the random variables in the model have equal variance around the best fitting line.

Essentially, when we run an OLS, we expect that the error terms have no fan pattern.

Example of homoskedasticity

So let’s look at an example of this assumption being satisfied. I run a simple regression to see whether there is a relationship between media censorship and civil society repression in 178 countries in 2010.

ggplot(data_2010, aes(media_censorship, civil_society_repression)) + 
  geom_point() + 
  geom_smooth(method = "lm") + 
  geom_text(size = 3, nudge_y = 0.1, aes(label = country))

If we run a simple regression:

summary(repression_model <- lm(media_censorship ~ civil_society_repression, data = data_2010))
stargazer(repression_model, type = "text")

This is pretty common sense; a country that represses its citizens in one sphere is more likely to repress in other areas. In this case repressing the media correlates with repressing civil society.

We can plot the residuals of the above model with the autoplot() function from the ggfortify package.

library(ggfortify)
autoplot(repression_model)

Nothing unusual appears to jump out at us with regard to evidence for heteroskedasticity!

In the first Residuals vs Fitted plot, we can see that the blue line does not drastically diverge from the dotted line (which indicates a residual value of 0).

The third plot, Scale-Location, shows again that there are no drastic instances of heteroskedasticity. We want to see the blue line relatively horizontal. There is no clear pattern in the distribution of the residual points.

In the Residual vs. Leverage plot (plot number 4), the high leverage observation 19257 is North Korea! A usual suspect when we examine model outliers.

While it is helpful to visually see the residuals plotted out, a more objective test can help us find whether the model is indeed free from heteroskedasticity problems.

For this we need the Breusch-Pagan test for heteroskedasticity from the lmtest package.

install.packages("lmtest)
library(lmtest)
bptest(repression_model)

The default in R is the studentized Breusch-Pagan test. However, if you add the studentize = FALSE argument, you get the non-studentized version.
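
For example, to run the non-studentized version on the same model:

bptest(repression_model, studentize = FALSE)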

The null hypothesis of the Breusch-Pagan test is that the variance in the model is homoskedastic.

With our repression_model, we cannot reject the null, so we can say with confidence that there is no major heteroskedasticity issue in our model.

The non-studentized Breusch-Pagan test makes a very big assumption: that the error term comes from a Gaussian distribution. Since this assumption is usually hard to verify, the default bptest() in R “studentizes” the test statistic and provides asymptotically correct significance levels for the error distribution.

Why do we care about heteroskedasticity?

If our model demonstrates a high level of heteroskedasticity (i.e. the random variables have non-random variation across observations), we run into problems.

Why?

  • OLS uses sub-optimal estimators based on incorrect assumptions, and
  • The standard errors computed using these flawed least squares estimators are more likely to be under-valued. Since standard errors are necessary to compute our t-statistics and arrive at our p-values, these inaccurate standard errors are a problem.

Example of heteroskedasticity

Let’s look at an example of this homoskedasticity assumption NOT being satisfied.

I run a simple regression to see whether there is a relationship between democracy score and respect for individuals’ private property rights in 178 countries in 2010.

When you are examining democracy as the main independent variable, heteroskedasticity is a common complaint. This is because highly democratic countries are usually quite similar to one another. However, when we look at autocracies, they are all quite different and seem to march to the beat of their own despotic drum. We cannot assume that the random variance across different regime types is equally likely.

First, let’s have a look at the relationship.

prop_graph <- ggplot(data_2010, aes(property_score, democracy_score)) + 
  geom_point(size = 3, aes(color = factor(regime_type))) + 
  geom_smooth(method = "lm")
prop_graph + scale_colour_manual(values = c("#D55E00", "#E69F00", "#009E73", "#56B4E9"))

Next, let’s fit the model to examine the relationship.

summary(property_model <- lm(property_score ~ democracy_score, data = data_2010))
stargazer(property_model, type = "text")

To plot the residuals (and other diagnostic graphs) of the model, we can use the autoplot() function to look at the relationship in the model graphically.

autoplot(property_model)

Graph number 1 plots the residuals against the fitted values, and we can see that lower values on the x-axis (fitted values) correspond with greater spread on the y-axis. Lower democracy scores relate to greater error on property rights index scores. Plus, the blue line does not lie horizontal and near the dotted line. It appears we have non-random error patterns.

Examining the Scale-Location graph (number 3), we can see that the line is not horizontal.

Again, interpreting the graph can be an imprecise art. So a more objective approach may be to run the bptest().

bptest(property_model)

Since the p-value is far smaller than 0.05, we can reject the null hypothesis of homoskedasticity.

Rather, we have evidence that our model suffers from heteroskedasticity. The standard errors in the regression output appear smaller than they actually are in reality. This inflates our t-statistic, and we cannot trust our p-values.

In the next blog post, we can look at ways to rectify this violation of homoskedasticity and to ensure that our regression output has more accurate standard errors and therefore more accurate p-values.

Click here to use the sandwich package to fix heteroskedasticity in the OLS regression.