Cluster Analysis with cluster package in R

Packages we will need:

library(cluster) 
library(factoextra)

I am looking at 127 non-democracies to see how they cluster on measures of state capacity (variables that capture the ability of the state to control its territory, collect taxes and avoid corruption in the executive).

We want to minimise the total within-cluster sum of squares from each cluster mean when determining the clusters.
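Formally, for $K$ clusters $C_1, \dots, C_K$ with cluster means $\mu_k$, the quantity we are minimising is the total within-cluster sum of squares:

$$\text{WSS} = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2$$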

First, we need to find the optimal number of clusters. We set the max number of clusters at k = 15.

library(purrr) # map_dbl() comes from the purrr package

within_sum_squares <- function(k){
  kmeans(autocracy_df, k, nstart = 3)$tot.withinss
}

min_max <- 1:15

# map_dbl() returns a numeric vector, which plot() can use directly
within_sum_squares_values <- map_dbl(min_max, within_sum_squares)

plot(min_max, within_sum_squares_values,
     type = "b", pch = 19, frame = FALSE,
     xlab = "Number of clusters",
     ylab = "Total within sum of squares")

K-means searches for the minimum sum of squares assignment, i.e. it minimizes unnormalized variance by assigning points to cluster centers.

# cluster on the three state capacity variables (columns 3 to 5)
k_clusters <- kmeans(autocracy_df[3:5], centers = 6, nstart = 25)
class(k_clusters)

We can now take the k_clusters object and feed it into the fviz_cluster() function.

fviz_cluster(k_clusters, data = autocracy_df[3:5], ellipse.type = "convex") 
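As a quick aside, the kmeans object also stores the cluster assignment for every row, so we can attach the labels back onto the dataframe and inspect the cluster sizes and centres. A minimal sketch (assuming the rows of autocracy_df are in the same order as the data we fed to kmeans()):

k_clusters$size     # number of countries in each cluster
k_clusters$centers  # mean of each state capacity variable per cluster

autocracy_df$cluster <- k_clusters$cluster  # attach each country's cluster label
head(autocracy_df)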

Compare clusters with dendextend package in R

Packages we need

install.packages("dendextend")
library(dendextend)

This blog will create a dendrogram to examine whether Asian countries cluster together when it comes to the extent of judicial compliance. I'm examining Asian countries with populations over 1 million, and the data comes from the year 2019.

Judicial compliance measures how often a government complies with important decisions by courts with which it disagrees.

Higher scores indicate that the government often or always complies, even when it is unhappy with the decision. Lower scores indicate the government rarely or never complies with decisions that it doesn't like.

It is important to make sure there are no NA values, so I will impute any missing values.

Click here to read how to impute missing values in your dataset.

library(mice)
imputed_data <- mice(asia_df, method="cart")
asia_df <- complete(imputed_data)

Next we can scale the dataset. This step matters when you are clustering on more than one variable and the variable units are not equivalent, since the distance values depend on the scales on which the different variables are measured.

Therefore, it's good to convert them all to a common scale before measuring any inter-observation dissimilarities.

asia_scale <- scale(asia_df)

Next we calculate the distance between the countries (i.e. different rows) on the variables of interest and create a dist object.

There are many different methods you can use to calculate the distances. Click here for a description of the main formulae you can use to calculate distances. In the linked article, they provide a helpful table to summarise all the common methods such as the "euclidean", "manhattan" or "canberra" formulae.

I will go with the "euclidean" method, but make sure your method suits the data type (binary, continuous, categorical etc.).

asia_judicial_dist <- dist(asia_scale, method = "euclidean")
class(asia_judicial_dist)

We now have a dist object we can feed into the hclust() function.

With this function, we will need to make another decision regarding the method we will use.

The possible methods we can use are "ward.D", "ward.D2", "single", "complete", "average" (= UPGMA), "mcquitty" (= WPGMA), "median" (= WPGMC) or "centroid" (= UPGMC).

Click here for a more in-depth discussion of the different algorithms that you can use.

Again I will choose the common "ward.D2" method: at each stage, it merges the two clusters whose fusion produces the smallest increase in the combined error sum of squares.

asia_judicial_hclust <- hclust(asia_judicial_dist, method = "ward.D2")
class(asia_judicial_hclust)

We next convert our hclust object into a dendrogram object so we can plot it and visualise the different clusters of judicial compliance.

asia_judicial_dend <- as.dendrogram(asia_judicial_hclust)
class(asia_judicial_dend)

When we plot the different clusters, there are many options to change the color, size and dimensions of the dendrogram. To do this we use the set() function.

Click here to see a very comprehensive list of all the set() attributes you can use to modify your dendrogram from the dendextend package.

asia_judicial_dend %>%
  set("branches_k_color", k = 5) %>% # five clustered groups of different colors
  set("branches_lwd", 2) %>%         # thickness of the lines
  set("labels_colors", k = 5) %>%    # color the country labels, also in five groups
  plot(horiz = TRUE)                 # plot the dendrogram horizontally

I choose to divide the countries into five clusters by color:

And if I zoom in on the ends of the branches, we can examine the groups.

The top branches appear to be less democratic countries. We can see that North Korea is its own cluster with no other countries sharing similar judicial compliance scores.

The bottom branches appear to be more democratic countries with more judicial independence. However, once we have our final dendrogram, it is our job to research and investigate the characteristics that each country shares regarding the role of the judiciary and its relationship with executive compliance.

Singapore, even though it is not a democratic country in the way that Japan is, shows a highly similar level of respect by the executive for judicial decisions.

Also South Korean executive compliance with the judiciary appears to be more similar to India and Sri Lanka than it does to Japan and Singapore.

So we can see that dendrograms are helpful for exploratory research and show us a starting place to begin grouping different countries together regarding a concept.
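If we want the five group memberships as an actual variable, rather than just colours on the plot, we can cut the tree. A minimal sketch with the cutree() function (dendextend provides a method for dendrogram objects):

asia_judicial_groups <- cutree(asia_judicial_dend, k = 5) # five groups, matching the plot

sort(asia_judicial_groups) # see which countries share a cluster number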

A really quick way to complete all the steps in one go is the following code. However, you must use the default methods for the dist() and hclust() functions, so if you want to fine-tune your methods to suit your data, this quicker option may be too blunt.

asia_df %>%
  scale() %>%
  dist() %>%
  hclust() %>%
  as.dendrogram() %>%
  set("branches_k_color", k = 5) %>%
  set("branches_lwd", 2) %>%
  set("labels_colors", k = 5) %>%
  plot(horiz = TRUE)

Plot variables on a map with rnaturalearth package in R

All the packages I will be using:

library(rnaturalearth)
library(countrycode)
library(tidyverse)
library(ggplot2)
library(ggthemes)
library(viridis)

First, we access and store a map object from the rnaturalearth package, with all the spatial information it contains. We specify returnclass = "sf", which will return a dataframe with simple features information.

Simple features or simple feature access refers to a formal standard (ISO 19125-1:2004) that describes how objects in the real world can be represented in computers, with emphasis on the spatial geometry of these objects. Our map has these attributes stored in the object.

With the ne_countries() function, we get the borders of all countries.

map <- ne_countries(scale = "medium", returnclass = "sf")
View(map)

This map object comes with lots of information about 241 countries and territories around the world.

In total, it has 65 columns, mostly with different variants of the names / locations of each country / territory, for example, ISO codes for each country. Further into the dataset, there are a few other variables such as GDP and population estimates for each country. So it is a handy source of data.

However, I want to use values from a different source; I have a freedom_df dataframe with a freedom of association variable.

The freedom of association index broadly captures the extent to which parties, including opposition parties, are allowed to form and to participate in elections, and the extent to which civil society organizations are able to form and to operate freely in each country.

So, we can merge them into one dataset.

Before that, I want to only use the scores from the most recent year to map out. So, take out only those values in the year 2019 (don’t forget the comma sandwiched between the round bracket and the square bracket):

freedom19 <- freedom_df[which(freedom_df$year == 2019),]

My freedom19 dataset uses the Correlates of War codes but no ISO country codes. So let’s add these COW codes to the map dataframe for ease of merging.

I will convert the ISO codes to COW codes with the countrycode() function:

map$COWcode <- countrycode(map$adm0_a3, "iso3c", "cown") 

Click here to read more about the countrycode() function in R.

Now, with a universal variable common to both datasets, I can merge the two datasets with the common COW codes:

map19 <- merge(map, freedom19, by.x = "COWcode", by.y = "ccode", all = TRUE)

Click here to read more about the merge() function in R.

We’re all ready to graph the map. We can add the freedom of association variable into the aes() argument of the geom_sf() function. Again, the sf refers to simple features with geospatial information we will map out.

assoc_graph <- ggplot(data = map19) +
  geom_sf(aes(fill = freedom_association_index), 
          position = "identity") + 
  labs(fill='Freedom of Association Index')  +
  scale_fill_viridis_c(option = "viridis")

The scale_fill_viridis_c(option = "viridis") changes the color spectrum of the main variable.

Other options include:

"viridis"

"magma"

"plasma

And various others. Click here to learn more about this palette package.
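For example, swapping to the magma spectrum is a one-line change; adding a second fill scale replaces the first one (ggplot prints a message saying so):

assoc_graph + scale_fill_viridis_c(option = "magma")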

Finally we call the new graph stored in the assoc_graph object.

I use the theme_map() function from the ggthemes package to make the background very clean and to move the legend down to the bottom left of the screen, where it takes up the otherwise very empty Pacific Ocean / Antarctic expanse.

Click here for more information on the ggthemes package.

assoc_graph + theme_map()

And there we have it: a map showing the Freedom of Association index across countries.

The index broadly captures the extent to which parties, including opposition parties, are allowed to form and to participate in elections, and the extent to which civil society organizations are able to form and to operate freely.

Yellow colors indicate more freedom, green colors indicate middle scores and blue colors indicate low levels of freedom.

Some of the countries have missing data, such as Germany and Yemen, for various reasons. A true perfectionist would go and find and fill in the data manually.
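To see exactly which countries are affected before doing any manual fixing, a quick sketch that lists the rows with a missing index (name_long is one of the many name columns in the map object):

map19 %>%
  filter(is.na(freedom_association_index)) %>%
  pull(name_long)   # countries with no freedom of association score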

Recode variables with car package in R

There is one caveat with this function that we are using from the car package:

recode() is also in the dplyr package, so R gets confused if you just type in recode on its own; it doesn't know which package you're using.

So, you must write car::recode(). This placates the R gods and makes clear which package to use.

This trick is useful any time you want to explicitly tell R which package to use and avoid any confusion. Just type the package name followed by the two :: colons, and a list of all the functions in the package drops down. So really, it can also be useful for exploring new packages you've installed and loaded!

install.packages("car")
library(car)

First, subset the dataframe, so we are only looking at countries in the year 1990.

data_90 <- data[which(data$year==1990),]

Next look at a frequency of each way that regimes around the world ended.

plyr::count(data_90$regime_end)

To understand these numbers, we look at the codebook.

We want to make a new binary variable to indicate whether a coup occurred in a country in 1990 or not.

To do this we use the car::recode() function.

First we can make a numeric variable. In the brackets, we first indicate the variable we want to recode (in the dataframe$variable format).

The next bit is important: we put all the original and new values inside " " quotation marks.

It is also important that we separate each level of the new variable with a ; semicolon.

The punctuation marks in this function are a bit fussy, but they are important.

data_90$coup_numeric <- car::recode(data_90$regime_end, "0:2 = 1; 3:13=0; NA=0")

Alternatively, we can recode the variable with string outputs by putting the new values inside ' ' apostrophe marks:

data_90$coup_string <- car::recode(data_90$regime_end, "0:2 = 'coup'; 3:13= 'no coup'; NA='no coup'")

If you want to convert a continuous variable to discrete factors, we can go to our trusty mutate() function in the dplyr package. And within mutate() we use another function: cut().

So instead of recoding binary or factor variables, we can turn a numeric variable into a discrete variable with cut().

We specify the cut-off points with the breaks argument and then label the factors with the labels argument:

data_90 <- data_90 %>% 
  dplyr::mutate(instability_discrete = cut(instability_continuous, 
                                           breaks = c(-Inf, 0.3, 0.7, Inf), 
                                           labels = c("low_instability", "mid_instability", "high_instability")))
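A quick sanity check that the breaks divided the variable the way we wanted:

table(data_90$instability_discrete, useNA = "ifany")  # counts per category, plus any NAs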

Move year variable to first column in dataframe with dplyr package in R

A quick hack to create a year variable from a string variable and place it as column number one in your dataframe.

Initial dataset

The first problem with my initial dataset is that the date is a string of numbers, and I want only the first four characters (the year) from that string.

# take characters 1 through 4 of the date string, then convert to numeric
data$year <- substr(data$date, 1, 4)
data$year <- as.numeric(data$year)

Now I want to place it at the beginning to keep things more organised:

data <- data %>% 
  select(year, everything())

And we are done!

Much better.
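As an aside, if you have dplyr version 1.0 or later, the relocate() function does the same job and reads a little more clearly; it moves columns to the front by default:

data <- data %>% 
  relocate(year)   # year becomes column number one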

Graph political party manifestos on ideological spectrum with manifestoR package in R

The Manifesto Project maintains a database of political party manifestos for around 50 countries, covering free and democratic elections since 1945.

To access the manifestos, we install and load the manifestoR package, which provides an API between R and The Manifesto Project site.

(What’s an API?)

On the website, we can navigate the Manifesto Project database and search for any country for a given time period in the Data Dashboard section on the site. For example, I search for Ireland from 2012 until the present day and I can see the most recent manifestos put forward by the parties.

We can see the number code for each party. (We will need to use these when downloading the texts into R via the API).

Click here for the full CRAN PDF which gives more information for the manifestoR package.

So first, install the packages.

install.packages("manifestoR")
library(manifestoR)

But we cannot just access the data right away.

In order to download any manifesto text from the database, we need to first set up an account on the website and download an API key. So the first step is to go to the website and sign up.

Then you go to your website profile and click download API Key file (txt).

Then we go back on R and write in:

mp_key <- mp_setapikey(file.choose())

Choose the txt file downloaded from the website … and hopefully you should be all set up to access all the manifesto text data.

Now, we can choose the manifestos we want to download.

Using the mp_corpus() function, we can choose the country and date that we want and download lists of all the texts.

manifestos_2016 <- mp_corpus(countryname == "Ireland" & edate > as.Date("2016-02-01"))

Note that the date I enter into the mp_corpus() function corresponds with the date from the Manifesto Project website. If there is a way to look this up directly through R, please let me know!

If we look at the manifestos_2016 object we just downloaded:

View(manifestos_2016)

We see we have ten lists. Say, for example, the party I want to look at is Fine Gael, I need the party ID code assigned by the Manifesto Project.

Similar to how I got the date, I can look up the Data Dashboard to find the party code for Fine Gael. Or I can search for the ID via this site.

It was funny to see that all the names of the Irish parties are in English, which I never hear! Fine Gael is Irish for Tribes of Ireland and I guess Family is another way to translate that.

The ID code for the party is 53520, which is the seventh list. So index this list and create a new tibble structure for the manifesto text.

fg_2016 <- as_tibble(manifestos_2016[7])
View(fg_2016)

The cmp_code refers to the value that coders from the Manifesto Project have assigned to each topic or policy position that the party puts forward in their text.

For example 104 means that the party speaks positively of the military, whereas 105 means that the party speaks of the military in a negative way.

I don’t know what the eu_code is in reference to, but it is all blank in the 2016 coding…
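To get a feel for the coding scheme, a minimal sketch that tallies which policy codes appear most often in the Fine Gael text with dplyr's count() (assuming fg_2016 keeps the cmp_code column shown above):

fg_2016 %>%
  count(cmp_code, sort = TRUE)   # most frequent policy codes first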

In another post, I hope to write about text mining and sentiment analysis with manifestos. But I’ll leave that to another day.

An alternative way to download and store the manifestos is to download everything from the database:

all_manifestos <- mp_maindataset()

And I want to subset all Irish parties only:

ireland_manifestos <- all_manifestos[which(all_manifestos$countryname == "Ireland"),]

With these all ready, there are some really interesting functions we can run with the data and the coding of the texts by the Manifesto Project.

For example, the rile() function, which calculates the Right-Left (RILE) score.

Essentially, higher RILE scores indicate that the party leans more right on the ideological spectrum, with a maximum score of +100 if the whole manifesto is devoted to ‘right’ categories. Conversely, lower RILE scores indicate that the party leans more left (a score of -100 would mean the entire manifesto puts forward exclusively ‘left’ categories).

Of course, it is a crude instrument to compress such a variety of social, political and economic positions onto a single dimension. But as long as we keep that caveat in mind, it is a handy shorthanded approach to categorising the different parties.

Additionally, Mölder (2016), in his paper “The validity of the RILE left–right index as a measure of party policy”, argues that the index is not very valid. Other researchers have also found that the RILE index inaccurately places political parties in policy space, as manifestos are not actual binding policies but rather directional signals and aspirations (see Pelizzo’s (2003) paper, “Party positions or party direction? An analysis of party manifesto data”, for more on this).

So take these figures with a grain of salt. But it is interesting to visualise the trends.

I continue subsetting until I have only the largest parties in Ireland and put them into big_parties object. The graph gets a bit hectic when including all the smaller parties in the country since 1949. Like in most other countries, party politics is rarely simple.
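A minimal sketch of that subsetting step (the exact partyname spellings are assumptions; check the values in ireland_manifestos first):

big_parties <- ireland_manifestos[which(ireland_manifestos$partyname %in% 
    c("Fianna Fail", "Fine Gael", "Labour Party", "Green Party", "Sinn Fein")),]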

Next I can simply create a new rile_index variable and graph it across time.

big_parties$rile_index <- rile(big_parties)

The large chunk in the geom_text() command is there to show the name of each party only at the end of its line. Otherwise, the graph is far busier and far less readable.

graph_rile <- big_parties %>%
  group_by(partyname) %>%
  ggplot(aes(x = as.Date(edate), y = rile_index, color = partyname)) +
  geom_point() + 
  geom_line() +
  geom_text(data = . %>%
              arrange(desc(edate)) %>%
              group_by(partyname) %>%
              slice(1),
            aes(label = partyname), position = position_jitter(height = 2), 
            hjust = 0, size = 5, angle = 40) +
  ggtitle("Relative Left Right Ideological Position of Major Irish Parties 1949 - 2016") +
  xlab("Year") + 
  ylab("Right Left (RILE) Index")

While the graph is a bit on the small side, what jumps out immediately is that there has been a convergence of the main political parties toward the ideological centre. In fact, they are all nearing left of centre. The most right-wing a party has ever been in Ireland was Fine Gael in the 1950s, with a RILE score nearing 80. Given the party's history and its predecessor, the "Blueshirts" group, this checks out.

The Labour Party has consistently been very left wing, with its most left-leaning RILE scores of around -40 in the early 1950s and again in the early 1980s.

Ireland joined the European Economic Community (the predecessor of the European Union) in 1973, has granted free third-level education to all its citizens since the 1990s and, in general, has seen a consistent trend of secularisation in society. These factors could all account for the converging lines in the graph, for various socio-economic reasons.

In recent years Ireland has become more socially liberal (as exemplified by the legalisation of abortion and of same-sex marriage), so these lines do not surprise. Additionally, Ireland has not had full control over monetary policy since joining the euro, so again, this mitigates against extreme economic positions laid out in manifestos.

References

Mölder, M. (2016). The validity of the RILE left–right index as a measure of party policy. Party Politics, 22(1), 37-48.

Pelizzo, R. (2003). Party positions or party direction? An analysis of party manifesto data. West European Politics, 26(2), 67-89.

Make word clouds with tidytext and gutenbergr in R

This blog will run through how to make a word cloud with Mill’s “On Liberty”, a treatise which argues that the state should never restrict people’s individual pursuits or choices (unless such choices harm others in society).

First, we install and load the gutenbergr package to access the catalogue of books from Project Gutenberg. The gutenberg_metadata dataset provides access to the website and its collection of around 60,000 digitised books in the public domain, whose U.S. copyright has expired. This website is an amazing resource in its own right.

install.packages("gutenbergr")
library(gutenbergr)

Next we choose a book we want to download. We can search through the Gutenberg Project catalogue (with the help of the dplyr package). In the filter( ) function, we can search for a book in the library by supplying a string search term in “quotations”. Click here to see the CRAN package PDF. For example, we can look for all the books written by John Stuart Mill (search second name, first name) on the website:

mill_all <- gutenberg_metadata %>%
  filter(author == "Mill, John Stuart")

Or we can search for the title of the book:

mill_liberty <- gutenberg_metadata %>%
  filter(title == "On Liberty") %>%
  gutenberg_download() # download the actual text of the book, one row per line

We now have a tibble of all the sentences in the book!

View(mill_liberty)

We see there are two variables in this new dataframe and 4,703 string rows.

To extract every word as a unit, we need the unnest_tokens( ) function from the tidytext package:

install.packages("tidytext")
library(tidytext)

We take our mill_liberty object from above and indicate we want the unit to be words from the text. And we create a new mill_liberty_words object to hold the book in this format.

mill_liberty_words <- mill_liberty %>%
    unnest_tokens(word, text) %>% # one row for every word in the text
    anti_join(stop_words)         # remove common stop words like "the", "of" and "to"

We now have a row for each word, totalling to 17,576 words! This excludes words such as “the”, “of”, “to” and all those small sentence builder words.

Now that we have every word from "On Liberty", we can see which words appear most frequently! We can create a list with the count( ) function:

count_word <- mill_liberty_words %>%
   count(word, sort = TRUE)

The default for a tibble object is to print the first ten observations. If we want to see more, we can increase n in the print() function.

print(count_word, n = 30)

An alternative to this is making a word cloud to visualise the relative frequencies of these terms in the text.

For this, we need to install the wordcloud package.

install.packages("wordcloud")
library(wordcloud)

To get some nice colour palettes, we can also install the RColorBrewer package:

install.packages("RColorBrewer")
library(RColorBrewer)

Check out the CRAN PDF on the wordcloud package to tailor your specifications.

For example, the rot.per argument indicates the proportion of words we want rotated 90 degrees. In my example, I have 30% of the words vertical. I reran the code until the main word was horizontal, just so it pops out more.

With the scale option, we can indicate the range of word sizes (for example, from size 4 down to size 0.5), as in the example below.

We can choose how many words we want to include in the word cloud with the max.words argument.

color_number <- 20
color_palette <- colorRampPalette(brewer.pal(8, "Paired"))(color_number)

# use the word frequencies we counted above
wordcloud(words = count_word$word, freq = count_word$n, min.freq = 2,
          scale = c(4, 0.5),
          max.words = 200, random.order = FALSE, rot.per = 0.3, 
          colors = color_palette)

We can see straight away that the most frequent word in the book is "opinion". Given that this book forms one of the most rigorous defenses of the ideas of freedom of speech and a free press, and argues against the a priori censorship of dissent in society, these words check out.

If we run the code with random.order=TRUE option, the cloud would look like this:

And you can play with proportions, colours, sizes and word placement until you find one you like!

As another example, this word cloud highlights the most frequently used words in John Stuart Mill’s “Utilitarianism”:

Graph Google search trends with gtrendsR package in R

Google Trends is a search trends feature. It shows how frequently a given search term is entered into Google’s search engine, relative to the site’s total search volume over a given period of time.

(So note: because the results are all relative to the other search terms in the time period, the dates you provide to the gtrendsR function will change the shape of your graph and the relative percentage frequencies on the y axis of your plot.)

To scrape data from Google Trends, we use the gtrends() function from the gtrendsR package and the get_interest() function from the trendyy package (a handy wrapper package for gtrendsR).

If necessary, also load the tidyverse and ggplot2 packages.

install.packages("gtrendsR")
install.packages("trendyy")
library(tidyverse)
library(ggplot2)
library(gtrendsR)
library(trendyy)

To scrape the Google trend data, call the trendy() function and write in the search terms.

For example, here we search for the term “Kamala Harris” during the period from 1st of January 2019 until today.

If you want to check out more specifications for the package, you can check out the package PDF here. For example, we can change the geographical region (a US state or a country, for example) with the geo argument.

We can also change the parameters of the time argument; we can specify the time span of the query with any one of the following strings:

  • “now 1-H” (previous hour)
  • “now 4-H” (previous four hours)
  • “today+5-y” (previous five years; the default)
  • “all” (since the beginning of Google Trends (2004))

If you don’t supply a string, the default is five years of search data.
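For instance, a minimal sketch calling gtrendsR directly with the geo and time arguments (here, the previous twelve months in the United States):

us_trends <- gtrends("Kamala Harris", geo = "US", time = "today 12-m")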

kamala <- trendy("Kamala Harris", "2019-01-01", "2020-08-13") %>% get_interest()

We call the get_interest() function to save this data from Google Trends into a data.frame version of the kamala object. If we didn’t execute this last step, the data would be in a form that we cannot use with ggplot().

View(kamala)

In this data.frame, there is a date variable for each week and a hits variable that shows the interest during that week. Remember, this hits figure shows how frequently a given search term is entered into Google’s search engine relative to the site’s total search volume over a given period of time.

We will use these two variables to plot the y and x axis.

To look at the search trends in relation to events during the Kamala Harris presidential campaign over 2019, we can add vertical lines along the date axis with a data.frame, which we can call kamala_events.

kamala_events <- data.frame(
  date = as.Date(c("2019-01-21", "2019-06-25", "2019-12-03", "2020-08-12")), 
  event = c("Launch Presidential Campaign", "First Primary Debate", 
            "Drops Out Presidential Race", "Chosen as Biden's VP"))

Note the very specific order the as.Date() function requires: "YYYY-MM-DD".

Next, we can graph the trends, using the above date and hits variables:

ggplot(kamala, aes(x = as.Date(date), y = hits)) +
  geom_line(colour = "steelblue", size = 2.5) +
  geom_vline(data = kamala_events, mapping = aes(xintercept = date), color = "red") +
  geom_text(data = kamala_events, mapping = aes(x = date, y = 0, label = event), 
            size = 4, angle = 40, vjust = -0.5, hjust = 0) + 
  xlab(label = "Search Dates") + 
  ylab(label = "Relative Hits %")

Which produces:

Super easy and a quick way to visualise the ups and downs of Kamala Harris’ political career over the past few months, operationalised as the relative frequency with which people Googled her name.

If I had chosen different dates, the relative hits as shown on the y axis would be different! So play around with it and see how the trends change when you increase or decrease the time period.

Plot marginal effects with sjPlot package in R

Without examining interaction effects in your model, you can sometimes be wrong about the real relationship between your variables.

This is particularly evident in political science when we consider, for example, the impact of regime type on the relationship between our dependent and independent variables. The nature of the government can really impact our analysis.

For example, suppose I look at the relationship between anti-government protests and executive bribery.

I would expect to see that the higher the bribery score in a country's government, the higher the prevalence of people protesting against this corrupt authority. Basically, people are angry when their government is corrupt, and they make this very clear by protesting on the streets.

First, I will describe the variables I use and their data type.

The dependent variable, democracy_protest, is an interval score based on the question: "In this year, how frequent and large have events of mass mobilization for pro-democratic aims been?"

The main independent variable, executive_bribery, is another interval score, based on the question: "How clean is the executive (the head of government, and cabinet ministers), and their agents, from bribery (granting favors in exchange for bribes, kickbacks, or other material inducements)?"

Higher scores indicate cleaner governing executives.

So, let’s run a quick regression to examine this relationship:

summary(protest_model <- lm(democracy_protest ~ executive_bribery, data = data_2010))

Examining the results of the regression model:

We see that there is indeed a negative relationship: the cleaner the government, the less likely people in the country are to protest in the year under examination. This confirms our hypothesis above.

However, examining the R2, we see that less than 1% of the variance in protest prevalence is explained by executive bribery scores.

Not very promising.

Is there an interaction effect with regime type? We can look at a scatterplot and see if the different regime type categories cluster in distinct patterns (see the code sketch after the list below).

The four regime type categories are

  • purple: liberal democracy (such as Sweden or Canada)
  • teal: electoral democracy (such as Turkey or Mongolia)
  • khaki green: electoral autocracy (such as Georgia or Ethiopia)
  • red: closed autocracy (such as Cuba or China)
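A minimal sketch of that scatterplot (assuming regime_type is stored as a factor in data_2010):

ggplot(data_2010, aes(x = executive_bribery, y = democracy_protest, color = regime_type)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE, color = "black")   # the overall best fitted line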

The color clusters show that the regime type categories do indeed cluster.

  • Liberal democracies (purple) cluster at the top left hand corner. Higher scores in clean executive index and lower prevalence in pro-democracy protesting.
  • Electoral democracies (teal) cluster in the middle.
  • Electoral autocracies (khaki green) cluster at the bottom of the graph.
  • The closed autocracy countries (red) seem to show an upward trend, opposite to the overall line of best fit.

So let’s examine the interaction effect between regime types and executive corruption with mass pro-democracy protests.

Fit the model and add the * interaction effect:

summary(protest_model_2 <-lm(democracy_protest ~ executive_bribery*regime_type, data = data_2010))

Adding the regime type variable, the R2 shoots up to 27%.

The interaction effect appears to be significant only between clean executive scores and liberal democracies. The cleaner the country's executive, the more the prevalence of mass mobilization and protests decreases (a coefficient of -0.98), and this is a statistically significant relationship.

The initial relationship we saw in the first model, the simple relationship between clean executive scores and protests, has disappeared. There appears to be no relationship between bribery and protests in the semi-autocratic countries (those that are not quite democratic but not fully despotic).

Let’s graph out these interactions.

In the plot_model() function, first type the name of the interaction model we fitted above, protest_model_2.

Next, choose the type. For the different type arguments, scroll to the bottom of this blog post. We use the type = "pred" argument, which plots the marginal effects.

Marginal effects tell us how the dependent variable changes when a specific independent variable changes, if the other covariates are held constant. The two terms typed here are the two variables we added to the model with the * interaction term.

install.packages("sjPlot")
library(sjPlot)

plot_model(protest_model_2, type = "pred", 
           terms = c("executive_bribery", "regime_type"), 
           title = "Predicted values of Mass Mobilization Index",
           legend.title = "Regime type")

Looking at the graph, we can see that the relationship changes across regime type. For liberal democracies (purple), there is a negative relationship: low scores on the clean executive index are related to a high prevalence of protests. So, we could say that when people in democracies see corrupt actions, they are more likely to protest against them.

However, with closed autocracies (red) there is the opposite trend: very corrupt closed autocracies appear not to have high levels of protests.

This would make sense from a theoretical perspective: even if you want to protest in a very corrupt country, the risk to your safety or livelihood is often too high and you don’t bother. Also the media is probably not free so you may not even be aware of the extent of government corruption.

It seems that when no democratic features are available to the people (free media, freedom of assembly, an active civil society, strong civil rights protections, freedom of expression et cetera), the barriers to protesting are too high. However, as the corruption index improves and executives are seen as "cleaner", these democratic features may become more accessible to them.

If we only looked at the relationship between the two variables and ignored this important interaction effect, we would incorrectly conclude that cleaner executives mean fewer protests everywhere, when in fact that relationship only holds in liberal democracies.

Of course, panel data would be better to help separate any potential causation from the correlations we can see in the above graphs.

The blue line (electoral autocracies) is almost flat. This matches the regression model, which found the coefficient for electoral autocracy to be 0.001. Virtually non-existent.

Different Plot Types

type = "std" – Plots standardized estimates.

type = "std2" – Plots standardized estimates, however, standardization follows Gelman’s (2008) suggestion, rescaling the estimates by dividing them by two standard deviations instead of just one. Resulting coefficients are then directly comparable for untransformed binary predictors.

type = "pred" – Plots estimated marginal means (or marginal effects). Simply wraps ggpredict.

type = "eff"– Plots estimated marginal means (or marginal effects). Simply wraps ggeffect.

type = "slope" and type = "resid" – Simple diagnostic-plots, where a linear model for each single predictor is plotted against the response variable, or the model’s residuals. Additionally, a loess-smoothed line is added to the plot. The main purpose of these plots is to check whether the relationship between outcome (or residuals) and a predictor is roughly linear or not. Since the plots are based on a simple linear regression with only one model predictor at the moment, the slopes (i.e. coefficients) may differ from the coefficients of the complete model.

type = "diag" – For Stan-models, plots the prior versus posterior samples. For linear (mixed) models, plots for multicollinearity-check (Variance Inflation Factors), QQ-plots, checks for normal distribution of residuals and homoscedasticity (constant variance of residuals) are shown. For generalized linear mixed models, returns the QQ-plot for random effects.

Check for multicollinearity with the car package in R

Packages we will need:

install.packages("car")
library(car)

When one independent variable is highly correlated with another independent variable (or with a combination of independent variables), the marginal contribution of that independent variable is influenced by other predictor variables in the model.

And so, as a result:

  • Estimates for regression coefficients of the independent variables can be unreliable.
  • Tests of significance for regression coefficients can be misleading.

To check for a multicollinearity problem in our model, we need the vif() function from the car package in R. VIF stands for variance inflation factor. It measures how much the variance of any one of the coefficients is inflated due to multicollinearity in the overall model.
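Under the hood, the VIF for a given predictor is 1 / (1 − R²) from an auxiliary regression of that predictor on all the other predictors. A minimal sketch by hand, using the variable names from the model below:

# auxiliary regression: one predictor regressed on the others
aux_model <- lm(clientelism_index ~ vote_buying_score + democracy_score, data = data_2010)

1 / (1 - summary(aux_model)$r.squared)   # should match the clientelism entry from car::vif()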

As a rule of thumb, a vif score over 5 is a problem. A score over 10 should be remedied and you should consider dropping the problematic variable from the regression model or creating an index of all the closely related variables.

This blog post will look only at the VIF score. Click here to see how to interpret various other multicollinearity tests in the mctest package in addition to the VIF score.

Back to our model: I want to know whether high levels of clientelism, high levels of vote buying and low democracy scores lead to executive embezzlement.

So I fit a simple linear regression model (and look at the output with the stargazer package):

summary(embezzlement_model_1 <- lm(executive_embezzlement ~ clientelism_index + vote_buying_score + democracy_score, data = data_2010))

library(stargazer)
stargazer(embezzlement_model_1, type = "text")

I suspect that clientelism and vote buying variables will be highly correlated. So let’s run a test of multicollinearity to see if there is any problems.

car::vif(embezzlement_model_1)

The VIF scores for the three independent variables are:

The clientelism index and vote buying VIF scores are both very high, and the best remedy is to remove one of the variables from the regression. Since vote buying is considered one aspect of a clientelist regime, it is probably overlapping with some of the variance in the embezzlement score that the clientelism index is already explaining in the model.

So re-run the regression without the vote buying variable.

# the same model as before, minus the vote buying variable
summary(embezzlement_model_2 <- lm(executive_embezzlement ~ clientelism_index + democracy_score, data = data_2010))
stargazer(embezzlement_model_1, embezzlement_model_2, type = "text")

Comparing the two regressions:

And running a VIF test on the second model without the vote buying variable:

car::vif(embezzlement_model_2)

These scores are far below 5, so there is no longer any big problem of multicollinearity in the second model.

Click here to quickly add VIF scores to our regression output table in R with jtools package.

Plus, looking at the adjusted R2, which allows us to compare the two models, we see that the difference is very small, so we did not lose much predictive power by dropping a variable. Rather, we have minimised the issue of highly correlated independent variables and the resulting inability to tease out the real relationships with our dependent variable of interest.

tl;dr: As a rule of thumb, a vif score over 5 is a problem. A score over 10 should be remedied (and you should consider dropping the problematic variable from the regression model or creating an index of all the closely related variables).

Click here to run stepwise regression analysis to help decide which problematic variables we can drop from our model (based on AIC scores)