# Graph countries on the political left–right spectrum

In this post, we compare countries on the left–right political spectrum and graph the trends.

In the European Social Survey, they ask respondents to indicate where they place themselves on the political spectrum with this question: “In politics people sometimes talk of ‘left’ and ‘right’. Where would you place yourself on this scale, where 0 means the left and 10 means the right?”

``round <- import_all_rounds()``

Extract all the lists. I just want three of the variables for my graph.

``````r1 <- round[[1]]

r1 <- data.frame(country = r1$cntry, round = r1$essround, lrscale = r1$lrscale)``````

Do this for all the `data.frames` and `rbind()` them all together.

``round_df <- rbind(r1, r2, r3, r4, r5, r6, r7, r8, r9)``
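As an alternative to writing out `r1` through `r9` by hand, the same result can be sketched with `lapply()` (this is my own shortcut, assuming `round` is the list returned by `import_all_rounds()` and each element contains the `cntry`, `essround` and `lrscale` columns):

```r
# Build the three-variable data.frame for every round, then stack them;
# equivalent to creating r1...r9 individually and rbind()-ing them together
round_df <- do.call(rbind, lapply(round, function(r) {
  data.frame(country = r$cntry,
             round   = r$essround,
             lrscale = r$lrscale)
}))
```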

Convert all the variables to suitable types:

``````round_df$country <- as.factor(round_df$country)
round_df$round <- as.numeric(round_df$round)
round_df$lrscale <- as.numeric(round_df$lrscale)``````

Next we find the mean score for all respondents in each of the countries for each year.

``````round_df %>%
dplyr::filter(!is.na(lrscale)) %>%
dplyr::group_by(country, round) %>%
dplyr::mutate(mean_lr = mean(lrscale)) -> round_df
``````

We keep only one of the values for each country at each survey year.

``````round_df <- round_df[!duplicated(round_df$mean_lr),]
``````
``````

Create a vector of hex colors that correspond to the countries I want to look at: Ireland, France, the UK and Germany.

``````my_palette <- c( "DE" = "#FFCE00", "FR" = "#001489", "GB" = "#CF142B", "IE" = "#169B62")
``````

And graph the plot:

``````library(ggthemes)
library(ggimage)

lrscale_graph <- round_df %>%
dplyr::filter(country == "IE" | country == "GB" | country == "FR" | country == "DE") %>%
ggplot(aes(x= round, y = mean_lr, group = country)) +
geom_line(aes(color = factor(country)), size = 1.5, alpha = 0.5) +
ggimage::geom_flag(aes(image = country), size = 0.04) +
scale_color_manual(values = my_palette) +
scale_x_continuous(name = "Year", breaks = 1:9, labels = c("2002","2004","2006","2008","2010","2012","2014","2016","2018")) +
labs(title = "Where would you place yourself on this scale,\n where 0 means the left and 10 means the right?",
subtitle = "Source: European Social Survey, 2002 - 2018",
fill="Country",
x = "Year",
y = "Left - Right Spectrum")

lrscale_graph + guides(color=guide_legend(title="Country")) + theme_economist()
``````

# Download European Social Survey data with the essurvey package in R

The European Social Survey (ESS) measures attitudes in thirty-ish countries (depending on the year) across the European continent. It has been conducted every two years since 2002.

The survey consists of a core module and two or more ‘rotating’ modules, on social and public trust; political interest and participation; socio-political orientations; media use; moral, political and social values; social exclusion, national, ethnic and religious allegiances; well-being, health and security; demographics and socio-economics.

So lots of fun data for political scientists to look at.

``````install.packages("essurvey")
library(essurvey)``````

``set_email("rforpoliticalscience@gmail.com")``

Don’t forget the email address goes in as a string in “quotation marks”.

Show what countries are in the survey with the `show_countries()` function.

``show_countries()``
``` "Albania"     "Austria"    "Belgium"
 "Bulgaria"    "Croatia"     "Cyprus"
 "Czechia"     "Denmark"     "Estonia"
 "Finland"    "France"      "Germany"
 "Greece"     "Hungary"     "Iceland"
 "Ireland"    "Israel"      "Italy"
 "Kosovo"     "Latvia"      "Lithuania"
 "Luxembourg" "Montenegro"  "Netherlands"
 "Norway"     "Poland"      "Portugal"
 "Romania" "Russian Federation" "Serbia"
 "Slovakia"   "Slovenia"     "Spain"
 "Sweden"     "Switzerland"  "Turkey"
 "Ukraine"    "United Kingdom"
```

It’s important to know that country names are case sensitive and you can only use the name printed out by `show_countries()`. For example, you need to write “Russian Federation” to access Russian survey data; if you write “Russia”…

Using these country names, we can download specific rounds or waves (i.e. survey years) with `import_country()`. For example, we could choose the two most recent rounds, the 8th (from 2016) and the 9th (from 2018).
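As a sketch, that call would look like this (the same `import_country()` function is used later in this post):

```r
# Download only rounds 8 (2016) and 9 (2018) for Ireland
recent_rounds <- import_country(country = "Ireland", rounds = c(8, 9))
```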

``ire_data <- import_all_cntrounds("Ireland")``

The resulting data comes in the form of nine lists, one for each round.

These rounds correspond to the following years:

• ESS Round 9 – 2018
• ESS Round 8 – 2016
• ESS Round 7 – 2014
• ESS Round 6 – 2012
• ESS Round 5 – 2010
• ESS Round 4 – 2008
• ESS Round 3 – 2006
• ESS Round 2 – 2004
• ESS Round 1 – 2002
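Since the rounds run every two years from 2002, the survey year can be recovered from the round number with a one-line helper (my own convenience function, not part of the `essurvey` package):

```r
# Round 1 -> 2002, round 5 -> 2010, round 9 -> 2018
ess_year <- function(essround) 2000 + 2 * essround

ess_year(9)   # 2018
```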

I want to compare the first round and the most recent round to see if Irish people’s views have changed since 2002. In 2002, Ireland was in the middle of an economic boom that we called the “Celtic Tiger”. People did mad things like buy panini presses and second houses in Bulgaria to resell. Then the 2008 financial crash hit the country very hard.


Ireland in 2018 was a very different place. So it will be interesting to see if these social changes translated into attitude changes.

First, we use the `import_country()` function to download data from ESS. Specify the country and rounds you want to download.

``ire <- import_country(country = "Ireland", rounds = c(1, 9))``

The resulting `ire` object is a list, so we’ll need to extract the two `data.frames` from the list:

``````ire_1 <- ire[[1]]

ire_9 <- ire[[2]]``````

The exact same questions are not asked every year in the ESS; there are rotating modules, and questions are sometimes added or dropped. So to merge round 1 and round 9, we first find the common columns with the `intersect()` function.

``common_cols <- intersect(colnames(ire_1), colnames(ire_9))``

And then bind subsets of the two `data.frames` together that have the same columns with the `rbind()` function.

``````ire_df <- rbind(subset(ire_1, select = common_cols),
subset(ire_9, select = common_cols))``````

Now with my merged `data.frame`, I only want to look at a few of the variables and clean up the dataset for the analysis.

Click here to look at all the variables in the different rounds of the survey.

``````att_df <- data.frame(country = ire_df$cntry,
round = ire_df$essround,
imm_same_eth = ire_df$imsmetn,
imm_diff_eth = ire_df$imdfetn,
imm_poor = ire_df$impcntr,
imm_econ = ire_df$imbgeco,
imm_culture = ire_df$imueclt,
imm_qual_life = ire_df$imwbcnt,
left_right = ire_df$lrscale)

class(att_df$imm_same_eth)``````

All the variables in the dataset are a special class called `haven_labelled`. So we must convert them to numeric variables with a quick function. We exclude the first variable because we want to keep the country name as a string character variable.

``att_df[2:9] <- lapply(att_df[2:9], function(x) as.numeric(as.character(x)))``

We can look at the distribution of our variables and count how many missing values there are with the `skim()` function from the `skimr` package.

``````library(skimr)

skim(att_df)
``````

We can run a quick t-test to compare the mean attitudes to immigrants on the statement: “Immigrants make country worse or better place to live” across the two survey rounds.

Lower scores indicate an attitude that immigrants undermine Ireland’s quality of life and higher scores indicate agreement that they enrich it!

``t.test(att_df$imm_qual_life ~ att_df$round)``

In a future blog post, I will look at converting the raw output of R into publishable tables.

The results of the independent-sample t-test show that if we compare Ireland in 2002 and Ireland in 2018, there has been a statistically significant increase in positive attitudes towards immigrants and belief that Ireland’s quality of life is more enriched by their presence in the country.

As I am currently an immigrant in a foreign country myself, I am glad to come from a country that sees the benefits of immigrants!

If we load the `ggpubr` package, we can graphically look at the difference in mean attitude scores.

``````library(ggpubr)

box1 <- ggpubr::ggboxplot(att_df, x = "round", y = "imm_qual_life", color = "round", palette = c("#d11141", "#00aedb"),
ylab = "Attitude", xlab = "Round")

box1 + stat_compare_means(method = "t.test")``````

It’s not the most glamorous graph but it conveys the shift in Ireland to more positive attitudes to immigration!

I suspect that a country’s economic growth correlates with attitudes to immigration.

So let’s take the mean annual score values.

``````ire_agg <- ireland[!duplicated(ireland$mean_imm_qual_life),]
ire_agg <- ire_agg %>%
select(year, everything())``````

Next we can take data from Quandl website on annual Irish GDP growth (click here to learn how to access economic data via a Quandl API on R.)

``````library(Quandl)

gdp <- Quandl('ODA/IRL_LE', start_date='2002-01-01', end_date='2020-01-01', type="raw")
``````

Create a year variable from the date variable

``````gdp$year <- substr(gdp$Date, start = 1, stop = 4)
``````
``````

Add a year variable to the `ire_agg` data.frame that corresponds to the ESS survey rounds.

``````year =c("2002","2004","2006","2008","2010","2012","2014","2016","2018")
year <- data.frame(year)
ire_agg <- cbind(ire_agg, year)``````

Merge the GDP and ESS datasets

``````ire_agg <- merge(ire_agg, gdp, by.x = "year", by.y = "year", all.x = TRUE)
``````

Scale the GDP and immigrant attitudes variables so we can put them on the same plot.

``````ire_agg$scaled_gdp <- scale(ire_agg$Value)

ire_agg$scaled_imm_attitude <- scale(ire_agg$mean_imm_qual_life)
``````

In order to graph both variables on the same graph, we turn the two scaled variables into two factors of a single variable.

``````ire_agg <- ire_agg %>%
select(year, scaled_imm_attitude, scaled_gdp) %>%
gather(key = "variable", value = "value", -year)``````

Next, we can change the names of the factors

``````library(plyr)

ire_agg$variable <- revalue(ire_agg$variable, c("scaled_gdp" = "GDP (scaled)", "scaled_imm_attitude" = "Attitudes (scaled)"))
``````

And finally, we can graph the plot.

The `geom_rect()` function graphs the coloured rectangles on the plot. I take colours from this color-hex website: the green rectangles for times of economic growth and red for times of recession. Make sure the `geom_rect()` calls come before the `geom_line()`.

``````library(ggthemes)

ggplot(ire_agg, aes(x = year, y = value, group = variable)) + geom_rect(aes(xmin= "2008",xmax= "2012",ymin=-Inf, ymax=Inf),fill="#d11141",colour=NA, alpha=0.01) +
geom_rect(aes(xmin= "2002" ,xmax= "2008",ymin=-Inf, ymax=Inf),fill="#00b159",colour=NA, alpha=0.01) +
geom_rect(aes(xmin= "2012" ,xmax= "2020",ymin=-Inf, ymax=Inf),fill="#00b159",colour=NA, alpha=0.01) +
geom_line(aes(color = as.factor(variable), linetype = as.factor(variable)), size = 1.3) +
scale_color_manual(values = c("#00aedb", "#f37735")) +
geom_point() +
geom_text(data=. %>%
arrange(desc(year)) %>%
group_by(variable) %>%
slice(1), aes(label=variable), position= position_jitter(height = 0.3), vjust =0.3, hjust = 0.1,
size = 4, angle= 0) + ggtitle("Relationship between Immigration Attitudes and GDP Growth") + labs(value = " ") + xlab("Year") + ylab("scaled") + theme_hc()``````

And we can see that there is a relationship between attitudes to immigrants in Ireland and Irish GDP growth. When GDP is growing, Irish people see that immigrants improve quality of life in Ireland and vice versa. The red section of the graph corresponds to the financial crisis.

# Scrape NATO defense expenditure data from Wikipedia with the rvest package in R

We can all agree that Wikipedia is often our go-to site when we want to get information quickly. When we’re doing IR or Poli Sci research, Wikipedia will most likely have the most up-to-date data compared to other databases on the web that can quickly become out of date.

So in R, we can scrape a table from Wikipedia and turn it into a dataframe with the `rvest` package.

First, we copy and paste the URL of the Wikipedia page we want to scrape into the `read_html()` function as a string:

``````library(rvest)

nato_members <- read_html("https://en.wikipedia.org/wiki/Member_states_of_NATO")``````

Next we save all the tables on the Wikipedia page as a list. Set `header = TRUE`.

``nato_tables <- nato_members %>% html_table(header = TRUE, fill = TRUE)``

The table that I want is the third table on the page, so use [[two brackets]] to access the third list.

``nato_exp <- nato_tables[[3]]``

The dataset is not perfect, but it is handy to have access to data this up-to-date. It comes from the most recent NATO report, published in 2019.

Some problems we will have to fix:

1. The first row is a messy replication of the header / more information across two cells in Wikipedia.
2. The headers are long and convoluted.
3. There are a few values entered as “N/A” in the dataset, which R thinks are strings.
4. All the numbers have commas, so R thinks all the numeric values are strings.

There are a few NA values that I would not want to impute because they are probably zero. Iceland has no armed forces and maintains only a small coast guard. North Macedonia joined NATO in March 2020, so its data are not yet complete.

So first, let’s do some quick data cleaning:

Clean the variable names to remove symbols and add underscores with a function from the `janitor` package:

``````library(janitor)
nato_exp  <- nato_exp %>% clean_names()``````

Delete the first row, which contains some extra header text:

``nato_exp <- nato_exp[-c(1),]``

Rename the headers to better reflect the original Wikipedia table headings. In this `rename()` function,

• the first string is the variable name we want and
• the second string is the original heading as it was cleaned by the `clean_names()` function above:
``````nato_exp <- nato_exp %>%
rename("def_exp_millions" = "defence_expenditure_us_f",
"def_exp_gdp" = "defence_expenditure_us_f_2",
"def_exp_per_capita" = "defence_expenditure_us_f_3",
"population" = "population_a",
"gdp" = "gdp_nominal_e",
"personnel" = "personnel_f")``````

Next, turn all the `N/A` strings into proper `NA` values. The `na_strings` object we create can be used for other varieties of pesky missing data, not just the N/A string.

``````library(naniar)

na_strings <- c("N A", "N / A", "N/A", "N/ A", "Not Available", "Not available")

nato_exp <- nato_exp %>% replace_with_na_all(condition = ~.x %in% na_strings)``````

Remove all the commas from the number columns and convert the character strings to numeric values with a quick function we apply to all numeric columns in the data.frame.

``````remove_comma <- function(x) {as.numeric(gsub(",", "", x, fixed = TRUE))}

nato_exp[2:7] <- sapply(nato_exp[2:7], remove_comma)   ``````

Next, we can calculate the average NATO score of all the countries (excluding the `member_state` variable, which is a character string).

We’ll exclude the NATO total column (as it is not a `member_state` but an aggregate of them all) and the data about Iceland and North Macedonia, which have missing values.

``````nato_average <- nato_exp %>%
filter(member_state != 'NATO' & member_state != 'Iceland' & member_state != 'North Macedonia') %>%
summarise_if(is.numeric, mean, na.rm = TRUE)``````

Re-arrange the columns so the two `data.frames` match:

``````nato_average$member_state <- "NATO average"
nato_average <- nato_average %>% select(member_state, everything())``````

Bind the two `data.frames` together:

``nato_exp <- rbind(nato_exp, nato_average)``

Create a new factor variable that categorises countries into either above or below the NATO average defense spending.

Also we can specify a category to distinguish those countries that have reached the NATO target of their defense spending equal to 2% of their GDP.

``````nato_exp <- nato_exp %>%
filter(member_state != 'NATO' & member_state != "North Macedonia" & member_state != "Iceland") %>%
dplyr::mutate(difference = case_when(
  def_exp_gdp >= 2 ~ "Above NATO 2% GDP quota",
  between(def_exp_gdp, 1.6143, 2) ~ "Above NATO average",
  between(def_exp_gdp, 1.61427, 1.61429) ~ "NATO average",
  def_exp_gdp <= 1.613 ~ "Below NATO average"))``````

Create a vector of hex colours to correspond to the different categories. I choose traffic light colors to indicate:

• green countries (those who have reached the NATO 2% quota),
• orange countries (above the NATO average but below the spending target) and
• red countries (below the NATO spending average).

The blue colour is for the NATO average bar.

``my_palette <- c( "Below NATO average" = "#E60000", "NATO average" = "#012169", "Above NATO average" = "#FF7800", "Above NATO 2% GDP quota" = "#4CBB17")``

Finally, we create a graph with `ggplot`, and use the `reorder()` function to arrange the bars in ascending order.

NATO allies are encouraged to hit the target of 2% of gross domestic product. So, we add a `geom_vline()` to demarcate the NATO 2% quota.

``````nato_bar <- nato_exp %>%
filter(member_state != 'NATO' & member_state!= "North Macedonia" & member_state!= "Iceland") %>%
ggplot(aes(x= reorder(member_state, def_exp_gdp), y = def_exp_gdp,
fill=factor(difference))) +
geom_bar(stat = "identity") +
geom_vline(xintercept = 22.55, colour="firebrick", linetype = "longdash", size = 1) +
geom_text(aes(x=22, label="NATO 2% quota", y=3), colour="firebrick", size=6) +
labs(title = "NATO members Defense Expenditure as a percentage GDP ",
subtitle = "Source: NATO, 2019",
x = "NATO Member States",
y = "Defense Expenditure (as % GDP) ")

``````

Click here to read about adding flags to graphs with the `ggimage` package.

``````library(countrycode)
library(ggimage)

nato_exp$iso2 <- countrycode(nato_exp$member_state, "country.name", "iso2c")``````

Finally, we can print out the `nato_bar` graph!

``````nato_bar +
geom_flag(y = -0.2, aes(image = nato_exp$iso2)) +
coord_flip() +
expand_limits(y = -0.2) +
theme(legend.title = element_blank(), axis.text.x=element_text(angle=45, hjust=1)) + scale_fill_manual(values = my_palette)``````

# Download World Bank indicator data with the WDI package in R

Use this package to really quickly access all the indicators from the World Bank website.

``````install.packages('WDI')
library(WDI)
library(ggthemes)``````

With the `WDIsearch()` function we can look for the World Bank indicator that measures oil rents as a percentage of a country’s GDP. You can also browse all the available indicators on the World Bank website.

``WDIsearch('oil rent')``

The output is:

``````indicator             name
"NY.GDP.PETR.RT.ZS"   "Oil rents (% of GDP)"``````

Copy the indicator string and paste it into the `WDI()` function. The country codes are iso2 codes; you can input as many as you want inside the `c()`.

If you want all the countries and regions that the World Bank has, do not add the `country` argument.

We can compare Iran and Saudi Arabian oil rents from 1970 until the most recent value.

``````oil_data <- WDI(indicator='NY.GDP.PETR.RT.ZS', country=c('IR', 'SA'), start=1970, end=2019)
``````

And graph out the output. It all only takes a few steps.

``````my_palette = c("#DA0000", "#239f40")
#both the hex colors are from the maps of the countries

oil_graph <- ggplot(oil_data, aes(year, NY.GDP.PETR.RT.ZS, color=country)) +
geom_line(size = 1.4) +
labs(title = "Oil rents as a percentage of GDP",
subtitle = "In Iran and Saudi Arabia from 1970 to 2019",
x = "Year",
y = "Average oil rent as percentage of GDP",
color = " ") +
scale_color_manual(values = my_palette)

oil_graph + theme_fivethirtyeight() +
theme(
plot.title = element_text(size = 30),
axis.title.y = element_text(size = 20),
axis.title.x = element_text(size = 20))
``````

For some reason the World Bank does not have data for Iran for most of the early 1990s. But I would imagine that they broadly follow the trends in Saudi Arabia.

I added the flags myself manually after I got frustrated with `geom_flag()` . It is something I will need to figure out for a future blog post!

It is crazy that in the late 1970s, oil accounted for over 80% of all Saudi Arabia’s Gross Domestic Product. Now we see both countries rely on a far smaller percentage. Because oil prices are volatile, climate change is a constant threat and resource exhaustion is on the horizon, both countries have adjusted their policies in an attempt to diversify their sources of income.

Next we can use the World Bank data to create maps and compare regions on any World Bank scores.

``````library(rnaturalearth) # to create maps
library(viridis)       # for pretty colors``````

We will compare all Asian and Middle Eastern countries with regard to all natural resource rents (not just oil) as a percentage of their GDP.

So, first we create a map with the `rnaturalearth` package. Click here to read a previous tutorial about all the features of this package.

I will choose only the geographical continent of Asia, which also covers the majority of the Middle East.

``asia_map <- ne_countries(scale = "medium", continent = 'Asia', returnclass = "sf")``

Then, once again we use the `WDI()` function to download our World Bank data.

``nat_rents = WDI(indicator='NY.GDP.TOTL.RT.ZS', start=2016, end=2018)``

Next I’ll merge the World Bank data with the `asia_map` object I created.

``asia_rents <- merge(asia_map, nat_rents, by.x = "iso_a2", by.y = "iso2c", all = TRUE)``

We only want the values from one year, so we can subset the dataset.

``map_2017 <- asia_rents[which(asia_rents$year == 2017),]``

And finally, graph out the data:

``````nat_rent_graph <- ggplot(data = map_2017) +
geom_sf(aes(fill = NY.GDP.TOTL.RT.ZS),
position = "identity") +
labs(fill ='Natural Resource Rents as % GDP') +
scale_fill_viridis_c(option = "viridis")

nat_rent_graph + theme_map()``````

# Compare clusters with dendextend package in R

Packages we need:

``````install.packages("dendextend")
library(dendextend)``````

This blog post will create a dendrogram to examine whether Asian countries cluster together when it comes to the extent of judicial compliance. I’m examining Asian countries with populations over 1 million, and the data comes from the year 2019.

Judicial compliance measures how often a government complies with important decisions by courts with which it disagrees.

Higher scores indicate that the government often or always complies, even when they are unhappy with the decision. Lower scores indicate the government rarely or never complies with decisions that it doesn’t like.

It is important to make sure there are no NA values, so I will impute any missing values.

``````library(mice)

imputed_data <- mice(asia_df, method = "cart")
asia_df <- complete(imputed_data)``````

Next we can scale the dataset. This step matters when you are clustering on more than one variable and the variables are not measured in equivalent units, because the distance values depend on the scale of the different variables.

Therefore, it’s good to scale all to a common unit of analysis before measuring any inter-observation dissimilarities.

``asia_scale <- scale(asia_df)``

Next we calculate the distance between the countries (i.e. different rows) on the variables of interest and create a `dist` object.

There are many different methods you can use to calculate the distances. Click here for a description of the main formulae you can use to calculate distances. In the linked article, they provide a helpful table to summarise all the common methods such as “`euclidean`“, “`manhattan`” or “`canberra`” formulae.

I will go with the “`euclidean`” method, but make sure your method suits the data type (binary, continuous, categorical etc.).

``````asia_judicial_dist <- dist(asia_scale, method = "euclidean")

class(asia_judicial_dist)``````

We now have a `dist` object we can feed into the `hclust()` function.

With this function, we will need to make another decision regarding the method we will use.

The possible methods we can use are `"ward.D"`, `"ward.D2"`, `"single"`, `"complete"`, `"average"` (= UPGMA), `"mcquitty"` (= WPGMA), `"median"` (= WPGMC) or `"centroid"` (= UPGMC).

Click here for a more in-depth discussion of the different algorithms that you can use.
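One data-driven way to compare linkage methods (an addition of mine, not from the linked article) is the cophenetic correlation: the closer the correlation between the original distances and the dendrogram’s cophenetic distances is to 1, the more faithfully that tree preserves the original dissimilarities.

```r
# Compare how well each linkage method preserves the original distance matrix
linkage_methods <- c("ward.D2", "single", "complete", "average")

sapply(linkage_methods, function(m) {
  hc <- hclust(asia_judicial_dist, method = m)
  cor(asia_judicial_dist, cophenetic(hc))
})
```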

Again I will choose the common `"ward.D2"` method, which, at each stage, merges the two clusters whose combination gives the smallest increase in the combined error sum of squares.

``````asia_judicial_hclust <- hclust(asia_judicial_dist, method = "ward.D2")

class(asia_judicial_hclust)``````

We next convert our `hclust` object into a `dendrogram` object so we can plot it and visualise the different clusters of judicial compliance.

``````asia_judicial_dend <- as.dendrogram(asia_judicial_hclust)

class(asia_judicial_dend)``````

When we plot the different clusters, there are many options to change the color, size and dimensions of the dendrogram. To do this we use the `set()` function.

Click here to see a very comprehensive list of all the `set()` attributes you can use to modify your dendrogram from the dendextend package.

``````asia_judicial_dend %>%
set("branches_k_color", k = 5) %>% # five clustered groups of different colors
set("branches_lwd", 2) %>%         # size of the lines (thick or thin)
set("labels_colors", k = 5) %>%    # color the country labels, also five groups
plot(horiz = TRUE)                 # plot the dendrogram horizontally``````

I choose to divide the countries into five clusters by color:

And if I zoom in on the ends of the branches, we can examine the groups.

The top branches appear to be less democratic countries. We can see that North Korea is its own cluster with no other countries sharing similar judicial compliance scores.

The bottom branches appear to be more democratic, with more judicial independence. However, once we have our final dendrogram, it is our job to research and investigate the characteristics that each country shares regarding the role of the judiciary and its relationship with executive compliance.

Singapore, even though it is not a democratic country in the way that Japan is, shows a highly similar level of respect by the executive for judicial decisions.

Also South Korean executive compliance with the judiciary appears to be more similar to India and Sri Lanka than it does to Japan and Singapore.

So we can see that dendrograms are helpful for exploratory research and show us a starting place to begin grouping different countries together regarding a concept.

A really quick way to complete all the steps in one go is the following code. However, you must use the default methods for the `dist` and `hclust` functions. So if you want to fine-tune your methods to suit your data, this quicker option may be too blunt.

``````asia_df %>%
scale %>%
dist %>%
hclust %>%
as.dendrogram %>%
set("branches_k_color", k = 5) %>%
set("branches_lwd", 2) %>%
set("labels_colors", k = 5) %>%
plot(horiz = TRUE)``````

# Plot variables on a map with rnaturalearth package in R

All the packages I will be using:

``````library(rnaturalearth)
library(countrycode)
library(tidyverse)
library(ggplot2)
library(ggthemes)
library(viridis)
``````

First, we access and store a map object from the rnaturalearth package, with all the spatial information it contains. We specify `returnclass = "sf"`, which will return a dataframe with simple features information.

Simple features or simple feature access refers to a formal standard (ISO 19125-1:2004) that describes how objects in the real world can be represented in computers, with emphasis on the spatial geometry of these objects. Our map has these attributes stored in the object.

With the `ne_countries()` function, we get the borders of all countries.

``````map <- ne_countries(scale = "medium", returnclass = "sf")
View(map)``````

This `map` object comes with lots of information about 241 countries and territories around the world.

In total, it has 65 columns, mostly different variants of the names / locations of each country / territory, for example the ISO codes for each country. Further into the dataset, there are a few other variables such as GDP and population estimates for each country. So it is a handy source of data.
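To get a quick feel for what is in there, we can peek at a few of the more useful columns (the column names below are as they appear in the `rnaturalearth` output):

```r
library(dplyr)

# A handful of the 65 columns: names, ISO codes and the built-in estimates
map %>%
  sf::st_drop_geometry() %>%
  select(name, iso_a3, pop_est, gdp_md_est) %>%
  head()
```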

However, I want to use values from a different source; I have a `freedom_df` dataframe with a freedom of association variable.

The freedom of association index broadly captures the extent to which parties, including opposition parties, are allowed to form and to participate in elections, and the extent to which civil society organisations are able to form and to operate freely in each country.

So, we can merge them into one dataset.

Before that, I want to only use the scores from the most recent year to map out. So, take out only those values in the year 2019 (don’t forget the comma sandwiched between the round bracket and the square bracket):

``freedom19 <- freedom_df[which(freedom_df$year == 2019),]``

My `freedom19` dataset uses the Correlates of War codes but no ISO country codes. So let’s add these COW codes to the `map` dataframe for ease of merging.

I will convert the ISO codes to COW codes with the `countrycode()` function:

``map$COWcode <- countrycode(map$adm0_a3, "iso3c", "cown")``

Click here to read more about the `countrycode()` function in R.

Now, with a universal variable common to both datasets, I can merge the two datasets with the common COW codes:

``map19 <- merge(map, freedom19, by.x = "COWcode", by.y = "ccode", all = TRUE)``

Click here to read more about the `merge()` function in R.

We’re all ready to graph the map. We can add the freedom of association variable into the `aes()` argument of the `geom_sf()` function. Again, the `sf` refers to simple features with the geospatial information we will map out.

``````assoc_graph <- ggplot(data = map19) +
geom_sf(aes(fill = freedom_association_index),
position = "identity") +
labs(fill='Freedom of Association Index')  +
scale_fill_viridis_c(option = "viridis")
``````

The `scale_fill_viridis_c(option = "viridis")` changes the color spectrum of the main variable.

Other options include:

`"viridis"`

`"magma"`

`"plasma`

Finally we call the new graph stored in the `assoc_graph` object.

I use the `theme_map()` function from the `ggthemes` package to make the background very clean and to move the legend down to the bottom left of the screen, where it takes up the otherwise very empty Pacific ocean / Antarctic expanse.

``assoc_graph + theme_map()``

And there we have it, a map of countries showing the Freedom of Association index across countries.


Yellow colors indicate more freedom, green colors indicate middle scores and blue colors indicate low levels of freedom.

Some of the countries have missing data, such as Germany and Yemen, for various reasons. A true perfectionist would go and find and fill in the data manually.

# Graph Google search trends with gtrendsR package in R

Google Trends is a search trends feature. It shows how frequently a given search term is entered into Google’s search engine, relative to the site’s total search volume over a given period of time.

(So note: because the results are all relative to the other search terms in the time period, the dates you provide to the `gtrendsR` function will change the shape of your graph and the relative percentage frequencies on the y axis of your plot.)

To scrape data from Google Trends, we use the `gtrends()` function from the `gtrendsR` package and the `get_interest()` function from the `trendyy` package (a handy wrapper package for `gtrendsR`).

If necessary, also load the tidyverse and ggplot packages.

``````install.packages("gtrendsR")
install.packages("trendyy")
library(tidyverse)
library(ggplot2)
library(gtrendsR)
library(trendyy)``````

To scrape the Google trend data, call the `trendy()` function and write in the search terms.

For example, here we search for the term “Kamala Harris” during the period from 1st of January 2019 until today.

If you want to check out more specifications for the package, you can look at the package PDF here. For example, we can change the geographical region (US state or country, for example) with the `geo` argument.

We can also change the `time` argument to specify the time span of the query with any one of the following strings:

• “now 1-H” (previous hour)
• “now 4-H” (previous four hours)
• “today+5-y” (last five years; the default)
• “all” (since the beginning of Google Trends in 2004)

If you don’t supply a string, the default is the last five years of search data.
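As a sketch of how the `time` and `geo` arguments fit together (note that `gtrends()` queries Google live, so this needs an internet connection; the search term and dates are just the ones used below):

```r
library(gtrendsR)

# Default: the last five years, worldwide
trends_default <- gtrends("Kamala Harris")

# Custom window ("YYYY-MM-DD YYYY-MM-DD"), restricted to US searches via geo
trends_window <- gtrends("Kamala Harris",
                         geo  = "US",
                         time = "2019-01-01 2020-08-13")
```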

``kamala <- trendy("Kamala Harris", "2019-01-01", "2020-08-13") %>% get_interest()``

We call the `get_interest()` function to save this data from Google Trends as a data.frame in the `kamala` object. If we didn’t execute this last step, the data would be in a form that we cannot use with `ggplot()`.

``View(kamala)``

In this data.frame, there is a `date` variable for each week and a `hits` variable that shows the interest during that week. Remember, this `hits` figure shows how frequently a given search term is entered into Google’s search engine relative to the site’s total search volume over a given period of time.

We will use these two variables to plot the y and x axis.

To look at the search trends in relation to events during Kamala Harris’ presidential campaign over 2019, we can add vertical lines along the date axis using a data.frame, which we can call `kamala_events`.

``````kamala_events = data.frame(date=as.Date(c("2019-01-21", "2019-06-25", "2019-12-03", "2020-08-12")),
event=c("Launch Presidential Campaign", "First Primary Debate", "Drops Out Presidential Race", "Chosen as Biden's VP"))
``````

Note that the `as.Date()` function requires a very specific order: by default it expects dates as “YYYY-MM-DD”.
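For illustration, the ISO order parses without any extra arguments, while other orders need an explicit `format`:

```r
as.Date("2019-01-21")                       # "YYYY-MM-DD" parses as-is
as.Date("21/01/2019", format = "%d/%m/%Y")  # other orders need format =
```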

Next, we can graph the trends, using the above date and hits variables:

``````ggplot(kamala, aes(x = as.Date(date), y = hits)) +
geom_line(colour = "steelblue", size = 2.5) +
geom_vline(data=kamala_events, mapping=aes(xintercept=date), color="red") +
geom_text(data=kamala_events, mapping=aes(x=date, y=0, label=event), size=4, angle=40, vjust=-0.5, hjust=0) +
xlab(label = "Search Dates") +
ylab(label = 'Relative Hits %')
``````

Which produces:

Super easy and a quick way to visualise the ups and downs of Kamala Harris’ political career over the past few months, operationalised as the relative frequency with which people Googled her name.

If I had chosen different dates, the relative hits as shown on the y axis would be different! So play around with it and see how the trends change when you increase or decrease the time period.

# Plot marginal effects with sjPlot package in R

Without examining interaction effects in your model, we can sometimes be wrong about the real relationship between variables.

This is particularly evident in political science when we consider, for example, the impact of regime type on the relationship between our dependent and independent variables. The nature of the government can really impact our analysis.

For example, suppose I look at the relationship between anti-government protests and executive bribery.

I would expect to see that the higher the bribery score of a country’s government, the higher the prevalence of people protesting against this corrupt authority. Basically, people are angry when their government is corrupt. And they make this very clear to their government by protesting in the streets.

First, I will describe the variables I use and their data type.

The dependent variable, `democracy_protest`, is an interval score based on the question: in this year, how frequent and large have events of mass mobilization for pro-democratic aims been?

The main independent variable, `executive_bribery`, is another interval score, based on the question: how clean are the executive (the head of government and cabinet ministers) and their agents from bribery (granting favors in exchange for bribes, kickbacks, or other material inducements)?

Higher scores indicate cleaner governing executives.

So, let’s run a quick regression to examine this relationship:

``summary(protest_model <- lm(democracy_protest ~ executive_bribery, data = data_2010))``

Examining the results of the regression model:

We see that there is indeed a negative relationship. The cleaner the government, the less likely people in the country are to protest in the year under examination. This confirms our hypothesis above.

However, examining the R2, we see that less than 1% of the variance in protest prevalence is explained by executive bribery scores.
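For reference, a quick way to pull that figure straight out of the fitted model (assuming the `protest_model` object from above):

```r
# R-squared and adjusted R-squared from the model summary
summary(protest_model)$r.squared
summary(protest_model)$adj.r.squared
```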

Not very promising.

Is there an interaction effect with regime type? We can look at a scatterplot and see if the different regime type categories cluster in distinct patterns.

The four regime type categories are

• purple: liberal democracy (such as Sweden or Canada)
• teal: electoral democracy (such as Turkey or Mongolia)
• khaki green: electoral autocracy (such as Georgia or Ethiopia)
• red: closed autocracy (such as Cuba or China)
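The scatterplot itself can be sketched roughly like this (assuming `data_2010` holds a `regime_type` variable alongside the two scores; that variable name is my guess from the interaction model fitted below):

```r
library(ggplot2)

ggplot(data_2010, aes(x = executive_bribery, y = democracy_protest)) +
  geom_point(aes(colour = factor(regime_type)), alpha = 0.7) +
  geom_smooth(method = "lm") +        # overall line of best fit
  labs(colour = "Regime type",
       x = "Clean executive index",
       y = "Mass mobilization index")
```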

The colors show that the regime type categories do indeed cluster.

• Liberal democracies (purple) cluster in the top left-hand corner: higher scores on the clean executive index and a lower prevalence of pro-democracy protesting.
• Electoral democracies (teal) cluster in the middle.
• Electoral autocracies (khaki green) cluster at the bottom of the graph.
• The closed autocracies (red) seem to follow an upward trend, opposite to the overall line of best fit.

So let’s examine the interaction effect between regime types and executive corruption with mass pro-democracy protests.

Fit the model, adding the interaction effect with `*`:

``summary(protest_model_2 <- lm(democracy_protest ~ executive_bribery*regime_type, data = data_2010))``

Adding the regime type variable, the R2 shoots up to 27%.

The interaction effect appears to be significant only between clean executive scores and liberal democracies. For liberal democracies, as the country’s executive gets cleaner, the prevalence of mass mobilization and protests decreases by 0.98, and this is a statistically significant relationship.

The initial relationship we saw in the first model, the simple relationship between clean executive scores and protests, has disappeared. There appears to be no relationship between bribery and protests in the semi-autocratic countries (those that are not quite democratic but not quite fully despotic).

Let’s graph out these interactions.

In the `plot_model()` function, first type the name of the interaction model we fitted above, `protest_model_2`.

Next, choose the `type` argument. For the different type options, scroll to the bottom of this blog post. We use the `type = "pred"` argument, which plots the marginal effects.

Marginal effects tell us how the dependent variable changes when a specific independent variable changes, holding the other covariates constant. The two terms typed here are the two variables we added to the model with the `*` interaction term.

``````install.packages("sjPlot")
library(sjPlot)

plot_model(protest_model_2, type = "pred",
           terms = c("executive_bribery", "regime_type"),
           title = "Predicted values of Mass Mobilization Index",
           legend.title = "Regime type")``````

Looking at the graph, we can see that the relationship changes across regime type. For liberal democracies (purple), there is a negative relationship. Low scores on the clean executive index are related to high prevalence of protests. So, we could say that when people in democracies see corrupt actions, they are more likely to protest against them.

However with closed autocracies (red) there is the opposite trend. Very corrupt countries in closed autocracies appear to not have high levels of protests.

This would make sense from a theoretical perspective: even if you want to protest in a very corrupt country, the risk to your safety or livelihood is often too high and you don’t bother. Also the media is probably not free so you may not even be aware of the extent of government corruption.

It seems that when there are no democratic features available to the people (free media, freedom of assembly, active civil societies, or strong civil rights protections, freedom of expression et cetera) the barriers to protesting are too high. However, as the corruption index improves and executives are seen as “cleaner”, these democratic features may be more accessible to them.

If we only looked at the relationship between the two variables and ignored this important interaction effect, we would incorrectly conclude that cleaner executives always mean fewer protests, regardless of regime type.

Of course, panel data would be better to help separate any potential causation from the correlations we can see in the above graphs.

The blue line is almost flat. This matches the regression model, which found that the coefficient for electoral autocracy is 0.001: virtually non-existent.

### Different Plot Types

`type = "std"` – Plots standardized estimates.

`type = "std2"` – Plots standardized estimates, however, standardization follows Gelman’s (2008) suggestion, rescaling the estimates by dividing them by two standard deviations instead of just one. Resulting coefficients are then directly comparable for untransformed binary predictors.

`type = "pred"` – Plots estimated marginal means (or marginal effects). Simply wraps `ggpredict`.

`type = "eff"`– Plots estimated marginal means (or marginal effects). Simply wraps `ggeffect`.

`type = "slope"` and `type = "resid"` – Simple diagnostic-plots, where a linear model for each single predictor is plotted against the response variable, or the model’s residuals. Additionally, a loess-smoothed line is added to the plot. The main purpose of these plots is to check whether the relationship between outcome (or residuals) and a predictor is roughly linear or not. Since the plots are based on a simple linear regression with only one model predictor at the moment, the slopes (i.e. coefficients) may differ from the coefficients of the complete model.

`type = "diag"` – For Stan-models, plots the prior versus posterior samples. For linear (mixed) models, plots for multicollinearity-check (Variance Inflation Factors), QQ-plots, checks for normal distribution of residuals and homoscedasticity (constant variance of residuals) are shown. For generalized linear mixed models, returns the QQ-plot for random effects.

# Make Wes Anderson themed graphs with wesanderson package in R

Well this is just delightful!

``````install.packages("wesanderson")
library(wesanderson)``````

After you install the wesanderson package, you can

1. create a ggplot2 graph object
2. choose the Wes Anderson color scheme you want to use and create a palette object
3. add the graph object and the palette object together and behold your beautiful data

I want to examine the breakdown of how each head of state was appointed to rule the country and the type of regime. First I’ll examine the breakdown in the 1880s.

To generate a vector of colors, the `wes_palette()` function requires:

```wes_palette(name, n, type = c("discrete", "continuous"))
```
• `name`: Name of desired palette
• `n`: Number of colors desired (i.e. how many categories. In my case, there are four regime types so n = 4).
• `type`: Either “continuous” or “discrete”. Use continuous if you want to automatically interpolate between colors.
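For example:

```r
library(wesanderson)

# Four discrete colors, one per regime type
wes_palette("Darjeeling1", n = 4, type = "discrete")

# 100 interpolated colors for a continuous scale
wes_palette("Zissou1", 100, type = "continuous")
```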

``````eighteenth_century <- data_1880s %>%
filter(!is.na(regime)) %>%
filter(!is.na(appointment)) %>%
ggplot(aes(appointment)) + geom_bar(aes(fill = factor(regime)), position = position_stack(reverse = TRUE)) + theme(legend.position = "top", text = element_text(size=15), axis.text.x = element_text(angle = -30, vjust = 1, hjust = 0))``````

Both the regime variable and the appointment variable are discrete categories, so we can use the `geom_bar()` function. When adding the palette to the barplot object, we can use the `scale_fill_manual()` function.

``eighteenth_century + scale_fill_manual(values = wes_palette("Darjeeling1", n = 4))``

Now to compare the breakdown with countries in the 21st century (2000 to present).

The names of all the palettes you can enter into the `wes_palette()` function:

# Include country labels to a regression plot with ggplot2 package in R

Sometimes the best way to examine the relationship between our variables of interest is to plot it out and give it a good looking over. For me, it’s most helpful to see where different countries are in relation to each other and to see any interesting outliers.

For this, I can use the `geom_text()` function from the ggplot2 package.

I will look at the relationship between economic globalization and social globalization in OECD countries in the year 2000.

The KOF Globalisation Index, introduced by Dreher (2006), measures globalization along the economic, social and political dimensions for most countries in the world.

First, as always, we install and load the necessary package. This time, it is the `ggplot2` package.

``````install.packages("ggplot2")
library(ggplot2)``````

``````fin <- ggplot(oecd2000, aes(economic_globalization, social_globalization)) +
  ggtitle("Relationship between Globalization Index Scores among OECD countries in 2000") +
  scale_x_continuous("Economic Globalization Index") +
  scale_y_continuous("Social Globalization Index") +
  geom_smooth(method = "lm") +
  geom_point(aes(colour = polity_score), size = 2) + labs(color = "Polity Score") +
  geom_text(hjust = 0, nudge_x = 0.5, size = 4, aes(label = country))

fin``````

In the `aes()` function, we enter the two variables we want to plot.

Then I use the next three lines to add titles to the axes and the graph.

I use the` geom_smooth()` function with the “lm” method to add a best fitting regression line through the points on the plot. Click here to learn more about adding a regression line to a plot.

I add a legend to examine where countries with different democracy scores (taken from the Polity Index) are located on the globalization plane. Click here to learn about adding legends.

The last line is the `geom_text()` function that I use to specify that I want to label each observation (i.e. each OECD country) with its name, rather than the default dataset number.

Some `geom_text()` arguments to use:

• `nudge_x` (or `nudge_y`) slightly “nudges” the labels away from their corresponding points to help minimise messy overlapping.
• `hjust` and `vjust` move the text label to the “left”, “center”, “right”, “bottom”, “middle” or “top” of the point.
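A minimal, self-contained sketch of these two arguments (toy data here, not the OECD dataset):

```r
library(ggplot2)

toy <- data.frame(x = c(1, 2, 3), y = c(2, 4, 3),
                  country = c("IE", "FR", "DE"))

ggplot(toy, aes(x, y)) +
  geom_point() +
  # nudge labels to the right of each point and left-align them
  geom_text(aes(label = country), nudge_x = 0.1, hjust = 0)
```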

Yes, yes! There is a package that uses the color palettes of Wes Anderson movies to make graphs look just beautiful. Click here to use different Wes Anderson aesthetic themed graphs!

``````zissou_colors <- wes_palette("Zissou1", 100, type = "continuous")