Create density plots with ggridges package in R

Packages we will need:

library(tidyverse)
library(ggridges)
library(ggimage)  # to add png images
library(bbplot)   # for pretty graph themes

We will plot out the favourability opinion polls for the three main political parties in Ireland from 2016 to 2020. The data come from Louwerse and Müller (2020).


Before we dive into the ggridges plotting, we have a little data cleaning to do. First, we extract the last four “characters” from the date string to create a year variable.

I took this quick function from a StackOverflow response:

substrRight <- function(x, n){
  substr(x, nchar(x)-n+1, nchar(x))}

polls_csv$year <- substrRight(polls_csv$Date, 4)
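If you prefer a tidyverse approach, stringr's str_sub() with negative indices does the same job. This is just an alternative sketch, not what the original code used:

polls_csv$year <- stringr::str_sub(polls_csv$Date, -4, -1)  # last four characters of the date string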

Next, pivot the data from wide to long format.

More information on pivoting data with dplyr can be found here. I tend to check it at least once a month, as the arguments refuse to stay in my head.

I only want to take the main parties in Ireland to compare in the plot.

polls <- polls_csv %>%
  select(year, FG:SF) %>% 
  pivot_longer(!year, names_to = "party", values_to = "opinion_poll")

I went online and found the logos for the three main parties (sorry, Labour) and saved them in my RStudio working directory. That way I can call each file with the prefix “~/**.png” rather than finding the exact location where it is saved on the computer.

polls %>% 
  filter(party == "FF" | party == "FG" | party == "SF" ) %>% 
  mutate(image = ifelse(party=="FF","~/ff.png",
 ifelse(party=="FG","~/fg.png", "~/sf.png"))) -> polls_three

Now we are ready to plot out the density plots for each party with the geom_density_ridges() function from the ggridges package.

We will add a few arguments into this function.

We add alpha = 0.8 to make each density plot a little transparent so we can see the plots behind.

The scale = 2 argument pushes all three plots together so they are slightly overlapping. If scale = 1, they would be totally separate, and 3 would have them overlapping far more.

The rel_min_height = 0.01 argument removes the trailing tails from the plots that are under 0.01 density. This is again for aesthetics and just makes the plot look slightly less busy for relatively normally distributed densities.

The geom_image() layer takes the images, and we place them at the beginning of the x axis beside the labels for each party.

Last, we use the BBC-style ggplot theme from the bbplot package, which I really like as it makes the overall graph look streamlined with large font defaults.

polls_three %>% 
  ggplot(aes(x = opinion_poll, y = as.factor(party))) +  
  geom_density_ridges(aes(fill = party), 
                      alpha = 0.8, 
                      scale = 2,
                      rel_min_height = 0.01) + 
  ggimage::geom_image(aes(y = party, x= 1, image = image), asp = 0.9, size = 0.12) + 
  facet_wrap(~year) + 
  bbplot::bbc_style() +
  scale_fill_manual(values = c("#f2542d", "#edf6f9", "#0e9594")) +
  theme(legend.position = "none") + 
  labs(title = "Favourability Polls for the Three Main Parties in Ireland", subtitle = "Data from Irish Polling Indicator (Louwerse & Müller, 2020)")

Graphing Pew survey responses with ggplot in R

Packages we will need:

library(tidyverse)
library(forcats)
library(ggthemes)
library(ggflags)   # for the geom_flag() function used below

We are going to look at a few questions from the 2019 US Pew survey on relations with foreign countries.

Data can be found by following this link:

We are going to make bar charts to plot out responses to the question asked of American participants: should the US cooperate more or less with some key countries? The countries asked about were China, Russia, Germany, France, Japan and the UK.

Before we dive in, we can find some nice hex colors for the bar chart. There are four possible responses that the participants could give: cooperate more, cooperate less, cooperate the same as before and refuse to answer / don’t know.

pal <- c("Cooperate more" = "#0a9396",
         "Same as before" = "#ee9b00",
         "Don't know" = "#005f73",
         "Cooperate less" ="#ae2012")

We first select the questions we want from the full survey and pivot the dataframe to long format with pivot_longer(). This way we have a single column with all the different survey responses that we can manipulate more easily with dplyr functions.

Then we summarise the data to count all the survey responses for each of the six countries and then calculate the frequency of each response as a percentage of all answers.

Then we mutate the variables so that we can add flags. The geom_flag() function from the ggflags package only recognises ISO2 country codes in lower case.

After that, we change the factor levels for the four responses so they run from positive to negative views of cooperation.

pew %>% 
  select(id = case_id, Q2a:Q2f) %>% 
  pivot_longer(!id, names_to = "survey_question", values_to = "response")  %>% 
  group_by(survey_question, response) %>% 
  summarise(n = n()) %>%
  mutate(freq = n / sum(n)) %>% 
  ungroup() %>% 
  mutate(response_factor = as.factor(response)) %>% 
  mutate(country_question = ifelse(survey_question == "Q2a", "fr",
ifelse(survey_question == "Q2b", "gb",
ifelse(survey_question == "Q2c", "ru",
ifelse(survey_question == "Q2d", "cn",
ifelse(survey_question == "Q2e", "de",
ifelse(survey_question == "Q2f", "jp", survey_question))))))) %>% 
  mutate(response_string = ifelse(response_factor == 1, "Cooperate more",
ifelse(response_factor == 2, "Cooperate less",
ifelse(response_factor == 3, "Same as before",
ifelse(response_factor == 9, "Don't know", response_factor))))) %>% 
  mutate(response_string = fct_relevel(response_string, c("Cooperate less","Same as before","Cooperate more", "Don't know"))) -> pew_clean

We next use ggplot to plot out the responses.

We use position = "stack" to make all the responses “stack” onto each other for each country. We use stat = "identity" because we are not counting each response; rather, we are using the freq variable we calculated above.

pew_clean %>%
  ggplot() +
  geom_bar(aes(x = forcats::fct_reorder(country_question, freq), y = freq, fill = response_string), color = "#e5e5e5", size = 3, position = "stack", stat = "identity") +
  geom_flag(aes(x = country_question, y = -0.05 , country = country_question), color = "black", size = 20) -> pew_graph

And last we change the appearance of the plot with the theme function

pew_graph + 
coord_flip() + 
  scale_fill_manual(values = pal) +
  ggthemes::theme_fivethirtyeight() + 
  ggtitle("Should the US cooperate more or less with the following country?") +
  theme(legend.title = element_blank(),
        legend.position = "top",
        legend.key.size = unit(2, "cm"),
        text = element_text(size = 25),
        legend.text = element_text(size = 20),
        axis.text = element_blank())

Lollipop plots with ggplot2 in R

Packages we will need:

library(tidyverse)
library(rvest)
library(ggflags)
library(countrycode)
library(ggpubr)
library(magrittr)  # for the %<>% assignment pipe

We will plot out a lollipop plot to compare EU countries on their level of income inequality, measured by the Gini coefficient.

A Gini coefficient of zero expresses perfect equality, where all values are the same (e.g. where everyone has the same income). A Gini coefficient of one (or 100%) expresses maximal inequality among values (e.g. for a large number of people where only one person has all the income or consumption and all others have none, the Gini coefficient will be nearly one).
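To make the measure concrete, here is a quick sketch that computes a Gini coefficient by hand for some made-up incomes (not part of the EU data we use below):

# Gini for a vector of incomes, using the rank-based formula
gini <- function(x) {
  x <- sort(x)
  n <- length(x)
  sum((2 * seq_len(n) - n - 1) * x) / (n * sum(x))
}

gini(c(10, 10, 10, 10))   # perfect equality: 0
gini(c(0, 0, 0, 100))     # one person has everything: 0.75 (approaches 1 as n grows)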

To start, we will take data on the EU from Wikipedia. With rvest package, scrape the table about the EU countries from this Wikipedia page.

Click here to read more about the rvest package

With the gsub() function, we can clean up the different variables with some regex. Namely delete the footnotes / square brackets and change the variable classes.

eu_site <- read_html("https://en.wikipedia.org/wiki/Member_state_of_the_European_Union")

eu_tables <- eu_site %>% html_table(header = TRUE, fill = TRUE)

eu_members <- eu_tables[[3]]

eu_members %<>% janitor::clean_names()  %>% 
  filter(!is.na(accession))

eu_members$iso3 <- countrycode::countrycode(eu_members$geo, "country.name", "iso3c")

eu_members$accession <- as.numeric(gsub("([0-9]+).*$", "\\1",eu_members$accession))

eu_members$name_clean <- gsub("\\[.*?\\]", "", eu_members$name)

eu_members$gini_clean <- gsub("\\[.*?\\]", "", eu_members$gini)

Next, some data cleaning and grouping of the accession years into different decades. These groups indicate what era each country joined the EU. If we see clustering of colours at any particular end of the Gini scale, this may indicate a relationship between the length of time a country has been part of the EU and its domestic income inequality level. Are the founding members of the EU more equal than the newer countries? Or, conversely, are the newer members that joined from the former Soviet bloc in the 2000s more equal? We can visualise this with the following mutations:

eu_members %>%
  filter(name_clean != "Totals/Averages") %>% 
  mutate(gini_numeric = as.numeric(gini_clean)) %>% 
  mutate(accession_decades = ifelse(accession < 1960, "1957", ifelse(accession > 1960 & accession < 1990, "1960s-1980s", ifelse(accession == 1995, "1990s", ifelse(accession > 2003, "2000s", accession))))) -> eu_clean 

To create the lollipop plot, we will use the geom_segment() functions. This requires an x and xend argument as the country names (with the fct_reorder() function to make sure the countries print out in descending order) and a y and yend argument with the gini number.

All the countries in the EU have a gini score between mid 20s to mid 30s, so I will start the y axis at 20.

We can add the flag for each country when we turn the ISO2 character code to lower case and give it to the country argument.

Click here to read more about the ggflags package

eu_clean %>% 
ggplot(aes(x= name_clean, y= gini_numeric, color = accession_decades)) +
  geom_segment(aes(x = forcats::fct_reorder(name_clean, -gini_numeric), 
                   xend = name_clean, y = 20, yend = gini_numeric, color = accession_decades), size = 4, alpha = 0.8) +
  geom_point(aes(color = accession_decades), size= 10) +
  geom_flag(aes(y = 20, x = name_clean, country = tolower(iso_3166_1_alpha_2)), size = 10) +
  ggtitle("Gini Coefficients of the EU countries") -> eu_plot

Last we add various theme changes to alter the appearance of the graph

eu_plot + 
coord_flip() +
ylim(20, 40) +
  theme(panel.border = element_blank(),
        legend.title = element_blank(),
        axis.title = element_blank(),
        axis.text = element_text(color = "white"),
        text= element_text(size = 35, color = "white"),
        legend.text = element_text(size = 20),
        legend.key = element_rect(colour = "#001219", fill = "#001219"),
        legend.key.width = unit(3, 'cm'),
        legend.position = "bottom",
        panel.grid.major.y = element_line(linetype="dashed"),
        plot.background = element_rect(fill = "#001219"),
        panel.background = element_rect(fill = "#001219"),
        legend.background = element_rect(fill = "#001219") )

We can see there does not seem to be a clear pattern between the year a country joins the EU and their level of domestic income inequality, according to the Gini score.

Of course, the Gini coefficient is not a perfect measurement, so take it with a grain of salt.

Another option for the lollipop plot comes from the ggpubr package. It does not take the familiar aesthetic arguments like you can with ggplot2, but it is very quick and the defaults look good!

eu_clean %>% 
  ggdotchart( x = "name_clean", y = "gini_numeric",
              color = "accession_decades",
              sorting = "descending",                      
              rotate = TRUE,                                
              dot.size = 10,   
              y.text.col = TRUE,
              ggtheme = theme_pubr()) + 
  ggtitle("Gini Coefficients of the EU countries") + 
  theme(panel.border = element_blank(),
        legend.title = element_blank(),
        axis.title = element_blank(),
        axis.text = element_text(color = "white"),
        text= element_text(size = 35, color = "white"),
        legend.text = element_text(size = 20),
        legend.key = element_rect(colour = "#001219", fill = "#001219"),
        legend.key.width = unit(3, 'cm'),
        legend.position = "bottom",
        panel.grid.major.y = element_line(linetype="dashed"),
        plot.background = element_rect(fill = "#001219"),
        panel.background = element_rect(fill = "#001219"),
        legend.background = element_rect(fill = "#001219") )

Replicating Eurostat graphs in R

Packages we will need:

library(eurostat)
library(tidyverse)
library(magrittr)
library(ggthemes)
library(forcats)

In this blog, we will try to replicate this graph from Eurostat!

It compares all European countries on their Digital Intensity Index scores in 2020. This index measures the use of different digital technologies by enterprises.

The higher the score, the higher the digital intensity of the enterprise, ranging from very low to very high. 

For more information on the index, I took the above information from this site: https://ec.europa.eu/eurostat/web/products-eurostat-news/-/ddn-20211029-1

First, we will download the digital index from Eurostat with the get_eurostat() function.

Click here to learn more about downloading data on EU from the Eurostat package.

Next some data cleaning. To copy the graph, we will aggregate the different levels into very low, low, high and very high categories with the grepl() function in some ifelse() statements.

The variable names look a bit odd with lots of blank space because I wanted to space out the legend in the graph to replicate the Eurostat graph above.

dig <- get_eurostat("isoc_e_dii", type = "label")

dig %<>% 
   mutate(dii_level = ifelse(grepl("very low", indic_is), "Very low        " , ifelse(grepl("with low", indic_is), "Low        ", ifelse(grepl("with high", indic_is), "High        ", ifelse(grepl("very high", indic_is), "Very high        ", indic_is)))))

Next I filter out the year I want and aggregate all industry groups (from the sizen_r2 variable) in each country to calculate a single DII score for each country.

dig %>% 
  select(sizen_r2, geo, values, dii_level, year) %>%  
  filter(year == 2020) %>% 
  group_by(dii_level, geo) %>% 
  summarise(total_values = sum(values, na.rm = TRUE)) %>% 
  ungroup() -> my_dig

I use a hex finder website imagecolorpicker.com to find the same hex colors from the Eurostat graph and assign them to our version.

dii_pal <- c("Very low        " = "#f0aa4f",
             "Low        " = "#fee229",
             "Very high        " = "#154293", 
             "High        " = "#7fa1d4")

We can make sure the factors are in the very low to very high order (rather than alphabetical) with the ordered() function.

my_dig$dii_level <- ordered(my_dig$dii_level, levels = c("Very low        ", "Low        ", "High        ","Very high        "))

Next we filter out the geo rows we don’t want to add to the graph.

Also we can change the name of Germany to remove its longer title.

my_dig %>% 
  filter(geo != "Euro area (EA11-1999, EA12-2001, EA13-2007, EA15-2008, EA16-2009, EA17-2011, EA18-2014, EA19-2015)") %>% 
  filter(geo != "United Kingdom") %>% 
  filter(geo != "European Union - 27 countries (from 2020)") %>% 
  filter(geo != "European Union - 28 countries (2013-2020)") %>% 
  mutate(geo = ifelse(geo == "Germany (until 1990 former territory of the FRG)", "Germany", geo)) -> my_dig 

And also, to have the same order of countries as in the original graph, we can set the country factor levels explicitly.

my_dig$country <- factor(my_dig$geo, levels = c("Finland", "Denmark", "Malta", "Netherlands", "Belgium", "Sweden", "Estonia", "Slovenia", "Croatia", "Italy", "Ireland","Spain", "Luxembourg", "Austria", "Czechia", "France", "Germany", "Portugal", "Poland", "Cyprus", "Slovakia", "Hungary", "Lithuania", "Latvia", "Greece", "Romania", "Bulgaria", "Norway"), ordered = FALSE)

Now to plot the graph:

my_dig %>% 
  filter(!is.na(country)) %>% 
  group_by(country, dii_level) %>% 
  ggplot(aes(y = country, 
             x = total_values,
             fill = forcats::fct_rev(dii_level))) +
  geom_col(position = "fill", width = 0.7) + 
  scale_fill_manual(values = dii_pal) + 
  ggthemes::theme_pander() +
  coord_flip() +
  labs(title = "EU's Digital Intensity Index (DII) in 2020",
       subtitle = ("(% of enterprises with at least 10 persons employed)"),
       caption = "ec.europa/eurostat") +
  xlab("") + 
  ylab("") + 
  theme(text = element_text(family = "Verdana", color = "#154293"),
        axis.line.x = element_line(color = "black", size = 1.5),
        axis.text.x = element_text(angle = 90, size = 20, color = "#154293", hjust = 1),
        axis.text.y = element_text(color = "#808080", size = 13, face = "bold"),
        legend.position = "top", 
        legend.title = element_blank(),
        legend.text = element_text(color = "#808080", size = 20, face = "bold"),
        plot.title = element_text(size = 42, color = "#154293"),
        plot.subtitle = element_text(size = 25, color = "#154293"),
        plot.caption = element_text(size = 20, color = "#154293"),
        panel.background = element_rect(color = "#f2f2f2"))

It is not identical and I had to move the black line up and the Norway model more to the right with Paint on my computer! So a bit of cheating!

Click to read Part 1, Part 2 and Part 3 of the blog series on visualising Eurostat data

For information on the index discussed in this blog post: https://ec.europa.eu/eurostat/web/products-eurostat-news/-/ddn-20211029-1

Alternatives to pie charts: coxcomb and waffle charts

Packages we will need

library(tidyverse)
library(rnaturalearth)
library(countrycode)
library(peacesciencer)
library(ggthemes)
library(bbplot)

If we want to convey nuance in the data, sometimes that information is lost if we display many groups in a pie chart.

According to Bernard Marr, our brains are used to equal slices when we think of fractions of a whole. When the slices aren’t equal, as often is the case with real-world data, it’s difficult to envision the parts of a whole pie chart accurately.

Below are some slight alternatives that we can turn to and visualise different values across groups.

I’m going to compare regions around the world on their total energy consumption levels since the 1900s.

First, we can download the region data with information about the geography and income levels for each group, using the ne_countries() function from the rnaturalearth package.

map <- ne_countries(scale = "medium", returnclass = "sf")

Click here to learn more about downloading map data from the rnaturalearth package.

Next we will select the variables that we are interested in, namely the income group variable and geographic region variable:

map %>% 
  select(name_long, subregion, income_grp) %>% as_data_frame() -> region_var

And we add a un_code variable so that it will be easier to merge the datasets in a bit. Click here to learn more about the countrycode() function.

region_var$un_code <- countrycode(region_var$name_long, "country.name", "un") 

Next, we will download national military capabilities (NMC) dataset. These variables – which attempt to operationalize a country’s power – are military expenditure, military personnel, energy consumption, iron and steel production, urban population, and total population. It serves as the basis for the most widely used indicator of national capability, CINC (Composite Indicator of National Capability) and covers the period 1816-2016.

To download them in one line of code, we use the create_stateyears() function from the peacesciencer package.

states <- create_stateyears(mry = FALSE) %>% add_nmc() 

Click here to read more about downloading Correlates of War and other IR variables from the peacesciencer package

Next we add a UN location code so we can easily merge both datasets we downloaded!

states$un_code <- countrycode(states$statenme, "country.name", "un")
states_df <- merge(states, region_var, by ="un_code", all.x = TRUE)

Next, let’s make the coxcomb graph.

First, we will create one high income group. The map dataset has separate categories for OECD and non-OECD high-income countries, but it will be easier to group them together into one category. We do this with the ifelse() function within mutate().

Next we filter out any country that is NA in the dataset, just to keep it cleaner.

We then group the dataset according to income group and sum all the primary energy consumption in each region since 1900.

When we get to the ggplotting, we want to order the income groups from biggest to smallest. To do this, we use the reorder() function, with the income group as the first argument and the summed energy values as the second.

To create the coxcomb chart, we need the geom_bar() and coord_polar() lines.

With the coord_polar() function, it takes the following arguments :

  • theta – the variable we map the angle to (either x or y)
  • start – indicates the starting point from 12 o’clock in radians
  • direction – whether we plot the data clockwise (1) or anticlockwise (-1)

We feed in a theta of “x” (this is important!), then a starting point of 0 and direction of -1.

Next we add nicer colours with hex values and label the legend in the scale_fill_manual() function.

I like using the fonts and size stylings in the bbc_style() theme.

Last we can delete some of the ticks and text from the plot to make it cleaner.

Last we add our title and source!

states_df %>% 
  mutate(income_grp = ifelse(income_grp == "1. High income: OECD", "1. High income", ifelse(income_grp == "2. High income: nonOECD", "1. High income", income_grp))) %>% 
  filter(!is.na(income_grp)) %>% 
  filter(year > 1899) %>% 
  group_by(income_grp) %>% 
  summarise(sum_pec = sum(pec, na.rm = TRUE)) %>% 
  ggplot(aes(x = reorder(income_grp, sum_pec), y = sum_pec, fill = as.factor(income_grp))) + 
  geom_bar(stat = "identity") + 
  coord_polar("x", start = 0, direction = -1)  + 
  ggthemes::theme_pander() + 
  scale_fill_manual(
    values = c("#f94144", "#f9c74f","#43aa8b","#277da1"), 
    labels = c("High Income", "Upper Middle Income", "Lower Middle Income", "Low Income"), name = "Income Level") +
  bbplot::bbc_style() + 
  theme(axis.text = element_blank(),
            axis.title.x = element_blank(),
            axis.title.y = element_blank(),
            axis.ticks = element_blank(),
            panel.grid = element_blank()) + 
  ggtitle(label = "Primary Energy Consumption across income levels since 1900", subtitle = "Source: Correlates of War CINC")


We can compare this to the number of countries in each income group:

states_df %>% 
  mutate(income_grp = ifelse(income_grp == "1. High income: OECD", "1. High income",
 ifelse(income_grp == "2. High income: nonOECD", "1. High income", income_grp))) %>% 
  filter(!is.na(income_grp)) %>% 
  filter(year == 2016) %>% 
  count(income_grp) %>% 
  ggplot(aes(reorder(income_grp, n), n, fill = as.factor(income_grp))) + 
  geom_bar(stat = "identity") + 
  coord_polar("x", start = 0, direction = - 1)  + 
  ggthemes::theme_pander() + 
  scale_fill_manual(
    values = c("#f94144", "#f9c74f","#43aa8b","#277da1"), 
    labels = c("High Income", "Upper Middle Income", "Lower Middle Income", "Low Income"), 
    name = "Income Level") +
  bbplot::bbc_style() + 
  theme(axis.text = element_blank(),
        axis.title.x = element_blank(),
        axis.title.y = element_blank(),
        axis.ticks = element_blank(),
        panel.grid = element_blank()) + 
  ggtitle(label = "Number of countries per region")

Another variation is the waffle plot!

It is important we do not install the CRAN version, but rather the version in development. I made the mistake of installing the non-github version and nothing worked.


It was an ocean of error messages.

So, instead, install the following version:

remotes::install_github("hrbrmstr/waffle")
library(waffle)

When we add the waffle::geom_waffle() layer, there are some arguments we can customise.

  • n_rows – the default is 10, but this is something you can play around with to see how long or wide you want the chart
  • size – again, we can play around with this number to see what looks best
  • color – I will set it to white for the lines in the graph; the default is black, but I think that can look a bit too busy
  • flip – set to TRUE or FALSE depending on whether you want the squares stacked horizontally or vertically
  • make_proportional – if set to TRUE, proportions are computed from the raw values (i.e. each value n is replaced with n/sum(n)); the default is FALSE

We can also add theme_enhance_waffle() to make the graph cleaner and less cluttered.

states_df %>% 
  filter(year == 2016) %>% 
  filter(!is.na(income_grp)) %>% 
  mutate(income_grp = ifelse(income_grp == "1. High income: OECD",
 "1. High income", ifelse(income_grp == "2. High income: nonOECD", "1. High income", income_grp))) %>% 
  count(income_grp) %>% 
  ggplot(aes(fill = income_grp, values = n)) +
  scale_fill_manual(
values = c("#f94144", "#f9c74f","#43aa8b","#277da1"), 
labels = c("High Income", "Upper Middle Income", 
"Lower Middle Income", "Low Income"), 
name = "Income Level") +
  waffle::geom_waffle(n_rows = 10, size = 0.5, colour = "white",
              flip = TRUE, make_proportional = TRUE) + bbplot::bbc_style() +  
  theme_enhance_waffle() + 
  ggtitle(label = "Number of countries per region")

We can also look at the sum of military expenditure across each income group.

states_df %>% 
  filter(!is.na(income_grp)) %>%
  filter(year > 1899) %>% 
  mutate(income_grp = ifelse(income_grp == "1. High income: OECD",
 "1. High income", ifelse(income_grp == "2. High income: nonOECD", 
"1. High income", income_grp))) %>% 
group_by(income_grp) %>%
  summarise(sum_military = sum(milex, na.rm = TRUE)) %>% 
  ggplot(aes(fill = income_grp, values = sum_military)) +
  scale_fill_manual(
    values = c("#f94144", "#f9c74f","#43aa8b","#277da1"), 
    labels = c("High Income", "Upper Middle Income", 
               "Lower Middle Income", "Low Income"), 
    name = "Income Level") +
  geom_waffle(n_rows = 10, size = 0.3, colour = "white",
              flip = TRUE, make_proportional = TRUE) + bbplot::bbc_style() +  
  theme_enhance_waffle() + 
  ggtitle(label = "Sum of military expenditure per region")

Building a dataset for political science analysis in R, PART 2

Packages we will need

library(tidyverse)
library(peacesciencer)
library(countrycode)
library(bbplot)

The main workhorse of this blog is the peacesciencer package by Steven V. Miller!

The package will create both dyad datasets and state datasets with all sovereign countries.

Thank you Mr Miller!

There are heaps of options and variables to add.

Go to the page to read about them all in detail.

Here is a short list from the package description of all the key variables that can be quickly added:

We create the dyad dataset with the create_dyadyears() function. A dyad-year dataset focuses on information about the relationship between two countries (such as whether the two countries are at war, how much they trade together, whether they are geographically contiguous et cetera).

In the literature, the study of interstate conflict has adopted a heavy focus on dyads as a unit of analysis.

Alternatively, if we want just state-year data like in the previous blog post, we use the function create_stateyears()

We can add the variables with type D to the create_dyadyears() function and we can add the variables with type S to the create_stateyears() !

Focusing on the create_dyadyears() function, the arguments we can include are directed and mry.

The directed argument indicates whether we want directed or non-directed dyad relationship.

In a directed analysis, data include two observations (i.e. two rows) per dyad per year (such as one for USA – Russia and another row for Russia – USA), but in a nondirected analysis, we include only one observation (one row) per dyad per year.
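As a quick sketch of that difference (assuming peacesciencer is loaded), we can build both versions and compare their sizes; the directed dataset should have roughly twice as many rows:

directed_df <- create_dyadyears(directed = TRUE, mry = FALSE)
nondirected_df <- create_dyadyears(directed = FALSE, mry = FALSE)

nrow(directed_df)      # two rows per dyad-year (e.g. USA - Russia and Russia - USA)
nrow(nondirected_df)   # one row per dyad-year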

The mry argument indicates whether we want to extend the data to the most recently concluded calendar year – i.e. 2020 – or not (i.e. only until the data were last available).

dyad_df <- create_dyadyears(directed = FALSE, mry = TRUE) %>%
  add_atop_alliance() %>%  
  add_nmc() %>%
  add_cow_trade() %>% 
  add_creg_fractionalization() 

I added dyadic variables for ATOP alliances, national material capabilities (NMC), Correlates of War trade flows, and CREG ethnic and religious fractionalization.

You can follow these links to check out the codebooks if you want more information about descriptions about each variable and how the data were collected!

The code comes with the COW code but I like adding the actual names also!

dyad_df$country_1 <- countrycode(dyad_df$ccode1, "cown", "country.name")

With this dataframe, we can plot the CINC data of the top three superpowers, just looking at any variable that has a 1 at the end and only looking at the corresponding country_1!

According to our pals over at le Wikipedia, the Composite Index of National Capability (CINC) is a statistical measure of national power created by J. David Singer for the Correlates of War project in 1963. It uses an average of percentages of world totals in six different components (such as energy consumption, military expenditure and population). The components represent demographic, economic, and military strength.
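As a rough sketch of that idea — not the official CINC calculation — we could recompute something CINC-like from a state-year dataset by turning each NMC component into a share of the yearly world total and averaging the six shares. The component column names (milex, milper, irst, pec, tpop, upop) are the standard NMC names that add_nmc() attaches, but do check names() on your own data:

cinc_check <- create_stateyears(mry = FALSE) %>% 
  add_nmc() %>% 
  group_by(year) %>% 
  # each component as a share of that year's world total
  mutate(across(c(milex, milper, irst, pec, tpop, upop),
                ~ .x / sum(.x, na.rm = TRUE))) %>% 
  ungroup() %>% 
  # average of the six shares, to compare against the cinc column
  mutate(cinc_rough = (milex + milper + irst + pec + tpop + upop) / 6) %>% 
  select(statenme, year, cinc_rough, cinc)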

First, let’s choose some nice hex colors

pal <- c("China" = "#DE2910",
         "United States" = "#3C3B6E", 
         "Russia" = "#FFD900")

And then create the plot

dyad_df %>% 
 filter(country_1 == "Russia" | 
          country_1 == "United States" | 
          country_1 == "China") %>% 
  ggplot(aes(x = year, y = cinc1, group = as.factor(country_1))) +
  geom_line(aes(color = country_1)) +
  geom_line(aes(color = country_1), size = 2, alpha = 0.8) + 
  scale_color_manual(values =  pal) +
  bbplot::bbc_style()

In PART 3, we will merge together our data with our variables from PART 1, look at some descriptive statistics and run some panel data regression analysis with our different variables!

Building a dataset for political science analysis in R, PART 1

When you want to create a dataset for large-n political science analysis from scratch, it can get muddled fast. Here are some tips I have found helpful to create clean data ready for panel data analysis.

Click here for PART 2 to create dyad-year and state-year variables with conflict, geographic features and alliance data from Correlates of War and Uppsala datasets.

Packages we will need

library(tidyverse)  # of course!
library(states)
library(WDI)
library(countrycode)
library(rnaturalearth)
library(VIM)
library(magrittr)   # for the %<>% assignment pipe

The states package by Andreas Beger can provide the skeleton for our panel dataset.

It creates a cross-sectional, time-series dataset of independent sovereign countries that stretches back to 1816.

The package includes both the Gleditsch & Ward (G&W) and Correlates of War (COW) lists of independent states.

Click here for a discussion of the difference by Steven V. Miller.

With the state_panel function from the states package, we create a data.frame from a start date to an end date, using the following syntax.

state_panel(start, end, by = NULL, partial = "any", useGW = TRUE)

The partial argument indicates how we want to deal with states that are independent for only part of the year. We can indicate “any”, “exact”, “first” or “last”.

For this example, I want to create a dataset starting in 1990 and ending in 2020. I put useGW = FALSE because I want to use the COW list of states.

df <- state_panel(1990, 2020, by = "year", partial = "last", useGW = FALSE)
View(df)

And this is the resulting dataset

So we have our basic data.frame. We can see how many states there have been over the years.

df %>% 
  group_by(year) %>% 
  count() %>%  
  arrange(n) 
# A tibble: 31 x 2
# Groups:   year [31]
    year     n
   <int> <int>
 1  1990   161
 2  1991   177
 3  1992   181
 4  1993   186
 5  1994   187
 6  1995   187
 7  1996   187
 8  1997   187
 9  1998   187
10  1999   190
11  2000   191
12  2001   191
13  2002   192
14  2003   192
15  2004   192
16  2005   192
17  2006   193
18  2007   193
19  2008   194
20  2009   194
# ... with 11 more rows

We can see that the early 1990s saw the creation of many states after the end of the Soviet Union. Since 2011, the dataset levels out at 195 states (after the creation of South Sudan).

Next, we can add the country name with the countrycode() function from the countrycode package. We feed in the cowcode variable and add the full country names. Click here to read more about the function in more detail and see other options to add country ISO code, for example.

df$country <- countrycode(df$cowcode, "cown", "country.name")

With our dataset with all states, we can add variables for our analysis

We can use the WDI package to download any World Bank indicator.

Click here for more information about this super easy package.

I’ll first add some basic variables, such as population, GDP per capita and infant mortality. We can do this with the WDI() function. The indicator code for population is SP.POP.TOTL so we add that to the indicator argument. (If we wanted only a few countries, we can add a vector of ISO2 code strings to the country argument).

POP <- WDI(country = "all",
           indicator = 'SP.POP.TOTL',
           start = 1990, 
           end = 2020)
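As an aside to the parenthetical above, if we only wanted a handful of countries rather than all of them, a quick sketch (the ISO2 codes here are just examples) would be:

pop_subset <- WDI(country = c("IE", "FR", "DE"),
                  indicator = 'SP.POP.TOTL',
                  start = 1990, 
                  end = 2020)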

The default variable name for population is the long string, so I’ll quickly change that

POP$population <- POP$SP.POP.TOTL 
POP$SP.POP.TOTL <- NULL

I’ll do the same for GDP and infant mortality

GDP <- WDI(country = "all",
       indicator = 'NY.GDP.MKTP.KD',
       start = 1990, 
       end = 2020)

GDP$gdp <- GDP$NY.GDP.MKTP.KD
GDP$NY.GDP.MKTP.KD <- NULL

INF_MORT <- WDI(country = "all",
       indicator = 'SP.DYN.IMRT.IN',
       start = 1990, 
       end = 2020)

INF_MORT$infant_mortality <- INF_MORT$SP.DYN.IMRT.IN
INF_MORT$SP.DYN.IMRT.IN <- NULL

Next, I’ll bind all the variables them together with cbind()

wb_controls <- cbind(POP, GDP, INF_MORT)

This cbind will copy the country and year variables three times so we can delete any replicated variables:

wb_controls <- wb_controls[, !duplicated(colnames(wb_controls), fromLast = TRUE)] 

When we download World Bank data, it comes with aggregated data for regions and economic groups. If we only want in our dataset the variables for countries, we have to delete the extra rows that we don’t want. We have two options for this.

The first option is to add the cow codes and then filter out all the rows that do not have a cow code (i.e. all non-countries)

wb_controls$cow_code <- countrycode(wb_controls$country, "country.name", 'cown')

Then we re-organise the variables a bit more nicely in the dataset with select() and keep only the countries with filter() and an !is.na() condition that removes any row with an NA value in the cow_code column.

df_v2 <- wb_controls %>%
  select(country, iso2c, cow_code, year, everything()) %>%
  filter(!is.na(cow_code))

Alternatively, we can merge the World Bank variables with our states df and it can filter out any row that is not a sovereign, independent state.

In the merge() function, we use by to indicate the columns by which we want to merge the datasets. The all.x argument indicates that we want to keep every row of the first (x) dataset, even if it has no match in the second. If we typed all = TRUE, no unmatched rows from either dataset would be dropped.

wb_controls %<>%
  select(cow_code, year, everything()) 

df_v3 <- merge(df, wb_controls, by.x = c("cowcode", "year"), by.y = c("cow_code", "year"), all.x = TRUE)

You can see that df_v2 has 85 more rows than df_v3. So it is up to you which approach you want to use, and which countries you want to include each year. df_v3 contains states that are more likely to be recognised as sovereign; df_v2 contains more territories.
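If you want to see exactly which country-years df_v2 has that df_v3 does not, a quick sketch with dplyr's anti_join() (using the key columns created above) will list them:

df_v2 %>% 
  anti_join(df_v3, by = c("cow_code" = "cowcode", "year")) %>% 
  distinct(country)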

Let’s look at the prevalence of NA values across our dataset.

We can use the plot_missing() function from the states package.

plot_missing(df_v3, ccode = "cowcode")

It is good to see a lot of green!

Let’s add some constant variables, such as geographical information. The rnaturalearth package is great for plotting maps. Click here to see how to plot maps with the package.

For this dataset, we just want the various geography group variables to add to our dataset:

map <- ne_countries(scale = "medium", returnclass = "sf")

We want to take some of the interesting variables from this map object:

map %>% 
  select(admin, economy, income_grp, continent, region_un, subregion, region_wb) -> regions_sf

This regions_sf is not a data.frame object; it is a simple features (sf) dataset. So we delete the geometry column that makes it an sf object and explicitly coerce it to a data.frame.

regions_sf$geometry<- NULL
regions_df <- as.data.frame(regions_sf)

Finally, we add our COW codes like we did above:

regions_df$cow_code <- countrycode(regions_df$admin, "country.name", "cown")
Warning message:
In countrycode(regions_df$admin, "country.name", "cown") :

Some values were not matched unambiguously: Antarctica, Kashmir, Republic of Serbia, Somaliland, Western Sahara

Sometimes we cannot avoid hand-coding some of our variables. In this case, we don’t want to drop Serbia because the countrycode function couldn’t add the right code.

So we can check what its COW code is and add it to the dataset directly with the mutate function and an ifelse condition:

regions_df %<>% 
  dplyr::mutate(cow_code = ifelse(admin == "Republic of Serbia", 345, cow_code))

If we look at the countries, we can spot a problem: Cyprus is counted twice, due to control by both Turkish and Greek authorities. We can delete one of the versions because the World Bank variables treat Cyprus as one entity, so the values will be the same across both rows.

regions_df <- regions_df %>% slice(-c(38)) 
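If you would rather not rely on the row number (which can change if the map data is updated), an alternative sketch is to drop the duplicate by name instead — check the exact admin spelling in your copy of the rnaturalearth data before running this:

regions_df <- regions_df %>% 
  filter(admin != "Northern Cyprus")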

Next we merge the new geography variables to our dataset. Note that we only merge by one variable – the COW code – and indicate that we want to merge for every row in the x dataset (i.e. the first dataset in the function). So it will apply to each year row for each country!

df_v4 <- merge(df_v3, regions_df, by.x = "cowcode", by.y = "cow_code", all.x = TRUE)

So far so good! We have some interesting variables all without having to open a single CSV or DTA file!

Let’s look at the NA values in the data.frame

nhanes_miss = VIM::aggr(df_v3,
                   labels = names(df_v3), 
                   sortVars = TRUE,
                   numbers = TRUE)

We use the aggr() function from the VIM package to look at the prevalence of NA values. It’s always good to keep an eye on this and catch badly merged or badly specified datasets!

Click here for PART 2, where we add some Correlates of War data and interesting variables with the peacesciencer package .


Compare Irish census years with compareBars and csodata package in R

Packages we will need:

library(csodata)
library(janitor)
library(ggcharts)
library(compareBars)
library(tidyverse)
library(magrittr)  # for the %<>% assignment pipe

First, let’s download population data from the Irish census with the Central Statistics Office (CSO) API package, developed by Conor Crowley.

You can search for the data you want to analyse via R or you can go to the CSO website and browse around the site.

I prefer looking through the site because sometimes I stumble across a dataset I didn’t even think to look for!

Keep note of the code beside the red dot star symbol if you’re looking around for datasets.

Click here to check out the CRAN PDF for the CSO package.

You can search for keywords with cso_search_toc(). I want total population counts for the whole country.

cso_search_toc("total population")

We can download the variables we want by entering the code into the cso_get_data() function

irish_pop <- cso_get_data("EY007")
View(irish_pop)

The EY007 code downloads population census data in both 2011 and 2016 at every age.

It needs a little bit of tidying to get it ready for graphing.

irish_pop %<>%  
  clean_names()

First, we can be lazy and use the clean_names() function from the janitor package.


Next we can get rid of the rows we don’t want with filter() and the columns we don’t want with select().

Then we use the pivot_longer() function to turn the data.frame from wide to long and to turn the x2011 and x2016 variables into one year variable.

irish_pop %>% 
  filter(at_each_year_of_age == "Population") %>% 
  filter(sex == 'Both sexes') %>% 
  filter(age_last_birthday != "All ages") %>% 
  select(!statistic) %>% 
  select(!sex) %>% 
  select(!at_each_year_of_age) -> irish_wide

irish_wide %>% 
  pivot_longer(!age_last_birthday,
    names_to = "year", 
    values_to = "pop_count",
    values_drop_na = TRUE) %>% 
    mutate(year = as.factor(year)) -> irish_long

Now we can create our pyramid chart with the pyramid_chart() function from the ggcharts package. The first argument is the age category for both the 2011 and 2016 data. The second is the actual population counts for each year. Last, enter the group variable that indicates the year.

irish_long %>%   
  pyramid_chart(age_last_birthday, pop_count, year)

One problem with the pyramid chart is that it is difficult to discern any differences between the two years without really really examining each year.

One way to see the differences more easily is with the compareBars function.

The compareBars package created by David Ranzolin can help to simplify comparative bar charts! It’s a super simple function to use that does a lot of visualisation leg work under the hood!

First we need to pivot the data.frame back to wide format and then input the age, and then the two groups – x2011 and x2016 – in the compareBars() function.

We can add more labels and colors to customise the graph also!

irish_long %>% 
  pivot_wider(names_from = year, values_from = pop_count) %>% 
  compareBars(age_last_birthday, x2011, x2016, orientation = "horizontal",
              xLabel = "Population",
              yLabel = "Year",
              titleLabel = "Irish Populations",
              subtitleLabel = "Comparing 2011 and 2016",
              fontFamily = "Arial",
              compareVarFill1 = "#FE6D73",
              compareVarFill2 = "#17C3B2") 

We can see that under the age of four or so, there were more people in 2011. Likewise, there were more people in their twenties in 2011 than in 2016.

However, there are more older people in 2016 than in 2011.

Similar to above it is a bit busy! So we can create groups for every five age years categories and examine the broader trends with fewer horizontal bars.

First we want to remove the word “years” from the age variable and convert it to a numeric class variable. We can easily do this with the parse_number() function from the readr package

irish_wide %<>% 
mutate(age_num = readr::parse_number(as.character(age_last_birthday))) 

Next we can group the age years together into five year categories, zero to 5 years, 6 to 10 years et cetera.

We use the cut() function to divide the numeric age_num variable into equal groups. We use the seq() function and input age 0 to 100, in increments of 5.

irish_wide$age_group = cut(irish_wide$age_num, seq(0, 100, 5))

Next, we can use group_by() to calculate the sum of each population number in each five year category.

And finally, we use the distinct() function to remove the duplicated rows (i.e. we only want to keep the first row that gives us the five year category’s population count for each category).

irish_wide %<>% 
  group_by(age_group) %>% 
  mutate(five_year_2011 = sum(x2011)) %>% 
  mutate(five_year_2016 = sum(x2016)) %>% 
  distinct(five_year_2011, five_year_2016, .keep_all = TRUE)

Next plot the bar chart with the five year categories

compareBars(irish_wide, age_group, five_year_2011, five_year_2016, orientation = "horizontal",
              xLabel = "Population",
              yLabel = "Year",
              titleLabel = "Irish Populations",
              subtitleLabel = "Comparing 2011 and 2016",
              fontFamily = "Arial",
              compareVarFill1 = "#FE6D73",
              compareVarFill2 = "#17C3B2") 

We can also re-plot the pyramid chart with the five-year age categories:

irish_wide %>% 
  select(age_group, five_year_2011, five_year_2016) %>% 
  pivot_longer(!age_group,
             names_to = "year", 
             values_to = "pop_count",
             values_drop_na = TRUE) %>% 
  mutate(year = as.factor(year)) -> irishlong2

irishlong2 %>%   
  pyramid_chart(age_group, pop_count, year)


Make a timeline graph with dates in ggplot2

We will use the geom_segment layer from ggplot2 to make a timeline graph!

This layer takes

  • x and xend for the start of the segment lines
  • y and yend inputs for the end of the segment lines

For our timeline, the x will be the start of each Irish Taoiseach’s term.

The xend will be the end of their term, when they get kicked out of office.

Taoisigh (plural of Taoiseach) are Irish prime ministers and are in charge of the executive branch when their party is in power.

For Ireland, that means that basically every Taoiseach has been the leader of one of the two main parties – Fianna Fáil or Fine Gael.

Not very exciting.

Also they have all been men.

This is also not very exciting.

We have a bit more to go with increasing the diversity in Ireland’s top job.

The y argument is the Taoiseach number in office. Although fifteen men have held the office of Taoiseach, this does not mean each of them held it only once.

Ireland has a parliamentary system so when a party loses an election, the former Taoiseach can become the leader of the opposition and hope in the future they can become Taoiseach again. Some men have been Taoiseach two or three times in non-consecutive terms.

When we are adding the labels with the geom_text() layer, I created an order variable which indicates the first time each man took the office of Taoiseach.

This is so each man’s name appears only once in the graph. If we don’t do this step, then for a man who held office more than once, his name appears every time on the graph and the plot becomes a crowded mess.

I add the ifelse statement so that the first name appears after the segment line and therefore text does not take up too much room on the left edge of the graph.
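The code that builds the order variable is not shown above, so here is a minimal sketch of one way to do it, assuming the data.frame has a name column for each Taoiseach and a start column for each term (both hypothetical names here). We keep the name only for the row with each man's earliest start date and leave the rest missing, which geom_text() then drops:

df <- df %>% 
  group_by(name) %>%                                               # hypothetical column with each Taoiseach's name
  mutate(order = ifelse(start == min(start), name, NA_character_)) %>% 
  ungroup()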

Last we use the scale_color_manual() function with nice hex colors for each of the political parties.

time_line <- df %>% 
 ggplot(aes(x = as.Date(start), y = number, color = party_factor)) +
 geom_segment(aes(xend = as.Date(end), yend = number, color =  party_factor), size = 6) +
 geom_text(aes(label = order, hjust = ifelse(taoiseach_number < 2, -0.7, 1.1)), size = 8, show.legend = FALSE) +
 scale_color_manual(values = c("Fine Gael" = "#004266", "Fianna Fáil" = "#FCB322", "Cumann na nGaedheal" = "#D62828"))

I increase the limits of the graph to accommodate the name labels. Most of the time, these extra bits of code in ggplot2 depend on the type of data you have and what fits on the graph plane nicely!

So this stage is often only finished after trial and error.

I add a snazzy theme_fivethirtyeight() theme from ggthemes package.

Last, with the theme() function, we can remove most of the elements of the graph to make the graph cleaner.

time_line <- time_line + 
  expand_limits(x = as.Date("1915-01-01")) +
  theme_fivethirtyeight() +
  theme(legend.position = "top",
        legend.title = element_blank(),
        legend.direction = "vertical",
        axis.title.y = element_blank(),
        axis.text.y = element_blank(),
        text = element_text(size = 20)) +
  labs(title = "Taoiseach Terms in Ireland",
 subtitle = "From 1922 to 2021") 

We can also create the pie chart to see which party has held power longest in Ireland.

With dplyr we can subtract the start date from the end date and add all the Taoiseach durations (in days) together with the cumsum() function.

We then choose the highest duration value for each party with the slice(which.max()) functions.

I was lazy and I just re-wrote the values in a new data.frame and called it counts.

df %>%
  group_by(party_factor) %>% 
  dplyr::summarise(max_count = cumsum(duration_number)) %>%  
  slice(which.max(max_count)) %>% 
  select(party_factor, max_count) %>% 
  arrange(desc(max_count))

counts <- data.frame(group = c("Cumann na nGaedheal", "Fine Gael" ,"Fianna Fáil"), 
                     value = c(3381, 10143, 22539))

Create proportion values for our pie-chart graph. To do this divide value by the sum of the values and multiply by 100.

data <- counts %>% 
  arrange(desc(group)) %>%
  dplyr::mutate(prop = value / sum(value) * 100) 

Change the numeric variables to factors.

data$duration <- as.factor(data$value)
data$party_factor <- as.factor(data$group)

We use the coord_polar() to create the piechart. To learn more, check out the r-graph-gallery page about creating pie-charts:

pie_chart <- ggplot(data, aes(x = "", y = prop, fill = group)) + 
  geom_bar(stat = "identity", width = 1, color = "white") + 
  coord_polar("y", start = 0) +
  theme(legend.position = "none") + 
  scale_fill_manual(values = c("Fine Gael" = "#004266", "Fianna Fáil" = "#FCB322", "Cumann na nGaedheal" = "#D62828")) +
  labs(title = "Which party held the office of Taoiseach longest?", subtitle = "From 1922 to 2021")

We can tidy up the plot and get rid of theme elements we don’t want with theme_void()

pie_chart <- pie_chart + theme_void() + theme(legend.title = element_blank(), legend.position = "none", text = element_text(size = 40))

I want to add both graphs together, so I save the pie chart with a transparent background using the ggsave() function. I also make sure the lines are not jagged with type = "cairo" from the Cairo package.

ggsave(pie_chart, file="pie_chart.png", type="cairo", bg = "transparent", width = 50, height = 50, units = "cm")

And we can use canva.com to add them together and create a single chart

And voilà!

Examining speeches from the UN Security Council Part 1

Let’s look at how many speeches took place at the UN Security Council every year from 1995 until 2019.


I want to only look at countries, not organisations. So a quick way to do that is to add a variable to indicate whether the speaker variable has an ISO code.

Only countries have ISO codes, so I can use this variable to filter away all the organisations that made speeches

library(countrycode)

speech$iso2 <- countrycode(speech$country, "country.name", "iso2c")

library(bbplot)

speech %>% 
  dplyr::filter(!is.na(iso2)) %>% 
  group_by(year) %>% 
  count() %>% 
  ggplot(aes(x = year, y = n)) + 
  geom_line(size = 1.2, alpha = 0.4) +
  geom_label(aes(label = n)) +
  bbplot::bbc_style() +
  theme(plot.title = element_text(hjust = 0.5)) +
  labs(title = "Number of speeches given by countries at UNSC")

We can see there has been a relatively consistent upward trend in the number of speeches that countries are giving at the UNSC. Time will tell what impact COVID will have on these trends.

There was a particularly sharp increase in speeches in 2015.

We can look and see who was talking, and in the next post, we can examine what they were talking about in 2015 with some simple text analytic packages and functions.

First, we will filter only the year 2015 and count the number of observations per group (i.e. the number of speeches per country this year).

To add flags to the graph, add the iso2 code to the dataset (and it must be in lower case).

Click here to read more about adding circular flags to graphs and maps

speech %>% 
  dplyr::filter(year == 2015) %>% 
  group_by(country, iso2) %>%   # keep iso2 so we can create the lower-case flag codes below
  dplyr::summarise(speech_count = n()) %>% 
  ungroup() -> speech_2015

speech_2015$iso2_lower <- tolower(speech_2015$iso2)

We can clean up the names and create a variable that indicates whether the country is one of the five Security Council Permanent Members, an elected Temporary Member, or a Non-Member.

I also clean up the names to make the country names in the dataset shorter. For example, “United Kingdom Of Great Britain And Northern Ireland” would be very cluttered in the graph compared to just “UK”, so the shorter name will be easier to plot.

library(ggflags)
library(ggthemes)

speech_2015 %>% 
# To avoid the graph being too busy, we only look at countries that gave over 20 speeches
  dplyr::filter(speech_count > 20) %>% 

# Clean up some names so the graph is not too crowded
  dplyr::mutate(country = ifelse(country == "United Kingdom Of Great Britain And Northern Ireland", "UK", country)) %>%
  dplyr::mutate(country = ifelse(country == "Russian Federation", "Russia", country)) %>%
  dplyr::mutate(country = ifelse(country == "United States Of America", "USA", country)) %>%
  dplyr::mutate(country = ifelse(country == "Republic Of Korea", "South Korea", country)) %>%
  dplyr::mutate(country = ifelse(country == "Venezuela (Bolivarian Republic Of)", "Venezuela", country)) %>% 
  dplyr::mutate(country = ifelse(country == "Islamic Republic Of Iran", "Iran", country)) %>% 
  dplyr::mutate(country = ifelse(country == "Syrian Arab Republic", "Syria", country)) %>% 
 
# Create a Member status variable:
# China, France, Russia, the United Kingdom, and the United States are UNSC Permanent Members
  dplyr::mutate(Member = ifelse(country == "UK", "Permanent", 
  ifelse(country == "USA", "Permanent",
  ifelse(country == "China", "Permanent",
  ifelse(country == "Russia", "Permanent",
  ifelse(country == "France", "Permanent",

# Non-permanent members in their first year (elected October 2014)
  ifelse(country == "Angola", "Temporary (Elected 2014)",
  ifelse(country == "Malaysia", "Temporary (Elected 2014)",              
  ifelse(country == "Venezuela", "Temporary (Elected 2014)",       
  ifelse(country == "New Zealand", "Temporary (Elected 2014)",
  ifelse(country == "Spain", "Temporary (Elected 2014)",                 

# Non-permanent members in their second year (elected October 2013)        
  ifelse(country == "Chad", "Temporary (Elected 2013)",                                                               
  ifelse(country == "Nigeria", "Temporary (Elected 2013)",
  ifelse(country == "Jordan", "Temporary (Elected 2013)",
  ifelse(country == "Chile", "Temporary (Elected 2013)",
  ifelse(country == "Lithuania", "Temporary (Elected 2013)", 
 
# Non members that will join UNSC next year (elected October 2015)          
  ifelse(country == "Egypt", "Non-Member (Elected 2015)",                                                               
  ifelse(country == "Senegal", "Non-Member (Elected 2015)",
  ifelse(country == "Uruguay", "Non-Member (Elected 2015)",
  ifelse(country == "Japan", "Non-Member (Elected 2015)",
  ifelse(country == "Ukraine", "Non-Member (Elected 2015)", 

# Everyone else is a regular non-member           
               "Non-Member"))))))))))))))))))))) -> speech_2015

When we have over a dozen nested ifelse() statements, we will need to check that we have all our corresponding closing brackets.
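If matching all those closing brackets gets tiresome, dplyr's case_when() is a tidier alternative to deeply nested ifelse() calls. Here is a sketch of the same recoding with that approach:

speech_2015 %>% 
  mutate(Member = case_when(
    country %in% c("UK", "USA", "China", "Russia", "France") ~ "Permanent",
    country %in% c("Angola", "Malaysia", "Venezuela", "New Zealand", "Spain") ~ "Temporary (Elected 2014)",
    country %in% c("Chad", "Nigeria", "Jordan", "Chile", "Lithuania") ~ "Temporary (Elected 2013)",
    country %in% c("Egypt", "Senegal", "Uruguay", "Japan", "Ukraine") ~ "Non-Member (Elected 2015)",
    TRUE ~ "Non-Member")) -> speech_2015_alt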

Next choose some colours for each Memberships status. I always take my hex values from https://coolors.co/

membership_palette <- c("Permanent" = "#e63946", "Non-Member" = "#2a9d8f", "Non-Member (Elected 2015)" = "#a8dadc", "Temporary (Elected 2013)" = "#457b9d","Temporary (Elected 2014)" = "#1d3557")

And all that is left to do is create the bar chart.

With geom_bar(), we can indicate stat = "identity" because we are giving the plot the y values and ggplot does not need to do the automatic aggregation on its own.

To make sure the bars are descending from most speeches to fewest speeches, we use the reorder() function. The second argument is the variable according to which we want to order the bars. So for us, we give the speech_count integer variable to order our country bars with x = reorder(country, speech_count).

We can change the bars from vertical to horizontal with coord_flip().

I add flags with geom_flag() and feed the lower case ISO code to the country = iso2_lower argument.

I add the bbc_style() again because I like the font, size and sparse lines on the plot.

We can move the title of the plot into the centre with plot.title = element_text(hjust = 0.5))

Finally, we can supply the membership_palette vector to the values = argument in the scale_fill_manual() function to specify the colours we want.

speech_2015 %>%  ggplot(aes(x = reorder(country, speech_count), y = speech_count)) + 
  geom_bar(stat = "identity", aes(fill = as.factor(Member))) +
  coord_flip() +
  ggflags::geom_flag(mapping = aes(y = -15, x = country, country = iso2_lower), size = 10) +
  geom_label(mapping = aes( label = speech_count), size = 8) +
  theme(legend.position = "top") + 
  labs(title = "UNSC speeches given in 2015", y = "Number of speeches", x = "") +
  bbplot::bbc_style() +
  theme(text = element_text(size = 20),
  plot.title = element_text(hjust = 0.5)) +
  scale_fill_manual(values =  membership_palette)

In the next post, we will look at the texts themselves. Here is a quick preview.

library(tidytext)

speech_tokens <- speech %>%
  unnest_tokens(word, text) %>% 
  anti_join(stop_words)

We count the number of distinct words (i.e. unique tokens) for each country in each year. With the distinct() function we take only one observation per year per country. This reduces the number of rows from 16601520 in speech_tokens to 3142 rows in speech_words_count:

speech_words_count <- speech_tokens %>%
  group_by(year, country) %>%
  mutate(word_count = n_distinct(word)) %>%
  select(country, year, word_count, permanent, iso2_lower) %>%
  distinct() 

Subset the data.frame to only plot the five Permanent Members. Now we only have 125 rows (25 years of total annual word counts for 5 countries!)

permanent_word_summary <- speech_words_count %>% 
  filter(permanent == 1) 

Choose some nice hex colors for my five countries:

five_pal <- c("#ffbc42","#d81159","#8f2d56","#218380","#73d2de")

It is a bit convoluted to put the flags ONLY at the start and end of the lines. We need to subset the dataset two times in the geom_flag() layers. First, we subset the data.frame to year == 1995 and the flags appear at the start of the word_count on the y axis. Then we subset to year == 2019 and do the same.

ggplot(data = permanent_word_summary) +
  geom_line(aes(x = year, y = word_count, group = as.factor(country), color = as.factor(country)), 
size = 2) +
  ggflags::geom_flag(data = subset(permanent_word_summary, year == 1995), aes(x = 1995, y = word_count,  country = iso2_lower), size = 9) +
  ggflags::geom_flag(data = subset(permanent_word_summary, 
year == 2019), 
aes(x = 2019, 
y = word_count, 
country = iso2_lower), 
size = 12) + 
  bbplot::bbc_style() +
 theme(legend.position = "right") + labs(title = "Number of words spoken by Permanent Five in the UN Security Council") + 
  scale_color_manual(values = five_pal)

We can see that China has been the least chatty country, if we are measuring chattiness by the number of words spoken. Translation considerations must also be taken into account. We can see here again that around the 2015 mark, there was a discernible increase in the number of words spoken by most of the countries!
