Convert event-level data to panel-level data with tidyr in R

Packages we will need:

library(tidyverse)
library(magrittr)
library(lubridate)
library(tidyr)
library(rvest)
library(janitor)

In this post, we are going to scrape NATO accession data from Wikipedia and turn it into panel data. This means turning a list of every NATO country and their accession date into a time-series, cross-sectional dataset with information about whether or not a country is a member of NATO in any given year.

This is helpful for political science analysis because a simple dummy variable indicating whether or not a country is in NATO would lose information about when it joined. The UK joined NATO in 1949 but North Macedonia only joined in 2020. A simple binary variable would not capture this if we added it to our panel data.


We will first scrape a table from the Wikipedia page on NATO member states with a few functions from the rvest package.

Click here to read more about the rvest package:

nato_members <- read_html("https://en.wikipedia.org/wiki/Member_states_of_NATO")

nato_tables <- nato_members %>% html_table(header = TRUE, fill = TRUE)

nato_member_joined <- nato_tables[[1]]

We have information about each country and the date they joined. In total there are 30 rows, one for each member of NATO.
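We can take a quick peek at the raw table before cleaning it (the column names come straight from Wikipedia):

glimpse(nato_member_joined)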

Next we are going to clean up the data, remove the numbers in the [square brackets], and select the columns that we want.

A very handy function from the janitor package cleans the variable names, so they become lower_case_with_underscores rather than how they appear on Wikipedia.

Next we remove the square brackets and their contents with sub("\\[.*", "", insert_variable_name)

And the accession date variable is a bit tricky because we want to convert it to date format, extract the year and convert back to an integer.

nato_member_joined %<>% 
  clean_names() %>% 
  select(country = member_state, 
         accession = accession_3) %>% 
  mutate(member_2020 = 2020,
         country = sub("\\[.*", "", country),
         accession = sub("\\[.*", "", accession),
         accession = parse_date_time(accession, "dmy"),
         accession = format(as.Date(accession, format = "%d/%m/%Y"),"%Y"),
         accession = as.numeric(as.character(accession)))

When we have our clean data, we will pivot the data to longer format. This will create one event column that has a value of either accession or member_2020.

This gives us the start and end year of our time variable for each country.

nato_member_joined %<>% 
  pivot_longer(!country, names_to = "event", values_to = "year") 

Our dataset now has 60 observations. We see Albania joined in 2009 and is still a member in 2020, for example.
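For example, filtering for one country (spelling as it appears in the Wikipedia table) shows its two rows, one per event:

nato_member_joined %>% 
  filter(country == "Albania")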

Next we will use the complete() function from the tidyr package to fill in all the years between 1949 and 2020. This will expand our dataset to 2,160 observations, with a row for each country in each year.

Next we will group the dataset by country and fill the nato_member status variable down until the most recent year.

nato_member_joined %<>% 
  mutate(year = as.Date(as.character(year), format = "%Y")) %>% 
  mutate(year = ymd(year)) %>% 
  complete(country, year = seq.Date(min(year), max(year), by = "year")) %>% 
  mutate(nato_member = ifelse(event == "accession", 1, 
                              ifelse(event == "member_2020", 1, 0))) %>% 
  group_by(country) %>% 
  fill(nato_member, .direction = "down") %>%
  ungroup()

Last, we will use the ifelse() function to mutate the event variable into one of three categories: 'accession', 'member' or 'not member'.

nato_member_joined %>%
  mutate(nato_member = replace_na(nato_member, 0),
         year = parse_number(as.character(year)),
         event = ifelse(nato_member == 0, "not member", event),
         event = ifelse(nato_member == 1 & is.na(event), "member", event),
         event = ifelse(event == "member_2020", "member", event))  %>% 
  distinct(country, year, .keep_all = TRUE) -> nato_panel
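As a quick sanity check on the finished panel (a sketch using the nato_panel object we just created), we can count how many members the data records in a couple of years:

nato_panel %>% 
  filter(year %in% c(1960, 2020)) %>% 
  group_by(year) %>% 
  summarise(members = sum(nato_member))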

Lump groups together and create “other” category with forcats package

Packages we will need:

library(tidyverse)
library(forcats)
library(tidytext)
library(ggthemes)
library(democracyData)
library(magrittr)

For this blog, we are going to look at the titles of all countries' heads of state, such as Kings, Presidents, Emirs, Chairmen ... understandably, there are many, many ways to title the leader of a country.

First, we will download the PACL dataset from the democracyData package.

Click here to read more about this super handy package:

If you want to read more about the variables in this dataset, click the link below to download the codebook by Cheibub et al.

pacl <- redownload_pacl()

We are going to look at the npost variable; this captures the political title of the nominal head of state. This can be King, President, Sultan, et cetera!

pacl %>% 
  count(npost) %>% 
  arrange(desc(n))

If we count the occurrence of each title, we can see there are many ways to be called the head of a country!

"president"                         3693
"prime minister"                    2914
"king"                               470
"Chairman of Council of Ministers"   229
"premier"                            169
"chancellor"                         123
"emir"                               117
"chair of Council of Ministers"      111
"head of state"                       90
"sultan"                              67
"chief of government"                 63
"president of the confederation"      63
""                                    44
"chairman of Council of Ministers"    44
"shah"                                33

# ... with 145 more rows

155 groups are a bit difficult to compare meaningfully.

So we can collapse some of the groups together and lump all the titles that occur relatively seldom – sometimes only once or twice – into an "other" category.


First, we use the grepl() function to catch every title containing the word president or chair (chairman, chairwoman, chairperson, et cetera) and recode them into these broader categories.

Also, we use the tolower() function to make all titles lower case, so there is no confusion over inconsistent capitalisation.

 pacl %<>% 
  mutate(npost = tolower(npost)) %>% 
  mutate(npost = ifelse(grepl("president", npost), "president", npost)) %>% 
  mutate(npost = ifelse(grepl("chair", npost), "chairperson", npost))

Next, we create an "other leader type" category with the fct_lump_prop() function.

We specify a proportion threshold; if a group accounts for a smaller share of the dataset than this level, it is added to the "other" group.

pacl %<>% 
  mutate(regime_prop = fct_lump_prop(npost,
                                   prop = 0.005,
                                   other_level = "Other leader type")) %>% 
  mutate(regime_prop = str_to_title(regime_prop)) 

Now, instead of 155 types of leader titles, we have 10 types, and the rest are all bundled into the Other Leader Type category.

President            4370
Prime Minister       2945
Chairperson           520
King                  470
Other Leader Type     225
Premier               169
Chancellor            123
Emir                  117
Head Of State          90
Sultan                 67
Chief Of Government    63

The forcats package has three other ways to lump the levels together.

First, we can quickly look at fct_lump_min().

We can set the min argument to 100 and look at how it condenses the groups together:

pacl %>% 
  mutate(npost = tolower(npost)) %>% 
 
  mutate(post_min = fct_lump_min(npost,
                                   min = 100,
                                   other_level = "Other type")) %>% 
  mutate(post_min = str_to_title(post_min)) %>% 
  count(post_min) %>% 
  arrange(desc(n))
President       4370
Prime Minister  2945
Chairperson      520
King             470
Other Type       445
Premier          169
Chancellor       123
Emir             117

We can see that if the post appears fewer than 100 times, it is now in the Other Type category. In the previous example, Head Of State only appeared 90 times so it didn’t make it.

Next we look at fct_lump_lowfreq().

This function lumps together the least frequent levels, ensuring that the "other" category remains the smallest group. We don't supply any numeric argument.

pacl %>% 
  mutate(npost = tolower(npost)) %>% 
  mutate(post_lowfreq  = fct_lump_lowfreq(npost,
                                   other_level = "Other type")) %>% 
  mutate(post_lowfreq = str_to_title(post_lowfreq)) %>% 
  count(post_lowfreq) %>% 
  arrange(desc(n))
President       4370
Prime Minister  2945
Other Type      1844

This one only has three categories and all but president and prime minister are chucked into the Other type category.

Last, we can look at the fct_lump_n() to make sure we have a certain number of groups. We add n = 5 and we create five groups and the rest go to the Other type category.

pacl %>% 
  mutate(npost = tolower(npost)) %>% 
  mutate(post_n  = fct_lump_n(npost,
                                n = 5,
                                other_level = "Other type")) %>% 
  mutate(post_n = str_to_title(post_n)) %>% 
  count(post_n) %>% 
  arrange(desc(n))
President       4370
Prime Minister  2945
Other Type       685
Chairperson      520
King             470
Premier          169

Next we can make a simple graph counting the different leader titles in free, partly free and not free Freedom House countries. We will use the download_fh() function from the democracyData package again.

fh <- download_fh()

We will use the reorder_within() function from tidytext package.

Click here to read the full blog post explaining the function from Julia Silge’s blog.

First we add the Freedom House data with the inner_join() function.

Then we use fct_lump_n() and keep the top five categories (plus the Other Type category we create).

pacl %<>% 
  inner_join(fh, by = c("cown", "year")) %>% 
  mutate(npost  = fct_lump_n(npost,
                  n = 5,
                  other_level = "Other type")) %>%
  mutate(npost = str_to_title(npost))

Then we group_by the three Freedom House status levels and count the number of each title:

pacl %<>% 
  group_by(status) %>% 
  count(npost) %>% 
  ungroup()

Using reorder_within(), we order the titles from most to fewest occurences WITHIN each status group:

pacl %<>%
  mutate(npost = reorder_within(npost, n, status)) 

To plot the columns, we use geom_col() and separate them into each Freedom House group with facet_wrap(). We add scales = "free_y" so that we don't add every title to each group. Without this we would have empty spaces in the Free group for Emir and King, so this step removes a lot of clutter.

pacl_colplot <- pacl %>%
  ggplot(aes(fct_reorder(npost, n), n)) +
  geom_col(aes(fill = npost), show.legend = FALSE) +
  facet_wrap(~status, scales = "free_y") 

Last, I manually added the colors to each group (which now have longer names to reorder them) so that they are consistent across each group. I am sure there is an easier and less messy way to do this but sometimes finding the easier way takes more effort!
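One less manual option (a sketch, relying on the default "___" separator that reorder_within() appends to each label) is to build the long colour names programmatically and pass the result to scale_fill_manual(values = long_palette):

# Base colours for the six titles in the plot
base_palette <- c("Prime Minister" = "#005f73",
                  "President"      = "#94d2bd",
                  "Other Type"     = "#ee9b00",
                  "Chairperson"    = "#bb3e03",
                  "King"           = "#9b2226",
                  "Emir"           = "#001219")

statuses <- c("F", "NF", "PF")

# Repeat each colour once per Freedom House status and paste on the suffix
long_palette <- setNames(rep(base_palette, each = length(statuses)),
                         paste0(rep(names(base_palette), each = length(statuses)),
                                "___", statuses))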

We add the scale_x_reordered() function to clean up the names and remove everything from the underscore in the title label.

pacl_colplot + scale_fill_manual(values = c("Prime Minister___F" = "#005f73",
                                "Prime Minister___NF" = "#005f73",
                                "Prime Minister___PF" = "#005f73",
                                
                               "President___F" = "#94d2bd",
                               "President___NF" = "#94d2bd",
                               "President___PF" = "#94d2bd",
                               
                               "Other Type___F" = "#ee9b00",
                               "Other Type___NF" = "#ee9b00",
                               "Other Type___PF" = "#ee9b00",
                               
                               "Chairperson___F" = "#bb3e03",
                               "Chairperson___NF" = "#bb3e03",
                               "Chairperson___PF" = "#bb3e03",
                               
                               "King___F" = "#9b2226",
                               "King___NF" = "#9b2226",
                               "King___PF" = "#9b2226",
                               
                               "Emir___F" = "#001219", 
                               "Emir___NF" = "#001219",
                               "Emir___PF" = "#001219")) +
  scale_x_reordered() +
  coord_flip() + 
  ggthemes::theme_fivethirtyeight() + 
  theme(text = element_text(size = 30))

In case you were curious about the free country that had a chairperson, Nigeria had one for two years.

pacl %>%
  filter(status == "F") %>% 
  filter(npost == "Chairperson") %>% 
  select(Country = pacl_country) %>% 
  knitr::kable("latex") %>%
  kableExtra::kable_classic(font_size = 30)

References

Cheibub, J. A., Gandhi, J., & Vreeland, J. R. (2010). Democracy and dictatorship revisited. Public Choice, 143(1), 67-101.

Visualise DemocracyData with graphs and maps

Packages we will need:

library(tidyverse)
library(democracyData)
library(magrittr)
library(ggrepel)
library(ggthemes)
library(countrycode)

In this post, we will look at easy ways to graph data from the democracyData package.

The two datasets we will look at are the Anckar-Fredriksson dataset of political regimes and Freedom House Scores.

Regarding democracies, Anckar and Fredriksson (2018) distinguish between republics and monarchies. Republics can be presidential, semi-presidential, or parliamentary systems.

Within the category of monarchies, almost all systems are parliamentary, but a few countries fall into the semi-monarchy category.


Autocratic countries can be in the following main categories: absolute monarchy, military rule, party-based rule, personalist rule, and oligarchy.

anckar <- democracyData::redownload_anckar()
fh <- download_fh()

We will see which regime types have been free or not since 1970.

We join the fh dataset to the anckar dataset with inner_join(). Luckily, both the datasets have the cown and year variables with which we can merge.

Then we summarise the mean Freedom House level for each regime type.

anckar %>% 
  inner_join(fh, by = c("cown", "year")) %>% 
  filter(!is.na(regimebroadcat)) %>%
  group_by(regimebroadcat, year) %>% 
  summarise(mean_fh = mean(fh_total_reversed, na.rm = TRUE)) -> anckar_sum

We want to place a label for each regime line in the graph, so we create a small dataframe with regime score information for the first year only.

anckar_start <- anckar_sum %>%
  group_by(regimebroadcat) %>% 
  filter(year == 1972) %>% 
  ungroup() 

And we pick some more jewel toned colours for the graph and put them in a vector.

my_palette <- c("#ca6702", "#bb3e03", "#ae2012", "#9b2226", "#001219", "#005f73", "#0a9396", "#94d2bd", "#ee9b00")

And we graph it out

anckar_sum %>%
  ggplot(aes(x = year, y = mean_fh, group = as.factor(regimebroadcat))) + 
  geom_point(aes(color = regimebroadcat), alpha = 0.7, size = 2) + 
  geom_line(aes(color = regimebroadcat), alpha = 0.7, size = 2) +
  ggrepel::geom_label_repel(data = anckar_start, hjust = 1.5,
            aes(x = year,
                y = mean_fh,
                color = regimebroadcat,
                label = regimebroadcat),
            alpha = 0.7,
            show.legend = FALSE, 
            size = 9) + 
  scale_color_manual(values = my_palette) +
  expand_limits(x = 1965) +  
  ggthemes::theme_pander() + 
  theme(legend.position = "none",
        axis.text = element_text(size = 30, colour ="grey40")) 

We can also use the map data that comes with ggplot2's map_data() function (drawn from the maps package).

To merge the countries easily, I add a cown variable to this data.frame

world_map <- map_data("world")

world_map %<>% 
  mutate(cown = countrycode::countrycode(region, "country.name", "cown"))

I want to only look at regime types in the final year in the dataset – which is 2018 – so we filter to that one year before we merge with the map data.frame.

The geom_polygon() layer is where we indicate the variable we want to plot. In our case it is the regime category.

anckar %>% 
 filter(year == max(year)) %>%
  inner_join(world_map, by = c("cown")) %>%
  mutate(regimebroadcat = ifelse(region == "Libya", 'Military rule', regimebroadcat)) %>% 
  ggplot(aes(x = long, y = lat, group = group)) + 
  geom_polygon(aes(fill = regimebroadcat), color = "white", size = 1) 

We can next look at the PIPE dataset and see which countries have been uninterrupted republics over time.

pipe <- democracyData::redownload_pipe()

We graph out the max_republic_age variable with geom_bar()


pipe %>% 
  mutate(iso_lower = tolower(countrycode::countrycode(PIPE_cowcodes, "cown", "iso2c"))) %>% 
  mutate(country_name = countrycode::countrycode(PIPE_cowcodes, "cown", "country.name")) %>% 
  filter(year == max(year)) %>% 
  filter(max_republic_age > 100) %>% 
  ggplot(aes(x = reorder(country_name, max_republic_age), y = max_republic_age)) + 
  geom_bar(stat = "identity", width = 0.7, aes(fill = as.factor(europe))) +
  ggflags::geom_flag(aes(y = max_republic_age, x = country_name, 
                         country = iso_lower), size = 15) + 
  coord_flip() +  ggthemes::theme_pander() -> pipe_plot

And fix up some aesthetics:

pipe_plot + 
  theme(axis.text = element_text(size = 30),
        legend.text = element_text(size = 30),
        legend.title = element_blank(),
        axis.title = element_blank(),
        legend.position = "bottom") + 
  labs(y= "", x = "") + 
scale_fill_manual(values =  c("#d62828", "#457b9d"),
 labels = c("Former British Settler Colony", "European Country")) 

I added the header and footer in Canva.com


Download democracy data with democracyData package in R

Packages we will need:

library(democracyData)
library(tidyverse)
library(magrittr)       # for pipes
library(ggstream)       # proportion plots
library(ggthemes)       # nice ggplot themes
library(forcats)        # reorder factor variables
library(ggflags)        # add flags
library(peacesciencer)  # more great polisci data
library(countrycode)    # add ISO codes to countries

This blog will highlight some quick datasets that we can download with this nifty package.

To install the democracyData package, it is best to do this via the github of Xavier Marquez:

remotes::install_github("xmarquez/democracyData", force = TRUE)
library(democracyData)

We can download the dataset from the Democracy and Dictatorship Revisited paper by Cheibub, Gandhi and Vreeland (2010) with the redownload_pacl() function. It's all very simple!

pacl <- redownload_pacl()

This gives us over 80 variables, with information on things such as regime type, geographical data, the name and age of the leaders, and various democracy variables.

We are going to focus on the different regimes across the years.

The six-fold regime classification Cheibub et al (2010) present is rooted in the dichotomous classification of regimes as democracy and dictatorship introduced in Przeworski et al. (2000). They classify regimes according to various metrics, primarily by examining the way in which governments are removed from power and what constitutes the "inner sanctum" of power for a given regime.

Dictatorships can be distinguished according to the characteristics of these inner sanctums. Monarchs rely on family and kin networks along with consultative councils; military rulers confine key potential rivals from the armed forces within juntas; and civilian dictators usually create a smaller body within a regime party – a political bureau – to coopt potential rivals. Democracies, in turn, are distinguished by the rules under which a government's power can end, giving parliamentary, mixed and presidential sub-types.

We can change the regime variable from numbers to a factor variable, describing the type of regime that the codebook indicates:

pacl %<>% 
  mutate(regime_name = ifelse(regime == 0, "Parliamentary democracies",
       ifelse(regime == 1, "Mixed democracies",
       ifelse(regime == 2, "Presidential democracies",
       ifelse(regime == 3, "Civilian autocracies",
       ifelse(regime == 4, "Military dictatorships",
       ifelse(regime ==  5,"Royal dictatorships", regime))))))) %>%
  mutate(regime = as.factor(regime)) 
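As a quick check (output not shown), we can count how many country-years fall into each of the six categories:

pacl %>% 
  count(regime_name, sort = TRUE)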

Before we make the graph, we can give traffic light hex colours to the types of democracy. This goes from green (full democracy) to more oranges / reds (autocracies):

regime_palette <- c("Military dictatorships" = "#f94144", 
                    "Civilian autocracies" = "#f3722c", 
                    "Royal dictatorships" =  "#f8961e", 
                    "Mixed democracies" = "#f9c74f", 
                    "Presidential democracies" = "#90be6d", 
                    "Parliamentary democracies" = "#43aa8b")

We will use count() to count the number of countries in each regime type, which creates a variable called n.

pacl %>% 
  mutate(regime_name = as.factor(regime_name)) %>% 
  mutate(regime_name = fct_relevel(regime_name, 
           "Parliamentary democracies", 
           "Presidential democracies",
           "Mixed democracies",
           "Royal dictatorships",
           "Civilian autocracies",
           "Military dictatorships")) %>% 
  group_by(year, un_continent_name) %>% 
  filter(!is.na(regime_name)) %>% 
  count(regime_name) %>% 
  ungroup() %>%  
  filter(un_continent_name != "") %>%
  filter(un_continent_name != "Oceania") -> pacl_count

We have all the variables we need.

We can now graph the count variables across different regions.

pacl_count %>% 
  ggplot(aes(x = year, y = n, 
             groups = regime_name, 
             fill = regime_name)) +
  ggstream::geom_stream(type = "proportion") + 
  facet_wrap(~un_continent_name) + 
  scale_fill_manual(values = regime_palette) + 
  ggthemes::theme_fivethirtyeight() + 
  theme(legend.title = element_blank(),
        text = element_text(size = 30)) 

I added the title and source header / footer section on canva.com to finish the graph.

Of course, the Cheibub et al (2010) dataset is not the only one that covers types of regimes.

Curtis Bell developed the Rulers, Elections, and Irregular Governance (REIGN) dataset in 2016.

This describes political conditions in every country, including the tenures and personal characteristics of world leaders, the types of political institutions and political regimes in effect, election outcomes and election announcements, and irregular events like coups.

Again, to download this dataset with the democracyData package, it is very simple:

reign <- download_reign()

I want to compare North and South Korea since their independence from Japan and see the changes in regimes and democracy scores over the years.

Next, we can easily download Freedom House or Polity 5 scores.

The default Freedom House dataset ranges from 1972 to 2020 and covers around 195 countries (depending on the year).

fh <- download_fh()

Alternatively, we can look at Polity scores. This default dataset contains around 190 countries (again depending on the year and the number of countries in existence at that time) and covers a far longer range of years: from 1880 to 2018.

polityiv <- redownload_polityIV()

Alternatively, to download democracy scores, we can also use the peacesciencer package. Click here to read more about this package:

democracy_scores <- peacesciencer::create_stateyears() %>% 
  add_gwcode_to_cow() %>%
  add_democracy() 

With inner_join() we can merge these two datasets together:

reign %>% 
  select(ccode = cown, everything()) %>% 
  inner_join(democracy_scores, by = c("year", "ccode")) -> reign_demo

We next choose the years and countries for our plot.

Also, for geom_flag() we will need the country codes as lower case ISO2 characters. Click here to read more about the ggflags package.

reign_demo %>% 
    filter(year > 1945) %>% 
    mutate(gwf_regimetype = str_to_title(gwf_regimetype)) %>% 
    mutate(iso2c_lower = tolower(countrycode::countrycode(reign_country, "country.name", "iso2c"))) %>% 
filter(reign_country == "Korea North" | reign_country == "Korea South") -> korea_reign

We may want to use specific hex colours for our graphs. I always prefer these deeper colours, rather than the pastel defaults that ggplot uses. I take them from the coolors.co website!

korea_palette <- c("Military" = "#5f0f40",
                   "Party-Personal" = "#9a031e",
                   "Personal" = "#fb8b24",
                   "Presidential" = "#2a9d8f",
                   "Parliamentary" = "#1e6091")

We will add a flag to the start of the graph, so we create a mini dataset that only has the democracy scores for the first year in the dataset.

  korea_start <- korea_reign %>%
    group_by(reign_country) %>% 
    slice(which.min(year)) %>% 
    ungroup() 

Next we plot the graph

korea_reign %>% 
 ggplot(aes(x = year, y = v2x_polyarchy, group = reign_country))  +
    geom_line(aes(color = gwf_regimetype), 
         size = 7, alpha = 0.7, show.legend = FALSE) +
    geom_point(aes(color = gwf_regimetype), size = 7, alpha = 0.7) +
    ggflags::geom_flag(data = korea_start, 
       aes(y = v2x_polyarchy, x = 1945, country = iso2c_lower), 
           size = 20) -> korea_plot

And then work on the aesthetics of the plot:

korea_plot + ggthemes::theme_fivethirtyeight() + 
    ggtitle("Electoral democracy on Korean Peninsula") +
    labs(subtitle = "Sources: Teorell et al. (2019) and Curtis (2016)") +
    xlab("Year") + 
    ylab("Democracy Scores") + 
    theme(plot.title = element_text(face = "bold"),
      axis.ticks = element_blank(),
      legend.box.background = element_blank(),
      legend.title = element_blank(),
      legend.text = element_text(size = 40),
      text = element_text(size = 30)) +
    scale_color_manual(values = korea_palette) + 
    scale_x_continuous(breaks = round(seq(min(korea_reign$year), max(korea_reign$year), by = 5),1))

While North Korea has been consistently ruled by the Kim dynasty, South Korea has gone through various types of government and varying levels of democracy!

References

Cheibub, J. A., Gandhi, J., & Vreeland, J. R. (2010). Democracy and dictatorship revisited. Public Choice, 143(1), 67-101.

Przeworski, A., Alvarez, M. E., Cheibub, J. A., & Limongi, F. (2000). Democracy and development: Political institutions and well-being in the world, 1950-1990. Cambridge University Press.

Scrape and graph election polling data from Wikipedia

Packages we will need:

library(rvest)
library(tidyverse)
library(magrittr)
library(forcats)
library(janitor)

With the Korean Presidential elections coming up, I wanted to graph the polling data since the beginning of this year.


The data we can use is all collated together on Wikipedia.

Click here to read more about using the rvest package for scraping data from websites and click here to read the CRAN PDF for the package.

poll_html <- read_html("https://en.wikipedia.org/wiki/2022_South_Korean_presidential_election")

poll_tables <- poll_html %>% html_table(header = TRUE, fill = TRUE)

There are 22 tables on the page in total.

Counting down the page, the polling data is in the 16th table, so we extract index [[16]] from the list.

feb_poll <- poll_tables[[16]]
View(feb_poll)

It is a bit messy, so we will need to do a bit of data cleaning before we can graph.


First, the names of many variables are missing or sit on rows 2 and 3 of the table, due to pictures and split cells on Wikipedia.

 [1] "Polling firm / Client" "Polling firm / Client" "Fieldwork  date"       "Sample  size" "Margin of  error"     
 [6] ""       ""      ""     ""      ""                     
[11] ""  "Others/Undecided"   "Lead"   

The clean_names() function from the janitor package does a lot of the brute force variable name cleaning!

feb_poll %<>% clean_names()

We now have variable names rather than empty column names, at least.

 [1] "polling_firm_client" "polling_firm_client_2" "fieldwork_date"        "sample_size"  "margin_of_error"      
 [6] "x"  "x_2"  "x_3"  "x_4"  "x_5"                  
[11] "x_6"  "others_undecided"   "lead"

We can choose the variables we want and rename the x variables with the names of each candidate, according to Wikipedia.

feb_poll %<>% 
  select(fieldwork_date, 
         Lee = x, 
         Yoon = x_2,
         Shim = x_3,
         Ahn = x_4, 
         Kim = x_5, 
         Heo = x_6,
         others_undecided)

We then delete the rows that contain text not related to the poll number values.

feb_poll = feb_poll[-25,]
feb_poll = feb_poll[-81,]
feb_poll = feb_poll[-1,]

I want to clean up the fieldwork_date variable and convert it from character to Date class.

First, I found this very handy function on Stack Overflow that extracts the last n characters from a string variable.

substrRight <- function(x, n){
  substr(x, nchar(x)-n+1, nchar(x))
}
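For example, with a made-up fieldwork string, the function returns the last five characters:

substrRight("24 Feb - 2 Mar", 5)
# [1] "2 Mar"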

If we look at the table, some of the surveys started in Feb but ended in March. We want to extract the final section (i.e. the March section) and use that.

So we use grepl() to find rows that have both Feb AND March, and just extract the March section. If it only has one of those months, we leave it as it is.

feb_poll %<>% 
  mutate(clean_date = ifelse(grepl("Feb", fieldwork_date) & grepl("Mar", fieldwork_date), substrRight(fieldwork_date, 5), fieldwork_date))

Next we want to extract the three-letter month abbreviation from this variable and create a new month variable.

feb_poll %<>%
  mutate(month = substrRight(clean_date, 3)) 

Following that, we use the parse_number() function from the readr package to extract the first number found in the string and create a day_number variable (now integer class).

 feb_poll %<>%
   mutate(day_number = parse_number(clean_date))   

We want to take these two variables we created and combine them together with the unite() function, again from tidyr! By default the original variables are deleted after we unite them; often I want to keep them, so I usually change the remove argument to FALSE.

We indicate that we want nothing separating the values with the sep = "" argument.

 feb_poll %<>%
     unite("date", day_number:month, sep = "", remove = TRUE)

And we convert this new date into Date class with as.Date() function.

Here is a handy cheat sheet to help choose the appropriate % key so the format recognises the dates. I will never memorise these values, so I always need to refer to this site.

We have days as numbers (1, 2, 3) and abbreviated 3 character month (Jan, Feb, Mar), so we choose %d and %b

feb_poll %<>%
  mutate(dates_format = as.Date(date, "%d%b")) %>% 
  select(dates_format, Lee:others_undecided) 

Next, we will use the pivot_longer() function to combine all the poll number values into one column. This will make it far easier to plot later.

feb_poll %<>%
  pivot_longer(!dates_format, names_to = "candidate", values_to = "favour") 

After that, we need to clean the actual numbers: remove the percentage signs and convert from character to numeric class. We use str_extract() and a regex to extract the number without the percentage sign.

feb_poll %<>%
    mutate(candidate = as.factor(candidate),
 favour_percent = str_extract(favour, "\\d+\\.*\\d*")) %>% 
   mutate(favour_percent = as.integer(favour_percent)) 

Some of the different polls took place on the same day, so we will take the average poll favourability value for each candidate on each day with the group_by() function.

feb_poll %<>%
  group_by(dates_format, candidate) %>% 
  mutate(favour_percent_mean = mean(favour_percent, na.rm = TRUE)) %>% 
  ungroup() %>% 
  select(candidate, dates_format, favour_percent_mean) 

And this is how the cleaned up data should look!
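For example, we can peek at the first few rows (one row per candidate per poll, with the daily mean favourability attached):

feb_poll %>% 
  arrange(dates_format, candidate) %>% 
  head(10)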

We repeat for the 17th and 18th tables, which contain data going back to the beginning of January 2022.

early_feb_poll <- poll_tables[[17]]
early_feb_poll = early_feb_poll[-37,]
early_feb_poll = early_feb_poll[-1,]

We repeat the steps from above with early Feb in one chunk

early_feb_poll %<>%
  clean_names() %>% 
  mutate(month = substrRight(fieldwork_date, 3))  %>% 
  mutate(day_number = parse_number(fieldwork_date)) %>%
  unite("date", day_number:month, sep = "", remove = FALSE) %>% 
  mutate(dates_format = as.Date(date, "%d%b")) %>% 
  select(dates_format, 
         Lee = lee_jae_myung, 
         Yoon = yoon_seok_youl,
         Shim = sim_sang_jung,
         Ahn = ahn_cheol_soo, 
         Kim = kim_dong_yeon, 
         Heo = huh_kyung_young,
         others_undecided) %>% 
  pivot_longer(!dates_format, names_to = "candidate", values_to = "favour") %>% 
  mutate(candidate = as.factor(candidate),
         favour_percent = str_extract(favour, "\\d+\\.*\\d*")) %>% 
  mutate(favour_percent = as.integer(favour_percent)) %>% 
  group_by(dates_format, candidate) %>% 
  mutate(favour_percent_mean = mean(favour_percent, na.rm = TRUE)) %>% 
  ungroup() %>% 
  select(candidate, dates_format, favour_percent_mean)

And we use rbind() to combine the two data.frames

polls <- rbind(feb_poll, early_feb_poll)

Next we repeat with January data:

jan_poll <- poll_tables[[18]]

jan_poll = jan_poll[-34,]
jan_poll = jan_poll[-1,]

jan_poll %<>% 
  clean_names() %>% 
  mutate(month = substrRight(fieldwork_date, 3))  %>% 
  mutate(day_number = parse_number(fieldwork_date)) %>%   # drops any non-numeric characters before or after the first number. 
  unite("date", day_number:month, sep = "", remove = FALSE) %>% 
  mutate(dates_format = as.Date(date, "%d%b")) %>% 
  select(dates_format, 
         Lee = lee_jae_myung, 
         Yoon = yoon_seok_youl,
         Shim = sim_sang_jung,
         Ahn = ahn_cheol_soo, 
         Kim = kim_dong_yeon, 
         Heo = huh_kyung_young,
         others_undecided) %>% 
  pivot_longer(!dates_format, names_to = "candidate", values_to = "favour") %>% 
  mutate(candidate = as.factor(candidate),
         favour_percent = str_extract(favour, "\\d+\\.*\\d*")) %>% 
  mutate(favour_percent = as.integer(favour_percent)) %>% 
  group_by(dates_format, candidate) %>% 
  mutate(favour_percent_mean = mean(favour_percent, na.rm = TRUE)) %>% 
  ungroup() %>% 
  select(candidate, dates_format, favour_percent_mean)

And bind to our combined data.frame:

polls <- rbind(polls, jan_poll)

We can create variables to help us filter different groups of candidates. If we want to only look at the largest candidates, we can make an important variable and then filter on it.

We can lump together the candidates that do not have data from every poll (i.e. the smaller candidates) and add them into the "others_undecided" category with the fct_lump_min() function from the forcats package.

polls %>% 
  mutate(important = ifelse(candidate %in% c("Ahn", "Yoon", "Lee", "Shim"), 1, 0)) %>% 
  mutate(few_candidate = fct_lump_min(candidate, min = 110, other_level = "others_undecided")) %>% 
  group_by(few_candidate, dates_format) %>% 
  filter(important == 1) -> poll_data

I want to only look at the main two candidates from the main parties that have been polling in the 40% range – Lee and Yoon – as well as the data for Ahn (who recently dropped out and endorsed Yoon).

poll_data %>% 
  filter(candidate %in% c("Lee", "Yoon", "Ahn")) -> lee_yoon_data

We take the official party hex colors for the graph and create a vector to use later with the scale_color_manual() function below:

party_palette <- c(
  "Ahn" = "#df550a",
  "Lee" = "#00a0e2",
  "Yoon" = "#e7001f")

And we plot the variables.

lee_yoon_data %>% 
  ggplot(aes(x = dates_format, y = favour_percent_mean,
             group = candidate, color = candidate)) + 
  geom_line( size = 2, alpha = 0.8) +
  geom_point(fill = "#5e6472", shape = 21, size = 4, stroke = 3) + 
  labs(title = "Polling data for Korean Presidential Election", subtitle = "Source: various polling companies, via Wikipedia") -> poll_graph

The bulk of the aesthetic changes to the graph's appearance happen in theme():

poll_graph + theme(panel.border = element_blank(),
        legend.position = "bottom",        
        text = element_text(size = 15, color = "white"),
        plot.title = element_text(size = 40),
        legend.title = element_blank(),
        legend.text = element_text(size = 50, color = "white"),
        axis.text.y = element_text(size = 20), 
        axis.text.x = element_text(size = 20),
        legend.background = element_rect(fill = "#5e6472"),
        axis.title = element_blank(),
        axis.text = element_text(color = "white", size = 20),
        panel.grid.major.y = element_blank(),
        panel.grid.minor.y = element_blank(),
        panel.grid.major.x = element_blank(),
        panel.grid.minor.x = element_blank(),
        legend.key = element_rect(fill = "#5e6472"),
        plot.background = element_rect(fill = "#5e6472"),
        panel.background = element_rect(fill = "#5e6472")) +
  scale_color_manual(values = party_palette) 

Last, with the annotate() functions, we can also add an annotation arrow and text to add some more information about Ahn Cheol-su the candidate dropping out.

  annotate("text", x = as.Date("2022-02-11"), y = 13, label = "Ahn dropped out just as the polling blackout began", size = 10, color = "white") +
  annotate(geom = "curve", x = as.Date("2022-02-25"), y = 13, xend = as.Date("2022-03-01"), yend = 10, 
    curvature = -.3, arrow = arrow(length = unit(2, "mm")), size = 1, color = "white")

We will just have to wait until next Wednesday / Thursday to see who is the winner ~


Exploratory Data Analysis and Descriptive Statistics for Political Science Research in R

Packages we will use:

library(tidyverse)      # of course
library(ggridges)       # density plots
library(GGally)         # correlation matrices
library(stargazer)      # tables
library(knitr)          # more tables stuff
library(kableExtra)     # more and more tables
library(ggrepel)        # spread out labels
library(ggstream)       # streamplots
library(bbplot)         # pretty themes
library(ggthemes)       # more pretty themes
library(ggside)         # stack plots side by side
library(forcats)        # reorder factor levels

Before jumping into any inferential statistical analysis, it is helpful for us to get to know our data. For me, that always means plotting and visualising the data and looking at the spread, the mean, the distribution and the outliers in the dataset.

Before we plot anything, a simple package that creates tables is the stargazer package. We can examine descriptive statistics of the variables in one table.

Click here to read this practically exhaustive cheat sheet for the stargazer package by Jake Russ. I refer to it at least once a week.

I want to summarise a few of the stats, so I write into the summary.stat argument the number of observations, the mean, median and standard deviation.

The kbl() and kable_classic() will change the look of the table in R (or if you want to copy and paste the code into latex with the type = "latex" argument).

In HTML, they do not appear.


To find out more about the knitr kable tables, click here to read the cheatsheet by Hao Zhu.

Choose the variables you want, put them into a data.frame and feed them into the stargazer() function

stargazer(my_df_summary, 
          covariate.labels = c("Corruption index",
                               "Civil society strength", 
                               'Rule of Law score',
                               "Physical Integerity Score",
                               "GDP growth"),
          summary.stat = c("n", "mean", "median", "sd"), 
          type = "html") %>% 
  kbl() %>% 
  kable_classic(full_width = F, html_font = "Times", font_size = 25)
Statistic                   N     Mean    Median  St. Dev.
Corruption index            179   0.477   0.519   0.304
Civil society strength      179   0.670   0.805   0.287
Rule of Law score           173   7.451   7.000   4.745
Physical Integrity Score    179   0.696   0.807   0.284
GDP growth                  163   0.019   0.020   0.032

Next, we can create a barchart to look at the different levels of variables across categories. We can look at the different regime types (from complete autocracy to liberal democracy) across the six geographical regions in 2018 with the geom_bar().

my_df %>% 
  filter(year == 2018) %>%
  ggplot() +
  geom_bar(aes(as.factor(region),
               fill = as.factor(regime)),
           color = "white", size = 2.5) -> my_barplot

And we can add more theme changes

my_barplot + bbplot::bbc_style() + 
  theme(legend.key.size = unit(2.5, 'cm'),
        legend.text = element_text(size = 15),
        text = element_text(size = 15)) +
  scale_fill_manual(values = c("#9a031e","#00a896","#e36414","#0f4c5c")) + 
  scale_color_manual(values = c("#9a031e","#00a896","#e36414","#0f4c5c")) 

This type of graph also tells us that Sub-Saharan Africa has the highest number of countries and the Middle East and North Africa (MENA) region has the fewest countries.

However, if we want to look at each group and their relative percentages, we change one line: we add position = "fill" to geom_bar(), as sketched below. For example, we can then see more clearly that over 50% of post-Soviet countries are democracies (orange = electoral democracy, blue = liberal democracy) as of 2018.
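For reference, a sketch of that one-line change, reusing the same variables as the bar chart above:

my_df %>% 
  filter(year == 2018) %>%
  ggplot() +
  geom_bar(aes(as.factor(region),
               fill = as.factor(regime)),
           color = "white", size = 2.5,
           position = "fill")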

We can also check out the density plot of democracy levels (as a numeric level) across the six regions in 2018.

With these types of graphs, we can examine characteristics of the variables, such as whether there is a large spread or normal distribution of democracy across each region.

my_df %>% 
  filter(year == 2018) %>%
  ggplot(aes(x = democracy_score, y = region, fill = regime)) +
  geom_density_ridges(color = "white", size = 2, alpha = 0.9, scale = 2) -> my_density_plot

And change the graph theme:

my_density_plot + bbplot::bbc_style() + 
  theme(legend.key.size = unit(2.5, 'cm')) +
  scale_fill_manual(values = c("#9a031e","#00a896","#e36414","#0f4c5c")) + 
  scale_color_manual(values = c("#9a031e","#00a896","#e36414","#0f4c5c")) 

Click here to read more about the ggridges package and click here to read their CRAN PDF.

Next, we can also check out Pearson’s correlations of some of the variables in our dataset. We can make these plots with the GGally package.

The ggpairs() function shows a scatterplot, a density plot and a correlation matrix.

my_df %>%
  filter(year == 2018) %>%
  select(regime, 
         corruption, 
         civ_soc, 
         rule_law, 
         physical, 
         gdp_growth) %>% 
  ggpairs(columns = 2:5, 
          ggplot2::aes(colour = as.factor(regime), 
          alpha = 0.9)) + 
  bbplot::bbc_style() +
  scale_fill_manual(values = c("#9a031e","#00a896","#e36414","#0f4c5c")) + 
  scale_color_manual(values = c("#9a031e","#00a896","#e36414","#0f4c5c"))

Click here to read more about the GGally package and click here to read their CRAN PDF.

We can use the ggside package to stack graphs together into one plot.

There are a few arguments to add when we choose where we want to place each graph.

For example, geom_xsideboxplot(aes(y = freedom_house), orientation = "y") places a boxplot for the three Freedom House democracy levels on the top of the graph, running across the x axis. If we wanted the boxplot along the y axis we would write geom_ysideboxplot(). We add orientation = "y" to indicate the direction of the boxplots.

Next we indicate how big we want each graph to be in the panel with the theme(ggside.panel.scale = .5) argument. This makes the scatterplot take up half the space and the boxplot the other half. If we write .3, the scatterplot takes up 70% and the boxplot takes up the remaining 30%. Last, we add scale_xsidey_discrete() so the graph doesn't treat that axis as a continuous variable.

We add the Darjeeling1 colour palette from the wesanderson package (named after the Wes Anderson movie The Darjeeling Limited); the wes_palette() call below needs library(wesanderson).

Click here to learn about adding Wes Anderson theme colour palettes to graphs and plots.

my_df %>%
 filter(year == 2018) %>% 
 filter(!is.na(fh_number)) %>% 
  mutate(freedom_house = ifelse(fh_number == 1, "Free", 
         ifelse(fh_number == 2, "Partly Free", "Not Free"))) %>%
  mutate(freedom_house = forcats::fct_relevel(freedom_house, "Not Free", "Partly Free", "Free")) %>% 
ggplot(aes(x = freedom_from_torture, y = corruption_level, colour = as.factor(freedom_house))) + 
  geom_point(size = 4.5, alpha = 0.9) +
  geom_smooth(method = "lm", color ="#1d3557", alpha = 0.4) +  
  geom_xsideboxplot(aes(y = freedom_house), orientation = "y", size = 2) +
  theme(ggside.panel.scale = .3) +
  scale_xsidey_discrete() +
  bbplot::bbc_style() + 
  facet_wrap(~region) + 
  scale_color_manual(values= wes_palette("Darjeeling1", n = 3))

The next plot will look how variables change over time.

We can check out if there are changes in the volume and proportion of a variable across time with the geom_stream(type = "ridge") from the ggstream package.

In this instance, we will compare urban populations across regions from the 1800s to today.

my_df %>% 
  group_by(region, year) %>% 
  summarise(mean_urbanization = mean(urban_population_percentage, na.rm = TRUE)) %>% 
  ggplot(aes(x = year, y = mean_urbanization, fill = region)) +
  geom_stream(type = "ridge") -> my_streamplot

And add the theme changes

  my_streamplot + ggthemes::theme_pander() + 
  theme(
legend.title = element_blank(),
        legend.position = "bottom",
        legend.text = element_text(size = 25),
        axis.text.x = element_text(size = 25),
        axis.title.y = element_blank(),
        axis.title.x = element_blank()) +
  scale_fill_manual(values = c("#001219",
                               "#0a9396",
                               "#e9d8a6",
                               "#ee9b00", 
                               "#ca6702",
                               "#ae2012")) 

Click here to read more about the ggstream package and click here to read their CRAN PDF.

We can also look at interquartile ranges and spread across variables.

We will look at the urbanization rate across the different regions. The variable is calculated as the ratio of urban population to total country population.

First, we will create a hex colour vector so we are not copying and pasting the colours too many times.

my_palette <- c("#1d3557",
                "#0a9396",
                "#e9d8a6",
                "#ee9b00", 
                "#ca6702",
                "#ae2012")

We use the facet_wrap(~year) so we can separate the three years and compare them.

my_df %>% 
  filter(year == 1980 | year == 1990 | year == 2000)  %>% 
  ggplot(mapping = aes(x = region, 
                       y = urban_population_percentage, 
                       fill = region)) +
  geom_jitter(aes(color = region),
              size = 3, alpha = 0.5, width = 0.15) +
  geom_boxplot(alpha = 0.5) + facet_wrap(~year) + 
  scale_fill_manual(values = my_palette) +
  scale_color_manual(values = my_palette) + 
  coord_flip() + 
  bbplot::bbc_style()

If we want to look more closely at one year and print out the country names for the countries that are outliers in the graph, we can run the following function and find the outliers in the dataset for the year 1990:

is_outlier <- function(x) {
  return(x < quantile(x, 0.25) - 1.5 * IQR(x) | x > quantile(x, 0.75) + 1.5 * IQR(x))
}

We can then choose one year and create a binary variable with the function

my_df_90 <- my_df %>% 
  filter(year == 1990) %>% 
  filter(!is.na(urban_population_percentage))

my_df_90$my_outliers <- is_outlier(my_df_90$urban_population_percentage)

And we plot the graph:

my_df_90 %>% 
  ggplot(mapping = aes(x = region, y = urban_population_percentage, fill = region)) +
  geom_jitter(aes(color = region), size = 3, alpha = 0.5, width = 0.15) +
  geom_boxplot(alpha = 0.5) +
  geom_text_repel(data = my_df_90[which(my_df_90$my_outliers == TRUE),],
            aes(label = country_name), size = 5) + 
  scale_fill_manual(values = my_palette) +
  scale_color_manual(values = my_palette) + 
  coord_flip() + 
  bbplot::bbc_style() 

In the next blog post, we will look at t-tests, ANOVAs (and their non-parametric alternatives) to see if the difference in means / medians is statistically significant and meaningful for the underlying population.


Comparing proportions across time with ggstream in R

Packages we need:

library(tidyverse)
library(ggstream)
library(magrittr)
library(bbplot)
library(janitor)

We can look at proportions of energy sources across 10 years in Ireland. Data source comes from: https://www.seai.ie/data-and-insights/seai-statistics/monthly-energy-data/electricity/

Before we graph the energy sources, we can tidy up the variable names with the janitor package. We next keep rows 2 to 12, which cover the sources used for electricity generation. The other rows are aggregates, not the energy-source categories we want to look at.

Next we pivot the dataset longer to make it more suitable for graphing.

We can extract the last two digits from the month dataset to add the year variable.

elec %<>% 
  janitor::clean_names()

elec[2:12,] -> elec

el <- elec %>% 
  pivot_longer(!electricity_generation_g_wh, 
               names_to = "month", values_to = "value") %>% 

substrRight <- function(x, n){
  substr(x, nchar(x) - n + 1, nchar(x))}

el$year <- substrRight(el$month, 2)

el %<>% select(year, month, elec_type = electricity_generation_g_wh, elec_value = value) 

First we can use the geom_stream from the ggstream package. There are three types of plots: mirror, ridge and proportion.

First we will plot the proportion graph.

We select the different types of energy we want to compare, and we take annual values rather than monthly ones with the tried and trusted group_by() and summarise().

Optionally, we can add the bbc_style() theme for the plot and different hex colors with scale_fill_manual() and feed a vector of hex values into the values argument.

el %>% 
  filter(elec_type == "Oil" | 
           elec_type == "Coal" |
           elec_type == "Peat" | 
           elec_type == "Hydro" |
           elec_type == "Wind" |
           elec_type == "Natural Gas") %>% 
  group_by(year, elec_type) %>%
  summarise(annual_value = sum(elec_value, na.rm = TRUE)) %>% 
  ggplot(aes(x = year, 
             y = annual_value,
             group = elec_type,
             fill = elec_type)) +
  ggstream::geom_stream(type = "proportion") + 
  bbplot::bbc_style() +
  labs(title = "Comparing energy proportions in Ireland") +
  scale_fill_manual(values = c("#f94144",
                               "#277da1",
                               "#f9c74f",
                               "#f9844a",
                               "#90be6d",
                               "#577590"))

With ggstream::geom_stream(type = "mirror")

With ggstream::geom_stream(type = "ridge")

Without the ggstream package, we can recreate the proportion graph with slightly different looks


el %>% 
  filter(elec_type == "Oil" | 
           elec_type == "Coal" |
           elec_type == "Peat" | 
           elec_type == "Hydro" |
           elec_type == "Wind" |
           elec_type == "Natural Gas") %>% 
  group_by(year, elec_type) %>%
  summarise(annual_value = sum(elec_value, na.rm = TRUE)) %>% 
  ggplot(aes(x = year, 
             y = annual_value,
             group = elec_type,
             fill = elec_type)) +
  geom_area(alpha=0.8 , size=1.5, colour="white") +
  bbplot::bbc_style() +
  labs(title = "Comparing energy proportions in Ireland") +
  theme(legend.key.size = unit(2, "cm")) + 
  scale_fill_manual(values = c("#f94144",
                               "#277da1",
                               "#f9c74f",
                               "#f9844a",
                               "#90be6d",
                               "#577590"))


Create density plots with ggridges package in R

Packages we will need:

library(tidyverse)
library(ggridges)
library(ggimage)  # to add png images
library(bbplot)   # for pretty graph themes

We will plot out the favourability opinion polls for the three main political parties in Ireland from 2016 to 2020. Data comes from Louwerse and Müller (2020)


Before we dive into the ggridges plotting, we have a little data cleaning to do. First, we extract the last four “characters” from the date string to create a year variable.

I took this quick function from a StackOverflow response:

substrRight <- function(x, n){
  substr(x, nchar(x)-n+1, nchar(x))}

polls_csv$year <- substrRight(polls_csv$Date, 4)

Next, pivot the data from wide to long format.

More information on pivoting data with tidyr can be found here. I tend to check it at least once a month as the arguments refuse to stay in my head.

I only want to take the main parties in Ireland to compare in the plot.

polls <- polls_csv %>%
  select(year, FG:SF) %>% 
  pivot_longer(!year, names_to = "party", values_to = "opinion_poll")

I went online and found the logos for the three main parties (sorry, Labour) and saved them in the working directory I have for my RStudio. That way I can call the file with the prefix “~/**.png” rather than find the exact location they are saved on the computer.

polls %>% 
  filter(party == "FF" | party == "FG" | party == "SF" ) %>% 
  mutate(image = ifelse(party=="FF","~/ff.png",
 ifelse(party=="FG","~/fg.png", "~/sf.png"))) -> polls_three

Now we are ready to plot out the density plots for each party with the geom_density_ridges() function from the ggridges package.

We will add a few arguments into this function.

We add an alpha = 0.8 to make each density plot a little transparent and we can see the plots behind.

The scale = 2 argument pushes all three plots together so they are slightly overlapping. If scale = 1, they would be totally separate, and 3 would have them overlapping far more.

The rel_min_height = 0.01 argument removes the trailing tails from the plots that fall under 0.01 density. This is again for aesthetics; it just makes the plot look slightly less busy for relatively normally distributed densities.

The geom_image takes the images and we place them at the beginning of the x axis beside the labels for each party.

Last, we use the bbplot package BBC style ggplot theme, which I really like as it makes the overall graph look streamlined with large font defaults.

polls_three %>% 
  ggplot(aes(x = opinion_poll, y = as.factor(party))) +  
  geom_density_ridges(aes(fill = party), 
                      alpha = 0.8, 
                      scale = 2,
                      rel_min_height = 0.01) + 
  ggimage::geom_image(aes(y = party, x= 1, image = image), asp = 0.9, size = 0.12) + 
  facet_wrap(~year) + 
  bbplot::bbc_style() +
  scale_fill_manual(values = c("#f2542d", "#edf6f9", "#0e9594")) +
  theme(legend.position = "none") + 
  labs(title = "Favourability Polls for the Three Main Parties in Ireland", subtitle = "Data from Irish Polling Indicator (Louwerse & Müller, 2020)")

Comparing mean values across OECD countries with ggplot

Packages we will need:

library(tidyverse)
library(magrittr) # for pipes
library(ggrepel) # to stop overlapping labels
library(ggflags)
library(countrycode) # if you want create the ISO2C variable

I came across code for this graph by Tanya Shapiro on her github for #TidyTuesday.

Her graph compares Dr. Who actors and their average audience rating across their run as the Doctor on the show. So I have very liberally copied her code for my plot on OECD countries.

That is the beauty of TidyTuesday and the ability to be inspired and taught by other people’s code.

I originally was going to write a blog about how to download data from the OECD R package. However, my attempts to download the data lead to an unpleasant-looking error and end the download request.

I will try to work again on that blog in the future when the package is more established.

So, instead, I went to the OECD data website and just directly downloaded data on level of trust that citizens in each of the OECD countries feel about their governments.

Then I cleaned up the data in Excel and used countrycode() to add ISO2 codes and country names.

Click here to read more about the countrycode() package.
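For illustration, assuming the downloaded OECD file has an ISO3 country code column (here called LOCATION, a hypothetical name), that cleaning step could look something like this:

my_df <- my_df %>%
  mutate(country_name = countrycode(LOCATION, "iso3c", "country.name"),
         iso2 = tolower(countrycode(LOCATION, "iso3c", "iso2c")))  # lower case ISO2 for geom_flag() later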

First I will only look at EU countries. I tried with all the countries from the OECD but it was quite crowded and hard to read.
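
Purely as an illustration, the EU subset could be created by hard-coding the current member states with the lower-case ISO2 codes we just made:

# Current EU member states (lower-case ISO2 codes); illustrative only
eu_iso2 <- c("at", "be", "bg", "hr", "cy", "cz", "dk", "ee", "fi", "fr",
             "de", "gr", "hu", "ie", "it", "lv", "lt", "lu", "mt", "nl",
             "pl", "pt", "ro", "sk", "si", "es", "se")

my_df <- my_df %>% 
  filter(iso2 %in% eu_iso2)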

I add region data from another dataset I have. This step is not necessary but I like to colour my graphs according to categories. This time I am choosing geographic regions.

my_df %<>%
  mutate(region = case_when(
    e_regiongeo == 1 ~ "Western",
    e_regiongeo == 2  ~ "Northern",
    e_regiongeo == 3  ~ "Southern", 
    e_regiongeo == 4  ~ "Eastern"))

To make the graph, we need two averages:

  1. The overall average trust level for all countries (avg_trust), and
  2. The average for each country across the years (country_avg_trust).

my_df %<>% 
  mutate(avg_trust = mean(trust, na.rm = TRUE)) %>% 
  group_by(country_name) %>% 
  mutate(country_avg_trust = mean(trust, na.rm = TRUE)) %>%
  ungroup()

When we plot the graph, we need a few geom arguments.

Along the x axis we have all the countries, reordered from most trusting of their governments to least trusting.

We will color the points with one of the four geographic regions.

We use geom_jitter() rather than geom_point() for the different yearly trust values to make the graph a little more interesting.

I also scale the size of the points to the year inside the aes() call. Again, I did this more to make the plot look interesting than to convey extra information about the different trust values for each country. Smaller circles are earlier years and they grow larger with each subsequent year.

The geom_hline() plots a horizontal line to indicate the average trust level across all countries (it appears vertical once we flip the plot with coord_flip()).

We then use geom_segment() to connect each country’s individual average (the y argument) to the overall average (the yend argument). We can then easily see which countries are above or below the total average. For the x and xend arguments, we supply the country_name variable twice.

Next we use the geom_flag(), which comes from the ggflags package. In order to use this package, we need the ISO 2 character code for each country in lower case!

Click here to read more about the ggflags package.

my_df %>%
  ggplot(aes(x = reorder(country_name, trust_score), 
             y = trust_score, 
             color = as.factor(region))) +
  geom_jitter(aes(color = as.factor(region), size = year), alpha = 0.7, width = 0.15) +
  geom_hline(aes(yintercept = avg_trust), color = "white", size = 2) +
  geom_segment(aes(x = country_name, xend = country_name, 
                   y = country_avg_trust, yend = avg_trust), 
               size = 2, color = "white") +
  ggflags::geom_flag(aes(x = country_name, y = country_avg_trust, country = iso2), size = 10) + 
  coord_flip() + 
  scale_color_manual(values = c("#9a031e", "#fb8b24", "#5f0f40", "#0f4c5c")) -> my_plot

Last we change the aesthetics of the graph with all the theme arguments!

my_plot +
 theme(panel.border = element_blank(),
        legend.position = "right",
        legend.title = element_blank(),
        legend.text = element_text(size = 20),
        legend.background = element_rect(fill = "#5e6472"),
        axis.title = element_blank(),
        axis.text = element_text(color = "white", size = 20),
        text= element_text(size = 15, color = "white"),
        panel.grid.major.y = element_blank(),
        panel.grid.minor.y = element_blank(),
        panel.grid.major.x = element_blank(),
        panel.grid.minor.x = element_blank(),
        legend.key = element_rect(fill = "#5e6472"),
        plot.background = element_rect(fill = "#5e6472"),
        panel.background = element_rect(fill = "#5e6472")) +
  guides(colour = guide_legend(override.aes = list(size=10))) 

And that is the graph.

We can see that countries in southern Europe are less trusting of their governments than in other regions. Western countries seem to occupy the higher parts of the graph, with France being the least trusting of their government in the West.

There is a large variation in Northern countries. However, if we look at the countries, we can see that the Scandinavian countries are more trusting and the Baltic countries are among the least trusting. This shows they are more similar in their trust levels to other Post-Soviet countries.

Next we can look to see if there is a relationship between democracy scores and level of trust in the government with a geom_point() scatterplot.

The geom_smooth() function plots an OLS linear regression line, with a standard error ribbon around it.

We want the country labels to not overlap, so we use geom_label_repel() from the ggrepel package. We don’t want an “a” appearing in the legend keys, so we add show.legend = FALSE to the arguments.


my_df %>% 
  filter(!is.na(trust_score)) %>% 
  ggplot(aes(x = democracy_score, y = trust_score)) +
  geom_smooth(method = "lm", color = "#a0001c", size = 3) +
  geom_point(aes(color = as.factor(region)), size = 20, alpha = 0.6) +
  geom_label_repel(aes(label = country_name, color = as.factor(region)), 
                   show.legend = FALSE, size = 5) + 
  scale_color_manual(values = c("#9a031e", "#fb8b24", "#5f0f40", "#0f4c5c")) -> scatter_plot

And we change the theme and add labels to the plot.

scatter_plot + 
  theme(panel.border = element_blank(),
        legend.position = "bottom",
        legend.title = element_blank(),
        legend.text = element_text(size = 20),
        legend.background = element_rect(fill = "#5e6472"),
        text = element_text(size = 15, color = "white"),
        legend.key = element_rect(fill = "#5e6472"),
        plot.background = element_rect(fill = "#5e6472"),
        panel.background = element_rect(fill = "#5e6472")) +
  guides(colour = guide_legend(override.aes = list(size = 10))) +
  labs(title = "Democracy and trust levels", 
       x = "Democracy score",    # axis labels now match the aes() mapping
       y = "Trust level of respondents",
       caption = "Data from OECD") 

We can also filter out the two countries with low democracy scores and examine only the consolidated democracies.
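
A quick sketch of that filter before re-plotting; the democracy_score cut-off used here is purely illustrative:

# Keep only the consolidated democracies (threshold chosen for illustration only)
my_df %>% 
  filter(!is.na(trust_score),
         democracy_score > 6) %>% 
  ggplot(aes(x = democracy_score, y = trust_score)) +
  geom_smooth(method = "lm", color = "#a0001c", size = 3) +
  geom_point(aes(color = as.factor(region)), size = 20, alpha = 0.6)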

Thank you for reading!


Graphing Pew survey responses with ggplot in R

Packages we will need:

library(tidyverse)
library(forcats)
library(ggthemes)

We are going to look at a few questions from the 2019 US Pew survey on relations with foreign countries.

Data can be found by following this link:

We are going to make bar charts to plot out responses to the question asked to American participants: should the US cooperate more or less with some key countries? The countries asked about were China, Russia, Germany, France, Japan and the UK.
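
Assuming the download is Pew’s SPSS (.sav) release, the raw survey could be read in with the haven package first; the file name below is just a placeholder:

library(haven)

# Placeholder file name for the downloaded Pew survey file
pew <- read_sav("pew_2019_us_foreign_policy.sav")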

Before we dive in, we can find some nice hex colors for the bar chart. There are four possible responses that the participants could give: cooperate more, cooperate less, cooperate the same as before and refuse to answer / don’t know.

pal <- c("Cooperate more" = "#0a9396",
         "Same as before" = "#ee9b00",
         "Don't know" = "#005f73",
         "Cooperate less" ="#ae2012")

We first select the questions we want from the full survey and pivot the dataframe to long form with pivot_longer(). This way we have a single column with all the different survey responses that we can manipulate more easily with dplyr functions.

Then we summarise the data to count all the survey responses for each of the six countries and calculate the frequency of each response as a percentage of all answers.

Then we mutate the variables so that we can add flags. The geom_flag() function from the ggflags package only recognises ISO2 country codes in lower case.

After that we change the factor levels for the four responses so that they follow the order we want them stacked in: “Cooperate less”, “Same as before”, “Cooperate more” and then “Don’t know”.

pew %>% 
  select(id = case_id, Q2a:Q2f) %>% 
  pivot_longer(!id, names_to = "survey_question", values_to = "response") %>% 
  group_by(survey_question, response) %>% 
  summarise(n = n()) %>%
  mutate(freq = n / sum(n)) %>% 
  ungroup() %>% 
  mutate(response_factor = as.factor(response)) %>% 
  mutate(country_question = ifelse(survey_question == "Q2a", "fr",
                            ifelse(survey_question == "Q2b", "gb",
                            ifelse(survey_question == "Q2c", "ru",
                            ifelse(survey_question == "Q2d", "cn",
                            ifelse(survey_question == "Q2e", "de",
                            ifelse(survey_question == "Q2f", "jp", survey_question))))))) %>% 
  mutate(response_string = ifelse(response_factor == 1, "Cooperate more",
                           ifelse(response_factor == 2, "Cooperate less",
                           ifelse(response_factor == 3, "Same as before",
                           ifelse(response_factor == 9, "Don't know", response_factor))))) %>% 
  mutate(response_string = fct_relevel(response_string, 
                                       c("Cooperate less", "Same as before", "Cooperate more", "Don't know"))) -> pew_clean

We next use ggplot to plot out the responses.

We use position = "stack" to make all the responses “stack” onto each other for each country. We use stat = "identity" because we are not counting each response; rather, we are using the freq variable we calculated above.

pew_clean %>%
  ggplot() +
  geom_bar(aes(x = forcats::fct_reorder(country_question, freq), 
               y = freq, 
               fill = response_string), 
           color = "#e5e5e5", size = 3, 
           position = "stack", stat = "identity") +
  ggflags::geom_flag(aes(x = country_question, y = -0.05, country = country_question), 
                     color = "black", size = 20) -> pew_graph

And last we change the appearance of the plot with the theme() function.

pew_graph + 
  coord_flip() + 
  scale_fill_manual(values = pal) +
  ggthemes::theme_fivethirtyeight() + 
  ggtitle("Should the US cooperate more or less with the following country?") +
  theme(legend.title = element_blank(),
        legend.position = "top",
        legend.key.size = unit(2, "cm"),
        text = element_text(size = 25),
        legend.text = element_text(size = 20),
        axis.text = element_blank())