Check out part 1 of this blog where you can follow along how to scrape the data that we will use in this blog. It will create a dataset of the current MPs in the Irish Dail.
In this blog, we will use the ggparliament package, created by Zoe Meers.
With this dataset of the 33rd Dail, we will reduce it down to get the number of seats that each party holds.
If we don’t want to graph every party, we can lump most of the smaller parties into an “other” category. We can do this with the fct_lump_n() function from the forcats package. I want the top five biggest parties only in the graph. The rest will be colored as “Other”.
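As a quick hedged sketch (dail_33 and its party column are my assumed names from the part 1 scrape), the lumping and counting might look like:

```r
library(tidyverse)

# dail_33 and its party column are assumed names from the part 1 scrape
dail_33 %>%
  mutate(party_groups = fct_lump_n(party, n = 5,
                                   other_level = "Other")) %>%
  count(party_groups, sort = TRUE)
```

fct_lump_n() keeps the five most frequent levels and collapses everything else into the other_level label.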
  party_groups     n
  <fct>        <int>
1 Fianna Fail     38
2 Sinn Fein       37
3 Fine Gael       35
4 Independent     19
5 Other           19
6 Green Party     12
Before we graph, I found the hex colours that represent each of the biggest Irish political parties. We can create a new party colour variable with the case_when() function and add each colour.
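A sketch of that step. The hex codes below are my stand-ins, not the exact party colours; swap in the ones you found:

```r
library(tidyverse)

# Hex codes here are placeholders - replace with the real party colours
dail_33 <- dail_33 %>%
  mutate(party_color = case_when(
    party_groups == "Fianna Fail" ~ "#66BB66",
    party_groups == "Sinn Fein"   ~ "#326760",
    party_groups == "Fine Gael"   ~ "#6699FF",
    party_groups == "Green Party" ~ "#44532A",
    party_groups == "Independent" ~ "#8E8E8E",
    TRUE                          ~ "#CCCCCC"))  # catch-all for "Other"
```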
If we view the dail_33_coord data.frame we can see that the parliament_data() function calculated new x and y coordinate variables for the semi-circle graph.
I am not certain what the theta variable is for, but it is there too – perhaps to angle the seat shapes around the semicircle?
We feed the x and y coordinates into the ggplot() function and then add the geom_parliament_seats() layer to produce our graph!
Click here to check out the PDF for the ggparliament package
dail_33_coord %>%
  ggplot(aes(x = x,
             y = y,
             colour = party_groups)) +
  geom_parliament_seats(size = 20) -> dail_33_plot
And we can make it look prettier with the bbc_style() theme and party colours.
This blogpost will walk through how to scrape and clean up data for all the members of parliament in Ireland.
Or we call them in Irish, TDs (or Teachtaí Dála) of the Dáil.
We will start by scraping the Wikipedia pages with all the tables. These tables have information about the name, party and constituency of each TD.
On Wikipedia, these datasets are on different webpages.
This is a pain.
However, we can get around this by creating a list of strings for each number in ordinal form – from 1st to 33rd – because there have been 33 Dáil sessions as of January 2023.
We don’t need to write them all out manually: “1st”, “2nd”, “3rd” … etc.
Instead, we can do this with the toOrdinal() function from the package of the same name.
dail_sessions <- sapply(1:33,toOrdinal)
Next we can feed this vector of strings with the beginning of the HTML web address for Wikipedia as a string.
We paste the HTML string and the ordinal number strings together with the stri_paste() function from the stringi package.
This iterates over the length of the dail_sessions vector (in this case a length of 33) and creates a vector of each Wikipedia page URL.
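Putting the last two steps together, the URL building could look something like this (the exact Wikipedia address pattern is my assumption, so check it against the real page names):

```r
library(stringi)
library(toOrdinal)

dail_sessions <- sapply(1:33, toOrdinal)

# The URL stem is an assumption about Wikipedia's page-naming pattern
wiki_urls <- stri_paste(
  "https://en.wikipedia.org/wiki/Members_of_the_",
  dail_sessions,
  "_D%C3%A1il")
```

stri_paste() is vectorised, so one call produces all 33 URLs.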
With the first_name variable, we can use the new package by Kalimu. This guesses the gender of each name. Later, we can track the number of women who have been voted into the Dáil over the years.
Of course, this will not be CLOSE to 100% correct … so later we will have to check each person manually and make sure they are accurate.
In the next blog, we will graph out the various images to explore these data in more depth. For example, we can make a circle plot with the composition of the current Dail with the ggparliament package.
We can go into more depth with it in the next blog… Stay tuned.
Next we will pivot the variables to long format. The new names variable will be survey_question and the responses (Strongly agree, Somewhat agree etc) will go to the new response variable!
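A minimal sketch of that pivot, assuming the raw survey sits in a data.frame I am calling survey_wide with one respondent id column (both names are assumptions):

```r
library(tidyverse)

# survey_wide and respondent_id are assumed names for the raw survey data
gun_long <- survey_wide %>%
  pivot_longer(cols = -respondent_id,
               names_to = "survey_question",
               values_to = "response")
```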
And we use the hex colours from the original graph … very brown… I used this hex color picker website to find the right hex numbers: https://imagecolorpicker.com/en
And last, we use the geom_bar() – with position = "stack" and stat = "identity" arguments – to create the bar chart.
To add the numbers, write the geom_text() function with label = frequency within aes(), and then use position = position_stack() with hjust and vjust until you’re happy with where the numbers sit.
gun_reordered %>%
  filter(!is.na(response)) %>%
  mutate(frequency = round(freq * 100, 0)) %>%
  ggplot(aes(x = survey_question_reorder,
             y = frequency, fill = response)) +
  geom_bar(position = "stack",
           stat = "identity") +
  coord_flip() +
  scale_fill_manual(values = brown_palette) +
  geom_text(aes(label = frequency), size = 10,
            color = "black",
            position = position_stack(vjust = 0.5)) +
  bbplot::bbc_style() +
  labs(title = "Broad support for barring people with mental illnesses
       \n from obtaining guns, expanded background checks",
       subtitle = "% who",
       caption = "Note: No answer responses not shown.\n Source: Survey of U.S. adults conducted April 5-11 2021.") +
  scale_x_discrete(labels = c(
    "Allowing people to carry concealed \n guns without a permit",
    "Shortening waiting periods for people \n who want to buy guns legally",
    "Allowing teachers and school officials \n to carry guns in K-12 schools",
    "Allowing people to carry \n concealed guns in more places",
    "Banning assault-style weapons",
    "Banning high capacity ammunition \n magazines that hold more than 10 rounds",
    "Creating a federal government \n database to track all gun sales",
    "Making private gun sales \n subject to background check",
    "Preventing people with mental \n illnesses from purchasing guns"
  ))
Unfortunately this does not have diverging stacks from the middle of the graph.
We can make a diverging stacked bar chart using function likert() from the HH package.
For this we want to turn the dataset back to wider with a column for each of the responses (strongly agree, somewhat agree etc) and find the frequency of each response for each of the questions on different gun control measures.
Then with the likert() function, we take the survey question variable and, with the tilde (~), model it as a function of each response. Because the responses are all the other variables in the dataset, we can use the period (.) shorthand.
We use positive.order = TRUE because we want the bars in descending order of response, not in alphabetical order or something like that.
gun_reordered %<>%
  filter(!is.na(response)) %>%
  select(survey_question, response, freq) %>%
  pivot_wider(names_from = response, values_from = freq) %>%
  ungroup() %>%
  HH::likert(survey_question ~ ., positive.order = TRUE,
             main = "Broad support for barring people with mental illnesses
             \n from obtaining guns, expanded background checks")
With this function it is difficult to customise the chart, but it is a very quick way to make a diverging stacked bar chart.
If we return to ggplot2, which is easier to customise, I found a solution on Stack Overflow. Thanks to this answer! The solution is to put two response categories on one side of the centre point and two on the other!
ggplot(data = gun_final, aes(x = survey_question_reorder,
                             fill = response)) +
  geom_bar(data = subset(gun_final, response %in% c("Strongly favor",
                                                    "Somewhat favor")),
           aes(y = -frequency), position = "stack", stat = "identity") +
  geom_bar(data = subset(gun_final, !response %in% c("Strongly favor",
                                                     "Somewhat favor")),
           aes(y = frequency), position = "stack", stat = "identity") +
  coord_flip() +
  scale_fill_manual(values = brown_palette) +
  geom_text(data = gun_final, aes(y = frequency, label = frequency),
            size = 10, color = "black",
            position = position_stack(vjust = 0.5)) +
  bbplot::bbc_style() +
  labs(title = "Broad support for barring people with mental illnesses
       \n from obtaining guns, expanded background checks",
       subtitle = "% who",
       caption = "Note: No answer responses not shown.\n Source: Survey of U.S. adults conducted April 5-11 2021.") +
  scale_x_discrete(labels = c(
    "Allowing people to carry concealed \n guns without a permit",
    "Shortening waiting periods for people \n who want to buy guns legally",
    "Allowing teachers and school officials \n to carry guns in K-12 schools",
    "Allowing people to carry \n concealed guns in more places",
    "Banning assault-style weapons",
    "Banning high capacity ammunition \n magazines that hold more than 10 rounds",
    "Creating a federal government \n database to track all gun sales",
    "Making private gun sales \n subject to background check",
    "Preventing people with mental \n illnesses from purchasing guns"
  ))
Next, to complete PART 2 of this graph, I need to figure out how to add lines to the graph and place the frequency labels in the correct spots.
In this blog, we will graph out some of the key features of Ireland’s foreign policy so that we can have a quick overview of the key relationships and trends.
Data on alliance portfolios comes from the Correlates of War and is used to calculate similarity of foreign policy positions (see Altfeld & Mesquita, 1979).
The assumption is that similar alliance portfolios are the result of similar foreign policy positions.
In increasing order of commitment, the strength of alliance commitments can be:
no commitment
entente
neutrality or nonaggression pact
defense pact
We will map out alliance similarity. This will use the measurement calculated with Cohen’s Kappa. Check out Hage’s (2011) article to read more about the different ways to measure alliance similarity.
Next we can look at UN similarity.
The UN voting variable calculates three values:
1 = Yes
2 = Abstain
3 = No
Based on these data, if two countries vote in a similar way on the same UN resolutions, this is a measure of the degree to which the dyad members’ foreign policy positions are similar.
Last we are going to look at globalization scores. The data comes from the KOF Globalisation Index. This measures the economic, social and political dimensions of globalisation. Globalisation in the economic, social and political fields has been on the rise since the 1970s, receiving a particular boost after the end of the Cold War.
And compare Ireland to other EU countries on financial KOF index scores. We will put Ireland in green and the rest of the countries as grey to make it pop.
Ireland appears to follow the general EU trends and is not an outlier for financial globalisation scores.
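One hedged way to build that highlight, assuming the KOF data sits in a data.frame I am calling kof_df with country, year and kof_fin columns (all assumed names):

```r
library(tidyverse)

# kof_df, country, year and kof_fin are assumed names for the KOF data
kof_df %>%
  ggplot(aes(x = year, y = kof_fin, group = country)) +
  # all other EU countries in grey, drawn first so Ireland sits on top
  geom_line(data = . %>% filter(country != "Ireland"),
            colour = "grey80") +
  geom_line(data = . %>% filter(country == "Ireland"),
            colour = "darkgreen", size = 1.5)
```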
September 17th, 2021 marked the 30th anniversary of the entry of North Korea and South Korea into full membership in the United Nations. Prior to this, they were only afforded observer status.
Join them all together and filter out any country that does not have the word “Korea” in its name.
un_votes %>%
  inner_join(un_votes_issues, by = "rcid") %>%
  inner_join(country_votes, by = "rcid") %>%
  mutate(year = format(date, format = "%Y")) %>%
  filter(grepl("Korea", country)) -> korea_un
First we can make a wordcloud of all the different votes for which they voted YES. Is there a discernible difference in the types of votes that each country supported?
First, download the stop words that we can remove (such as the, and, if)
data("stop_words")
Then I will make a North Korean dataframe of all the votes for which this country voted YES. I remove some of the messy formatting with the gsub() function and count the occurrence of each word. I get rid of a few of the procedural words that relate more to the technical wording of the resolutions than to the topic of the vote.
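A sketch of that workflow with tidytext. The descr column name and the gsub() pattern are my assumptions about how the resolution text is stored:

```r
library(tidyverse)
library(tidytext)

data("stop_words")

# korea_un comes from the join above; descr and the gsub() pattern are assumptions
nk_yes_words <- korea_un %>%
  filter(country == "North Korea", vote == "yes") %>%
  mutate(descr = gsub("RESOLUTION ADOPTED|A/RES/", "", descr)) %>%
  unnest_tokens(word, descr) %>%        # one row per word
  anti_join(stop_words, by = "word") %>% # drop "the", "and", "if" etc.
  count(word, sort = TRUE)
```

The resulting word counts can be fed straight into a wordcloud function.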
Next we can look in more detail at the votes that the two countries abstained from.
We can use the tidytext functions that reorder the geom_bar within each facet. You can read Julia Silge’s blog to learn more about these functions; it is a bit tricky, but it fixes the problem of randomly ordered bars across facets.
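For reference, the reorder_within() and scale_x_reordered() pairing from tidytext looks roughly like this (abstain_counts, issue and n are assumed names for the summarised abstention data):

```r
library(tidyverse)
library(tidytext)

# abstain_counts with issue, n and country columns is an assumed summary
abstain_counts %>%
  mutate(issue = reorder_within(issue, n, country)) %>%
  ggplot(aes(x = issue, y = n, fill = country)) +
  geom_bar(stat = "identity") +
  scale_x_reordered() +  # strips the helper suffix from the axis labels
  coord_flip() +
  facet_wrap(~country, scales = "free_y")
```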
South Korea was far more likely than North Korea to abstain from votes on all issues.
Next we can simply plot out the Human Rights votes that each country voted to support. Even though South Korea has far higher human rights scores, North Korea votes in support of more resolutions on this topic.
In this post, we will look at easy ways to graph data from the democracyData package.
The two datasets we will look at are the Anckar-Fredriksson dataset of political regimes and Freedom House Scores.
Regarding democracies, Anckar and Fredriksson (2018) distinguish between republics and monarchies. Republics can be presidential, semi-presidential, or parliamentary systems.
Within the category of monarchies, almost all systems are parliamentary, but a few countries are assigned to the category of semi-monarchies.
Autocratic countries can be in the following main categories: absolute monarchy, military rule, party-based rule, personalist rule, and oligarchy.
I want to only look at regime types in the final year of the dataset – which is 2018 – so we filter to only one year before we merge with the map data.frame.
The geom_polygon() layer is where we indicate the variable we want to plot. In our case, it is the regime category.
anckar %>%
  filter(year == max(year)) %>%
  inner_join(world_map, by = c("cown")) %>%
  mutate(regimebroadcat = ifelse(region == "Libya",
                                 "Military rule", regimebroadcat)) %>%
  ggplot(aes(x = long, y = lat, group = group)) +
  geom_polygon(aes(fill = regimebroadcat), color = "white", size = 1)
library(tidyverse)
library(ggridges)
library(ggimage) # to add png images
library(bbplot) # for pretty graph themes
We will plot out the favourability opinion polls for the three main political parties in Ireland from 2016 to 2020. Data comes from Louwerse and Müller (2020).
Before we dive into the ggridges plotting, we have a little data cleaning to do. First, we extract the last four “characters” from the date string to create a year variable.
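A one-line sketch of that extraction with stringr, assuming the date column is a string ending in the year (polls_three and polling_date are assumed names):

```r
library(tidyverse)

# polls_three and polling_date are assumed names; str_sub(-4) takes the
# last four characters of the string, i.e. the year
polls_three <- polls_three %>%
  mutate(year = str_sub(polling_date, start = -4))
```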
I went online and found the logos for the three main parties (sorry, Labour) and saved them in the working directory I have for my RStudio. That way I can call the file with the prefix “~/**.png” rather than find the exact location they are saved on the computer.
Now we are ready to plot out the density plots for each party with the geom_density_ridges() function from the ggridges package.
We will add a few arguments into this function.
We add an alpha = 0.8 to make each density plot a little transparent and we can see the plots behind.
The scale = 2 argument pushes all three plots together so they are slightly overlapping. If scale = 1, they would be totally separate, and scale = 3 would have them overlapping far more.
The rel_min_height = 0.01 argument removes the trailing tails from the plots that are under 0.01 density. This is again for aesthetics and just makes the plot look slightly less busy for relatively normally distributed densities
The geom_image takes the images and we place them at the beginning of the x axis beside the labels for each party.
Last, we use the bbplot package BBC style ggplot theme, which I really like as it makes the overall graph look streamlined with large font defaults.
polls_three %>%
  ggplot(aes(x = opinion_poll, y = as.factor(party))) +
  geom_density_ridges(aes(fill = party),
                      alpha = 0.8,
                      scale = 2,
                      rel_min_height = 0.01) +
  ggimage::geom_image(aes(y = party, x = 1, image = image),
                      asp = 0.9, size = 0.12) +
  facet_wrap(~year) +
  bbplot::bbc_style() +
  scale_fill_manual(values = c("#f2542d", "#edf6f9", "#0e9594")) +
  theme(legend.position = "none") +
  labs(title = "Favourability Polls for the Three Main Parties in Ireland",
       subtitle = "Data from Irish Polling Indicator (Louwerse & Müller, 2020)")
We are going to make bar charts to plot out responses to the question asked to American participants: Should the US cooperate more or less with some key countries? The countries asked about were China, Russia, Germany, France, Japan and the UK.
Before we dive in, we can find some nice hex colors for the bar chart. There are four possible responses that the participants could give: cooperate more, cooperate less, cooperate the same as before and refuse to answer / don’t know.
pal <- c("Cooperate more" = "#0a9396",
"Same as before" = "#ee9b00",
"Don't know" = "#005f73",
"Cooperate less" ="#ae2012")
We first select the questions we want from the full survey and pivot the dataframe to long form with pivot_longer(). This way we have a single column with all the different survey responses that we can manipulate more easily with dplyr functions.
Then we summarise the data to count all the survey responses for each of the countries and calculate the frequency of each response as a percentage of all answers.
Then we mutate the variables so that we can add flags. The geom_flag() function from the ggflags package only recognises ISO2 country codes in lower case.
After that we change the factor levels for the four responses so they run from positive to negative views of cooperation.
We use position = "stack" to make all the responses “stack” onto each other for each country. We use stat = "identity" because we are not counting each response; rather, we are using the freq variable we calculated above.
pew_clean %>%
  ggplot() +
  geom_bar(aes(x = forcats::fct_reorder(country_question, freq),
               y = freq, fill = response_string),
           color = "#e5e5e5", size = 3,
           position = "stack", stat = "identity") +
  geom_flag(aes(x = country_question, y = -0.05,
                country = country_question),
            color = "black", size = 20) -> pew_graph
And last we change the appearance of the plot with the theme function
pew_graph +
  coord_flip() +
  scale_fill_manual(values = pal) +
  ggthemes::theme_fivethirtyeight() +
  ggtitle("Should the US cooperate more or less with the following country?") +
  theme(legend.title = element_blank(),
        legend.position = "top",
        legend.key.size = unit(2, "cm"),
        text = element_text(size = 25),
        legend.text = element_text(size = 20),
        axis.text = element_blank())
We will plot out a lollipop plot to compare EU countries on their level of income inequality, measured by the Gini coefficient.
A Gini coefficient of zero expresses perfect equality, where all values are the same (e.g. where everyone has the same income). A Gini coefficient of one (or 100%) expresses maximal inequality among values (e.g. for a large number of people where only one person has all the income or consumption and all others have none, the Gini coefficient will be nearly one).
To start, we will take data on the EU from Wikipedia. With the rvest package, we scrape the table about the EU countries from this Wikipedia page.
With the gsub() function, we can clean up the different variables with some regex. Namely delete the footnotes / square brackets and change the variable classes.
Next, some data cleaning and grouping of the year member groups into different decades. This indicates what year each country joined the EU. If we see clustering of colours on any particular end of the Gini scale, this may indicate a relationship between the length of time a country has been part of the EU and its domestic income inequality level. Are the founding members of the EU more equal than the new countries? Or, conversely, are the newer countries that joined from former Soviet states in the 2000s more equal? We can visualise this with the following mutations:
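Those mutations might be sketched with case_when() like so. The eu_df and year_joined names are assumptions about the scraped table, and the decade cut-offs are only illustrative:

```r
library(tidyverse)

# eu_df and year_joined are assumed names; cut-offs are illustrative
eu_df <- eu_df %>%
  mutate(decade_joined = case_when(
    year_joined < 1960 ~ "Founding member",
    year_joined < 1990 ~ "1960s-1980s",
    year_joined < 2000 ~ "1990s",
    TRUE               ~ "2000s and later"))
```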
To create the lollipop plot, we will use the geom_segment() functions. This requires an x and xend argument as the country names (with the fct_reorder() function to make sure the countries print out in descending order) and a y and yend argument with the gini number.
All the countries in the EU have a gini score between mid 20s to mid 30s, so I will start the y axis at 20.
We can add the flag for each country when we turn the ISO2 character code to lower case and give it to the country argument.
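Putting the segments and flags together, a hedged sketch (eu_df, gini and iso2_lower are assumed column names from the cleaning steps above):

```r
library(tidyverse)
library(ggflags)

# eu_df, gini and iso2_lower are assumed column names
eu_df %>%
  ggplot() +
  geom_segment(aes(x    = fct_reorder(country, gini),
                   xend = fct_reorder(country, gini),
                   y = 20, yend = gini),   # start the "stick" at 20
               colour = "grey50") +
  geom_flag(aes(x = fct_reorder(country, gini),
                y = gini, country = iso2_lower),
            size = 8) +
  coord_flip()
```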
We can see there does not seem to be a clear pattern between the year a country joins the EU and their level of domestic income inequality, according to the Gini score.
Another option for the lollipop plot comes from the ggpubr package. It does not take the familiar aesthetic arguments like ggplot2 does, but it is very quick and the defaults look good!
In this blog, we will try to replicate this graph from Eurostat!
It compares all European countries on their Digital Intensity Index scores in 2020. This measures the use of different digital technologies by enterprises.
The higher the score, the higher the digital intensity of the enterprise, ranging from very low to very high.
First, we will download the digital index from Eurostat with the get_eurostat() function.
Click here to learn more about downloading data on EU from the Eurostat package.
Next some data cleaning. To copy the graph, we will aggregate the different levels into very low, low, high and very high categories with the grepl() function in some ifelse() statements.
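A sketch of that recoding. The dig_df name and the exact Eurostat level codes inside grepl() are my assumptions; note the order matters, since "VLOW" would also match a bare "LOW" pattern:

```r
library(tidyverse)

# dig_df and the indic_is codes are assumptions about the Eurostat variable;
# check VLOW before LOW and VHIGH before the High default
dig_df <- dig_df %>%
  mutate(dii_cat = ifelse(grepl("VLOW",  indic_is), "Very low",
                   ifelse(grepl("LOW",   indic_is), "Low",
                   ifelse(grepl("VHIGH", indic_is), "Very high", "High"))))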
The variable names look a bit odd with lots of blank space because I wanted to space out the legend in the graph to replicate the Eurostat graph above.
Next I filter out the year I want and aggregate all industry groups (from the sizen_r2 variable) in each country to calculate a single DII score for each country.
If we want to convey nuance in the data, sometimes that information is lost if we display many groups in a pie chart.
According to Bernard Marr, our brains are used to equal slices when we think of fractions of a whole. When the slices aren’t equal, as often is the case with real-world data, it’s difficult to envision the parts of a whole pie chart accurately.
Below are some slight alternatives that we can turn to and visualise different values across groups.
I’m going to compare regions around the world on their total energy consumption levels since the 1900s.
First, we can download the region data with information about the geography and income levels for each group, using the ne_countries() function from the rnaturalearth package.
Next, we will download national military capabilities (NMC) dataset. These variables – which attempt to operationalize a country’s power – are military expenditure, military personnel, energy consumption, iron and steel production, urban population, and total population. It serves as the basis for the most widely used indicator of national capability, CINC (Composite Indicator of National Capability) and covers the period 1816-2016.
To download them in one line of code, we use the create_stateyears() function from the peacesciencer package.
states <- create_stateyears(mry = FALSE) %>% add_nmc()
Click here to read more about downloading Correlates of War and other IR variables from the peacesciencer package
Next we add a UN location code so we can easily merge both datasets we downloaded!
First, we will create one high income group. The map dataset has a separate column for OECD and non-OECD countries. But it will be easier to group them together into one category. We do this with the ifelse() function within mutate().
Next we filter out any country that is NA in the dataset, just to keep it cleaner.
We then group the dataset according to income group and sum all the primary energy consumption in each region since 1900.
When we get to the ggplotting, we want to order the income groups from biggest to smallest. To do this, we use the reorder() function with income_grp as the first argument and the summed consumption values as the second.
To create the coxcomb chart, we need the geom_bar() and coord_polar() lines.
With the coord_polar() function, it takes the following arguments :
theta – the variable we map the angle to (either x or y)
start – indicates the starting point from 12 o’clock in radians
direction – whether we plot the data clockwise (1) or anticlockwise (-1)
We feed in a theta of “x” (this is important!), then a starting point of 0 and direction of -1.
Next we add nicer colours with hex values and label the legend in the scale_fill_manual() function.
I like using the fonts and size stylings in the bbc_style() theme.
Last we can delete some of the ticks and text from the plot to make it cleaner.
Last we add our title and source!
states_df %>%
  mutate(income_grp = ifelse(income_grp == "1. High income: OECD", "1. High income",
                      ifelse(income_grp == "2. High income: nonOECD", "1. High income",
                             income_grp))) %>%
  filter(!is.na(income_grp)) %>%
  filter(year > 1899) %>%
  group_by(income_grp) %>%
  summarise(sum_pec = sum(pec, na.rm = TRUE)) %>%
  ggplot(aes(x = reorder(income_grp, sum_pec), y = sum_pec,
             fill = as.factor(income_grp))) +
  geom_bar(stat = "identity") +
  coord_polar("x", start = 0, direction = -1) +
  ggthemes::theme_pander() +
  scale_fill_manual(
    values = c("#f94144", "#f9c74f", "#43aa8b", "#277da1"),
    labels = c("High Income", "Upper Middle Income", "Lower Middle Income", "Low Income"),
    name = "Income Level") +
  bbplot::bbc_style() +
  theme(axis.text = element_blank(),
        axis.title.x = element_blank(),
        axis.title.y = element_blank(),
        axis.ticks = element_blank(),
        panel.grid = element_blank()) +
  ggtitle(label = "Primary Energy Consumption across income levels since 1900",
          subtitle = "Source: Correlates of War CINC")
We can compare this to the number of countries in each region:
states_df %>%
  mutate(income_grp = ifelse(income_grp == "1. High income: OECD", "1. High income",
                      ifelse(income_grp == "2. High income: nonOECD", "1. High income",
                             income_grp))) %>%
  filter(!is.na(income_grp)) %>%
  filter(year == 2016) %>%
  count(income_grp) %>%
  ggplot(aes(x = reorder(income_grp, n), y = n,
             fill = as.factor(income_grp))) +
  geom_bar(stat = "identity") +
  coord_polar("x", start = 0, direction = -1) +
  ggthemes::theme_pander() +
  scale_fill_manual(
    values = c("#f94144", "#f9c74f", "#43aa8b", "#277da1"),
    labels = c("High Income", "Upper Middle Income", "Lower Middle Income", "Low Income"),
    name = "Income Level") +
  bbplot::bbc_style() +
  theme(axis.text = element_blank(),
        axis.title.x = element_blank(),
        axis.title.y = element_blank(),
        axis.ticks = element_blank(),
        panel.grid = element_blank()) +
  ggtitle(label = "Number of countries per region")
Another variation is the waffle plot!
It is important we do not install the CRAN version, but rather the development version. I made the mistake of installing the non-GitHub version and nothing worked.
Here is a short list from the package description of all the key variables that can be quickly added:
We create the dyad dataset with the create_dyadyears() function. A dyad-year dataset focuses on information about the relationship between two countries (such as whether the two countries are at war, how much they trade together, whether they are geographically contiguous et cetera).
In the literature, the study of interstate conflict has adopted a heavy focus on dyads as a unit of analysis.
Alternatively, if we want just state-year data like in the previous blog post, we use the function create_stateyears()
We can add the variables with type D to the create_dyadyears() function and we can add the variables with type S to the create_stateyears() !
Focusing on the create_dyadyears() function, the arguments we can include are directed and mry.
The directed argument indicates whether we want directed or non-directed dyad relationship.
In a directed analysis, data include two observations (i.e. two rows) per dyad per year (such as one for USA – Russia and another row for Russia – USA), but in a nondirected analysis, we include only one observation (one row) per dyad per year.
The mry argument indicates whether we want to extend the data to the most recently concluded calendar year – i.e. 2020 – or only up to the year the data were last available.
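As a sketch, those two arguments might be set like this; add_cow_trade() is just one example of a type-D (dyadic) variable you could bolt on:

```r
library(tidyverse)
library(peacesciencer)

# one row per dyad per year (non-directed), extended to the most recent year
cow_ddy <- create_dyadyears(directed = FALSE, mry = TRUE) %>%
  add_cow_trade()   # a type-D variable, as one example
```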
You can follow these links to check out the codebooks if you want more information about descriptions about each variable and how the data were collected!
The code comes with the COW code but I like adding the actual names also!
With this dataframe, we can plot the CINC data of the top three superpowers, looking only at the variables that end in 1 and the corresponding country_1!
According to our pals over at le Wikipedia, the Composite Index of National Capability (CINC) is a statistical measure of national power created by J. David Singer for the Correlates of War project in 1963. It uses an average of percentages of world totals in six different components (such as energy consumption, military expenditure and population). The components represent demographic, economic, and military strength.
In PART 3, we will merge together our data with our variables from PART 1, look at some descriptive statistics and run some panel data regression analysis with our different variables!
When you want to create a dataset for large-n political science analysis from scratch, it can get muddled fast. Here are some tips I have found helpful to create clean data ready for panel data analysis.
The partial argument indicates how we want to deal with states that are independent for only part of the year. We can indicate “any”, “exact”, “first” or “last”.
For this example, I want to create a dataset starting in 1990 and ending in 2020. I put useGW = FALSE because I want to use the COW list of states.
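With the state_panel() function from the states package, that might look like the following sketch:

```r
library(states)

# COW-based state-year panel from 1990 to 2020, keeping states that were
# independent for any part of the year
panel <- state_panel(1990, 2020, by = "year",
                     partial = "any", useGW = FALSE)
```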
We can see that the early 1990s saw the creation of many states after the end of the Soviet Union. Since 2011, the dataset levels out at 195 (after the creation of South Sudan).
Next, we can add the country name with the countrycode() function from the countrycode package. We feed in the cowcode variable and add the full country names. Click here to read more about the function in more detail and see other options to add country ISO code, for example.
I’ll first add some basic variables, such as population, GDP per capita and infant mortality. We can do this with the WDI() function. The indicator code for population is SP.POP.TOTL so we add that to the indicator argument. (If we wanted only a few countries, we can add a vector of ISO2 code strings to the country argument).
POP <- WDI(country = "all",
indicator = 'SP.POP.TOTL',
start = 1990,
end = 2020)
The default variable name for population is the long string, so I’ll quickly change that
When we download World Bank data, it comes with aggregated data for regions and economic groups. If we only want in our dataset the variables for countries, we have to delete the extra rows that we don’t want. We have two options for this.
The first option is to add the cow codes and then filter out all the rows that do not have a cow code (i.e. all non-countries)
Then we re-organise the variables a bit more nicely in the dataset with select() and keep only the countries with filter() and the !is.na argument that will remove any row with NA values in the cow_code column.
Alternatively, we can merge the World Bank variables with our states df and it can filter out any row that is not a sovereign, independent state.
In the merge() function, we use by to indicate the columns by which we want to merge the datasets. The all argument indicates whether to keep rows that do not match. If we typed all = TRUE, the merge would keep every non-matching row from both datasets (filled in with NA values) rather than dropping them.
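A hedged sketch of that merge, with the by columns assumed from the datasets above (pop and the exact column names are assumptions):

```r
# all.x = TRUE keeps every row of states_df, even where the World Bank
# data has no match; unmatched World Bank rows (regions etc.) are dropped
df_v3 <- merge(states_df, pop,
               by.x = c("cowcode", "year"),
               by.y = c("cow_code", "year"),
               all.x = TRUE)
```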
You can see that df_v2 has 85 more rows than df_v3. So it is up to you which one you want to use, and which countries you want to include each year. df_v3 contains states that are more likely to be recognised as sovereign; df_v2 contains more territories.
Let’s look at the prevalence of NA values across our dataset.
We can use the plot_missing() function from the states package.
plot_missing(df_v3, ccode = "cowcode")
It is good to see a lot of green!
Let’s add some constant variables, such as geographical information. The rnaturalearth package is great for plotting maps. Click here to see how to plot maps with the package.
For this dataset, we just want the various geography group variables to add to our dataset:
This regions_sf is not a data.frame object; it is a simple features dataset. So we delete the variables that make it an sf object and explicitly coerce it to a data.frame.
Warning message:
In countrycode(regions_df$admin, "country.name", "cown") :
Some values were not matched unambiguously: Antarctica, Kashmir, Republic of Serbia, Somaliland, Western Sahara
Sometimes we cannot avoid hand-coding some of our variables. In this case, we don’t want to drop Serbia because the countrycode function couldn’t add the right code.
So we can check what its COW code is and add it to the dataset directly with the mutate function and an ifelse condition:
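That hand-coding step might look like this. The code 345 is the COW code usually listed for (Yugoslavia/)Serbia, but verify it before relying on it:

```r
library(tidyverse)

# 345 is the COW code usually listed for Serbia - verify before using
regions_df <- regions_df %>%
  mutate(cown = ifelse(admin == "Republic of Serbia", 345, cown))
```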
If we look at the countries, we can spot a problem. For Cyprus, it was counted twice – due to the control by both Turkish and Greek authorities. We can delete one of the versions because all the other World Bank variables look at Cyprus as one entity so they will be the same across both variables.
regions_df <- regions_df %>% slice(-c(38))
Next we merge the new geography variables to our dataset. Note that we only merge by one variable – the COW code – and indicate that we want to merge for every row in the x dataset (i.e. the first dataset in the function). So it will apply to each year row for each country!
We can use the aggr() function from the VIM package to look at the prevalence of NA values. It’s always good to keep an eye on this and catch badly merged or badly specified datasets!
Click here for PART 2, where we add some Correlates of War data and interesting variables with the peacesciencer package.
Now we can create our pyramid chart with the pyramid_chart() function from the ggcharts package. The first argument is the age category for both the 2011 and 2016 data. The second is the actual population counts for each year. Last, enter the group variable that indicates the year.
One problem with the pyramid chart is that it is difficult to discern any differences between the two years without closely examining each one.
One way to see the differences more easily is with the compareBars function.
The compareBars package created by David Ranzolin can help to simplify comparative bar charts! It’s a super simple function to use that does a lot of visualisation leg work under the hood!
First we need to pivot the data.frame back to wide format and then input the age, and then the two groups – x2011 and x2016 – in the compareBars() function.
We can add more labels and colors to customise the graph also!
We can see that 2011 had more people under the age of four-ish. Likewise, there were more people in their twenties in 2011 than in 2016.
However, there are more older people in 2016 than in 2011.
Similar to above, it is a bit busy! So we can create groups of five age years each and examine the broader trends with fewer horizontal bars.
First we want to remove the word “years” from the age variable and convert it to a numeric class variable. We can easily do this with the parse_number() function from the readr package
Next we can group the age years together into five year categories, zero to 5 years, 6 to 10 years et cetera.
We use the cut() function to divide the numeric age_num variable into equal groups. We use the seq() function and input age 0 to 100, in increments of 5.
Next, we can use group_by() to calculate the sum of each population number in each five year category.
And finally, we use the distinct() function to remove the duplicated rows (i.e. we only want to keep the first row that gives us the five year category’s population count for each category).
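Putting those steps together might look something like this (the data frame and column names — age_df, age, population, year — are placeholders for illustration):

```r
library(dplyr)
library(readr)

age_grouped <- age_df %>%
  # strip the word "years" and keep only the number
  mutate(age_num = parse_number(as.character(age))) %>%
  # cut into equal five-year bins from 0 to 100
  mutate(age_group = cut(age_num, breaks = seq(0, 100, by = 5),
                         include.lowest = TRUE)) %>%
  # sum the population within each bin, per census year
  group_by(year, age_group) %>%
  mutate(group_pop = sum(population)) %>%
  ungroup() %>%
  # keep only the first row for each bin / year combination
  distinct(year, age_group, .keep_all = TRUE)
```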
We will use the geom_segment layer from ggplot2 to make a timeline graph!
This layer takes
x and y inputs for the start of the segment lines
xend and yend inputs for the end of the segment lines
For our timeline, the x will be the start of each Irish Taoiseach’s term.
The xend will be the end of their term, when they get kicked out of office.
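A minimal sketch of that layer (the data frame and column names are hypothetical):

```r
library(ggplot2)

# x/y mark where each term's segment starts;
# xend/yend mark where it ends
ggplot(taoiseach_df) +
  geom_segment(aes(x = term_start, xend = term_end,
                   y = office_number, yend = office_number,
                   colour = party),
               size = 4)
```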
Taoisigh (plural of Taoiseach) are Irish prime ministers and are in charge of the executive branch when their party is in power.
For Ireland, that means that basically every Taoiseach has been the leader of one of the two main parties – Fianna Fail or Fine Gael.
Not very exciting.
Also they have all been men.
This is also not very exciting.
We have a bit more to go with increasing the diversity in Ireland’s top job.
The y argument is the Taoiseach number in office. Although fifteen men have held the office of Taoiseach, this does not mean that each of them held it only once.
Ireland has a parliamentary system so when a party loses an election, the former Taoiseach can become the leader of the opposition and hope in the future they can become Taoiseach again. Some men have been Taoiseach two or three times in non-consecutive terms.
When we are adding the labels with the geom_text() layer, I created an order variable which indicates the first time each man took the office of Taoiseach.
This is so each man’s name appears only once in the graph. Without this step, the name of any man who held office more than once would appear multiple times and the plot would become a crowded mess.
My graphs usually…
I add the ifelse statement so that the first name appears after the segment line and therefore text does not take up too much room on the left edge of the graph.
Last we use the scale_color_manual() function with nice hex colors for each of the political parties.
I increase the limits of the graph to accommodate the name labels. Most of the time, these extra bits of code in ggplot2 depend on the type of data you have and what fits on the graph plane nicely!
So this stage is often only finished after some trial and error.
I add a snazzy theme_fivethirtyeight() theme from ggthemes package.
Last, with the theme() function, we can remove most of the elements of the graph to make the graph cleaner.
I want to add both graphs together, so I save the pie chart with a transparent background with the ggsave() function. I also make sure the lines are not jagged with type = "cairo" from the Cairo package.
Let’s look at how many speeches took place at the UN Security Council every year from 1995 until 2019.
I want to only look at countries, not organisations. So a quick way to do that is to add a variable to indicate whether the speaker variable has an ISO code.
Only countries have ISO codes, so I can use this variable to filter away all the organisations that made speeches
library(countrycode)
speech$iso2 <- countrycode(speech$country, "country.name", "iso2c")
library(bbplot)
speech %>%
dplyr::filter(!is.na(iso2)) %>%
group_by(year) %>%
count() %>%
ggplot(aes(x = year, y = n)) +
geom_line(size = 1.2, alpha = 0.4) +
geom_label(aes(label = n)) +
bbplot::bbc_style() +
theme(plot.title = element_text(hjust = 0.5)) +
labs(title = "Number of speeches given by countries at UNSC")
We can see there has been a relatively consistent upward trend in the number of speeches that countries are giving at the UNSC. Time will tell what impact COVID will have on these trends.
There was a particularly sharp increase in speeches in 2015.
We can look and see who was talking, and in the next post, we can examine what they were talking about in 2015 with some simple text analytic packages and functions.
First, we will filter only the year 2015 and count the number of observations per group (i.e. the number of speeches per country this year).
To add flags to the graph, add the iso2 code to the dataset (and it must be in lower case).
We can clean up the names and create a variable that indicates whether the country is one of the five Permanent Members of the Security Council, a Temporary Member that was elected, or a Non-Member.
I also clean up the names to make the country’s names in the dataset smaller. For example, “United Kingdom Of Great Britain And Northern Ireland”, will be very cluttered in the graph compared to just “UK” so it will be easier to plot.
library(ggflags)
library(ggthemes)
speech_2015 %>%
# To avoid the graph being too busy, we only look at countries that gave over 20 speeches
dplyr::filter(speech_count > 20) %>%
# Clean up some names so the graph is not too crowded
dplyr::mutate(country = ifelse(country == "United Kingdom Of Great Britain And Northern Ireland", "UK", country)) %>%
dplyr::mutate(country = ifelse(country == "Russian Federation", "Russia", country)) %>%
dplyr::mutate(country = ifelse(country == "United States Of America", "USA", country)) %>%
dplyr::mutate(country = ifelse(country == "Republic Of Korea", "South Korea", country)) %>%
dplyr::mutate(country = ifelse(country == "Venezuela (Bolivarian Republic Of)", "Venezuela", country)) %>%
dplyr::mutate(country = ifelse(country == "Islamic Republic Of Iran", "Iran", country)) %>%
dplyr::mutate(country = ifelse(country == "Syrian Arab Republic", "Syria", country)) %>%
# Create a Member status variable:
# China, France, Russia, the United Kingdom, and the United States are UNSC Permanent Members
dplyr::mutate(Member = ifelse(country == "UK", "Permanent",
ifelse(country == "USA", "Permanent",
ifelse(country == "China", "Permanent",
ifelse(country == "Russia", "Permanent",
ifelse(country == "France", "Permanent",
# Non-permanent members in their first year (elected October 2014)
ifelse(country == "Angola", "Temporary (Elected 2014)",
ifelse(country == "Malaysia", "Temporary (Elected 2014)",
ifelse(country == "Venezuela", "Temporary (Elected 2014)",
ifelse(country == "New Zealand", "Temporary (Elected 2014)",
ifelse(country == "Spain", "Temporary (Elected 2014)",
# Non-permanent members in their second year (elected October 2013)
ifelse(country == "Chad", "Temporary (Elected 2013)",
ifelse(country == "Nigeria", "Temporary (Elected 2013)",
ifelse(country == "Jordan", "Temporary (Elected 2013)",
ifelse(country == "Chile", "Temporary (Elected 2013)",
ifelse(country == "Lithuania", "Temporary (Elected 2013)",
# Non members that will join UNSC next year (elected October 2015)
ifelse(country == "Egypt", "Non-Member (Elected 2015)",
ifelse(country == "Senegal", "Non-Member (Elected 2015)",
ifelse(country == "Uruguay", "Non-Member (Elected 2015)",
ifelse(country == "Japan", "Non-Member (Elected 2015)",
ifelse(country == "Ukraine", "Non-Member (Elected 2015)",
# Everyone else is a regular non-member
"Non-Member"))))))))))))))))))))) -> speech_2015
When we have over a dozen nested ifelse() statements, we will need to check that we have all our corresponding closing brackets.
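An alternative that sidesteps the bracket-counting entirely is case_when() from dplyr. A sketch equivalent to the nested chain above (the country vectors just repeat the names used in this post):

```r
library(dplyr)

permanent_5  <- c("UK", "USA", "China", "Russia", "France")
elected_2014 <- c("Angola", "Malaysia", "Venezuela", "New Zealand", "Spain")
elected_2013 <- c("Chad", "Nigeria", "Jordan", "Chile", "Lithuania")
elected_2015 <- c("Egypt", "Senegal", "Uruguay", "Japan", "Ukraine")

# case_when() evaluates conditions in order; TRUE acts as the catch-all
speech_2015 <- speech_2015 %>%
  mutate(Member = case_when(
    country %in% permanent_5  ~ "Permanent",
    country %in% elected_2014 ~ "Temporary (Elected 2014)",
    country %in% elected_2013 ~ "Temporary (Elected 2013)",
    country %in% elected_2015 ~ "Non-Member (Elected 2015)",
    TRUE                      ~ "Non-Member"))
```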
Next choose some colours for each Memberships status. I always take my hex values from https://coolors.co/
And all that is left to do is create the bar chart.
With geom_bar(), we can indicate stat = "identity" because we are giving the plot the y values and ggplot does not need to do the automatic aggregation on its own.
To make sure the bars are descending from most speeches to fewest speeches, we use the reorder() function. The second argument is the variable according to which we want to order the bars. So for us, we give the speech_count integer variable to order our country bars with x = reorder(country, speech_count).
We can change the bars from vertical to horizontal with coord_flip().
I add flags with geom_flag() and feed the lower case ISO code to the country = iso2_lower argument.
I add the bbc_style() again because I like the font, size and sparse lines on the plot.
We can move the title of the plot into the centre with plot.title = element_text(hjust = 0.5))
Finally, we can supply the membership_palette vector to the values = argument in the scale_fill_manual() function to specify the colours we want.
speech_2015 %>% ggplot(aes(x = reorder(country, speech_count), y = speech_count)) +
geom_bar(stat = "identity", aes(fill = as.factor(Member))) +
coord_flip() +
ggflags::geom_flag(mapping = aes(y = -15, x = country, country = iso2_lower), size = 10) +
geom_label(mapping = aes( label = speech_count), size = 8) +
theme(legend.position = "top") +
labs(title = "UNSC speeches given in 2015", y = "Number of speeches", x = "") +
bbplot::bbc_style() +
theme(text = element_text(size = 20),
plot.title = element_text(hjust = 0.5)) +
scale_fill_manual(values = membership_palette)
In the next post, we will look at the texts themselves. Here is a quick preview.
We count the number of tokens (i.e. words) for each country in each year. With the distinct() function we take only one observation per year per country. This reduces the number of rows from 16,601,520 in speech_tokens to 3,142 rows in speech_words_count:
It is a bit convoluted to put the flags ONLY at the start and end of the lines. We need to subset the dataset twice in the geom_flag() sections. First, we subset the data.frame to year == 1995 so the flags appear at the start of the word_count lines on the y axis. Then we subset to year == 2019 and do the same.
ggplot(data = permanent_word_summary) +
geom_line(aes(x = year, y = word_count, group = as.factor(country), color = as.factor(country)),
size = 2) +
ggflags::geom_flag(data = subset(permanent_word_summary, year == 1995), aes(x = 1995, y = word_count, country = iso2_lower), size = 9) +
ggflags::geom_flag(data = subset(permanent_word_summary,
year == 2019),
aes(x = 2019,
y = word_count,
country = iso2_lower),
size = 12) +
bbplot::bbc_style() +
theme(legend.position = "right") + labs(title = "Number of words spoken by Permanent Five in the UN Security Council") +
scale_color_manual(values = five_pal)
We can see that China has been the least chatty country, if we are measuring chattiness by number of words spoken. Translation considerations must also be taken into account. We can see here again that around the 2015 mark, there was a discernible increase in the number of words spoken by most of the countries!
When we save our plots and graphs in R, we can use the ggsave() function and specify the type, size and look of the file.
We are going to look at two features in particular: anti-aliasing lines with the Cairo package and creating transparent backgrounds.
Make your graph background transparent
First, let’s create a pie chart with a transparent background. The pie chart will show which party has held the top spot in Irish politics for the longest.
After we prepare and clean our data of Irish Taoisigh start and end dates in office and create a doughnut chart (see the bottom of the blog for the doughnut graph code), we save it to our working directory with ggsave().
If we want to add our doughnut chart to a PowerPoint but we don’t want it to have a white background, we can ask ggsave() to save the chart as transparent and then we can add it to our PowerPoint or report!
To do this, we set the bg argument to "transparent".
When we save our graph in R with ggsave(), we can specify in the type argument that we want type = cairo.
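Both options can go in the same ggsave() call; a sketch (the file name, plot object, and dimensions are placeholders):

```r
# bg = "transparent" removes the white background;
# type = "cairo" anti-aliases the lines (requires the Cairo package)
ggsave("taoiseach_doughnut.png",
       plot = doughnut_chart,
       bg = "transparent",
       type = "cairo",
       width = 8, height = 8, dpi = 300)
```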
I make a quick graph that looks at the trends in migration and GDP from the 1960s to 2018 in Ireland. I made the lines extra large to demonstrate the difference between aliased and anti-aliased lines in the graphs.
library(ggflags)
library(bbplot) # for pretty BBC style graphs
library(countrycode) # for ISO2 country codes
library(rvest) # for web scraping
Click here to add rectangular flags to graphs and click here to add rectangular flags to MAPS!
Apropos of this week’s US news, we are going to graph the number of autocoups in South America and display them as both maps and bar charts.
According to our pals at Wikipedia, a self-coup, or autocoup (from the Spanish autogolpe), is a form of putsch or coup d’état in which a nation’s leader, despite having come to power through legal means, dissolves or renders powerless the national legislature and unlawfully assumes extraordinary powers not granted under normal circumstances.
In order to add flags to maps, we need to make sure our dataset has three variables for each country:
Longitude
Latitude
ISO2 code (in lower case)
In order to add longitude and latitude, I will scrape these from a website with the rvest package and merge them with my existing dataset.
In this case, a warning message pops up to tell me:
Some values were not matched unambiguously: Kosovo, Somaliland, Zanzibar
One important step is to convert the ISO codes from upper case to lower case. The geom_flag() function from the ggflags package only recognises lower case (e.g. Chile is cl, not CL).
Finally we can graph our maps comparing the different types of coups in South America.
Click here to learn how to graph variables onto maps with the rnaturalearth package.
The geom_flag() function requires x = longitude, y = latitude and a country argument in the form of our lower case ISO2 country codes. You can play around with the latitude and longitude values to adjust the flag and label positions by adding or subtracting from them. The size of the flag is set outside the aes() argument.
We can place the number of coups under the flag with the geom_label() function.
The theme_map() function we add comes from ggthemes package.
autocoup_map <- autocoup_df %>%
dplyr::filter(subregion == "South America") %>%
ggplot() +
geom_sf(aes(fill = coup_cat)) +
ggflags::geom_flag(aes(x = longitude, y = latitude+0.5, country = iso2_lower), size = 8) +
geom_label(aes(x = longitude, y = latitude+3, label = auto_coup_sum, color = auto_coup_sum), fill = "white", colour = "black") +
theme_map()
autocoup_map + scale_fill_manual(values = coup_palette, name = "Auto Coups", labels = c("No autocoup", "More than 1", "More than 10", "More than 50"))
Not hard at all.
And we can make a quick barchart to rank the countries. For this I will use square flags from the ggimage package. Click here to read more about the ggimage package
Additionally, I will use the theme from the bbplot package. Click here to read more about the bbplot package.
Click here to check out the vignette to read about all the different graphs you can make with bbplot!
We will look at the Soft Power rankings from Portland Communications. According to Wikipedia, in politics (and particularly in international politics), soft power is the ability to attract and co-opt, rather than coerce or bribe, other countries into viewing your country’s policies and actions favourably. In other words, soft power involves shaping the preferences of others through appeal and attraction.
A defining feature of soft power is that it is non-coercive; the currency of soft power includes culture, political values, and foreign policies.
In Joseph Nye’s primary definition, soft power is in fact:
“the ability to get what you want through attraction rather than coercion or payments. When you can get others to want what you want, you do not have to spend as much on sticks and carrots to move them in your direction. Hard power, the ability to coerce, grows out of a country’s military and economic might. Soft power arises from the attractiveness of a country’s culture, political ideals and policies. When our policies are seen as legitimate in the eyes of others, our soft power is enhanced”
(Nye, 2004: 256).
Every year, Portland Communications ranks the top countries in the world regarding their soft power. In 2019, the winner was la France!
Click here to read the most recent report by Portland on the soft power rankings.
We will also add circular flags to the graphs with the ggflags package. The geom_flag() function requires the ISO two-letter code as input to the argument … but it will only accept them in lower case. So first we need to make the country code variable suitable:
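That conversion is a two-step job with countrycode() and tolower() (the soft_power_df and country names here are placeholders):

```r
library(countrycode)

# Convert country names to ISO2 codes, then lower-case them
# because geom_flag() only accepts lower case (e.g. "fr", not "FR")
soft_power_df$iso2 <- countrycode(soft_power_df$country,
                                  "country.name", "iso2c")
soft_power_df$iso2_lower <- tolower(soft_power_df$iso2)
```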
Here I run a simple scatterplot and compare Post-Soviet states to see whether there has been a major change in class equality between 1991, after the fall of the Soviet Empire, and today. Is there a relationship between class equality and democratisation? Is there a difference between the countries that are now in the EU and the Post-Soviet states that are not?
library(ggrepel) # to stop text labels overlapping
library(gridExtra) # to place two plots side-by-side
library(ggpubr) # to modify the gridExtra titles
region_liberties_91 <- vdem %>%
dplyr::filter(year == 1991) %>%
dplyr::filter(regions == 'Post-Soviet') %>%
dplyr::filter(!is.na(EU_member)) %>%
ggplot(aes(x = democracy_score, y = class_equality, color = EU_member)) +
geom_point(aes(size = population)) +
scale_alpha_continuous(range = c(0.1, 1))
plot_91 <- region_liberties_91 +
bbplot::bbc_style() +
labs(subtitle = "1991") +
ylim(-2.5, 3.5) +
xlim(0, 1) +
geom_text_repel(aes(label = country_name), show.legend = FALSE, size = 7) +
scale_size(guide="none")
region_liberties_18 <- vdem %>%
dplyr::filter(year == 2018) %>%
dplyr::filter(regions == 'Post-Soviet') %>%
dplyr::filter(!is.na(EU_member)) %>%
ggplot(aes(x = democracy_score, y = class_equality, color = EU_member)) +
geom_point(aes(size = population)) +
scale_alpha_continuous(range = c(0.1, 1))
plot_18 <- region_liberties_18 +
bbplot::bbc_style() +
labs(subtitle = "2018") +
ylim(-2.5, 3.5) +
xlim(0, 1) +
geom_text_repel(aes(label = country_name), show.legend = FALSE, size = 7) +
scale_size(guide = "none")
my_title = text_grob("Relationship between democracy and class equality in Post-Soviet states", size = 22, face = "bold")
my_x = text_grob("Democracy Score", size = 20, face = "bold")
my_y = text_grob("Class Equality Score", size = 20, face = "bold", rot = 90)
grid.arrange(plot_91, plot_18, ncol = 2, top = my_title, bottom = my_x, left = my_y)
The BBC cookbook vignette offers the full function. So we can tweak it any way we want.
For example, if I want to change the default axis labels, I can make my own slightly adapted my_bbplot() function
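A minimal sketch of such an adaptation, assuming we just want to layer extra theme() settings on top of the bbc_style() defaults (the name my_bbplot and the specific tweaks are illustrative):

```r
library(ggplot2)

# Keep the sparse bbc_style() look, but switch the axis titles back on
my_bbplot <- function() {
  bbplot::bbc_style() +
    theme(axis.title = element_text(size = 18, face = "bold"))
}
```

Adding two theme objects with + works in ggplot2, so the function can be dropped into a plot exactly like bbc_style() itself.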
This blog post will look at a simple function from the jtools package that can give us two different pseudo R2 scores, VIF scores and robust standard errors for our GLM models in R.
Packages we need:
library(jtools)
library(tidyverse)
From the Varieties of Democracy dataset, we can examine the v2regendtype variable, which codes how a country’s governing regime ends.
It turns out that 1994 was a very coup-prone year. Many regimes ended due to either military or non-military coups.
We can extract all the regimes that end due to a coup d’etat in 1994. Click here to read the VDEM codebook on this variable.
With this new binary variable, we run a straightforward logistic regression in R.
To do this in R, we can run a generalised linear model and specify the family argument to be “binomial” :
summary(model_bin_1 <- glm(coup_binary ~ diagonal_accountability + military_control_score,
family = "binomial", data = vdem_2))
However some of the key information we want is not printed in the default R summary table.
This is where the jtools package comes in. It was created by Jacob Long from the University of South Carolina to help create simple summary tables that we can modify in the function. Click here to read the CRAN package PDF.
The summ() function can give us more information about the fit of the binomial model. This function also works with regular OLS lm() type models.
Set the vifs argument to TRUE for a multicollinearity check.
summ(model_bin_1, vifs = TRUE)
And we can see there is no problem with multicollinearity with the model; the VIF scores for the two independent variables in this model are well below 5.
Click here to read more about the Variance Inflation Factor and dealing with pesky multicollinearity.
In the above MODEL FIT section, we can see both the Cragg-Uhler (also known as Nagelkerke) and the McFadden Pseudo R2 scores give a measure for the relative model fit. The Cragg-Uhler is just a modification of the Cox and Snell R2.
There is no agreed equivalent to R2 when we run a logistic regression (or other generalised linear models). These two pseudo measures are just two of the many ways to calculate a pseudo R2 for logistic regression. Unfortunately, there is no broad consensus on which one is the best metric for a well-fitting model, so we can only look at the trends of both scores relative to similar models. Compared to OLS R2, which has a general rule of thumb (e.g. an R2 over 0.7 is considered a very good model), comparisons between pseudo R2 scores are only meaningful within the same measure on the same dataset. However, a McFadden’s pseudo R2 ranging from 0.3 to 0.4 can loosely indicate a good model fit. So don’t be disheartened if your pseudo scores seem low.
If we add another continuous variable – judicial corruption score – we can see how this affects the overall fit of the model.
summary(model_bin_2 <- glm(coup_binary ~
diagonal_accountability +
military_control_score +
judicial_corruption,
family = "binomial",
data = vdem_2))
And run the summ() function like above:
summ(model_bin_2, vifs = TRUE)
The AIC of the second model is smaller, so this model is considered better. Additionally, both the Pseudo R2 scores are larger! So we can say that the model with the additional judicial corruption variable is a better fitting model.
Click here to learn more about the AIC and choosing model variables with a stepwise algorithm function.
stargazer(model_bin_1, model_bin_2, type = "text")
One additional thing we can specify in the summ() function is the robust argument, which we can use to specify the type of standard errors that we want to correct for.
The assumption of homoskedasticity does not need to be met in order to run a logistic regression. So I will run a “gaussian” generalised linear model (i.e. a linear model) to show the impact of changing the robust argument.
We suffer from heteroskedasticity when the variance of the errors in our model varies (i.e. is not consistently random) across observations. It causes inefficient estimators and means we cannot trust our p-values.
We can set the robust argument to “HC1”. This is the default standard error type that Stata gives.
Set it to “HC3” to see the default standard error that we get with the sandwich package in R.
So we run a simple regression to see the relationship between the freedom from torture scale and the three independent variables in the model.
summary(model_glm1 <- glm(freedom_torture ~ civil_lib + exec_bribe + judicial_corr, data = vdem90, family = "gaussian"))
Now I run the same summ() function but just change the robust argument:
First with no standard error correction. This means the standard errors are calculated with maximum likelihood estimation (MLE). The main problem with MLE is that it assumes a normal distribution of the errors in the model.
summ(model_glm1, vifs = TRUE)
Next with the default Stata robust argument:
summ(model_glm1, vifs = TRUE, robust = "HC1")
And last with the default from R’s sandwich package:
summ(model_glm1, vifs = TRUE, robust = "HC3")
If we compare the standard errors in the three models, they are the highest (i.e. most conservative) with the HC3 robust correction. Both robust arguments cause a 0.01 increase in the p-value, but this is so small that it does not affect the eventual significance level (both under 0.05!)
Next, to graph a map to look at colonialism in Asia, we can extract countries according to the subregion variable from the rnaturalearth package and graph.
In this post, we can compare countries on the left – right political spectrum and graph the trends.
In the European Social Survey, they ask respondents to indicate where they place themselves on the political spectrum with this question: “In politics people sometimes talk of ‘left’ and ‘right’. Where would you place yourself on this scale, where 0 means the left and 10 means the right?”
library(ggthemes)
library(ggimage)
lrscale_graph <- round_df %>%
dplyr::filter(country == "IE" | country == "GB" | country == "FR" | country == "DE") %>%
ggplot(aes(x= round, y = mean_lr, group = country)) +
geom_line(aes(color = factor(country)), size = 1.5, alpha = 0.5) +
ggimage::geom_flag(aes(image = country), size = 0.04) +
scale_color_manual(values = my_palette) +
scale_x_discrete(name = "Year", limits=c("2002","2004","2006","2008","2010","2012","2014","2016","2018")) +
labs(title = "Where would you place yourself on this scale,\n where 0 means the left and 10 means the right?",
subtitle = "Source: European Social Survey, 2002 - 2018",
fill="Country",
x = "Year",
y = "Left - Right Spectrum")
lrscale_graph + guides(color=guide_legend(title="Country")) + theme_economist()
The European Social Survey (ESS) measures attitudes in thirty-ish countries (depending on the year) across the European continent. It has been conducted every two years since 2002.
The survey consists of a core module and two or more ‘rotating’ modules, on social and public trust; political interest and participation; socio-political orientations; media use; moral, political and social values; social exclusion, national, ethnic and religious allegiances; well-being, health and security; demographics and socio-economics.
So lots of fun data for political scientists to look at.
install.packages("essurvey")
library(essurvey)
The very first thing you need to do before you can download any of the data is set your email address.
set_email("rforpoliticalscience@gmail.com")
Don’t forget the email address goes in as a string in “quotations marks”.
Show what countries are in the survey with the show_countries() function.
It’s important to know that country names are case sensitive and you can only use the name printed out by show_countries(). For example, you need to write “Russian Federation” to access Russian survey data; if you write “Russia”…
Using these country names, we can download specific rounds or waves (i.e. survey years) with import_country(). We have the option to choose the two most recent rounds, the 8th (from 2016) and the 9th (from 2018).
ire_data <- import_all_cntrounds("Ireland")
The resulting data comes in the form of a list of nine data.frames, one for each round.
These rounds correspond to the following years:
ESS Round 9 – 2018
ESS Round 8 – 2016
ESS Round 7 – 2014
ESS Round 6 – 2012
ESS Round 5 – 2010
ESS Round 4 – 2008
ESS Round 3 – 2006
ESS Round 2 – 2004
ESS Round 1 – 2002
I want to compare the first round and most recent round to see if Irish people’s views have changed since 2002. In 2002, Ireland was in the middle of an economic boom that we called the “Celtic Tiger”. People did mad things like buy panini presses and second houses in Bulgaria to resell. Then the 2008 financial crash hit the country very hard.
Irish people during the Celtic Tiger:
Irish people after the Celtic Tiger crash:
Ireland in 2018 was a very different place. So it will be interesting to see if these social changes translated into attitude changes.
First, we use the import_country() function to download data from ESS. Specify the country and rounds you want to download.
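A sketch of that call, matching the rounds compared in this post (the registered email is a placeholder):

```r
library(essurvey)

set_email("your@email.here")  # must be registered with the ESS website

# Download round 1 (2002) and round 9 (2018) for Ireland
ire <- import_country(country = "Ireland", rounds = c(1, 9))
```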
The resulting ire object is a list, so we’ll need to extract the two data.frames from the list:
ire_1 <- ire[[1]]
ire_9 <- ire[[2]]
The exact same questions are not asked every year in ESS; there are rotating modules, sometimes questions are added or dropped. So to merge round 1 and round 9, first we find the common columns with the intersect() function.
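A sketch of that step, reusing the ire_1 and ire_9 objects extracted above (the att_df name matches the object used later in this post, but the rbind step is an assumption about how the rounds were stacked):

```r
# Columns present in both round 1 and round 9
common_cols <- intersect(names(ire_1), names(ire_9))

# Stack the two rounds using only those shared columns
att_df <- rbind(ire_1[, common_cols], ire_9[, common_cols])
```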
All the variables in the dataset are a special class called “haven_labelled”. So we must convert them to numeric variables with a quick function. We exclude the first variable because we want to keep the country name as a string character variable.
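One way to do that conversion in a single step is with across() from dplyr (a sketch; this assumes the country name is the first column, as described above):

```r
library(dplyr)

# Convert every haven_labelled column to plain numeric,
# skipping the first column (country name stays a character string)
att_df <- att_df %>%
  mutate(across(-1, as.numeric))
```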
We can look at the distribution of our variables and count how many missing values there are with the skim() function from the skimr package
library(skimr)
skim(att_df)
We can run a quick t-test to compare the mean attitudes to immigrants on the statement: “Immigrants make country worse or better place to live” across the two survey rounds.
Lower scores indicate an attitude that immigrants undermine Ireland’s quality of life and higher scores indicate agreement that they enrich it!
t.test(att_df$imm_qual_life ~ att_df$round)
In future blog, I will look at converting the raw output of R into publishable tables.
The results of the independent-sample t-test show that if we compare Ireland in 2002 and Ireland in 2018, there has been a statistically significant increase in positive attitudes towards immigrants and belief that Ireland’s quality of life is more enriched by their presence in the country.
As I am currently an immigrant in a foreign country myself, I am glad to come from a country that sees the benefits of immigrants!
If we load the ggpubr package, we can graphically look at the difference in mean attitude scores.
library(ggpubr)
box1 <- ggpubr::ggboxplot(att_df, x = "round", y = "imm_qual_life", color = "round", palette = c("#d11141", "#00aedb"),
ylab = "Attitude", xlab = "Round")
box1 + stat_compare_means(method = "t.test")
It’s not the most glamorous graph but it conveys the shift in Ireland to more positive attitudes to immigration!
I suspect that a country’s economic growth correlates with attitudes to immigration.
The geom_rect() function graphs the coloured rectangles on the plot. I take colours from this color-hex website; the green rectangle for times of economic growth and red for times of recession. Make sure the geom_rect() layer comes before the geom_line().
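A sketch of that layer ordering. The data frame, column names, and exact year cut-offs are placeholders; annotate("rect", ...) draws the same kind of rectangle layer without repeating the rectangle for every row of the data:

```r
library(ggplot2)

ggplot(ire_att_df, aes(x = year, y = imm_attitude)) +
  # rectangles go first so the trend line is drawn on top of them
  annotate("rect", xmin = 2002, xmax = 2008, ymin = -Inf, ymax = Inf,
           fill = "#00b159", alpha = 0.3) +  # growth years
  annotate("rect", xmin = 2008, xmax = 2013, ymin = -Inf, ymax = Inf,
           fill = "#d11141", alpha = 0.3) +  # recession years
  geom_line(size = 1.5)
```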
And we can see that there is a relationship between attitudes to immigrants in Ireland and Irish GDP growth. When GDP is growing, Irish people see that immigrants improve quality of life in Ireland and vice versa. The red section of the graph corresponds to the financial crisis.
We can now add the geom_flag() function to the graph. The y = -50 prevents the flags overlapping with the bars and places them beside their name label. The image argument takes the iso2 variable.
Quick tip: with the reorder() function, if we wanted descending order (rather than ascending order) of ODA amounts, we would put a minus sign in front of oda_per_capita in the reorder() call for the x axis value.
oda_bar <- oda %>%
  ggplot(aes(x = reorder(donor, oda_per_capita), y = oda_per_capita, fill = continent)) +
  geom_flag(y = -50, aes(image = iso2)) +
  geom_bar(stat = "identity") +
  labs(title = "ODA donor spending",
       subtitle = "Source: OECD's Development Assistance Committee, 2019",
       x = "Donor Country",
       y = "ODA per capita")
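To see the descending-order tip in action, the only change is the minus sign inside reorder() (a sketch reusing the same oda data frame):

```r
library(tidyverse)

# descending order: note the minus sign in front of oda_per_capita
oda %>%
  ggplot(aes(x = reorder(donor, -oda_per_capita),
             y = oda_per_capita, fill = continent)) +
  geom_bar(stat = "identity")
```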
The fill argument categorises the continents of the ODA donors. Sometimes I take my hex colors from the https://www.color-hex.com/ website.
We can all agree that Wikipedia is often our go-to site when we want to get information quickly. When we're doing IR or Poli Sci research, Wikipedia will most likely have the most up-to-date data compared to other databases on the web, which can quickly become out of date.
So in R, we can scrape a table from Wikipedia and turn it into a data.frame with the rvest package.
First, we copy and paste the Wikipedia page we want to scrape into the read_html() function as a string:
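A quick sketch of that step (the URL below is illustrative; paste in whichever Wikipedia page you want to scrape):

```r
library(rvest)

# URL is an example -- swap in your target Wikipedia page
nato_members <- read_html("https://en.wikipedia.org/wiki/Member_states_of_NATO")
```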
Next, we save all the tables on the Wikipedia page as a list. Set header = TRUE.
nato_tables <- nato_members %>% html_table(header = TRUE, fill = TRUE)
The table that I want is the third table on the page, so we use [[double brackets]] to access the third element of the list.
nato_exp <- nato_tables[[3]]
The dataset is not perfect, but it is handy to have access to data this up-to-date. It comes from the most recent NATO report, published in 2019.
There are some problems we will have to fix:
The first row is a messy replication of the header / more information across two cells in Wikipedia.
The headers are long and convoluted.
There are a few values entered as the string "N/A", which R reads as text rather than as missing values.
All the numbers have commas, so R treats the numeric columns as strings.
There are a few NA values that I would not want to impute because they are probably zero. Iceland has no armed forces and manages only a small coast guard. North Macedonia joined NATO in March 2020, so some of its data is not yet complete.
So first, let’s do some quick data cleaning:
Clean the variable names to remove symbols and add underscores with a function from the janitor package.
Next, turn all the "N/A" strings into real NA values. The na_strings object we create can be reused for other varieties of pesky missing-data labels, not just the "N/A" string.
Remove all the commas from the number columns and convert the character strings to numeric values with a quick function we apply to all numeric columns in the data.frame.
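The three cleaning steps above can be sketched in one pipeline. This is a sketch, assuming the scraped table is called nato_exp with a member_state column; adjust the na_strings vector and column names to your data:

```r
library(tidyverse)
library(janitor)

# labels that should be treated as missing values
na_strings <- c("N/A", "NA", "n/a", "-")

nato_exp <- nato_exp %>%
  slice(-1) %>%        # drop the messy first row
  clean_names() %>%    # tidy the long, convoluted headers
  # turn "N/A" (and friends) into real NA values
  mutate(across(everything(), ~ replace(.x, .x %in% na_strings, NA))) %>%
  # strip commas and convert the number columns to numeric
  mutate(across(-member_state, ~ as.numeric(gsub(",", "", .x))))
```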
Next, we can calculate the average NATO score of all the countries (excluding the member_state variable, which is a character string).
We’ll exclude the NATO total column (as it is not a member_state but an aggregate of them all) and the data about Iceland and North Macedonia, which have missing values.
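A sketch of that averaging step, assuming the row labels below match how they appear in the member_state column:

```r
library(tidyverse)

nato_avg <- nato_exp %>%
  # drop the aggregate row and the two countries with missing values
  filter(!member_state %in% c("NATO total", "Iceland", "North Macedonia")) %>%
  # average every numeric column; member_state is skipped automatically
  summarise(across(where(is.numeric), ~ mean(.x, na.rm = TRUE)))
```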
library(WDI)
library(tidyverse)
library(magrittr) # for pipes
library(ggthemes)
library(rnaturalearth)
# to create maps
library(viridis) # for pretty colors
We will use this package to really quickly access all the indicators from the World Bank website.
Below is a screenshot of the World Bank’s data page where you can search through all the data with nice maps and information about their sources, their available years and the unit of measurement et cetera.
In R when we download the WDI package, we can download the datasets directly into our environment.
With the WDIsearch() function we can look for the World Bank indicator.
For this blog, we want to map out how dependent countries are on oil. We will download the dataset that measures oil rents as a percentage of a country’s GDP.
WDIsearch('oil rent')
The output is:
indicator             name
"NY.GDP.PETR.RT.ZS"   "Oil rents (% of GDP)"
Copy the indicator string and paste it into the WDI() function. The country codes are iso2 codes; you can input as many as you want inside c().
If you want all countries that the World Bank has, do not add country argument.
We can compare Iran and Saudi Arabian oil rents from 1970 until the most recent value.
oil_data = WDI(indicator = 'NY.GDP.PETR.RT.ZS', country = c('IR', 'SA'), start = 1970, end = 2019)
And graph the output. It all only takes a few steps.
my_palette = c("#DA0000", "#239f40")
#both the hex colors are from the maps of the countries
oil_graph <- ggplot(oil_data, aes(year, NY.GDP.PETR.RT.ZS, color = country)) +
  geom_line(size = 1.4) +
  labs(title = "Oil rents as a percentage of GDP",
       subtitle = "In Iran and Saudi Arabia from 1970 to 2019",
       x = "Year",
       y = "Average oil rent as percentage of GDP",
       color = " ") +
  scale_color_manual(values = my_palette)
oil_graph +
ggthemes::theme_fivethirtyeight() +
theme(
plot.title = element_text(size = 30),
axis.title.y = element_text(size = 20),
axis.title.x = element_text(size = 20))
For some reason the World Bank does not have data for Iran for most of the early 1990s. But I would imagine that they broadly follow the trends in Saudi Arabia.
I added the flags manually after I got frustrated with geom_flag(). It is something I will need to figure out for a future blog post!
It is really something that in the late 1970s, oil accounted for over 80% of all Saudi Arabia’s Gross Domestic Product.
Now we see both countries rely on a far smaller percentage. Because oil prices are volatile, climate change is a constant threat and resource exhaustion is on the horizon, both countries have adjusted their policies in an attempt to diversify their sources of income.
Next we can use the World Bank data to create maps and compare regions on any World Bank scores.
We will compare all Asian and Middle Eastern countries with regard to all natural rents (not just oil) as a percentage of their GDP.
library(Quandl)
library(forecast) #for time series analysis and graphing
The website Quandl.com is a great resource I came across a while ago, where you can download heaps of datasets for variables such as energy prices, stock prices, World Bank indicators and OECD data.
In order to download the data from the site, you need to first set up an account on the website, and indicate your intended use for the data.
Back in R, you call the API key with the Quandl.api_key() function and then you can directly download data!
Quandl.api_key("StRiNgOfNuMbErSaNdLeTtErs")
Now, I click to search only through the free datasets. I have no idea how much a subscription costs but I imagine it is not cheap.
You can browse through the database and when you find the dataset you want, you copy and paste the string code into Quandl() function.
We can choose the class of the time series object we will download with the type = argument.
We can also toggle the start_date and end_date of the time series.
So I will download employment data for Ireland from 1980 until 2019 as a zoo object. We can check the Quandl page for the Irish employment data to learn about the data source and the unit of measurement.
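A sketch of that download call (the dataset code "ODA/IRL_LE" is my guess at an illustrative code; copy the real one from the dataset's Quandl page):

```r
library(Quandl)
library(zoo)

# dataset code is illustrative -- paste in the code from the Quandl page
emp <- Quandl("ODA/IRL_LE",
              start_date = "1980-01-01",
              end_date   = "2019-12-31",
              type       = "zoo")
```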
Click here to check out the Quandl CRAN PDF documentation and learn more about the different arguments you can use with this function. Here are the generic arguments you can play with when downloading your dataset:
autoplot(emp[,"V1"]) +
ggtitle("Employment level in Ireland") +
labs("Source: International Monetary Fund data") +
xlab("Year") +
ylab("Employed people (in millions)")
The 1980s were a rough time for unemployment in Ireland. Also the 2008 recession had a sizeable impact on unemployment too. I am afraid how this graph will look with 2020 data.
Visual representation of this year.
Next, we can visually examine the autocorrelation plot.
With time series data, it is natural to assume that the value at the current time period is highly related (i.e. serially correlated) to its value in the previous period. This is autocorrelation, and it can be problematic when we want to forecast or run statistics. This is because it violates the assumption that values are independent of each other.
emp_ts <- ts(emp)
forecast::ggAcf(emp_ts)
There is very high autocorrelation in employment level in Ireland over the years.
In the next blog post, we will look at how to correct for autocorrelation in time series data.
What is a shiny app, you ask? Click to look at a quick Youtube explainer. It’s basically a handy GUI for R.
When we feed a panel data.frame into the ExPanD() function, a new screen pops up from the R IDE (in my case, RStudio) and we can interactively toggle various options and settings to run a bunch of statistical and visualisation analyses.
Click here to see how to convert your data.frame to pdata.frame object with the plm package.
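In short, the conversion looks like this (a sketch, assuming your data.frame is called mil_df with "country" and "year" identifier columns):

```r
library(plm)

# index = c(individual, time) tells plm the panel structure
mil_pdf <- pdata.frame(mil_df, index = c("country", "year"))
class(mil_pdf)  # now a pdata.frame, ready for ExPanD()
```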
Be careful your pdata.frame is not too large with too many variables in the mix. This will make ExPanD upset enough to crash. Which, of course, I learned the hard way.
Also I don’t know why there are random capitalizations in the PaCkaGe name. Whenever I read it, I think of that Sponge Bob meme.
If anyone knows why they capitalised the package this way, please let me know!
So to open up the new window, we just need to feed the pdata.frame into the function:
ExPanD(mil_pdf)
On my computer, I got error messages for the graphing sections because I had an old version of the Cairo package. To rectify this, I had to first install a source version of Cairo and restart my R session. Then the error-message gods were placated and the errors went away.
install.packages("Cairo", type="source")
Then press command + shift + F10 to restart R session
library(Cairo)
You may not have this problem, so just ignore this section if you have an up-to-date version of the necessary packages.
When the new window opens up, the first section allows you to filter subsections of the panel data.frame, similar to the filter() function in the dplyr package.
For example, I can look at just the year 1989:
But let's look at the full sample.
We can toggle with variables to look at mean scores for certain variables across different groups. For example, I look at physical integrity scores across regime types.
Purple plot: closed autocracy
Turquoise plot: electoral autocracy
Khaki plot: electoral democracy
Peach plot: liberal democracy
The plots show that liberal democracies have a high mean physical integrity score and less variance. However, with the closed and electoral autocracies, the variance is greater.
We can look at a visualisation of the correlation matrix between the variables in the dataset.
Next we can look at a scatter plot, with option for loess smoother line, to graph the relationship between democracy score and physical integrity scores. Bigger dots indicate larger GDP level.
Last we can run regression analysis, and add different independent variables to the model.
We can add fixed effects.
And we can subset the model by groups.
The first column, the full sample is for all regions in the dataset.
“Nodes” designate the vertices of a network, and “edges” designate its ties. Vertices are accessed using the V() function while edges are accessed with the E() function. This igraph object has 232 edges and 16 vertices over the four years.
Furthermore, the igraph object has a name attribute as one of its vertices properties. To access, type:
V(countries_ig)$name
Which prints off all the countries in the ww1 dataset; these are all the countries that engaged in militarized interstate disputes between 1914 and 1918.
Next we can fit an algorithm to modify the graph distances. According to our pal Wikipedia, force-directed graph drawing algorithms are a class of algorithms for drawing graphs in an aesthetically-pleasing way. Their purpose is to position the nodes of a graph in two-dimensional or three-dimensional space so that all the edges are of more or less equal length and there are as few crossing edges as possible, by assigning forces among the set of edges and the set of nodes, based on their relative positions!
We can do this in one simple step by feeding the igraph into the algorithm function.
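A sketch of that step using igraph's Fruchterman-Reingold layout, one common force-directed algorithm (the plotting parameters are just illustrative defaults):

```r
library(igraph)

# compute force-directed coordinates for each vertex
layout_fr <- layout_with_fr(countries_ig)

plot(countries_ig,
     layout       = layout_fr,
     vertex.size  = 8,
     vertex.label = V(countries_ig)$name)
```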
First, we access and store a map object from the rnaturalearth package, with all the spatial information it contains. We specify returnclass = "sf", which will return a dataframe with simple features information.
SF?
Simple features or simple feature access refers to a formal standard (ISO 19125-1:2004) that describes how objects in the real world can be represented in computers, with emphasis on the spatial geometry of these objects. Our map has these attributes stored in the object.
With the ne_countries() function, we get the borders of all countries.
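The call itself is one line (scale = "medium" is my choice here; "small" and "large" are also available):

```r
library(rnaturalearth)
library(sf)

# sf object with one row per country / territory and a geometry column
map <- ne_countries(scale = "medium", returnclass = "sf")
```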
This map object comes with lots of information about 241 countries and territories around the world.
In total, it has 65 columns, mostly with different variants of the names / locations of each country / territory, for example ISO codes for each country. Further into the dataset, there are a few other variables such as GDP and population estimates for each country. So it is a handy source of data.
However, I want to use values from a different source; I have a freedom_df dataframe with a freedom of association variable.
The freedom of association index broadly captures the extent to which parties, including opposition parties, are allowed to form and participate in elections, and the extent to which civil society organisations are able to form and operate freely in each country.
So, we can merge them into one dataset.
Before that, I want to map only the scores from the most recent year. So we take out only the values from 2019 (don't forget the comma sandwiched between the round bracket and the square bracket):
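A sketch of the merge and the 2019 subset. The join columns (iso_a2 in the map, iso2 in freedom_df) are assumptions about how the two datasets carry their country codes; adjust by.x / by.y to your actual columns:

```r
# join the freedom scores onto the map by ISO country code
map19 <- merge(map, freedom_df,
               by.x = "iso_a2", by.y = "iso2",
               all.x = TRUE)

# keep only the 2019 scores -- note the comma before the closing bracket
map19 <- map19[map19$year == 2019, ]
```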
We’re all ready to graph the map. We can add the freedom of association variable into the aes() argument of the geom_sf() function. Again, the sf refers to simple features with geospatial information we will map out.
assoc_graph <- ggplot(data = map19) +
  geom_sf(aes(fill = freedom_association_index),
          position = "identity") +
  labs(fill = 'Freedom of Association Index') +
  scale_fill_viridis_c(option = "viridis")
The scale_fill_viridis_c(option = "viridis") changes the color spectrum of the main variable.
Other options include:
"viridis"
"magma"
"plasma“
And various others. Click here to learn more about this palette package.
Finally we call the new graph stored in the assoc_graph object.
I use the theme_map() function from the ggthemes package to make the background very clean and to move the legend down to the bottom left of the screen, where it takes up the otherwise very empty Pacific ocean / Antarctic expanse.
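The finishing touches look like this (the legend coordinates are my rough placement, tweak to taste):

```r
library(ggthemes)

assoc_graph +
  theme_map() +
  # relative (x, y) position: bottom-left corner, over the empty ocean
  theme(legend.position = c(0.05, 0.05))
```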
And there we have it, a map of countries showing the Freedom of Association index across countries.
The index broadly captures the extent to which parties, including opposition parties, are allowed to form and participate in elections, and the extent to which civil society organisations are able to form and operate freely.
Yellow colors indicate more freedom, green colors indicate middle scores and blue colors indicate low levels of freedom.
Some countries, such as Germany and Yemen, have missing data for various reasons. A true perfectionist would track down and fill in the data manually.