Correct for heteroskedasticity in OLS with sandwich package in R

Packages we will need:
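install.packages("sandwich")
install.packages("lmtest")
install.packages("stargazer")

library(sandwich)   # heteroskedasticity-consistent covariance estimators (vcovHC)
library(lmtest)     # coeftest() and bptest()
library(stargazer)  # prints regression models side by side in one table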


If our OLS model demonstrates a high level of heteroskedasticity (i.e. the error term of our model is not randomly distributed across observations and there is a discernible pattern in the error variance), we run into problems.

Why? Because this means OLS will use sub-optimal estimators based on incorrect assumptions, and the standard errors computed using these flawed least squares estimators are more likely to be under-estimated.

Since standard errors are necessary to compute our t-statistic and arrive at our p-value, these inaccurate standard errors are a problem.

Click here to check for heteroskedasticity in your model with the lmtest package.

To correct for heteroskedasticity in your model, you need the sandwich package and the lmtest package to employ the vcovHC() function.


First, let’s fit a simple OLS regression.

summary(free_exp_model <- lm(freedom_expression ~ free_elections + deliberative_index, data = data_1990))

We can see that there is a small star beside the main independent variable of interest! Success!


We have significance.

Thus, we could say that the freer and fairer a country's elections, the higher that country's mean freedom of expression index score.

This ties in with a very minimalist understanding of democracy. If a country has elections and the populace can voice their choice of leadership, this will help set the scene for a more open society.

However, it is naive to look only at the p-value of any given coefficient in a regression output. If we run some diagnostic analyses and look at the relationship graphically, we may need to re-examine this seemingly significant relationship.

Can we trust the standard error of 0.087 that our OLS regression calculated? Is it based on sound assumptions?


First, let's look at the residuals. Can we assume that the variance of the error is equal across all observations?

If we examine the residuals (the first graph), we see that there is actually a tapered, fan-like pattern in the error variance. As we move across the x axis, the variance along the y axis gets steadily smaller.

The error does not look random.


Let's run a Breusch-Pagan test (from the lmtest package) to check our suspicion of heteroskedasticity.
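bptest(free_exp_model)   # Breusch-Pagan test from the lmtest package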


We can reject the null hypothesis that the error variance is homoskedastic.

So the model does suffer from heteroskedasticity. We cannot trust those stars in the regression output!


In order to fix this and make our p-values more accurate, we need the sandwich package to feed the vcovHC() adjustment into the coeftest() function.

vcovHC stands for Variance-Covariance matrix, Heteroskedasticity-Consistent.

With the stargazer package (which prints all the models in one table), we can compare the plain free_exp_model with no adjustment against four different variations of the vcovHC adjustment, each using a different formula (as indicated in the type argument below).

stargazer(free_exp_model,
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC0")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC1")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC2")),
          coeftest(free_exp_model, vcovHC(free_exp_model, type = "HC3")),
          type = "text")

Looking at the standard errors in the (brackets) across the OLS and the coeftest models, we can see that the corrected standard errors are all almost double the standard error from the original OLS regression.

There is only a tiny difference between the different Heteroskedasticity Consistent (HC) types.

The significant p-value for the free and fair elections variable disappears when we apply the vcovHC correction.


The actual coefficient stays the same regardless of whether we use no correction or any one of the correction arguments.

Which HC estimator should I use in my vcovHC() function?

The default in the sandwich package is HC3.

Stata users will be familiar with HC1, as it is the default robust standard error correction when you add robust at the end of the regression command.

The difference between them is not very large.

The estimator HC0 was suggested in the econometrics literature by White in 1980 and is justified by asymptotic arguments.

For small sample sizes, the standard errors from HC0 are quite biased, usually downward, and this results in overly liberal inferences in regression models (Bera, Suprayitno & Premaratne, 2002). But with HC0, the bias shrinks as your sample size increases.

The estimator types HC1, HC2 and HC3 were put forward by MacKinnon and White (1985) to improve the performance in small samples.

Long and Ervin (2000) furthermore argue that HC3 provides the best performance in small samples, as it gives less weight to influential observations in the model.

In our freedom of expression regression, the HC3 estimate was the most conservative in its standard error calculations. However, the difference between the approaches did not change the conclusion: ultimately, the claim that the main independent variable of interest in this analysis – free and fair elections – explains variance in the dependent variable – freedom of expression – does not find support in the model.

Click here to read an article by Hayes and Cai (2007) which discusses the matrix formulae and empirical differences between the different calculation approaches taken by the different types. Unfortunately it is all ancient Greek to me.


Bera, A. K., Suprayitno, T., & Premaratne, G. (2002). On some heteroskedasticity-robust estimators of variance–covariance matrix of the least-squares estimators. Journal of Statistical Planning and Inference, 108(1-2), 121-136.

Hayes, A. F., & Cai, L. (2007). Using heteroskedasticity-consistent standard error estimators in OLS regression: An introduction and software implementation. Behavior Research Methods, 39(4), 709-722.

Long, J. S., & Ervin, L. H. (2000). Using heteroscedasticity consistent standard errors in the linear regression model. The American Statistician, 54(3), 217-224.

MacKinnon, J. G., & White, H. (1985). Some heteroskedasticity-consistent covariance matrix estimators with improved finite sample properties. Journal of Econometrics, 29(3), 305-325.

Check for heteroskedasticity in OLS with lmtest package in R

One core assumption when calculating ordinary least squares regressions is that all the random variables in the model have equal variance around the best fitting line.

Essentially, when we run an OLS, we expect that the error terms have no fan pattern.

Example of homoskedasticity

So let's look at an example of this assumption being satisfied. I run a simple regression to see whether there is a relationship between media censorship and civil society repression in 178 countries in 2010.

ggplot(data_2010, aes(media_censorship, civil_society_repression)) +
      geom_point() +
      geom_smooth(method = "lm") +
      geom_text(size = 3, nudge_y = 0.1, aes(label = country))

If we run a simple regression:

summary(repression_model <- lm(media_censorship ~ civil_society_repression, data = data_2010))
stargazer(repression_model, type = "text")

This is pretty common sense; a country that represses its citizens in one sphere is more likely to repress in other areas. In this case repressing the media correlates with repressing civil society.

We can plot the residuals of the above model with the autoplot() function from the ggfortify package.
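library(ggfortify)
autoplot(repression_model)   # residuals vs fitted, Q-Q, scale-location and leverage plots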


Nothing unusual appears to jump out at us with regard to evidence for heteroskedasticity!

In the first Residuals vs Fitted plot, we can see that the blue line does not drastically diverge from the dotted line (which indicates a residual value of 0).

The third plot, Scale-Location, again shows no drastic instances of heteroskedasticity. We want to see the blue line relatively horizontal, and there is no clear pattern in the distribution of the residual points.

In the Residual vs. Leverage plot (plot number 4), the high leverage observation 19257 is North Korea! A usual suspect when we examine model outliers.

While it is helpful to visually see the residuals plotted out, a more objective test can help us find whether the model is indeed free from heteroskedasticity problems.

For this we need the Breusch-Pagan test for heteroskedasticity from the lmtest package.
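bptest(repression_model)                      # studentized Breusch-Pagan test (the default)
bptest(repression_model, studentize = FALSE)  # non-studentized version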


The default in R is the studentized Breusch-Pagan test. However, if you add the studentize = FALSE argument, you get the non-studentized version.

The null hypothesis of the Breusch-Pagan test is that the variance in the model is homoskedastic.

With our repression_model, we cannot reject the null, so we can say with confidence that there is no major heteroskedasticity issue in our model.

The non-studentized Breusch-Pagan test makes a very big assumption: that the error term comes from a Gaussian distribution. Since this assumption is usually hard to verify, the default bptest() in R "studentizes" the test statistic and provides asymptotically correct significance levels for a broader class of error distributions.

Why do we care about heteroskedasticity?

If our model demonstrates a high level of heteroskedasticity (i.e. the random variables have non-random variation across observations), we run into problems.


  • OLS uses sub-optimal estimators based on incorrect assumptions, and
  • the standard errors computed using these flawed least squares estimators are more likely to be under-valued. Since standard errors are necessary to compute our t-statistics and arrive at our p-values, these inaccurate standard errors are a problem.

Example of heteroskedasticity

Let’s look at an example of this homoskedasticity assumption NOT being satisfied.

I run a simple regression to see whether there is a relationship between democracy score and respect for individuals’ private property rights in 178 countries in 2010.

When you are examining democracy as the main dependent variable, heteroskedasticity is a common complaint. This is because highly democratic countries are usually quite similar to one another, whereas autocracies differ widely and seem to march to the beat of their own despotic drum. We cannot assume that the error variance is equal across different regime types.

First, let’s have a look at the relationship.

prop_graph <- ggplot(vdem2010, aes(v2xcl_prpty, v2x_api)) +
                     geom_point(size = 3, aes(color = factor(regime_type))) +
                     geom_smooth(method = "lm")
prop_graph + scale_colour_manual(values = c("#D55E00", "#E69F00", "#009E73", "#56B4E9"))

Next, let’s fit the model to examine the relationship.

summary(property_model <- lm(property_score ~ democracy_score, data = data_2010))
stargazer(property_model, type = "text")

To plot the residuals (and other diagnostic graphs) of the model, we can use the autoplot() function to look at the relationship in the model graphically.
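library(ggfortify)
autoplot(property_model)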


Graph number 1 plots the residuals against the fitted values, and we can see that lower values on the x-axis (fitted values) correspond with greater spread on the y-axis. Lower democracy scores relate to greater error in property rights index scores. Also, the blue line does not lie horizontally near the dotted line. It appears we have non-random error patterns.

Examining the Scale-Location graph (number 3), we can see that the blue line is not horizontal.

Again, interpreting the graph can be an imprecise art. So a more objective approach may be to run the bptest().
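bptest(property_model)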


Since the p-value is far smaller than 0.05, we can reject the null of homoskedasticity.

Rather, we have evidence that our model suffers from heteroskedasticity. The standard errors in the regression output appear smaller than they actually are. This inflates our t-statistic, and we cannot trust our p-value.

In the next blog post, we can look at ways to rectify this violation of homoskedasticity and to ensure that our regression output has more accurate standard errors and therefore more accurate p-values.

Click here to use the sandwich package to fix heteroskedasticity in the OLS regression.

Impute missing values with MICE package in R

Political scientists are beginning to appreciate that multiple imputation represents a better strategy for analysing missing data than the widely used method of listwise deletion.

A very clear demonstration of this was a 2016 article by Ranjit Lall, a political economy professor at the LSE. He essentially went back and compared the empirical results produced by multiple imputation with those of the commonplace listwise deletion in political science.

He did this by re-running comparative political economy studies published over a five-year period in International Organization and World Politics.

Shockingly, in almost half of the studies he re-ran, Lall found that key results "disappeared" (by conventional statistical standards) when reanalyzed with multiple imputation rather than listwise deletion.

This is probably because it is erroneous to assume that missing data are randomly and evenly distributed across the overall dataset.

Listwise deletion involves omitting observations with missing values on any variable. This ultimately produces inefficient inferences as it is difficult to believe the assumption that the pattern of missing data is actually completely random.

This blog post will demonstrate a package for imputing missing data in a few lines of code.

Unlike what I initially thought, the name has nothing to do with the tiny rodent; MICE stands for Multivariate Imputation via Chained Equations.

Rather than abruptly deleting missing values, imputation uses information given from the non-missing predictors to provide an estimate of the missing values.

The mice package imputes in two steps: first, mice() builds the imputation model; then complete() generates the final dataset.

The mice() function produces many complete copies of a dataset, each with different imputations of the missing data. Then the complete() function returns these data sets, with the default being the first.

So first install and load the package:
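install.packages("mice")
library(mice)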


You can check whether any variables in your potential model have NAs (i.e. missing values) with the anyNA() function.
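anyNA(data)          # TRUE if the data.frame contains any missing values
sapply(data, anyNA)  # check each variable separately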


If there are missing values, then you can go on ahead with imputing them. First create a new object to store the multiple imputed versions of your dataset.

This iteration process takes a while, depending on how many variables you have in your data.frame. My data.frame had about six variables, so this stage took about three or four minutes to complete. I was distracted by YouTube for a bit, so I am not exactly sure. I imagine a very large dataset with hundreds of variables would make my computer freak out.

All the variables with missing values in my data.frame were continuous numerical values. I chose method = "cart", which stands for classification and regression trees, an approach that appears quite versatile.

imputed_data <-  mice(data, method="cart")

A CART is a predictive algorithm that determines how a given variable’s values can be predicted based on other values.

It is composed of decision trees where each fork is a split in a predictor variable and each node at the end has a prediction for the target variable.

After this iterative process is complete and the command has finished running, we then use the complete() function and assign the resulting data.frame to a new object. I call it full_data.

full_data <- complete(imputed_data) 

I ran a quick regression to see what effect the new fully imputed data.frame had on the relationship. I could have taken a bit longer and found a result that changed as a result of the data imputation step (as was shown in the above-mentioned Lall (2016) paper), but I decided to just stick with my first shot.
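As a sketch of that comparison, with placeholder variable names (y and x stand in for my actual dependent and independent variables):

model_listwise <- lm(y ~ x, data = data)       # listwise deletion: rows with NAs dropped
model_imputed  <- lm(y ~ x, data = full_data)  # the same model on the imputed data
stargazer(model_listwise, model_imputed, type = "text")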

We can see that the model with the imputed values has increased the total number of observations by about 3,000 or so.

Given that I already have a very large n, it is not expected that many of the coefficients would change drastically by adding a small percentage of imputed values. However, we see that the standard error decreased (yay) and the coefficient value decreased (meh). Additionally, the R2 decreased by a tiny amount (weh).

I chose the cart method, but there are many other method options, depending on the characteristics of the data with missing values.

Built-in univariate imputation methods are:

Method         Data type   Description
pmm            any         Predictive mean matching
midastouch     any         Weighted predictive mean matching
sample         any         Random sample from observed values
cart           any         Classification and regression trees
rf             any         Random forest imputations
mean           numeric     Unconditional mean imputation
norm           numeric     Bayesian linear regression
norm.nob       numeric     Linear regression ignoring model error
norm.boot      numeric     Linear regression using bootstrap
norm.predict   numeric     Linear regression, predicted values
quadratic      numeric     Imputation of quadratic terms
ri             numeric     Random indicator for nonignorable data
logreg         binary      Logistic regression
logreg.boot    binary      Logistic regression with bootstrap
polr           ordered     Proportional odds model
polyreg        unordered   Polytomous logistic regression
lda            unordered   Linear discriminant analysis
2l.norm        numeric     Level-1 normal heteroscedastic
2l.lmer        numeric     Level-1 normal homoscedastic, lmer
2l.pan         numeric     Level-1 normal homoscedastic, pan
2l.bin         binary      Level-1 logistic, glmer
2lonly.mean    numeric     Level-2 class mean
2lonly.norm    numeric     Level-2 class normal

Make Wes Anderson themed graphs with wesanderson package in R

Well this is just delightful!


After you install the wesanderson package, you can

  1. create a ggplot2 graph object
  2. choose the Wes Anderson color scheme you want to use and create a palette object
  3. add the palette object to the graph object and behold your beautiful data

I want to examine the breakdown of how each head of state was appointed to rule the country and the type of regime. First I'll examine the breakdown in the 18th century.

To generate a vector of colors, the wes_palette() function requires:

wes_palette(name, n, type = c("discrete", "continuous"))
  • name: Name of desired palette
  • n: Number of colors desired (i.e. how many categories; in my case, there are four regime types, so n = 4).
  • type: Either “continuous” or “discrete”. Use continuous if you want to automatically interpolate between colors.

eighteenth_century <- data_1880s %>%
  filter(! %>%   # drop rows missing the two plotting variables
  filter(! %>%
  ggplot(aes(appointment)) +
  geom_bar(aes(fill = factor(regime)), position = position_stack(reverse = TRUE)) +
  theme(legend.position = "top",
        text = element_text(size = 15),
        axis.text.x = element_text(angle = -30, vjust = 1, hjust = 0))

Both the regime variable and the appointment variable are discrete categories so we can use the geom_bar() function. When adding the palette to the barplot object, we can use the scale_fill_manual() function.

eighteenth_century + scale_fill_manual(values = wes_palette("Darjeeling1", n = 4))

Now to compare the breakdown with countries in the 21st century (2000 to present)
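As a sketch, assuming an analogous dataframe for the 21st century (hypothetically named data_2000s) with the same appointment and regime variables:

twenty_first_century <- data_2000s %>%
  filter(! %>%
  filter(! %>%
  ggplot(aes(appointment)) +
  geom_bar(aes(fill = factor(regime)), position = position_stack(reverse = TRUE)) +
  theme(legend.position = "top",
        text = element_text(size = 15),
        axis.text.x = element_text(angle = -30, vjust = 1, hjust = 0))

twenty_first_century + scale_fill_manual(values = wes_palette("Darjeeling1", n = 4))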

The names of all the palettes you can enter into the wes_palette() function:
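The package exports the palette list as wes_palettes, so you can print the names directly:

names(wes_palettes)
#> e.g. "BottleRocket1", "Rushmore1", "Royal1", "Zissou1", "Darjeeling1", "FantasticFox1", "GrandBudapest1", ...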


Include country labels in a regression plot with ggplot2 package in R

Sometimes the best way to examine the relationship between our variables of interest is to plot it out and give it a good looking over. For me, it’s most helpful to see where different countries are in relation to each other and to see any interesting outliers.

For this, I can use the geom_text() function from the ggplot2 package.

I will look at the relationship between economic globalization and social globalization in OECD countries in the year 2000.

The KOF Globalisation Index, introduced by Dreher (2006), measures globalization along the economic, social and political dimensions for most countries in the world.

First, as always, we install and load the necessary package. This time, it is the ggplot2 package
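install.packages("ggplot2")
library(ggplot2)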


Next add the following code:

fin <- ggplot(oecd2000, aes(economic_globalization, social_globalization)) +
        ggtitle("Relationship between Globalization Index Scores among OECD countries in 2000") +
        scale_x_continuous("Economic Globalization Index") +
        scale_y_continuous("Social Globalization Index") +
        geom_smooth(method = "lm") +
        geom_point(aes(colour = polity_score), size = 2) +
        labs(color = "Polity Score") +
        geom_text(hjust = 0, nudge_x = 0.5, size = 4, aes(label = country))


In the aes() function, we enter the two variables we want to plot.

Then I use the next three lines to add titles to the axes and the graph.

I use the geom_smooth() function with the “lm” method to add a best fitting regression line through the points on the plot. Click here to learn more about adding a regression line to a plot.

I add a legend to examine where countries with different democracy scores (taken from the Polity Index) are located on the globalization plane. Click here to learn about adding legends.

The last line is the geom_text() function that I use to specify that I want to label each observation (i.e. each OECD country) with its name, rather than the default dataset number.

Some geom_text() arguments to use:

  • nudge_x (or nudge_y) slightly “nudge” the labels from their corresponding points to help minimise messy overlapping.
  • hjust and vjust move the text label "left", "center", "right", "bottom", "middle" or "top" relative to the point.

Yes, yes! There is a package that uses the color palettes of Wes Anderson movies to make graphs look just beautiful. Click here to use different Wes Anderson aesthetic themed graphs!

zissou_colors <- wes_palette("Zissou1", 100, type = "continuous")

fin + scale_color_gradientn(colours = zissou_colors)

Which outputs:

Interestingly, it seems that at the very bottom left hand corner of the plot (which shows the countries that are both low in economic globalization and low in social globalization), we have two OECD countries that score high on democracy – Japan and South Korea – right next to two countries that score the lowest in the OECD on democracy, Turkey and Mexico.

So it could be interesting to further examine why these countries from opposite ends of the democracy spectrum have similar patterns of low globalization. It puts a spanner in the proverbial works of my working theory that countries higher in democracy are more likely to be more globalized! What is special about these two high democracy countries that gives them such low scores on globalization?

Create facetted scatterplots with the ggplot2 package in R

If I want to graphically display the relationship between two variables, the ggplot2 package is a very handy way to produce graphs.

For example, I can use the ggplot2 package to graphically examine the relationship between civil society strength and freedom of citizens from torture. Also I can see whether this relationship is the same across regime types.

I choose one year from my dataframe to examine.

data2000 <- myPanel[which(myPanel$year == "2000"),]

Next, I install the ggplot2 package
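install.packages("ggplot2")
library(ggplot2)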


The grammar of ggplot2 includes:

  • aes() indicates how variables are mapped to visual properties or aesthetics. The first variable goes on the x-axis and the second variable goes on the y-axis.
  • geom_point() creates a scatterplot style graph. Alternatives to this are geom_line(), which creates a line plot and geom_histogram() which creates a histogram plot.

ggplot(data2000, aes(v2xcs_ccsi, v2cltort)) + geom_point() +
xlab("Civil society robustness") +
ylab("Freedom from torture")

Next we can add information on regime types, a categorical variable with four levels.

0 = closed autocracy

1 = electoral autocracy

2 = electoral democracy

3 = liberal democracy
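If regime is stored as these numeric codes, a quick sketch to attach readable labels before plotting:

data2000$regime <- factor(data2000$regime,
                          levels = c(0, 1, 2, 3),
                          labels = c("Closed autocracy", "Electoral autocracy",
                                     "Electoral democracy", "Liberal democracy"))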

In the aes() function, add colour = regime to differentiate the four categories on the graph

ggplot(data2000, aes(v2xcs_ccsi, v2x_clphy, colour = regime)) +
geom_point() +
xlab("Civil society robustness") +
ylab("Freedom from torture")

Alternatively we can use the facet_wrap( ~ regime) function to create four separate scatterplots and examine the relationship separately.

ggplot(data2000, aes(v2xcs_ccsi, v2x_clphy, colour = regime)) +
geom_point() +
facet_wrap(~regime) +
xlab("Civil society robustness") +
ylab("Freedom from torture")

Lastly, we can add a linear model line (method = "lm") with a grey standard error bar (se = TRUE) in the geom_smooth() function.

ggplot(data2000, aes(v2xcs_ccsi, v2x_clphy, colour = regime)) +
geom_point() +
facet_wrap(~regime) +
geom_smooth(method = "lm", se = TRUE) +
xlab("Civil society robustness") +
ylab("Freedom from torture")

In these graphs, we can see that as the civil society robustness score increases, the likelihood of a life free from torture increases! A pretty intuitive result, and we could argue that there is a third variable – namely strong democratic institutions – that drives this positive relationship.

The graphs break down this relationship across four different regime types, ranging from the most autocratic in the top left hand side to the most democratic in the bottom right. There is more variety in this relationship with closed autocracies (i.e. the red points), with some points deviating far from the line.

The purple graph – liberal democracies – shows a tiny amount of variance. In liberal democracies, it appears that all countries score highly in both civil society robustness and freedom from torture!

Correlates of War codes

Click here to learn how to use the countrycode() R package to add the Correlates of War code variable to a country dataset.

Abbreviation   COW Code   Country Name
USA            2          United States of America
DOM            42         Dominican Republic
TRI            52         Trinidad and Tobago
SLU            56         St. Lucia
SVG            57         St. Vincent and the Grenadines
AAB            58         Antigua & Barbuda
SKN            60         St. Kitts and Nevis
SAL            92         El Salvador
COS            94         Costa Rica
UKG            200        United Kingdom
GFR            260        German Federal Republic
GDR            265        German Democratic Republic
HSE            273        Hesse Electoral
HSG            275        Hesse Grand Ducal
MEC            280        Mecklenburg Schwerin
CZR            316        Czech Republic
PAP            327        Papal States
SIC            329        Two Sicilies
SNM            331        San Marino
BOS            346        Bosnia and Herzegovina
CAP            402        Cape Verde
STP            403        Sao Tome and Principe
EQG            411        Equatorial Guinea
CDI            437        Ivory Coast
BFO            439        Burkina Faso
SIE            451        Sierra Leone
CEN            482        Central African Republic
DRC            490        Democratic Republic of the Congo
SAF            560        South Africa
SSD            626        South Sudan
SAU            670        Saudi Arabia
YAR            678        Yemen Arab Republic
YPR            680        Yemen People’s Republic
UAE            696        United Arab Emirates
PRK            731        North Korea
ROK            732        South Korea
SRI            780        Sri Lanka
RVN            817        Republic of Vietnam
ETM            860        East Timor
PNG            910        Papua New Guinea
NEW            920        New Zealand
SOL            940        Solomon Islands
MSI            983        Marshall Islands
FSM            987        Federated States of Micronesia

Add Correlates of War codes with countrycode package in R

One problem with merging two datasets by country is that the same countries can have different names. Take for example, America. It can be entered into a dataset as any of the following:

  • USA
  • U.S.A.
  • America
  • United States of America
  • United States
  • US
  • U.S.

This can create a big problem because datasets will merge incorrectly if they think that US and America are different countries.

Correlates of War (COW) is a project founded by J. David Singer in 1963 that catalogues inter-state wars going back to 1816. The project uses a unique code for each country.

For example, America is 2.

When merging two datasets, there is a helpful R package that can convert the various names for a country into the COW code:
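install.packages("countrycode")
library(countrycode)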


To read more about the countrycode package in the CRAN PDF, click here.

First create a new name for the variable I want to make; I’ll call it COWcode in the dataset.

Then use the countrycode() function. First type in the brackets the name of the original variable that contains the list of countries in the dataset. Then finally add "", "cown". This turns the word name for each country into the numeric COW code.

dataset$COWcode <- countrycode(dataset$countryname, "", "cown")

If you want to turn a COW code back into a country name, swap the "cown" and "" arguments:

dataset$countryname <- countrycode(dataset$COWcode, "cown", "")

Now the dataset is ready to merge more easily with my other dataset on the identical country variable type!

There are many other types of codes that you can add to your dataset.

Very popular ones are the ISO-2 and ISO-3 codes. For example, if you want to add flags to your graph, you will need a two-letter code for each country (for example, Ireland is IE).

To see the list of all the COW codes, click here.

To check out the COW database website, click here.

Alternative codes to the and cown options include:

• ccTLD: IANA country code top-level domain
• English country name
• German country name
• cowc: Correlates of War character
• cown: Correlates of War numeric
• dhs: Demographic and Health Surveys Program
• ecb: European Central Bank
• eurostat: Eurostat
• fao: Food and Agriculture Organization of the United Nations numerical code
• fips: FIPS 10-4 (Federal Information Processing Standard)
• gaul: Global Administrative Unit Layers
• genc2c: GENC 2-letter code
• genc3c: GENC 3-letter code
• genc3n: GENC numeric code
• gwc: Gleditsch & Ward character
• gwn: Gleditsch & Ward numeric
• imf: International Monetary Fund
• ioc: International Olympic Committee
• iso2c: ISO-2 character
• iso3c: ISO-3 character
• iso3n: ISO-3 numeric
• p4n: Polity IV numeric country code
• p4c: Polity IV character country code
• un: United Nations M49 numeric codes
• unicode.symbol: Region subtag (often displayed as emoji flag)
• unpd: United Nations Procurement Division
• vdem: Varieties of Democracy (V-Dem version 8, April 2018)
• wb: World Bank (very similar but not identical to iso3c)
• wvs: World Values Survey numeric code

Turn wide to long format with reshape2 package in R

A simple feature to turn wide format into long format in R.

I have a dataset with the annual per capita military budget for 171 countries.

The problem is that it is in completely the wrong format for panel data (i.e. cross-sectional time-series analysis).

So here is a simple way I found to fix this problem, and turn this:

WIDE FORMAT : a separate column for each year

into this:

LONG FORMAT : one single “year” column and one single “value” column

It’s like magic.

First install and load the reshape2 package
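install.packages("reshape2")
library(reshape2)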


I name my new long form dataframe; in this case, the imaginatively named mil_long.

I use the melt() function and first type in the name of the original dataframe I want to change; in this case it is mil_wide.

  • id.vars tells R the unique ID for each observation. Since I am looking at military budgets for each country, I'll use the Country variable as my ID.
  • names the new variable that will hold what, in wide format, was the name of every column. I want to compress all the year columns into this new variable, so I call it year.
  • names the new variable that will hold the values – in my dataset, the per capita military budget amount per country per year. I name this new variable … you guessed it, value.

mil_long <- melt(mil_wide, id.vars = "Country", = "year", = "value")

So simple, it’s hard to believe.

Looking at my new mil_long dataset, my new long format dataframe has only three columns ("Country", "year" and "value") and 5,504 rows, one for each country-year observation across the 32 years.

Now, my dataframe is ready to be transformed into a panel data frame!
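For example, a minimal sketch with the plm package, assuming Country and year together uniquely identify each row:

library(plm)

# declare the panel structure: individual index first, time index second
mil_panel <- pdata.frame(mil_long, index = c("Country", "year"))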

reshape2 has two main functions which I think have quite memorable names:  melt and cast.

melt() is for wide-format dataframes that you want to "melt" into long format.

cast() is for long-format dataframes that you figuratively "cast" into a wide format.

As a poli-sci person, I have so far only turned my dataframe into long form, for eventual panel data analysis with the plm package.

Click here to see how to transform dataframes into panel dataframes with the plm package.

Click here to read the full reshape2 package documentation on CRAN