I am an Associate Professor in Economics and Social Data Science at the University of Copenhagen. Most of my research focuses on education-related behavior and policies, using methods from economics and data science. Within education, my areas of interest include school choice, the use of digital technology, prediction tools for intervention, and peer effects/social interactions. More broadly, I use tools from econometrics and data science to investigate human behavior.
I supervise students in the core areas of my research at the intersection of econometrics and data science.
PhD in Economics, 2016
University of Copenhagen
What is the efficacy of redrawing school attendance boundaries as a desegregation policy? To provide causal evidence on this question, we employ novel data with unprecedented detail on the universe of Danish children and exploit changes in attendance boundaries over time. Households defy reassignments to schools with lower socioeconomic status. There is a strong social gradient in defiance: better-resourced households are more sensitive to the student composition of their newly assigned school. We simulate school assignment policies and find that boundary changes reassigning areas to a highly disadvantaged school are ineffective at altering the socioeconomic composition of that school.
Increasingly, human behavior can be monitored through the collection of data from digital devices revealing information on behaviors and locations. In the context of higher education, a growing number of schools and universities collect data on their students with the purpose of assessing or predicting behaviors and academic performance, and the COVID-19–induced move to online education dramatically increases what can be accumulated in this way, raising concerns about students’ privacy. We focus on academic performance and ask whether predictive performance for a given dataset can be achieved with less privacy-invasive, but more task-specific, data. We draw on a unique dataset on a large student population containing both highly detailed measures of behavior and personality and high-quality third-party reported individual-level administrative data. We find that models estimated using the big behavioral data are indeed able to accurately predict academic performance out of sample. However, models using only low-dimensional and arguably less privacy-invasive administrative data perform considerably better and, importantly, do not improve when we add the high-resolution, privacy-invasive behavioral data. We argue that combining big behavioral data with “ground truth” administrative registry data can ideally allow the identification of privacy-preserving task-specific features that can be employed instead of current indiscriminate troves of behavioral data, with better privacy and better prediction resulting.
In this study, we monitored 470 university students’ smartphone usage continuously over 2 years to assess the relationship between in-class smartphone use and academic performance. We used a novel data set in which smartphone use and grades were recorded across multiple courses, allowing us to examine this relationship at the student level and the student-in-course level. In accordance with the existing literature, our results showed that students’ in-class smartphone use was negatively associated with their grades, even when we controlled for a broad range of observed student characteristics. However, the magnitude of the association decreased substantially in a fixed-effects model, which leveraged the panel structure of the data to control for all stable student and course characteristics, including those not observed by researchers. This suggests that the size of the effect of smartphone usage on academic performance has been overestimated in studies that controlled for only observed student characteristics.
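The pooled-regression versus fixed-effects comparison described above can be illustrated on synthetic data. This is a minimal sketch, not the study's data or code: all variable names and data-generating values (a true effect of -0.3, an unobserved student trait that correlates with both phone use and grades) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_courses = 200, 10
ability = rng.normal(0, 1, n_students)              # unobserved student trait
student = np.repeat(np.arange(n_students), n_courses)

# Assumed data-generating process: phone use correlates negatively with
# ability, and the true causal effect of phone use on grades is -0.3.
phone = -0.8 * ability[student] + rng.normal(0, 1, n_students * n_courses)
grade = -0.3 * phone + 1.0 * ability[student] + rng.normal(0, 1, n_students * n_courses)

def ols_slope(x, y):
    """Simple bivariate OLS slope."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

# Pooled OLS omits the unobserved trait, so its estimate overstates
# the (negative) effect.
b_pooled = ols_slope(phone, grade)

# Student fixed effects: demean within student, which removes all stable
# student characteristics, observed or not.
def demean(v):
    means = np.bincount(student, v) / n_courses
    return v - means[student]

b_fe = ols_slope(demean(phone), demean(grade))
print(f"pooled OLS: {b_pooled:.3f}, fixed effects: {b_fe:.3f}")
```

In this constructed example the pooled estimate is substantially more negative than the true -0.3, while the within-student estimate recovers it, mirroring the paper's finding that the association shrinks once stable student characteristics are absorbed.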