As social media platforms become ever more ubiquitous, policymakers are grappling with how to counter their effects on political attitudes and electoral outcomes, on addiction and mental health, and on exposure to misinformation and toxic content. This column suggests practical ways in which academic researchers can help guide the design of government regulations that aim to address these issues. The authors explain how to run experiments that use social media platforms to recruit subjects, how to harness platform features and third-party technologies to collect data and generate experimental variation, and which limitations to consider when conducting such experiments.
Social media platforms have become ubiquitous in modern economies. As of 2023, there were more than five billion active social media users worldwide, representing over 60% of the world population. In the US, the average user spent 10.5% of their lives on these services (Kemp 2024). Partially due to the increasing share of time that users spend on social media, policymakers have raised concerns that these platforms can influence political attitudes and electoral outcomes (Fujiwara et al. 2020), lead to significant mental health and addiction challenges (Braghieri et al. 2022), and expose consumers to misinformation and toxic content (Jiménez-Durán et al. 2022). In addition, the dominant social media platforms have considerable market power; as such, it is not clear that market competition can help resolve these policy concerns. Regulators in the EU have implemented several policies to deal with these issues – such as the Digital Markets Act (DMA), the General Data Protection Regulation (GDPR), and the Digital Services Act (DSA) – and governments around the world are actively drafting legislation to address them.
How can the research community provide evidence to help guide the design of such regulations? One option is to evaluate policies empirically after they have been implemented – as has been done for the EU's GDPR and DMA, Apple's App Tracking Transparency (ATT) framework, and the German NetzDG law – which can help guide future amendments to these regulations as well as their adoption in other jurisdictions (Jiménez-Durán et al. 2022, Aridor et al. 2024, Johnson 2024, Pape et al. 2025). However, this approach yields meaningful evidence only years after implementation, and only for policies that were actually enacted, not for counterfactual policies that were merely considered.
Another option is to have platforms explicitly conduct experiments simulating the effects of proposed policy interventions (Guess et al. 2023, Nyhan et al. 2023, Wernerfelt et al. 2025). This option comes with its own challenges: because platforms are not impartial agents in the policy debate, it gives them outsized influence over the questions and interventions that can be studied (Hendrix et al. 2023a, b).
In a forthcoming chapter in the Handbook of Experimental Methods in the Social Sciences (Aridor et al. 2025), we provide a practical guide to a third option: using third-party technologies and platform features to create researcher-generated experimental variation. This approach combines the best of the two options above: it is accessible to researchers without requiring explicit platform cooperation, and it allows counterfactual policies to be evaluated before any policy is implemented. The chapter documents how to run such experiments in detail: it begins with recruiting experimental subjects through social media platforms, then shows how to combine platform features with technologies such as Chrome extensions and mobile apps to collect data and generate experimental variation, and concludes with the considerations and limitations of such experiments. Overall, this methodology is a powerful toolkit for studying policy issues not only on social media platforms, but also on platforms such as Amazon (Farronato et al. 2024), Google Search (Allcott et al. 2025), and YouTube (Aridor forthcoming). We document several experiments that we conducted and explain how they relate to policy challenges.
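To give a flavour of the data-collection side, the sketch below shows a minimal server endpoint that a researcher-built Chrome extension or mobile app could post usage events to. It is a hypothetical illustration, not infrastructure prescribed in the chapter: the endpoint path, field names, and file-based storage are all our own assumptions.

```python
# Minimal telemetry endpoint for a researcher-built extension or app
# (hypothetical sketch; endpoint path, fields, and storage are illustrative).
import json
import time
from flask import Flask, request, jsonify

app = Flask(__name__)
LOG_PATH = "usage_events.jsonl"  # append-only event log (assumption)

@app.route("/log", methods=["POST"])
def log_event():
    event = request.get_json(force=True)
    # Require a participant ID and event type; field names are illustrative.
    if not event or "participant_id" not in event or "event_type" not in event:
        return jsonify({"status": "rejected"}), 400
    event["server_ts"] = time.time()  # server-side timestamp for auditing
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(event) + "\n")
    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    app.run(port=8000)
```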
First, how does social media impact political attitudes? Levy (2021) tackles this question by directly recruiting a set of participants from Facebook and exploiting the structure of the news feed, which sourced content from the pages users followed. Levy randomly nudges participants to follow conservative or liberal news outlets on Facebook, leading to experimental variation in the type of content that shows up in participants’ news feeds. This treatment allows Levy to quantify the causal effect of changes to the news feed algorithm on downstream outcomes, including news consumption, political opinions, and affective polarisation.
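A design like this hinges on the assignment step. The sketch below shows stratified random assignment of recruited participants to nudge arms, balancing on a baseline ideology measure; the arm names, stratification variable, and round-robin scheme are illustrative and not Levy's exact procedure.

```python
# Stratified random assignment to nudge arms
# (illustrative sketch; not Levy's exact procedure).
import random
from collections import defaultdict

ARMS = ["conservative_nudge", "liberal_nudge", "control"]

def assign_arms(participants, strat_key="baseline_ideology", seed=42):
    """Randomize within strata so arms are balanced on the baseline measure."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in participants:
        strata[p[strat_key]].append(p)
    for group in strata.values():
        rng.shuffle(group)
        for i, p in enumerate(group):
            p["arm"] = ARMS[i % len(ARMS)]  # round-robin within the stratum
    return participants

participants = [
    {"id": 1, "baseline_ideology": "liberal"},
    {"id": 2, "baseline_ideology": "liberal"},
    {"id": 3, "baseline_ideology": "conservative"},
    {"id": 4, "baseline_ideology": "conservative"},
]
for p in assign_arms(participants):
    print(p["id"], p["arm"])
```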
Second, is social media addictive? And what does addiction imply for these platforms' considerable market power? Like Allcott et al. (2022), our work (Aridor forthcoming) tackles these questions by inducing exogenous variation in social media use via third-party mobile applications. Importantly, and unlike in many other markets studied by economists, having individuals install this software allows for personalised variation in treatment. This feature allows Allcott et al. (2022) to quantify the role of self-control problems in driving usage, and allows us to characterise second-choice diversion ratios between social media applications and to quantify the role of consumer inertia in usage.
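To fix ideas, a diversion ratio from a blocked app to a substitute can be computed as the share of time lost on the blocked app that is reallocated to that substitute. The sketch below uses a simple treatment-control comparison of average daily minutes; it illustrates the concept rather than the exact estimator used in the cited papers.

```python
# Second-choice diversion ratios from app-blocking data (simplified sketch;
# not the exact estimator used in the cited papers).
from statistics import mean

def diversion_ratios(control, treated, blocked_app):
    """control/treated: lists of {app: daily_minutes} dicts, one per participant.
    Returns the share of time lost on `blocked_app` that flows to each other app."""
    apps = control[0].keys()
    avg_c = {a: mean(u[a] for u in control) for a in apps}
    avg_t = {a: mean(u[a] for u in treated) for a in apps}
    time_lost = avg_c[blocked_app] - avg_t[blocked_app]  # minutes no longer spent
    return {a: (avg_t[a] - avg_c[a]) / time_lost
            for a in apps if a != blocked_app}

control = [{"tiktok": 60, "instagram": 30, "youtube": 45},
           {"tiktok": 80, "instagram": 20, "youtube": 40}]
treated = [{"tiktok": 0, "instagram": 55, "youtube": 60},   # TikTok blocked
           {"tiktok": 0, "instagram": 45, "youtube": 65}]
print(diversion_ratios(control, treated, "tiktok"))
# -> roughly 0.36 of lost TikTok time diverts to Instagram, 0.29 to YouTube
```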
Third, how does the hateful and toxic content that consumers encounter on these platforms affect overall engagement, and do platforms have an incentive to remove it? Beknazar-Yuzbashev et al. (2024) conduct a browser experiment with social media users – randomly decreasing their exposure to toxic content on Facebook, Twitter, and YouTube – allowing them to quantify the impact of toxic content on the time users spend on social media, their engagement with ads, and the toxicity of the content that participants subsequently produce.
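The core of such an intervention is a per-post decision rule applied in the treatment group's browser. The sketch below shows that logic with a placeholder scoring function; in an actual study the scores would come from a trained toxicity classifier, and the hiding itself would run as extension JavaScript. The threshold and function names are our own illustrative assumptions.

```python
# Decision rule a content-hiding extension could apply to each feed post
# (sketch; real scores would come from a toxicity classifier, and the
# DOM manipulation would happen in extension JavaScript).
HIDE_THRESHOLD = 0.8  # hypothetical cutoff on a 0-1 toxicity score

def toxicity_score(text: str) -> float:
    """Placeholder: in a real study this would call a trained classifier."""
    toxic_markers = ("idiot", "hate", "stupid")
    return min(1.0, 0.4 * sum(w in text.lower() for w in toxic_markers))

def filter_feed(posts, treated: bool):
    """Hide posts above the threshold for treated users; log what was hidden."""
    visible, hidden = [], []
    for post in posts:
        if treated and toxicity_score(post["text"]) >= HIDE_THRESHOLD:
            hidden.append(post)   # record the exposure reduction for analysis
        else:
            visible.append(post)
    return visible, hidden

posts = [{"id": 1, "text": "Lovely day!"},
         {"id": 2, "text": "You absolute idiot, I hate this stupid take"}]
print(filter_feed(posts, treated=True))
```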
These examples illustrate the type of variation that researchers can generate and the data they can collect. In addition to contributing to ongoing policy debates, we argue that social media can be used to answer a wide variety of economic questions that are not directly related to digital platforms. For example, social media can be used to recruit specific participants for an experiment (Trachtman 2024), interventions on social media can be used to study the effects of political campaigns (Enríquez et al. 2024), and behaviour on social media can serve as an outcome in studies of discrimination (Ajzenman et al. 2023, Angeli and Lowe 2023).
While social media experiments are a powerful tool, they have important limitations. First, the effect sizes of interventions are often small, requiring large samples or within-subject designs to detect a meaningful impact; at the same time, these experiments rarely reach the scale of actual policy changes or platform-level experiments. Second, longitudinal designs may suffer from attrition (such as participants closing their social media accounts), and noncompliance (such as reactivating accounts during a deactivation study) can further bias results. Third, interference across users can violate the stable unit treatment value assumption (SUTVA), and both user behaviour and platform algorithms may adapt in equilibrium, complicating interpretation. Finally, ethical considerations often constrain study design and limit replicability. We discuss these limitations, and ways to address them, in our chapter.
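The small-effect-size concern translates directly into sample-size arithmetic. A standard two-sample power calculation, sketched below with the usual normal approximation and illustrative numbers, shows why detecting, say, a 0.05 standard-deviation difference between two arms requires thousands of participants.

```python
# Required sample size per arm for a two-sample comparison of means,
# using the standard normal approximation (illustrative numbers only).
from scipy.stats import norm

def n_per_arm(effect_sd: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """effect_sd: minimum detectable effect in standard-deviation units."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return int(round(2 * ((z_alpha + z_power) / effect_sd) ** 2))

for d in (0.3, 0.1, 0.05):
    print(f"effect = {d:.2f} SD -> n per arm ~ {n_per_arm(d):,}")
# Small effects blow up the required sample (about 6,300 per arm at 0.05 SD),
# which is one reason within-subject designs, which difference out
# person-level noise, are often preferred.
```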
The handbook chapter complements our review of the economic literature on social media (Aridor et al. 2024). It provides a methodological guide for exploring fundamental economic questions and generating insights relevant to policy discussions.
Source: VoxEU