Citizens need clear, accessible information to make informed decisions, but today’s digital landscape can be overwhelming. This column tests whether an AI chatbot can ease election information overload in an experiment with 1,500 California voters. The chatbot reduces voters’ perceived effort of acquiring information, increases their engagement, and improves the accuracy of their understanding, especially among those who initially know less about the issues. For policymakers, this suggests transparent AI tools can help reduce inequalities in representation, but only if users trust their performance.
In a well-functioning democracy, citizens require clear, accessible information to make informed decisions. However, today’s digital information landscape can overwhelm voters, raising the cognitive costs of becoming informed and encouraging reliance on incomplete or biased information, which weakens democratic accountability (Matějka and Tabellini 2020).
In recent work (Ash et al. 2025), we argue that large language models (LLMs) and AI-powered chatbots can help address information overload in democracies. Our analysis complements existing evidence demonstrating LLMs’ potential in various fields, such as productivity gains for freelance writers (Noy and Zhang 2023), customer support (Brynjolfsson et al. 2025), and consulting tasks (Dell’Acqua et al. 2023).
To investigate whether LLMs can provide similar productivity benefits to voters navigating dense political information, we developed BallotBot, a custom chatbot designed specifically for California ballot initiatives from the November 2024 election. BallotBot employs retrieval-augmented generation, drawing directly from the official voter guide to simplify and clarify information about three major propositions. To evaluate BallotBot’s effectiveness compared to traditional voter guides, we conducted a pre-registered randomised online experiment. Specifically, we examined whether the chatbot could reduce voters’ information costs and boost overall engagement while providing access to reliable information.
An experiment with AI for voter information
BallotBot offers voters an interactive way to explore information about California’s ballot propositions. Unlike standard chatbots (e.g. ChatGPT or Claude), it uses retrieval-augmented generation, directly incorporating proposition-specific content from California’s official voter guide to improve the accuracy of its responses (a sketch of this retrieval step follows the list below). We developed three versions of BallotBot, each tailored to one of the selected ballot initiatives:
- Proposition 32, which proposes increasing California’s minimum wage to $18 per hour by 2026, indexed to inflation
- Proposition 33, which would expand local governments’ authority to enact rent control by repealing the Costa-Hawkins Rental Housing Act
- Proposition 36, which aims to modify felony charges for certain drug and theft-related crimes
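The paper does not detail BallotBot’s implementation beyond its use of retrieval-augmented generation, but the basic flow can be sketched. The snippet below is a minimal illustration under stated assumptions: TF-IDF retrieval stands in for whatever retriever BallotBot actually uses, and the passages, function names, and prompt wording are hypothetical.

```python
# Minimal retrieval-augmented generation flow (illustrative; not BallotBot's actual stack).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical passages split from the official voter guide for one proposition.
guide_passages = [
    "Proposition 33 would expand local governments' authority to enact rent control.",
    "The measure would repeal the Costa-Hawkins Rental Housing Act.",
    "Fiscal impact: potential reduction in local property tax revenues over time.",
]

vectorizer = TfidfVectorizer().fit(guide_passages)
passage_vecs = vectorizer.transform(guide_passages)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k voter-guide passages most similar to the question."""
    sims = cosine_similarity(vectorizer.transform([question]), passage_vecs)[0]
    return [guide_passages[i] for i in sims.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    """Ground the chatbot's answer in retrieved voter-guide text."""
    context = "\n\n".join(retrieve(question))
    return (
        "Answer using only the official voter guide excerpts below.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt("What law would Proposition 33 repeal?"))
```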
After collecting initial demographic information, approximately 1,500 participants recruited through Prolific were randomly assigned to one of two experimental groups, each with access to an informational tool: one group accessed BallotBot, while the other accessed the traditional voter guide.
In the initial phase of the experiment, participants completed an incentivised quiz on one of three California ballot propositions, consisting of basic and in-depth questions. Basic questions were answerable using introductory content from the voter guide, while in-depth questions required deeper reading. Participants earned 20 cents for each correct answer and could use the assigned informational tools during the quiz (i.e. either BallotBot or the traditional voter guide). Afterwards, a randomly selected subset received feedback on quiz performance to induce varying confidence in the informational tools.
We recorded participants’ probability of providing correct answers, the time spent on each question, and their confidence in their answers.
To measure the perceived cost (effort) associated with obtaining political information, we used an incentive-compatible method by asking participants how much additional compensation they would require (i.e. their ‘bid’) to answer one more quiz question; a lower bid indicated a lower perceived cost. After the quiz, participants had the opportunity to freely interact with their assigned resource, exploring any further questions they had about the proposition. We tracked their engagement by measuring the amount of time they spent interacting with the source. Participants retained access to their assigned resource throughout the following week.
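The paper describes this bid elicitation only as incentive compatible. One standard mechanism with that property is a Becker-DeGroot-Marschak (BDM) auction, sketched below to illustrate the logic rather than the authors’ exact protocol; the amounts and price distribution are hypothetical.

```python
# BDM-style elicitation of the perceived cost of answering one more question
# (an illustration of incentive compatibility, not the paper's exact procedure).
import numpy as np

rng = np.random.default_rng(0)

def bdm_round(bid: float, true_cost: float, max_price: float = 2.0) -> float:
    """One round: a random price is drawn; if it meets or exceeds the bid,
    the participant answers an extra question and is paid the drawn price."""
    price = rng.uniform(0.0, max_price)
    return price - true_cost if price >= bid else 0.0

# Bidding one's true effort cost maximises expected payoff, so the bid reveals
# perceived cost. Compare truthful bidding with under- and over-bidding:
true_cost = 0.60
for bid in (0.30, 0.60, 0.90):
    mean_payoff = np.mean([bdm_round(bid, true_cost) for _ in range(100_000)])
    print(f"bid=${bid:.2f}  mean payoff=${mean_payoff:+.3f}")
```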
In the second wave, conducted one week later, we reassessed participants’ political knowledge using a similar quiz, but this time without access to external resources. We also asked participants to explain their voting intentions through an open-ended question. Finally, in the third wave, conducted after election day, we collected self-reported data on voter turnout and voting behaviour.
Results
First, we find that participants who had access to BallotBot performed significantly better on in-depth questions, with accuracy improving by approximately 18% compared to those using the traditional voter guide (Figure 1, panel a). They also took about 10% less time to answer these challenging questions (panel b). However, participants with BallotBot access did not show improved accuracy on basic questions and actually spent slightly more time answering these simpler questions. BallotBot did not significantly affect overall confidence in answers (basic or in-depth; panel c).
Figure 1 Participants with BallotBot access performed better on in-depth questions
(a) Accuracy: Share of correct answers
(b) Response time: Time to answer questions
(c) Confidence: Self-reported confidence in answers
Notes: Points show average treatment effects of BallotBot relative to the voter guide; bars indicate 95% confidence intervals. Estimates are shown separately by question difficulty. Outcomes are: (a) share of correct answers, (b) time to answer, and (c) self-reported confidence (scale units).
Source: Ash et al. (2025).
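The paper does not spell out the estimating equation behind these figures. The sketch below shows how such average treatment effects and 95% confidence intervals are typically obtained in a randomised experiment; the data, effect sizes, and variable names are simulated and illustrative.

```python
# Sketch of the kind of estimation behind Figures 1-3: regressing an outcome on
# a treatment indicator recovers the average treatment effect under randomisation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1_500
df = pd.DataFrame({
    "ballotbot": rng.integers(0, 2, n),   # 1 = BallotBot, 0 = voter guide
    "in_depth": rng.integers(0, 2, n),    # 1 = in-depth question, 0 = basic
})
# Simulated accuracy with a BallotBot effect on in-depth questions only.
df["correct"] = 0.55 + 0.10 * df["ballotbot"] * df["in_depth"] + rng.normal(0, 0.2, n)

# Separate ATEs by question difficulty, with robust standard errors.
for depth, label in [(0, "basic"), (1, "in-depth")]:
    fit = smf.ols("correct ~ ballotbot", data=df[df["in_depth"] == depth]).fit(cov_type="HC1")
    lo, hi = fit.conf_int().loc["ballotbot"]
    print(f"{label:8s} ATE={fit.params['ballotbot']:+.3f}  95% CI [{lo:+.3f}, {hi:+.3f}]")
```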
A key finding from our analysis is that BallotBot effectively reduces the perceived effort required to learn about ballot initiatives (Figure 2). Focusing on participants who received feedback, we asked how much money they would require to answer an additional quiz question, with lower amounts indicating lower perceived effort. Although BallotBot did not significantly lower the average declared amount across all participants, it notably reduced the amount among individuals with less prior knowledge about the propositions and/or lower education (Figure 2, panel b). This result is consistent with recent studies suggesting generative AI tools provide greater benefits to users with lower initial knowledge or skill levels (Brynjolfsson et al. 2025, Cui et al. 2024).
Interestingly, this effect is not observed if respondents do not receive feedback on the number of correct answers, consistent with earlier research highlighting that feedback and transparency are essential for increasing trust and acceptance of AI-driven recommendations (Ahn et al. 2024).
Figure 2 BallotBot reduces perceived effort required to learn about ballot initiatives
(a) Perceived cost
(b) Perceived cost, by prior knowledge
Notes: Points show average treatment effects of BallotBot relative to the voter guide; bars indicate 95% confidence intervals. Panel (a) reports the overall effect on the perceived cost of absorbing information (stated payment required to answer one more quiz question; lower values indicate lower cost). Panel (b) reports treatment-effect heterogeneity by prior knowledge (low versus high).
Source: Ash et al. (2025).
A further finding is that BallotBot boosts curiosity and engagement with ballot initiatives. When given the chance to explore information they judged important for their vote, participants using BallotBot spent, on average, about 70% more time gathering that information (Figure 3, panel a). They were also 75% more likely to access optional bots covering additional propositions (panel b). Beyond exploration, BallotBot modestly improved perceived information quality, with about a 17% higher likelihood of rating the resource as better than participants’ usual information sources (panel c), but it did not significantly increase mid-week use of the resource (panel d).
Figure 3 BallotBot boosts curiosity and engagement with ballot initiatives
(a) Time spent collecting information
(b) Probability of checking other propositions
(c) Resource is better than the usual means of information
(d) Used resource during the week
Notes: Points show average treatment effects of BallotBot relative to the voter guide; bars indicate 95% confidence intervals. Outcomes are: (a) time spent collecting information, (b) probability of checking other propositions, (c) rating the resource as better than participants’ usual information sources, and (d) use of the resource during the week between survey waves.
Source: Ash et al. (2025).
Next, we find that the effects of BallotBot do not persist over time (Figure 4). After the first wave, participants retained access to BallotBot or the voter guide, though only around 10% used them during the interim week. One week later, participants completed a second quiz without access to the tool. We observed no significant difference in quiz scores between the BallotBot and voter guide groups. Additionally, we asked participants to write brief statements explaining their voting intentions. An AI-based evaluation found no difference in reasoning quality between the two groups.
However, descriptive analysis offered further insights. Participants who liked BallotBot and accessed it during the interim week showed improved quiz performance, suggesting enhanced factual understanding. Conversely, participants who accessed the voter guide again produced higher-quality written explanations, indicating better articulation and reasoning. These findings suggest that each tool may support voter learning in different, potentially complementary ways.
Figure 4 Effects of BallotBot do not persist over time
(a) Effect of BallotBot on share of correct answers and quality of written voting motivation
(b) Effect of long usage
(c) Effect of long usage, voter guide users
(d) Effect of long usage, BallotBot users
Notes: Panel (a) reports the treatment effects of BallotBot relative to the voter guide on two follow-up outcomes: (i) share of correct answers and (ii) quality of written voting motivation. Panel (b) reports the effect of ‘long usage’ – accessing the tool during the week between survey waves – on the same outcomes in the full sample (relative to non-users). Panels (c) and (d) report the same long-usage effect estimated separately within the voter guide group and the BallotBot group.
Source: Ash et al. (2025).
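The paper does not describe its AI-based grading of the written motivations in detail. The sketch below shows one common ‘LLM-as-judge’ setup; the model name, rubric, and prompt are assumptions for illustration, not the authors’ pipeline.

```python
# Illustrative 'LLM-as-judge' scoring of written voting motivations
# (model, rubric, and prompt are assumptions, not the paper's pipeline).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_motivation(text: str) -> int:
    """Ask a model to grade one motivation on a 1-5 reasoning-quality rubric."""
    prompt = (
        "Rate the following explanation of a voting decision for clarity, "
        "factual grounding, and quality of reasoning, from 1 (poor) to 5 "
        "(excellent). Reply with the number only.\n\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic grading across participants
    )
    return int(response.choices[0].message.content.strip())

# A treatment-control comparison would then average these scores by group.
```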
Finally, we examined whether BallotBot influenced declared voting behaviour. We found no statistically significant effect on self-reported voter turnout or on the likelihood of voting ‘yes’ on any of the assigned propositions.
Policy implications and conclusions
Our findings demonstrate that BallotBot reduces the perceived effort required to obtain political information, increases voter engagement, and improves the accuracy of voters’ understanding, especially among those who initially know less about the issues. Importantly, these positive effects emerge only when users can clearly see how well the tool performs, underscoring the need to build users’ trust in AI recommendations.
For policymakers, these results suggest that integrating transparent AI tools into voter education could support the voters who stand to gain the most, potentially reducing inequalities in representation. Ensuring that users can directly assess the quality and reliability of AI-generated information will be essential if they are to benefit fully from these technologies.
We also provide suggestive evidence that AI tools like BallotBot complement, rather than substitute for, traditional voter guides. Policymakers should therefore combine AI with existing voter information methods.