The Tangled Story Behind Trump’s False Claims Of Voter Fraud

UPDATE (Nov. 5, 2020, 7:00 p.m.): President Trump has made baseless claims that voter fraud will likely cost him the 2020 election, but he hasn’t cited any evidence that shows that to be the case. We’ve covered Trump’s past claims of voter fraud, and while his allegations in 2016 were different, that history shows Trump has readily claimed the existence of voter fraud for years.


Three thousand Wisconsinites were chanting Donald Trump’s name. It was Oct. 17, 2016, just after the candidate’s now-infamous “locker-room” chat with Billy Bush became public knowledge. But the crowd was unfazed. They were happy. And they were rowdy, cheering for Trump, cheering for the USA, cheering for Hillary Clinton to see the inside of a jail cell. The extended applause lines meant it took Trump a good 20 minutes to get through the basics — thanks for having me, you are wonderful, my opponent is bad — and on to a rhetorical point that was quickly becoming a signature of his campaign: If we lose in November, Trump told the supporters in Green Bay, it’ll be because the election is rigged by millions of fraudulent voters — many of them illegal immigrants.

That night wasn’t the first time Trump had made this accusation, but now he had statistics to support it. His campaign had recently begun to send the same data to reporters, as well. In both cases, one of the chief pieces of evidence was a peer-reviewed research paper published in 2014 by political scientists at Virginia’s Old Dominion University. The research showed that 14 percent of noncitizens were registered to vote, Trump told the crowd in Green Bay, enough of a margin to give the Democrats control of the Senate. Enough, he claimed, to have given North Carolina to Barack Obama in 2008.

“You don’t read about this, right? Your politicians don’t tell you about this when they tell you how legitimate all of those elections are. They don’t want to tell you about this,” Trump said. The crowd cried out in shock and anger.

But that’s not what the research showed. The 14 percent figure quoted by Trump was actually the upper end of the paper’s confidence interval — there’s a 97.5 percent chance that the true percentage of noncitizens registered to vote is lower than that. And, just as he got the data wrong, Trump also failed to tell his audience the full story behind the study.
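To see the distinction in miniature, here is a short Python sketch with made-up numbers (nothing below comes from the Old Dominion data): it computes a point estimate for a proportion, then the 95 percent confidence interval around it, and shows how much higher the interval’s upper bound sits than the estimate itself.

```python
# A minimal sketch of the difference between an estimate and the upper end of
# its confidence interval. The counts are purely illustrative, not the paper's:
# suppose a survey finds 28 apparent registrants among 340 sampled noncitizens.
import math

registered = 28   # hypothetical noncitizen respondents who appear registered
sampled = 340     # hypothetical number of noncitizens in the survey

p_hat = registered / sampled                          # point estimate: ~8.2%
se = math.sqrt(p_hat * (1 - p_hat) / sampled)         # standard error (normal approximation)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se   # 95% confidence interval

print(f"estimate: {p_hat:.1%}, 95% CI: {lower:.1%} to {upper:.1%}")
# Prints roughly "estimate: 8.2%, 95% CI: 5.3% to 11.2%". Quoting only the
# 11.2% reports the outer edge of the plausible range as if it were the
# finding itself -- the same move as citing the paper's 14 percent.
```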

By the time it got into his hands, that Old Dominion paper had already been heavily critiqued in the scientific community. The analysis hinged on a single, easy-to-make data error that can completely upend attempts to understand the behavior of minority groups, such as noncitizen residents of the United States. And even the paper’s authors say Trump misinterpreted their research. A couple of days after the Green Bay speech, one of them wrote up a blog post that countered much of what Trump had said, but it was a whisper in a roaring stadium. Months later — and despite having won the ostensibly rigged election — the Trump administration is still citing that paper as proof that fraudulent voting (especially by noncitizen immigrants) is widespread and alters electoral outcomes. On Thursday, the administration confirmed to several news outlets that it would be establishing a commission to investigate this fraud.

The back and forth on this single study has been seen as a liberal-vs.-conservative smackdown where both sides — Voter fraud is a myth! Voter fraud is rampant! — claim to have the backing of absolute scientific fact. But that wasn’t what it was meant to be about. Instead, the paper that is probably Trump’s most celebrated evidence of undocumented immigrants voting began as the work of an undergraduate Pakistani immigrant, who just wanted to know why people in her community who could vote, didn’t. Neither she, nor her adviser and co-author, ever expected their work to end up in the mouth of the president. The political scientists who have rebutted the paper didn’t foresee how easily data on voter behavior could lead research astray. And everyone involved says their work and words have been misinterpreted and misused, twisted toward political ends they don’t support. It’s a glaring example of just how easy it is, in a polarized political climate, for scientists to lose control of scientific results.


The Author

Gulshan Chattha would end up with a precocious scientific publication that went viral through the popular media, her words morphed into talking points that screamed from headlines across the country. But, before any of that happened, her research on immigrants and voting was just about her dad, herself and the threads of love and civic responsibility that tied them to each other.

Her father, Mohammad Afzal Chattha, had grown up in Pakistan, where he dropped out of elementary school to take a job that helped support his parents and siblings and, later, his wife and their children. In 1993, the family moved to the United States. Gulshan was just a year old. Her father opened a gas station and would eventually send his three kids to college. Through duty and perseverance, he created an entirely different existence from the one he was born into.

And part of Mohammad Chattha’s American dream was voting. He became a U.S. citizen the year he immigrated, and, his daughter said, he has spent the 24 years since then defending his role in the electoral process — against the statistics that show most naturalized citizens don’t exercise their right to vote, against the doubts of family members who think voting makes no real difference to political outcomes, against forgetfulness and complacency and the long list of bureaucratic and administrative hurdles that make it easier to just stay home. “He said, ‘You know what, you became a U.S. citizen. It’s your duty,’” Gulshan Chattha told me.

So when she got the chance to study the voting habits of naturalized citizens, the younger Chattha jumped at the opportunity. In 2013, she was an undergraduate in political science at Old Dominion. She wanted to know why some people came to this country with a mindset like her father’s while many others felt voting was optional. But her research ended up leading her in a different direction. “Do non-citizens vote in US elections?,” the paper Chattha published in 2014 with her professor, Jesse Richman, and David Earnest, a dean of research at Old Dominion, focused on evidence from a massive data set of voter surveys that, to Richman and Chattha, suggested noncitizen immigrants might be voting at a higher rate than most experts thought.

The path from parental tribute to Trump talking point began with that pivotal shift in Chattha’s research focus — and put her on a path toward an understandable, but crucial, data analysis error. The problem starts with the sample population. About 7 percent of the people who live in the United States are noncitizens, roughly half of whom are undocumented. They come here on student visas, they come as refugees. They teach, they heal the sick, they build houses and care for children. They commit crimes — though, research suggests, at a rate lower than that for citizens. Some are immigrants who just haven’t quite yet gotten around to the process of naturalization. Others travel as migrants, working to support families thousands of miles away. None of them are supposed to vote in our elections. But we know, on rare occasions, a few do.

The question is, really, how rare is “rare” — and nobody knows for sure. That’s something you need to understand up front. It may seem like this should be easily solvable, but one thing this paper does, if it does nothing else, is demonstrate how quickly apparently straightforward answers can fall apart.

 

The Error

Noncitizens who vote represent a tiny subpopulation of both noncitizens in general and of the larger community of American voters. Studying them means zeroing in on a very small percentage of a much larger sample. That massive imbalance in sample size makes it easier for something called measurement error to contaminate the data. Measurement error is simple: It’s what happens when people answer a survey or a poll incorrectly.1 If you’ve ever checked the wrong box on a form, you know how easy it can be to screw this stuff up. Scientists are certainly aware this happens. And they know that, most of the time, those errors aren’t big enough to have much impact on the outcome of a study. But what constitutes “big enough” will change when you’re focusing on a small segment of a bigger group. Suddenly, a few wrongly placed check marks that would otherwise be no big deal can matter a lot.

[Interactive graphic: “Discovery or disaster?”]

This is an issue that can affect all kinds of studies. Say you have a 3,000-person presidential election survey from a state where 3 percent of the population is black. If your survey is exactly representative of reality, you’d end up with 90 black people out of that 3,000. Then you ask them who they plan to vote for (for our purposes, we’re assuming they’re all voting). History suggests the vast majority will go with the Democrat. Over the last five presidential elections, Republicans have earned an average of only 7 percent of the black vote nationwide. However, your survey comes back with 19.5 percent of black voters leaning Republican. Now, that’s the sort of unexpected result that’s likely to draw the attention of a social scientist (or a curious journalist). But it should also make them suspicious. That’s because when you’re focusing on a tiny population like the black voters of a state with few black citizens, even a measurement error rate of 1 percent can produce an outcome that’s wildly different from reality. That error could come from white voters who clicked the wrong box and misidentified their race. It could come from black voters who meant to say they were voting Democratic. In any event, the combination of an imbalanced sample ratio and measurement error can be deadly to attempts at deriving meaning from numbers — a grand piano dangling from a rope above a crenulated, four-tiered wedding cake. Just a handful of miscategorized people and — crash! — your beautiful, fascinating insight collapses into a messy disaster.

Try playing around with this hypothetical in our interactive. There are a few things you should notice. First, the difference between reality and the survey results gets bigger the larger your measurement error is — that’s obvious enough. (We’re assuming here that 60 percent of the voters miscategorized as black will vote Republican.) But you can make that error rate matter less by making the group you’re studying a larger share of the overall sample. If you’re studying a group that makes up 30 percent of the survey, instead of 3 percent, then that same 1 percent rate of measurement error barely registers. Finally, if the group you’re studying stays at a minuscule 3 percent of the total, you can’t make the reality and the poll results match just by surveying more people. That’s a really important point. Scientists are conditioned to think of larger polls as better polls. But if what you’re studying is a handful of outliers, even an 80,000-person survey can’t save your results from the risks of measurement error. And if the error rate is big enough — which could be, relatively speaking, still very small — you can end up saying something with a lot of statistical confidence … and still be wrong.
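If you’d rather see the arithmetic than fiddle with the interactive, here is a rough expected-value version of the hypothetical above, sketched in Python. The sample size, group share, error rate and the 7 percent and 60 percent support figures come straight from the scenario described here; exactly how the misclassification gets applied (one-way, in this sketch) is my own simplifying assumption, which is why the result lands near, rather than exactly on, the 19.5 percent figure.

```python
# Expected-value sketch of the hypothetical above: a 1 percent misclassification
# rate distorting results for a group that makes up 3 percent of the sample.
# The parameters mirror the article's scenario; applying the error one-way is a
# simplifying assumption, so the headline number lands near 20 percent rather
# than exactly on 19.5 percent.

def observed_gop_share(n=3000, group_share=0.03, error_rate=0.01,
                       gop_in_group=0.07, gop_in_misclassified=0.60):
    true_group = n * group_share                   # e.g., 90 Black respondents
    misclassified = (n - true_group) * error_rate  # ~29 others ticking the wrong box
    gop_votes = true_group * gop_in_group + misclassified * gop_in_misclassified
    return gop_votes / (true_group + misclassified)  # share among those recorded as Black

print(f"{observed_gop_share():.1%}")                  # ~19.9%, versus the true 7%
print(f"{observed_gop_share(group_share=0.30):.1%}")  # ~8.2%: a bigger group share shrinks the distortion
print(f"{observed_gop_share(n=80_000):.1%}")          # ~19.9%: more respondents alone doesn't help
```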

That’s what makes the risks posed by a skewed sample ratio and measurement error hard to spot, even for scientists who work with data every day. The combination distorts the statistical realities researchers are used to dealing with and the mistakes they’re used to preventing. Numerous people in political science — professionals with tenure, not just students such as Chattha — told me that few of their peers are fully aware of the baby grand swinging over their heads.

Chattha’s paper has been misinterpreted by Trump and his surrogates, and it’s been wielded as a political weapon. That blowback changed Chattha’s understanding of how her fellow Americans see her and her community. But at the heart of this story is a mistake — a mistake that almost no one in the political science world was watching out for.

 

The Survey

To study the voting behavior of naturalized citizens, Chattha began working with data from the Cooperative Congressional Election Study, a national survey that has been administered online every year since 2006 to tens of thousands of people who live in America. The CCES asks people for their basic demographic information, their political opinions and their voting habits. Its size is what makes it special to political scientists. Historically, those researchers have had to draw on surveys that had something on the order of 1,000 respondents. In those voter surveys, noncitizens were unlikely to show up at all or, if they did, they were present in numbers so small that researchers couldn’t draw any statistically significant conclusions.

The larger CCES data set changed that. Suddenly, the group of noncitizens swept up in a survey of voters was large enough to be useful to research. And it was large enough that Chattha was able to spot that even smaller subpopulation that she didn’t expect to find: respondents who reported not being citizens but who also reported voting. That seemed unbelievable to her. She knew that when there isn’t a presidency on the line, most people who can vote, don’t. And that even in presidential election years, only about half of the voting-age population casts a ballot. “I wonder if they even know,” she remembered thinking. “I wonder if they know they don’t have citizenship.”

That thought was grounded in personal experience. Chattha came to the U.S. as a baby. She can’t remember a time when her father wasn’t a citizen. Her younger brother was born into his citizenship. It wasn’t until she started taking civics classes that Chattha realized that her father and brother were members of a club she didn’t belong to. She was in seventh grade and got her citizenship that same year.

Her story isn’t unique; some other people are equally confused about the details of their immigration status and what it means for voting rights — though few get all the way to the ballot box. In 2012, a group of college student journalists reviewed the previous 12 years’ worth of voting fraud cases. Out of 2,068 incidents, they found 56 that involved noncitizen voters, and they were told by their sources that confusion about status was a major factor. You could see that confusion in action in February, when a 37-year-old Texas woman, who had legal permanent residency status, was convicted of illegally voting in two elections. She was sentenced to eight years in prison. Her lawyer told reporters that she had a sixth-grade education and didn’t know she wasn’t supposed to vote.

Of the 32,800 people surveyed by CCES in 2008 and the 55,400 surveyed in 2010, 339 people and 489 people, respectively, identified themselves as noncitizens.2 Of those, Chattha found 38 people in 2008 who either reported voting or who could be verified through other sources as having voted. In 2010, there were just 13 of these people, all self-reported. It was a very small sample within a much, much larger one. If some of those people were misclassified, the results would run into trouble fast. Chattha and Richman tried to account for the measurement error on its own, but, like the rest of their field, they weren’t prepared for the way imbalanced sample ratios could make those errors more powerful. Stephen Ansolabehere and Brian Schaffner, the Harvard and University of Massachusetts Amherst professors who manage the CCES, would later tell me that Chattha and Richman underestimated the importance of measurement error — and that mistake would challenge the validity of the paper.
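A little arithmetic on those counts shows how little room for error there was. The figures below are the ones reported in this paragraph; the closing point about wrongly ticked boxes is simply the skewed-ratio problem from the previous section applied to these numbers.

```python
# Quick arithmetic on the CCES counts reported above.
surveys = {
    2008: {"total": 32_800, "noncitizens": 339, "reported_voters": 38},
    2010: {"total": 55_400, "noncitizens": 489, "reported_voters": 13},
}

for year, s in surveys.items():
    share_of_sample = s["noncitizens"] / s["total"]                 # ~1.0% in 2008, ~0.9% in 2010
    apparent_voting_rate = s["reported_voters"] / s["noncitizens"]  # ~11.2% and ~2.7%
    print(f"{year}: noncitizens are {share_of_sample:.1%} of the sample; "
          f"{apparent_voting_rate:.1%} of them appear to have voted")
# With a subgroup this small, a few dozen citizens ticking the wrong citizenship
# box would be enough to produce every one of those apparent noncitizen voters.
```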

But that was yet to come. In 2013, Chattha and Richman concluded that, in some states and in some tight races (and if noncitizens all voted the same way), this rate of noncitizen voting could be enough to change the outcome of an election.

Chattha was (and still is) confused about why these people would risk jail and deportation in order to vote. But she isn’t a professional scientist, and she told me in February that she was uncomfortable trying to parse the details of a statistical analysis she had last worked on three years earlier. She referred questions about the data to Richman, who, for his part, had seen the results as an opportunity: He hoped that this data would depolarize the debate about fraudulent voting and voter ID laws. Hindsight might make that sound a touch naive, but Richman knew something a lot of talking heads don’t seem to — it’s really difficult to study voter fraud. There isn’t much data on it in the first place. Richman knew the study wasn’t perfect, that it involved some assumptions and extrapolations that might be disproved by future research. But he reasoned that having some numbers, any numbers — even if they came with some big caveats — might help move the discussion from ideology to fact. Neither he nor Chattha intended the paper to be seen as definitive proof of voter fraud. Neither even expected many other people to read it.

 

The Media

Research that an undergrad does, even with a professor, doesn’t tend to get published or to get this much attention when it does. But Chattha’s paper, which was published in the December 2014 issue of the journal Electoral Studies, has been referenced by nine other papers in just over two years, according to Google Scholar. That’s more than is typical for that journal, and for political science as a whole.3 For better or worse, Chattha’s paper has been influential. And that fact is probably tied to Richman’s decision to write an essay about it for The Washington Post’s Monkey Cage blog. Multiple people I interviewed for this story, including Ansolabehere and Schaffner — who, along with their co-author Samantha Luks of the survey firm YouGov, went on to publish their own paper in Electoral Studies critiquing Chattha and Richman’s work — said they probably never would have heard of the paper without that essay.

Published in October 2014, after Chattha had graduated and two years before Trump’s rally in Green Bay, Richman’s essay was titled “Could non-citizens decide the November election?” Not only did the article turn the paper into a media sensation, it also helped to create a series of misconceptions Richman would later struggle to correct.

“That title misled people to a degree,” Richman told me. “The title suggested a ‘yes’ answer, where our ultimate conclusion was really one more that they probably wouldn’t. Maybe if there was a really, really close race they might, but otherwise [they] probably wouldn’t have much effect on the outcome of the elections.” John Sides, the editor of the Monkey Cage blog and a political scientist, said that the headline was based on the opening lines of Richman’s essay (“Could control of the Senate in 2014 be decided by illegal votes cast by non-citizens?”) and that he wasn’t aware Richman thought it was misleading. Either way, the headline, the misconceptions and the data mistake all came together to create a perfect political storm.

This isn’t the only time a single problematic research paper has had this kind of public afterlife, shambling about the internet and political talk shows long after its authors have tried to correct a public misinterpretation and its critics would have preferred it peacefully buried altogether. Even retracted papers — research effectively unpublished because of egregious mistakes, misconduct or major inaccuracies — sometimes continue to spread through the public consciousness, creating believers who use them to influence others and drive political discussion, said Daren Brabham, a professor of journalism at the University of Southern California who studies the interactions between online communities, media and policymaking. “It’s something scientists know,” he said, “but we don’t really talk about.”

These papers — I think of them as “zombie research” — can lead people to believe things that aren’t true, or, at least, that don’t line up with the preponderance of scientific evidence. When that happens — either because someone stumbled across a paper that felt deeply true and created a belief, or because someone went looking for a paper that would back up beliefs they already had — the undead are hard to kill.

In Chattha’s case, her data came directly from an established, respected source, and the mistake that undermined it was so easy to make that even the peer reviewers who critiqued the paper before publication didn’t catch it. The misinterpretation that followed was a flood Chattha could do little more than watch as it rose around her. But the classic example of zombie research is far less defensible. The idea that vaccines cause autism is a story that began with a single research paper that has since been retracted because of fraud and conflicts of interest, its author stripped of his medical license. Although it has been discredited in the scientific community, believers keep on believing. Social media keeps on sharing.

The stickiness of erroneous beliefs such as a connection between autism and vaccines is often cited as proof of a growing mistrust of science, as an institution, in American culture, but that’s probably not the most useful framing, said Dominique Brossard, professor of science and technology studies at the University of Wisconsin-Madison. Overall, Americans don’t trust science and scientists any less than they did 40 years ago — around 40 percent of us report “a great deal of confidence” in science, according to the National Science Foundation’s science and engineering indicators. That’s enough to make science the second-most trusted institution in America, after the military. Add in the people who have at least “some confidence” in science, and you get 90 percent of Americans — a group that probably shouldn’t be framed as being at war with science.

Instead, Brossard said, these cases of people believing incorrect things have more to do with factors separate from their trust in science. “Political ideology,” she suggested, “religiosity.” Those nonscientific beliefs then get entangled with how people consume media. For instance, say you have a political belief system that leads you to be skeptical of government-mandated vaccination. If social media then handed you a paper showing those vaccination programs to be hazardous, then that science — a trusted source of information — would ring particularly true, especially when it arrives devoid of context about other studies that show the opposite. Your existing sense of risk would make the paper urgent and would make you more likely to share it. The more you believe in the risk and the more your friends believe it, the more suspicious you’re likely to be of any attempts to downplay that risk. What you get, Brossard said, is a perfect social machine for amplifying an erroneous interpretation of an idea.

This is essentially what happened with Chattha’s paper, and the problem is compounded, scientists said, by the fact that most Americans don’t understand what’s going on when scientists critique each other. No single scientific research paper, no matter how well done, is supposed to serve as absolute proof of anything. Chattha was horrified when advocates of a state voter ID law contacted her in the hope that she would testify as proof of its necessity. You shouldn’t create a law based on one study, she said. “Science is this never-ending process. I don’t think science is about getting the answer right now. It’s about getting closer and closer to the answer.”

In that sense, what has happened with Chattha’s paper in the academic world is an example of how science should function. Chattha and Richman published a paper they said presented evidence for a hypothesis. Ansolabehere, Luks and Schaffner read it, saw flaws and published a critique. Richman is working on a rebuttal to their paper. Over time, this back-and-forth pushes both sides to examine their work, defend it and inch science closer to the capital-T Truth. But there are all kinds of incentives that can interfere with that, including scientists’ need to publish their own, novel work rather than spend time critiquing someone else’s. As a result, the real-world process of science usually doesn’t happen in such a textbook way as it did with Chattha’s paper, Brabham and Brossard said. “This occurrence is rare, and it’s really kind of beautiful,” Brabham said of the exchange. “But nobody [in the public] understands what is happening, so they just see ‘I’m right.’ ‘No, I’m right.’”

And you can see this in the way the media and political partisans have gone on to misinterpret the rebuttal paper Ansolabehere, Luks and Schaffner wrote, just like they misinterpreted Chattha and Richman’s. The other three researchers simply wanted to make it clear that the original paper is flawed by the combination of skewed sample ratio and measurement error, and that it can’t be used to prove that noncitizens are voting.

But that isn’t the same thing as saying illegal voting never happens, Ansolabehere said. The number of fraudulent voters isn’t zero. And he’s frustrated by people using his paper to promote that idea. “We have evidence from pieces of the electoral system that [fraudulent voting] is very small. But we don’t know how small,” he said. Other people have tried to figure this out, and they failed, too. In the bigger context of the scientific process, Chattha’s study is part of an ongoing and still unsuccessful effort to illuminate a dark hole in our knowledge.

 

The Scientists

And Chattha’s paper could still end up being very important — not for what she and Richman published but because of the mistake they made on the way there and what it means for scientists who study voter behavior.

The data set Chattha and Richman used — that 50,000-person survey — was Ansolabehere, Luks and Schaffner’s baby. Their large survey gave Chattha and Richman the power to illuminate small subgroups that would otherwise languish in the unanalyzed darkness. But it also set up a statistical Piano of Doom-style risk. And an entire research field — with the exception of those political scientists who specialize in data methodology — was generally unprepared to deal with that risk, which is becoming more and more relevant as more large data sets make it possible to pull out statistically significant, but also itsy bitsy, subgroups for close inspection. Only when the problem was staring them in the face from the website of The Washington Post were they able to see it clearly.

Remember that when you have a situation like this, where you’re studying a tiny subpopulation within a much larger group, even an extremely small rate of measurement error can alter your results. So it becomes crucial to know the error rate in your data. How often did people accidentally click “noncitizen” when they meant “citizen”? At what rate did noncitizens unintentionally report that they’d voted when they hadn’t?

Nobody knows what the rate of measurement error was for the 2008 and 2010 CCES data Chattha and Richman analyzed. The pair tried to account for it by comparing other kinds of data collected by other surveys of noncitizens — including race, state of residence and how people answered other survey questions about immigration issues — with the noncitizens in the CCES data. Based on that, they decided the measurement error was too small to matter.

But Luks, Schaffner and Ansolabehere found evidence that, in this case, small was still significant. In particular, they noted multiple cases of people who marked themselves as citizens in 2010 but, on the 2012 edition of the survey, marked themselves as noncitizens, and vice versa. Moreover, the error rate we do know exists between the 2010 and 2012 surveys — just 0.1 percent — turned out, by itself, to be enough to account for all the noncitizen voters in Richman and Chattha’s 2010 sample. In other words, there might not have been any noncitizen voters that year. And the actual error rate could be even higher.
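The arithmetic behind that claim is short enough to write out. The 0.1 percent error rate and the 2010 counts are the ones given above; the turnout rates assumed for the misclassified citizens are illustrative guesses of mine, not figures from either paper.

```python
# Back-of-the-envelope version of the rebuttal's point. The 0.1 percent
# response-error rate and the 2010 counts come from the article; the turnout
# assumed for misclassified citizens is an illustrative guess, not a figure
# from either paper.
total_respondents_2010 = 55_400
self_reported_noncitizens = 489
reported_noncitizen_voters = 13

citizens = total_respondents_2010 - self_reported_noncitizens
error_rate = 0.001                               # citizens who ticked "noncitizen" by mistake
misclassified_citizens = citizens * error_rate   # ~55 people

for assumed_turnout in (0.25, 0.50):             # plausible midterm turnout among those citizens
    phantom_voters = misclassified_citizens * assumed_turnout
    print(f"turnout {assumed_turnout:.0%}: ~{phantom_voters:.0f} apparent noncitizen voters")
# Even at 25 percent turnout, misclassification alone yields about 14 phantom
# voters -- more than the 13 noncitizen voters in the 2010 sample.
```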

Ansolabehere, Luks and Schaffner’s paper doesn’t determine the exact rate of measurement error, but it does show that Richman’s assumptions about it were deeply wrong. Their analysis has also demonstrated how easy it is to make those wrong assumptions. So easy, it seems, that political scientists aren’t sure what to do with research that’s flawed in this way. To Schaffner, the answer is relatively simple: Someone, either the journal or a peer reviewer, should have seen what a weird result this was and contacted him or Ansolabehere. Maybe then the article could have been corrected or simply never published. But Harold Clarke, the editor of Electoral Studies, said it wasn’t normal practice to contact the creator of data that a study was based on. To him, Chattha and Richman’s paper was “just a very standard sort of thing.” Controversial, yes, but still worthy of publication.

Today, while he defends his and Chattha’s findings, Richman really does buy his critics’ concern that, in some cases, measurement error can become more powerful than most scientists give it credit for. “Where I come down on this is that measurement error needs to be taken very seriously,” he said in an email. Chattha and Richman’s paper represents one of the first big examples of political scientists dealing with the statistical Piano of Doom in a very public way. And everybody involved agrees that the issue hasn’t gotten enough attention.

Schaffner is certain there are other published papers whose results are marred by the combination of imbalanced sample ratios and measurement error; it’s just that nobody has caught them yet.

He and Ansolabehere are adding warnings to the user guides for the CCES, and they’re writing a paper that they hope will also draw attention to the issue. The realization of how important measurement error can be has shifted Richman’s research focus, as well; he’s now working on a paper about the risks of spurious correlations in analyses of big data. Who knows how long they all would have gone on not noticing how dire this particular mistake could be, Schaffner said, if not for Chattha’s curiosity.

Chattha found some weird statistical results that didn’t match up with her lived experience or with what researchers know about American voting habits and prosecuted cases of voter fraud. She published those results, and they got caught up in a media-driven amplification of fears about the integrity of the electoral process. The research of an immigrant — who has spent the last year realizing, to her dismay, that many of her fellow citizens don’t think of her as a “real” American — was seized on as ammunition by people who want to make sure only “real” Americans vote.

And all of that had to happen to make political scientists aware of how easily their data could fool them.


“Politics ain’t beanbag,” Richman told me. “It’s not always about the truth, and, ideally, the scientific enterprise is about that.” There’s a fundamental disconnect here that means Gulshan Chattha is not going to be the last person to watch helplessly as politicians squeeze and stretch her findings until they take on a shape that’s no longer recognizable. In fact, the researchers I spoke to said that this was almost a natural consequence of the drive to produce science that is relevant to the real world and available to people outside the ivory tower.

But those same people also said that scientists generally aren’t prepared for this eventuality and don’t know how to deal with the fact that scientific research is a baby bird that will, at some point, hop to the edge of the nest and jump out. Or, rather, everyone is ready to see the fledgling soar. They aren’t prepared for the times when it crashes to the pavement instead.

Ansolabehere and Schaffner said they weren’t prepared for their data to inspire other researchers toward investigations that carry a big risk of failure. Meanwhile, Richman told me, he wasn’t prepared for his essay, about a topic most people would consider politically incendiary, to burst into flame. If he had it to do over again, he told me, he probably wouldn’t have written that Washington Post article. And because he wasn’t prepared for the explosion, he couldn’t protect his student.

Writing the paper initially gave Chattha a big high — it was exciting to discover that she could play a part in the scientific community, publishing and sharing ideas with other researchers. After the initial project, she took a detour from her law school plans to get a master’s degree in political science — maybe, she thought at the time, she wanted to be a scientist instead. But everything that came afterward — from the initial reaction to Richman’s essay, to Donald Trump’s fiery speeches about her data — changed her mind. Today, she’s finishing up a law degree.

“I didn’t have any control,” Chattha told me, and her feelings were echoed by her professor and by her critics. Both sides in an academic debate, burdened by the same sense that, somehow, the sharp knife of science shouldn’t have lost a gunfight with politics. But, if there’s anything we can learn from this story, Brossard said, it’s that being a part of the social world and the political world is part of a scientist’s job description — their work doesn’t exist separate from its interpretation. Or, as Chattha put it, “Once something is published, there’s no taking it back. It’s no longer yours.”

UPDATE (May 11, 10:30 p.m.): This article has been updated to include all three authors in all references to “The perils of cherry picking low frequency events in large sample surveys” by Stephen Ansolabehere, Samantha Luks and Brian Schaffner.

CORRECTION (May 12, 1:40 p.m.): A previous version of this article implied that David Earnest is the sole dean of research at Old Dominion. He is one of several.

Interactive graphic by Matthew Conlen and Andrei Scheinkman

Footnotes

  1. Or when we get incorrect information about a small subgroup in a sample by other means.

  2. One common mistake in discussions of Chattha’s paper is the conflation of “noncitizen” with “undocumented immigrant.” People with permanent residency status are noncitizens. So are people who are here on student visas, refugees and people on temporary work visas. The CCES survey doesn’t have details on what kind of noncitizen each purported noncitizen voter was.

  3. By five years after publication, papers in Electoral Studies have, on average, each been referenced by about two other papers. For the top political science journal, the American Journal of Political Science, that number is 5.4.

Maggie Koerth was a senior reporter for FiveThirtyEight.
