How Facebook Could Rig the Election

Facebook rigs the election
Jason Hoffman/Thrillist

In the latest season of House of Cards, Frank Underwood discovers the Republican presidential nominee he's running against is using a lot more than his chiseled good looks to charm his way to the top of the polls. His real secret to success -- spoiler alert, you know the drill -- is an inside connection to a fictional, Google-esque search engine. The hilariously named Pollyhop has been handing over data on its users' search histories, allowing him to better target potential voters and mold himself into the most desirable candidate. It's far from the most over-the-top or blood-soaked scene in the series, but it's disturbing nonetheless for one huge reason: it's entirely plausible.

What's stopping a real-life Silicon Valley giant -- say, Facebook -- from swinging an election for a particular candidate? Let's say Mark Zuckerberg woke up tomorrow and decided to prevent Donald Trump from landing in the White House. With an estimated 1 billion users logging into Facebook every day to scroll through their News Feeds, it could be as simple as tweaking the algorithm to display content that's favorable to Hillary Clinton, while stoking disdain for Trump or filtering out his campaign coverage altogether. Or it could do something with more direct impact, like triggering a button that skews voter turnout. Well, that button already exists.

The media's sway over public opinion, especially in the heat of a presidential election, is nothing new. But Facebook is a monolith unlike any that's come before it. Its power extends beyond merely influencing what we think or how we feel (which, by the way, it definitely does). Facebook has a proven, direct, measurable effect on what we do. In fact, if it really wanted to tilt the election, it could just strategically manipulate who turns out to vote and where -- something it's 100% capable of.

voters box
Jason Hoffman/Thrillist

There are no laws to stop Facebook from influencing how people vote...

Earlier this month, Zuckerberg made headlines when he not-so-subtly denounced Trump at the Facebook developer conference. Adding fuel to the fire, Gizmodo revealed that employees within Facebook were actively questioning whether it was their responsibility to help block a Trump nomination. If indeed Facebook decided to wield its power in such a way, would it get away with it? Legally speaking, yes.

"Critics could certainly launch arguments and bring claims, but the constitutional protections afforded to a company like Facebook are significant," said James Goodnow, an attorney and legal commentator at Fennemore Craig, P.C. He told us that even if the company were accused of using its influence to affect voting, building a case against it would be tricky. Facebook could simply argue that it was exercising its right to participate in the process.

Which is actually kinda how it responded to Gizmodo's report. Facebook laid out its neutrality in no uncertain terms: "Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community. We encourage any and all candidates, groups, and voters to use our platform to share their views on the election and debate the issues. We as a company are neutral -- we have not and will not use our products in a way that attempts to influence how people vote." 
 

... and it already directly influences whether people vote

On Election Day, November 2008, Facebook rolled out its "I Voted" widget. It allowed users to proudly let their friends know that they'd hit the polls, and Facebook's activated some variation of this tool every Election Day since. All in the name of civic participation, right? But in at least two instances (that we know of), the widgets were deployed as special research experiments.

In 2010, Facebook secretly tampered with 61 million random users' voting widgets. Its findings, published two years later in the journal Nature, revealed that 20% of users who saw that their friends had voted also clicked the "I Voted" button, compared to just 18% of those who didn't see any "I Voted" messages from friends. The most striking discovery? That difference actually translated into real-world votes -- by consulting voting records after the election, the study's authors determined that the gentle prodding of users increased turnout by 340,000, or 0.14% of the voting population that year. That may sound like an insignificant number, but just remember that a measly 537 votes in Florida are the reason there's no Al Gore portrait hanging in the White House right now.
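For a sense of scale, here's a quick back-of-envelope check you can run yourself. It's a sketch that uses only the figures quoted above (340,000 extra votes, 0.14% of the voting population, the 537-vote Florida margin), not the study's underlying data.

```python
# Rough sanity check of the numbers cited in this article, nothing more.

extra_votes = 340_000
share_of_electorate = 0.0014      # 0.14%, as stated above

# Working backward from the article's own percentage gives the implied
# size of that year's voting population.
implied_electorate = extra_votes / share_of_electorate
print(f"Implied voting population: ~{implied_electorate:,.0f}")   # roughly 243 million

# And against the margin that decided the 2000 election in Florida:
florida_margin_2000 = 537
print(f"340,000 extra votes is ~{extra_votes / florida_margin_2000:,.0f}x the 2000 Florida margin")
```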

"Social-pressure mail -- mailing people to let them know that their voter history is a public record -- has repeatedly been shown to reliably increase turnout," said Mac Zilber, a campaign consultant for Shallman Communications who’s worked on over a hundred different congressional and state legislative campaigns. "The problem is that it also pretty reliably creates backlash, so most campaigns don't do it." 

So Facebook has its finger on a trigger that's proven to directly influence voter turnout. All it lacks is a clear motive. Campaigns would love to influence turnout, but want to avoid the bad PR that comes with pressure tactics and voter shaming. Enter Frank Underwood.

facebook campaign
Jason Hoffman/Thrillist

What's stopping a candidate from working with Facebook behind the scenes? 

Campaigns already drop huge dollars on Facebook advertising. Why not work directly with Facebook to "sponsor" the "I Voted" feature, and strategically target certain demographics while they're at it? "There's no obvious legal reason why [a party or campaign] couldn't purchase such a feature -- if it made sure it is accompanied by the appropriate disclosure of sponsorship information," Goodnow told us. They'd have to be transparent, but if Facebook wanted to push a certain candidate, there's no legal reason an "I Voted for Hillary" badge or a Trump-sponsored News Feed would be off the table.

But it wouldn't be tough for Facebook to secretly partner with a campaign, Pollyhop-style. Look at the other research experiment it conducted in 2012, when it activated the "I Voted" button in various styles and locations on the page to test which design encouraged more interaction on Election Day. According to Mother Jones, many users reported they only saw the message late in the day, or not at all -- something Facebook's VP of global business communications chalked up to "software glitches." If you were going to secretly deploy a strategic, one-day-only voter turnout system, a software glitch would be a mighty convenient cover story, especially if that glitch happened in communities where voting skewed right or left.
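For the curious, this is roughly what that kind of design test looks like in practice: a minimal, hypothetical sketch of an A/B test that buckets users into widget variants and compares click-through rates. The variant names and rates here are invented for illustration; this is not Facebook's actual system.

```python
import hashlib
import random
from collections import defaultdict

# Hypothetical widget variants being tested against each other.
VARIANTS = ["banner_top", "banner_feed", "no_widget"]

def assign_variant(user_id: str) -> str:
    """Deterministically map a user to a variant by hashing their ID."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def run_experiment(n_users: int = 100_000) -> None:
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    # Simulated behavior: assumed click rates per variant, purely illustrative.
    base_rates = {"banner_top": 0.20, "banner_feed": 0.18, "no_widget": 0.0}
    for i in range(n_users):
        variant = assign_variant(f"user-{i}")
        impressions[variant] += 1
        if random.random() < base_rates[variant]:
            clicks[variant] += 1
    for variant in VARIANTS:
        rate = clicks[variant] / impressions[variant]
        print(f"{variant:12s} {rate:.1%} click-through ({impressions[variant]} users)")

if __name__ == "__main__":
    run_experiment()
```

Hashing the user ID (rather than rolling the dice on every page load) is a common design choice because it keeps each user in the same bucket across sessions, which is what makes it possible to compare groups cleanly afterward.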

Facebook and the American flag
Jennifer Bui/Thrillist

Is Facebook's "I Voted" research experiment ethical?

All speculation and legality aside, how ethical is it to manipulate the voting public on such a grand scale, and with such incredibly high stakes, even for innocuous research purposes? The freedom to test user engagement on new layouts and features is obviously critical to Facebook's success. But where do you draw the line between internal A/B testing and collecting information on users' voting habits, or secretly manipulating their moods (which Facebook did, quite controversially, back in 2012)?

The sheer amount of data Facebook has on us is a total game-changer for social scientists. But unlike psychologists or biomedical researchers, whose studies must be approved by a review board, Facebook conducts its research at its own discretion. It's also crucial to recognize that the tech industry isn't trained to think about ethics the way, say, psychologists are. Academics are conditioned to ask themselves about the implications of their research because of egregious ethical violations made by their forebears. Which raises the question: will big data companies define the line before they cross it?

"There's a lack of training to really question 'what are the implications of what I’m building,'" says Bonnie Tijerina, a researcher at Data & Society dedicated to tackling the social, cultural, and ethical issues around big data. "The Facebook [mood manipulation study] was big in getting people to stop and think, 'Oh, we really do need to figure out how we handle this.'"

It's entirely possible that Facebook has a robust, thorough system in place to ensure it doesn't overstep its bounds -- but we wouldn't know. The company and its data science team are notoriously hush-hush.

Political agenda or not, Facebook wields the power to affect an election -- even unintentionally -- simply by conducting voter turnout experiments at a particular moment in time. "Is it ethical to be experimenting on individuals, especially when you can show it has that effect, while these political events are taking place?" said Robyn Caplan, another researcher at Data & Society. "That there was a study that took place, which none of the users were aware of, that had a direct effect on both online and offline behavior, is incredibly problematic. When it comes to this stuff, there is no such thing as a politically neutral action."


Joe McGauley is a senior writer for Thrillist and pretty sure Pollyhop is a better name than Bing.