
Can Facebook and Twitter put the brakes on the flood of disinformation?


First, the good news. With the US presidential election less than three weeks away, Facebook and Twitter are working overtime to try to stem the flow of misinformation.

Both companies acted quickly on Wednesday against a New York Post article based on information of uncertain provenance that could be highly damaging to the Joe Biden campaign. Facebook said it was slowing the spread of the article on its network, while Twitter blocked it outright.

The problem: The more active the networks get, the greater the outrage they provoke on one side or the other. At a time of extreme partisanship, any action is an open invitation to politicians to wade in with swingeing regulation. Republicans, already baying to strip internet companies of the legal protections that allow them to control what appears on their platforms, have just been handed another piece of red meat.

The large number of interventions and policy changes announced by the networks in recent weeks has been hard to keep track of. It suggests they are on red alert. But coming this close to an election, it inevitably looks piecemeal and reactive.

Earlier in the week, for instance, Mark Zuckerberg, Facebook chief executive, decided that Holocaust denial should have no place on Facebook — reversing a controversial position he struck two years ago. This followed a week in which the platform sought to expunge the QAnon conspiracy theory from its network and extended its limits on US political advertising. Twitter, meanwhile, came up with a raft of adjustments that included forcing users to think harder before retweeting information deemed misleading.

Facebook likes to say it has spent four years learning how to deal with election misinformation. If so, then the last-minute fixes suggest that there is much it is still trying to work out.

To an extent, the reactiveness is inescapable. Social behaviour changes in response to changes in the rules. Outlaw one type of behaviour, and it will mutate into something different.

Also, the nature of the threat to be guarded against is not constant. It was only after President Donald Trump questioned the validity of the election if he lost that the risk of someone preemptively claiming victory became a pressing issue for the networks to deal with.

Putting the brakes on how fast information can spread until its validity can be established also seems a pragmatic way to protect users without heavy-handed censorship. This, though, raises other issues.

One is the lack of transparency. It’s impossible to tell how effective the limits on transmission are, or how much information is affected. Renée DiResta, a researcher at the Stanford Internet Observatory, points out that there is a precedent here: Facebook already publishes a register of political ads, so it could easily do the same with material subject to distribution restrictions, while also sharing more data with independent researchers.

Another problem with acting pre-emptively to slow the spread of material before it goes viral is that it interferes with the fundamental design of the network. It amounts to throwing sand in the gears of a machine built for velocity.

Whether Facebook has adequate incentives to take such a step is open to question. The wave of fake news aimed at undermining Hillary Clinton before the 2016 election was enthusiastically taken up and passed on by many in its audience. But the platform, for its part, claims that angry users return to the service less often, giving it strong incentives to stamp out the kind of false and divisive information that makes voters’ blood boil.

If applied over-aggressively, slowing the spread of information also risks making Facebook less effective as a real-time information network, says Ms DiResta. The company has failed to grapple with trade-offs like this in the past, but now seems intent on finding a better balance.

As for the risk of inviting a political and regulatory backlash: regulation, at this point, looks both inevitable and necessary. Enforcing broader policies now to keep misinformation in check gives Facebook its best chance of demonstrating which approaches work best, and influencing the shape of future regulation.

If the fractious 2020 election doesn’t descend into social media chaos, it could become a valuable proving ground. But that’s still a big “if”.

Richard.Waters@ft.com




