
Facebook joins Silicon Valley’s rush to appear responsible


In 2019, the struggles of Mark Zuckerberg were the most visible sign of Silicon Valley’s attempts to clean up its act.

For the first time, Facebook shifted from responding belatedly to the backlash against its negative impact on the world to trying to address those problems proactively.

And early in the year, the Facebook chief executive made a direct and surprising pitch to Congress: please regulate internet platforms such as ours.

That matched a sea change across Silicon Valley, as big advertising-driven companies, from Google to Twitter, realised that regulation was on the horizon and began adjusting their processes and policies to persuade politicians and watchdogs not to be too heavy-handed.

“Silicon Valley is in a rush to appear responsible in the face of this cognitive dissonance; where they thought they were fixing the world, instead they were breaking the world,” said Siva Vaidhyanathan, media studies professor at the University of Virginia.

[Column chart: advertising revenue ($ per user), showing Facebook’s strong sales]

Nevertheless, by the end of the year, Mr Zuckerberg in particular had done little to persuade sceptics that his business, which posted record quarterly revenues of $17.6bn in October, has genuine aspirations to build a better society for its 2.4bn monthly active users. 

Indeed, the company’s harshest critics say Facebook’s changes have been merely “cosmetic” — ensuring that nothing jeopardises a business model that they say is based on hoovering up swaths of user data, allowing brands to narrowly target their products and promoting divisive content in the battle for users’ attention.

“Facebook’s actions . . . have been carefully crafted to address visible symptoms of very specific events in the past as an alternative to addressing the underlying causes of systemic problems,” said Roger McNamee, a former adviser to Mr Zuckerberg who has become a vocal critic of Silicon Valley.

“[But the company] has paid lip service to reform, while doing everything possible to protect a business model that benefits from hate speech, disinformation and conspiracy theories.”

So is Facebook — and by extension Silicon Valley — really serious about changing for the better?

The aftermath of Cambridge Analytica

Experts say the shift away from what some have described as Facebook’s historic “growth at all costs” mentality has been slow and stuttering. 

But in 2018, the Cambridge Analytica scandal — in which a UK data company was accused of improperly accessing Facebook user information — compelled a change of tack.

It shone a light on some of the particularities of the ad-driven business model of Facebook and other social media groups: the offer of microtargeting to marketers, allowing them to home in on small groups of users, and the use of algorithms that promote extreme and provocative content because it attracts attention.

Meanwhile, clear evidence of Russian interference in the 2016 US election — once dubbed a “crazy idea” by Mr Zuckerberg — signalled that there were gaps in Facebook’s oversight of its content.

Come 2019, the company began to do more to actively tackle concerns about its impact on the world, as US and EU regulators increasingly began to circle. As part of efforts to strip out toxic content and disinformation in particular, the group has expanded its safety and security teams — to 35,000 people today, including contractors — and begun publishing details of its moderation activities, updating its policies and employing fact-checkers. 

Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, described these as “constructive changes”, highlighting some evidence of “salutary effect”. But overall, “the picture is mixed”, he said.

[Line chart: share price ($), showing Facebook’s performance]

Data published by the company suggest that it has become more effective at curbing harmful content: 80 per cent of posts flagged as possible “hate speech”, for example, are now assessed by automated systems and human moderators before users report them, up from 53 per cent a year earlier.

Nonetheless, the volume of problematic posts still appears to be rising. Facebook took action on 7m pieces of hate speech in the latest quarter, more than double the 2.9m in the same period a year earlier. Meanwhile, the 5.4bn fake accounts Facebook has taken down in 2019 amount to more than twice the number of monthly active users on the entire platform.

“What we have is an elaborate PR effort. It doesn’t mean it’s insincere — I’m sure Mr Zuckerberg would want to snap his fingers and eliminate all white supremacy from the platform. [But] he has built a system that is too big to govern,” said Prof Vaidhyanathan.

Many note that the advances have also come at a cost: Facebook this year faced criticism over the working conditions of its content moderators and an apparent lack of concern for the job’s toll on their mental health.


Facebook has said that the limitations of the artificial intelligence often used for content moderation are a barrier to swifter action. But others in the industry say sophisticated technology is available and express surprise Facebook does not find it easier to identify problematic content.

Last month, the US Democratic National Committee wrote to chief operating officer Sheryl Sandberg, in a letter seen by the Financial Times, urging the company to “dedicate more resources to detect inauthentic behaviour” after it found its own evidence of “domestic actors manipulating the online discourse”.

Some critics told the FT they believe Facebook has been knowingly neglectful. Several social media monitoring platforms and researchers said that when they had raised problems about content, Facebook had failed to take action or had done so only if the issue had gained public attention.

According to Gretchen Peters, founder of the Alliance to Counter Crime Online, several members of her group of academics had been given “this sort of snotty brush off” by the company upon raising examples of apparent criminality.

“[Facebook] haven’t done anything even near meaningful enough to change the way they moderate their platform,” she said. “I snort at the notion that they are getting better.”

The future may look more regulated

Facebook’s biggest mis-step this year, according to critics, and the one that made it stand out most from the rest of the social media pack, was on political advertising.

Several months ago, Mr Zuckerberg made the controversial decision not to fact-check adverts placed by politicians and campaign groups. He justified the move by casting Facebook as a bastion of free speech that should not censor politics.

After a public backlash, other groups responded, with Twitter banning political ads, Snap saying it would fact-check them and Google severely limiting its microtargeting capabilities.

For many, Facebook’s attitude represents an unwillingness to disrupt the way in which it makes money.


There is further evidence of this. A long-awaited privacy tool — to allow consumers more control over how and whether their data is gathered by Facebook — did not allow users to delete their data entirely, for example.

At the start of 2019, Facebook said it would link staff bonuses in the first half of the year to new criteria including progress on “major social issues”, building tools to “improve people’s lives” and communicating “more transparently” about the business. Asked by the Financial Times for an update on these goals, it would not give any comment.


In certain areas, however, the group is making changes to improve wellbeing that are likely to drive away some users and could hurt revenues. In the case of Instagram, which Facebook owns, it has begun to test removing “likes” and using pop-ups to prompt users to think twice before posting certain sorts of content.

As for the toughest action called for by its critics, that Facebook do away with microtargeting and the algorithmic amplification that results in content going viral, it is likely that only regulators could force through such fundamental changes.

“[Regulation that hits their core business model] is too awful for them to think about . . . so they are pushing for self-regulation,” said Prof Vaidhyanathan. “But self-regulation is an oxymoron. That’s why depending on Facebook to regulate Facebook is mad.”
