
The Silicon Valley finale’s critical failure.


Richard Hendricks (Thomas Middleditch) stands in front of a whiteboard covered in technical language.

Silicon Valley’s finale failed in an important way.

Eddy Chen/HBO

This article contains spoilers for the series finale of Silicon Valley.

I should start, perhaps, by saying that I think Silicon Valley gets a lot of things right about Silicon Valley—most especially the tech sector’s propensity to believe that it is an unequivocal force for good, that its disruptions are always desirable, that it has found fast and new solutions to old and entrenched problems. I enjoyed the final season, which concluded with the series finale on Sunday evening, particularly the writing team’s decision to include a scene with a tech executive conducting a meeting on Rollerblades (a reference previously dismissed as “too hacky,” according to the New Yorker).

And yet, I’ll admit to being a little disappointed that the makers of the Pied Piper software decided that the only way to save the world from the stupendously powerful algorithm they have designed is to destroy it forever. This decision follows an extended, and admittedly entertaining, sequence poking fun at the notion of ethical technology when ousted tech CEO Gavin Belson, played by Matt Ross, develops a largely meaningless code of tech ethics—or, as he calls it, “tethics”—and demands tech companies sign on to it, until the code is revealed to be plagiarized jargon.

Let’s be clear: Discussions about the ethical design of technology and artificial intelligence contain a lot of empty rhetoric. And who could appreciate more than I—a professor of cybersecurity policy—that after Pied Piper founder Richard Hendricks (played by Thomas Middleditch) washes out of the tech world, he ends up becoming Stanford’s Gavin Belson professor of ethics in technology?

But in dismissing tech ethics and the related academic field as a big, useless joke, populated by failed tech execs like Belson and Hendricks, the show makes its most cynical assessment—that there is no ethical way to develop technology and that the only solution to the technological challenges facing us today is to destroy technologies like the Pied Piper algorithm.

“We built a monster. We need to kill it,” Bertram Gilfoyle (Martin Starr) says in the series finale of the algorithm they’ve created, which is so powerful it apparently threatens the security of all existing encryption. After discussing Robert Oppenheimer’s regrets about helping develop the atomic bomb, the co-founders ultimately decide—in what I think the show intends for us to interpret as a heroic move—to destroy their technology before it can be unleashed on the world and earn them billions of dollars.

But if someone figured out how to break the strongest existing encryption algorithms, would we actually want them to delete their code in the hopes that no one else would ever figure it out? I’d argue not: We’d want them to think about how to disclose their findings responsibly and gradually to the relevant stakeholders. We’d want them to work with governments and industry partners to ensure that when their technology was released, there were sufficient safeguards in place to protect essential services and communications. We’d want to use their work to advance the state of encryption and online security.

“But that doesn’t sound like fun television!” I hear you saying. Fair enough. I’m not actually interested in trying to challenge the particular plot points of Silicon Valley’s finale or argue about the decisions that individual characters in the show make. But, on some level, I do worry about part of the overarching message of the show’s final season—that tech, especially sophisticated machine learning algorithms, can be an existential threat. Just as it would be a mistake to write off artificial intelligence as an unmitigated force for good, or a ridiculous waste of time and money, so too, I think, would it be a mistake to cast it as an atomic bomb or Frankenstein-level disaster that must be destroyed.

Matthew Dessem argues persuasively that the show’s take on the tech sector has gotten progressively darker and more pessimistic since its first season, in line with public opinion about technology. But I find the ending a bit too negative when it comes to thinking about ways forward for improving how we develop and deploy new technologies. I, for one, would still like to believe in the power of “tethics.” I like to believe that it’s possible, with time and thought and trial and error, to move past the jargon and feel-good, high-level language so generic that it could be stolen from any (or every!) company’s mission statement and develop actual, meaningful ethical guidelines about how technology should be designed and used. I’d even like to believe (and again, I have a vested interest here) that professors of ethics in technology might be a key piece of helping us get to that point.

Silicon Valley is absolutely right to lambaste how toothless and feel-good much of the rhetoric around tech ethics is, and how quickly vague, jargon-filled principles can come to serve as a stand-in for any real change or progress in the tech world. But that’s not a reason to dismiss principles that help us design better, more beneficial technologies as so much nonsense; it’s a reason to think seriously about how we can make them concrete, real, and enforceable.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.





