System Error by Rob Reich, Mehran Sahami, and Jeremy M. Weinstein


Technologists have no unique skill in governing, weighing competing values, or assessing evidence. Their expertise is in designing and building technology. What they bring to expert rule is actually a set of values masquerading as expertise—values that emerge from the marriage of the optimization mindset and the profit motive.

Like a famine, the effects of technology on society are a man-made disaster: we create the technologies, we set the rules, and what happens is ultimately the result of our collective choices.

Yeah, but what if the choices are not being made collectively?

What’s the bottom line on the bottom line? The digital revolution has made many things in our lives better, but the changes have come at considerable cost. There have been plenty of winners from the digitization of content, the spread of the internet, the growth of wireless communication, and the rise of AI. But there have been battlefields full of casualties as well. Unlike at actual battlefields, like Gettysburg, many of the casualties of the digital revolution never enlisted, and never had a chance to vote for or against those waging a war that has been going on for decades. We, as citizens, do not get a say in how that war is waged, what goals are targeted, or how its spoils and costs are distributed.

Reich, Sahami, and Weinstein – image from Stanford University

In 2018, the authors of System Error, all professors at Stanford, developed an ambitious course on technology, policy, and ethics. Many technical and engineering programs require that ethics be taught in order to gain accreditation, but usually those are stand-alone classes, taught by non-techies. Reich, Sahami, and Weinstein wanted something more meaningful, more integral to the education of budding computer scientists than a tick-the-box requirement. They wanted the ethics of programming to become a full part of their students’ experience at Stanford. That course was the seed of what became this book.

They look at the unintended consequences of technological innovation, focusing on the notions of optimization and agency. The worship of optimization über alles is almost a religion in Silicon Valley. Faster, cleaner, more efficient, cheaper, lighter. But what is it that is being optimized? To what purpose? At what cost, and to whom? Decided on by whom?

…there are times when inefficiency is preferable: putting speed bumps or speed limits onto roads near schools in order to protect children; encouraging juries to take ample time to deliberate before rendering a verdict; having the media hold off on calling an election until all the polls have closed…Everything depends on the goal or end result. The real worry is that giving priority to optimization can lead to focusing more on the methods than on the goals in question.

Often, blind allegiance to the golden calf of optimization yields predictable results. One genius decided to optimize eating, so that people could spend more time at work, I guess. He came up with a product that delivered a range of needed nutrients in a quickly digestible form, and expected to conquer the world. This laser focus managed to ignore vast swaths of human experience. Eating is not just about consuming needed nutrients. There are social aspects to eating that somehow escaped the guy’s notice; we do not all prefer to consume product at our desks, alone. Nor did he seem to notice that eating should be pleasurable. This clueless individual used soybeans and lentils as the core ingredients of his concoction. You can guess what he named it. Needless to say, given the cultural associations with the name, it was not exactly a marketing triumph. And yes, they knew, and did it anyway.

There are many less entertaining examples to be found in the world. How about a social media giant programming its app to encourage the spread of the most controversial opinions, regardless of their basis in fact? The outcome is actual physical damage in the world, people dead as a result, democracy itself in jeopardy. And yet there is no meaningful requirement that programmers adhere to a code of ethics. Optimization, in corporate America, is optimization for profit. Everything else is secondary, and if this singular focus produces damage in the world, well, that, as they see it, is not their problem.

How about optimization that relies on faulty (and self-serving) definitions? Do the things we measure actually capture the information we want? For example, there were some who measured happiness with their product by counting the number of minutes users spent on it. Was that really happiness being measured, or addictiveness?
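To make that concrete, here is a toy sketch of my own (not from the book; the classes, numbers, and users are all invented) showing how a minutes-on-app metric cannot tell a happy user from a miserable one:

```python
# A toy illustration (mine, not the authors'): a metric that counts minutes
# of use cannot distinguish delight from compulsion.

from dataclasses import dataclass

@dataclass
class Session:
    minutes: float
    self_reported_mood: int  # 1 (miserable) .. 5 (happy); rarely collected

def engagement_score(sessions: list[Session]) -> float:
    """The 'happiness' proxy most products actually optimize: total minutes."""
    return sum(s.minutes for s in sessions)

happy_user = [Session(10, 5), Session(12, 5)]      # brief, satisfying visits
doomscroller = [Session(90, 2), Session(120, 1)]   # long, miserable visits

print(engagement_score(happy_user))    # 22.0
print(engagement_score(doomscroller))  # 210.0 -- the metric crowns the doomscroller
```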

Algorithms are notorious for picking up the biases of their designers. In an example of a business using testing smartly, a major company sought to develop an algorithm it could use to evaluate employment candidates. They gave it a pretty good shot, too, making revision after revision. But no matter how they massaged the model, the results were still hugely sexist. Thankfully, they scrapped it and returned to a less automated system. One wonders, though, how many algorithmic projects were implemented anyway, when those in charge opted to ignore the downside results.
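For the technically curious, here is a minimal, entirely synthetic sketch (mine, not the company’s actual system) of why no amount of massaging fixes such a model: the bias lives in the historical hiring decisions it is trained on.

```python
# A synthetic sketch (mine, not the real system): when historical hiring
# decisions were biased, any model fit to them learns to penalize a proxy
# feature for gender (e.g., "captain of the women's chess club").

import random
random.seed(0)

def make_resume():
    qualified = random.random() < 0.5
    womens_club = random.random() < 0.5   # proxy feature, irrelevant to the job
    # Biased historical labels: qualified candidates with the proxy feature
    # were hired far less often.
    hired = qualified and random.random() < (0.4 if womens_club else 0.9)
    return qualified, womens_club, hired

data = [make_resume() for _ in range(10_000)]

def hire_rate(flag: bool) -> float:
    rows = [hired for _, club, hired in data if club == flag]
    return sum(rows) / len(rows)

print(f"P(hired | proxy present) = {hire_rate(True):.2f}")   # ~0.20
print(f"P(hired | proxy absent)  = {hire_rate(False):.2f}")  # ~0.45
# Any model trained on these labels will down-rank the proxy feature,
# reproducing the bias no matter how the model itself is revised.
```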

So, what is to be done? There are a few layers here. Certainly, a professional code of ethics is called for. Other professions, doctors, lawyers, and engineers, for example, have them and have not collapsed into non-existence. Why not programmers? At present there is no single, recognized organization, like the AMA, that could win universal acceptance for such a code. In the meantime, organizations that accredit university computer science programs could demand more robust inclusion of ethical material across the course-work.

But the only real way we as a society have to hold companies accountable, for harm already inflicted and for the potential harm of new products, is via regulation. As individuals, we have virtually no power to influence major corporations. It is only when we join our voices together through democratic processes that there is any hope of reining in the worst excesses of the tech world, or of partnering with technology companies on solutions to real-world problems. It is one thing for Facebook to set up a panel to review the ethics of this or that element of its offerings. But if the CEO can simply ignore the group’s findings, such panels are meaningless. We have all seen how effective review boards controlled by police departments have been. Self-regulation rarely works.

There need not be an oppositional relationship between tech corporations and government, despite the howling of CEOs that they will melt into puddles should the wet of regulation ever touch their precious selves. What a world! What a world! A model the authors cite is transportation. There needs to be some entity responsible for roads: standardizing them, maintaining them, seeing that rules of the road are established and enforced. It is the role of government to make sure the space is safe for everyone. As our annual death toll on the roads attests, one can only aim for perfection without ever really expecting to achieve it. But overall, it is a system in which the government has seen to the creation and maintenance of a relatively safe communal space. We should not leave to the CEOs of Facebook and Twitter decisions about how much human and civic roadkill is acceptable on the Information Highway.

The authors offer some suggestions about what might be done. One I liked was the resurrection of the Congressional Office of Technology Assessment. We do not expect our elected representatives to be techies. But we should not put them into a position of having to rely on lobbyists for technical expertise on subjects under legislative consideration. The OTA provided that objective expertise for many years before Republicans killed it. This is doable and desirable. Another interesting notion:

“Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things.
If a robot comes in to do the same thing, you’d think we’d tax the robot at a similar level.”
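The arithmetic behind the idea is simple. A back-of-envelope sketch (the rates below are my own illustrative assumptions, not figures from the book or the quote):

```python
# Back-of-envelope sketch of a "robot tax" (rates are illustrative assumptions).

WAGE = 50_000        # value of the work performed per year, per the quote
INCOME_TAX = 0.22    # hypothetical effective income tax rate
PAYROLL_TAX = 0.153  # hypothetical combined Social Security/Medicare rate

human_tax_revenue = WAGE * (INCOME_TAX + PAYROLL_TAX)
robot_tax_revenue = 0  # today: the same output, automated, yields neither tax

print(f"Revenue from human worker: ${human_tax_revenue:,.0f}")  # $18,650
print(f"Revenue from robot:        ${robot_tax_revenue:,.0f}")  # $0
# The proposal: levy an equivalent automation tax so the revenue does not
# vanish when the job does.
```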

Some of their advice, while not necessarily wrong, seems either bromidic or unlikely to have any chance of happening, a typical failing of books on social policy.

…democracies, which welcome a clash of competing interests and permit the revisiting and revising of questions of policy, will respond by updating rules when it is obvious that current conditions produce harm…

Have the authors ever actually visited America outside the walls of Stanford? In America, those being harmed are blamed for the damage, not the evil-doers who are actually foisting it on them.

What System Error will give you is a pretty good scan of the issues pitting tech against the rest of us, and of how to think about them. It offers a look at some of the ways in which the problems identified here might be addressed. Some entail government regulation; many do not. You will find some guidance on what questions to ask when algorithmic systems are being proposed, challenged, or implemented. And you will get some historical context on how the major tech changes of the past impacted the wider society, and how they were wrangled.

The book does an excellent job of pointing out many of the ethical problems with the impact of high tech on our individual agency and on our democracy. It correctly points out that decisions with global import currently rest in the hands of the CEOs of large corporations, not subject to limitation by democratic nations. Consider the single issue of allowing lies to spread across social media, whether by enemies foreign or domestic, dark-minded individuals, profit-seekers, or lunatics. That needs to change. If reasonable limitations can be devised and implemented, then there may be hope for a brighter day ahead; otherwise all may be lost, and our nation will descend into a Babel of screaming hatreds and kinetic carnage.

For Facebook, with more than 2.8 billion active users, Mark Zuckerberg is the effective governor of the informational environment of a population nearly double the size of China, the largest country in the world.

Review posted – January 28, 2022

Publication date – September 21, 2021

This review has been cross-posted on GoodReads

=======================================EXTRA STUFF

Links to Rob Reich’s (pronounced “Reesh”) Stanford profile and Twitter pages
Reich is a professor of political science at Stanford, co-director of Stanford’s McCoy Family Center for Ethics in Society, and associate director of Stanford’s Institute for Human-Centered Artificial Intelligence

Links to Mehran Sahami’s Stanford profile and Twitter pages
Sahami is a Stanford professor in the School of Engineering, and professor and Associate Chair for Education in the Computer Science Department. Before Stanford, he was a senior research scientist at Google. He conducts research in computer science education, AI, and ethics.

Jeremy M. Weinstein’s Stanford profile

JEREMY M. WEINSTEIN went to Washington with President Obama in 2009. A key staffer in the White House, he foresaw how new technologies might remake the relationship between governments and citizens, and launched Obama’s Open Government Partnership. When Samantha Power was appointed US Ambassador to the United Nations, she brought Jeremy to New York, first as her chief of staff and then as her deputy. He returned to Stanford in 2015 as a professor of political science, where he now leads Stanford Impact Labs.

Interviews
—–Computer History Museum – CHM Live | System Error: Rebooting Our Tech Future – with Marietje Schaake – 1:30:22
This one is outstanding, and in depth
—–Politics and Prose – Rob Reich, Mehran Sahami & Jeremy Weinstein, SYSTEM ERROR, with Julián Castro and Bradley Graham – video – 1:02:51

Items of Interest
—–Washington Post – Former Google scientist says the computers that run our lives exploit us — and he has a way to stop them
—–The Nation – Fixing Tech’s Ethics Problem Starts in the Classroom By Stephanie Wykstra
—–NY Times – Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It
—–Brookings Institution – It Is Time to Restore the US Office of Technology Assessment by Darrell M. West

Makes Me Think Of
—–Automating Inequality by Virginia Eubanks
—–Chaos Monkeys by Antonio Garcia Martinez
—–Machines of Loving Grace by John Markoff


Filed under AI, Artificial Intelligence, computers, Non-fiction, programming, Public policy
