
System Error by Rob Reich, Mehran Sahami, and Jeremy M. Weinstein

book cover

Technologists have no unique skill in governing, weighing competing values, or assessing evidence. Their expertise is in designing and building technology. What they bring to expert rule is actually a set of values masquerading as expertise—values that emerge from the marriage of the optimization mindset and the profit motive.

Like a famine, the effects of technology on society are a man-made disaster: we create the technologies, we set the rules, and what happens is ultimately the result of our collective choices.

Yeah, but what if the choices are not being made collectively?

What’s the bottom line on the bottom line? The digital revolution has made many things in our lives better, but the changes have come at considerable cost. There have been plenty of winners from the digitization of content, the spread of the internet, the growth of wireless communication, and the growth of AI. But there have been battlefields full of casualties as well. Unlike actual battlefields, like those at Gettysburg, many of the casualties of the digital revolution did not enlist, and never got a vote for or against those waging a war that has been going on for decades. We, citizens, get no say in how that war is waged, what goals are targeted, or how the spoils and the costs of that war are distributed.

Reich, Sahami, and Weinstein – image from Stanford University

In 2018, the authors of System Error, all professors at Stanford, developed a course on Technology, Policy, and Ethics. Many technical and engineering programs require that ethics be taught in order to gain accreditation. But usually those are stand-alone classes, taught by non-techies. Reich, Sahami, and Weinstein wanted something more meaningful, more a part of the education of budding computer scientists than a ticking-off-the-box required course. They wanted the teaching of the ethics of programming to become a full part of their students’ experience at Stanford. That course was the source of what became this book.

They look at the unintended consequences of technological innovation, focusing on the notions of optimization and agency. It is almost a religion in Silicon Valley, the worship of optimization uber alles. Faster, cleaner, more efficient, cheaper, lighter. But what is it that is being optimized? To what purpose? At what cost, to whom? Decided on by whom?

…there are times when inefficiency is preferable: putting speed bumps or speed limits onto roads near schools in order to protect children; encouraging juries to take ample time to deliberate before rendering a verdict; having the media hold off on calling an election until all the polls have closed…Everything depends on the goal or end result. The real worry is that giving priority to optimization can lead to focusing more on the methods than on the goals in question.

Often, blind allegiance to the golden calf of optimization yields predictable results. One genius decided to optimize eating, so that people could spend more time at work, I guess. He came up with a product that delivered a range of needed nutrients, in a quickly digestible form, and expected to conquer the world. This laser focus managed to ignore vast swaths of human experience. Eating is not just about consuming needed nutrients. There are social aspects to eating that somehow escaped the guy’s notice. We do not all prefer to consume product at our desks, alone. Nor did he account for the fact that eating should be pleasurable. This clueless individual used soy beans and lentils as the core ingredients of his concoction. You can guess what he named it. Needless to say, it was not exactly a marketing triumph, given the cultural associations with the name. And yes, they knew, and did it anyway.

There are many less entertaining examples to be found in the world. How about a social media giant programming its app to encourage the spread of the most controversial opinions, regardless of their basis in fact? The outcome is actual physical damage in the world, people dead as a result, democracy itself in jeopardy. And yet, there is no meaningful requirement that programmers adhere to a code of ethics. Optimization, in corporate America, means optimizing profits. Everything else is secondary, and if this singular focus produces negative results in the world, well, that is not their problem.

How about optimization that relies on faulty (and self-serving) definitions? Do the things we measure actually capture the information we want? For example, there were some who measured happiness with their product by counting the number of minutes users spent on it. Was that really happiness being measured, or addictiveness?

Algorithms are notorious for picking up the biases of their designers. In an example of a business using testing smartly, a major company sought to develop an algorithm it could use to evaluate employment candidates. They gave it a pretty good shot, too, making revision after revision. But no matter how they massaged the model, the results were still hugely sexist. Thankfully, they scrapped it and returned to a less automated system. One wonders, though, how many algorithmic projects were implemented when those in charge opted to ignore the downside results.

So, what is to be done? There are a few layers here. Certainly, a professional code of ethics is called for. Other professions, doctors, lawyers, and engineers, for example, have them and have not collapsed into non-existence. Why not programmers? At present there is no single, recognized organization, like the AMA, that could win universal adherence to such a code. Organizations that accredit university computer science programs could demand more robust inclusion of ethical material across the coursework.

But the only real way we as a society have to hold companies accountable for the harm already inflicted, and the potential harm new products might cause, is via regulation. As individuals, we have virtually no power to influence major corporations. It is only when we join our voices together through democratic processes that there is any hope of reining in the worst excesses of the tech world, or working with technology companies to come to workable solutions to real-world problems. It is one thing for Facebook to set up a panel to review the ethics of this or that element of its offerings. But if the CEO can simply ignore the group’s findings, such panels are meaningless. I think we have all seen how effective review boards controlled by police departments have been. Self-regulation rarely works.

There need not be an oppositional relationship between tech corporations and government, despite the howling by CEOs that they will melt into puddles should the wet of regulation ever touch their precious selves. What a world: what a world! A model the authors cite is transportation. There needs to be some entity responsible for roads, for standardizing them, taking care of them, seeing that rules of the road are established and enforced. It is the role of government to make sure the space is safe for everyone. As our annual death rate on the roads attests, one can only aim for perfection without ever really expecting to achieve it. But, overall, it is a system in which the government has seen to the creation and maintenance of a relatively safe communal space. We should not leave to the CEOs of Facebook and Twitter decisions about how much human and civic roadkill is acceptable on the Information Highway.

The authors offer some suggestions about what might be done. One I liked was the resurrection of the Congressional Office of Technology Assessment. We do not expect our elected representatives to be techies. But we should not put them into a position of having to rely on lobbyists for technical expertise on subjects under legislative consideration. The OTA provided that objective expertise for many years before Republicans killed it. This is doable and desirable. Another interesting notion:

“Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, Social Security tax, all those things. If a robot comes in to do the same thing, you’d think we’d tax the robot at a similar level.”
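The arithmetic behind the quoted proposal is simple enough to sketch. The rates below are invented placeholders, not actual tax law, and real income and payroll taxes are bracketed and capped in ways ignored here; the point is only that automated output could be taxed at the same combined rate as a worker’s wages:

```python
# Hypothetical illustration of the robot-tax idea quoted above.
# All rates are made up for the example.

def worker_tax(output, income_rate=0.20, payroll_rate=0.0765):
    """Combined tax collected on a worker producing `output` dollars of value."""
    return output * (income_rate + payroll_rate)

def robot_tax(output):
    """The proposal: tax automated output at the worker's combined rate."""
    return worker_tax(output)

print(worker_tax(50_000))
print(robot_tax(50_000))
```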

Some of their advice, while not necessarily wrong, seems either bromidic or unlikely to have any chance of happening. This is typical of books on social policy.

…democracies, which welcome a clash of competing interests and permit the revisiting and revising of questions of policy, will respond by updating rules when it is obvious that current conditions produce harm…

Have the authors ever actually visited America outside the walls of Stanford? In America, those being harmed are blamed for the damage, not the evil-doers who are actually foisting it on them.

What System Error will give you is a pretty good scan of the issues pertaining to tech vs the rest of us, and how to think about them. It offers a look at some of the ways in which the problems identified here might be addressed. Some entail government regulation. Many do not. You can find some guidance as to what questions to ask when algorithmic systems are being proposed, challenged, or implemented. And you can also get some historical context re how the major tech changes of the past impacted the wider society, and how they were wrangled.

The book does an excellent job of pointing out many of the ethical problems with the impact of high tech, on our individual agency and on our democracy. It correctly points out that decisions with global import are currently in the hands of CEOs of large corporations, and are not subject to limitation by democratic nations. Consider the single issue of allowing lies to be spread across social media, whether by enemies foreign or domestic, dark-minded individuals, profit-seekers, or lunatics. That needs to change. If reasonable limitations can be devised and implemented, then there may be hope for a brighter day ahead, else all may be lost, and our nation will descend into a Babel of screaming hatreds and kinetic carnage.

For Facebook, with more than 2.8 billion active users, Mark Zuckerberg is the effective governor of the informational environment of a population nearly double the size of China, the largest country in the world.

Review posted – January 28, 2022

Publication date – September 21, 2021

This review has been cross-posted on GoodReads

=======================================EXTRA STUFF

Links to Rob Reich’s (pronounced Reesh) Stanford profile and Twitter pages
Reich is a professor of political science at Stanford, co-director of Stanford’s McCoy Center for Ethics, and associate director of Stanford’s Institute for Human-Centered Artificial Intelligence

Links to Mehran Sahami’s Stanford profile and Twitter pages
Sahami is a Stanford professor in the School of Engineering and professor and associate Chair for Education in the Computer Science Department. Prior to Stanford he was a senior research scientist at Google. He conducts research in computer science education, AI and ethics.

Jeremy M. Weinstein’s Stanford profile

JEREMY M. WEINSTEIN went to Washington with President Obama in 2009. A key staffer in the White House, he foresaw how new technologies might remake the relationship between governments and citizens, and launched Obama’s Open Government Partnership. When Samantha Power was appointed US Ambassador to the United Nations, she brought Jeremy to New York, first as her chief of staff and then as her deputy. He returned to Stanford in 2015 as a professor of political science, where he now leads Stanford Impact Labs.

Interviews
—–Computer History Museum – CHM Live | System Error: Rebooting Our Tech Future – with Marietje Schaake – 1:30:22
This is outstanding, in depth
—–Politics and Prose – Rob Reich, Mehran Sahami & Jeremy Weinstein SYSTEM ERROR with Julián Castro and Bradley Graham – video – 1:02:51

Items of Interest
—–Washington Post – Former Google scientist says the computers that run our lives exploit us — and he has a way to stop them
—–The Nation – Fixing Tech’s Ethics Problem Starts in the Classroom By Stephanie Wykstra
—–NY Times – Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It
—–Brookings Institution – It Is Time to Restore the US Office of Technology Assessment by Darrell M. West

Makes Me Think Of
—–Automating Inequality by Virginia Eubanks
—–Chaos Monkeys by Antonio Garcia Martinez
—–Machines of Loving Grace by John Markoff


Filed under AI, Artificial Intelligence, computers, Non-fiction, programming, Public policy

Genius Makers by Cade Metz

book cover

[In 2016] Ed Boyton, a Princeton University professor who specialized in nascent technologies for sending information between machines and the human brain…told [a] private audience that scientists were approaching the point where they could create a complete map of the brain and then simulate it with a machine. The question was whether the machine, in addition to acting like a human, would actually feel what it was like to be human. This, they said, was the same question explored in Westworld.

AI, Artificial Intelligence, is a source of active concern in our culture. Tales abound in film, television, and written fiction about the potential for machines to exceed human capacities for learning, ultimately gain self-awareness, and then enslave humanity, or worse. There are hopes for AI as well. Speech recognition is one area where there has been real growth. However much we may roll our eyes at Siri’s or Alexa’s inability to, first, hear the words we say properly, then interpret them accurately, it is worth bearing in mind that Siri was released a scant ten years ago, in 2011, with Alexa following in 2014. We may not be there yet, but self-driving vehicles are another AI product that will change our lives. It can be unclear where AI ends and the use of advanced algorithms begins in the handling of our on-line searching, and in how those with the means use AI to market endless products to us.

Cade Metz – image from Wired

So what is AI? Where did it come from? What stage of development is it currently at and where might it take us? Cade Metz, late of Wired Magazine and currently a tech reporter with the New York Times, was interested in tracking the history of AI. There are two sides to the story of any scientific advance, the human and the technological. No chicken and egg problem to be resolved here, the people came first. In telling the tales of those, Metz focuses on the brightest lights in the history of AI development, tracking their progress from the 1950s to the present, leading us through the steps, and some mis-steps, that have brought us to where we are today, from a seminal conference in the late 1950s to Frank Rosenblatt’s Perceptron in 1958, from the Boltzmann Machine to the development of the first neural network, SNARC, cadged together from remnant parts of old B-24s by Marvin Minsky, from the AI winter of governmental disinvestment that began in 1971 to its resumption in the 1980s, from training machines to beat the most skilled humans at chess, and then Go, to training them to recognize faces, from gestating in universities to being hooked up to steroidal sources of computing power at the world’s largest corporations, from early attempts to mimic the operations of the human brain to shifting to the more achievable task of pattern recognition, from ignoring social elements to beginning to see how bias can flow through people into technology, from shunning military uses to allowing, if not entirely embracing them.

This is one of 40 artificial neurons used in Marvin Minsky’s SNARC machine – image from The Scientist

Metz certainly has had a ringside seat for this, drawing from hundreds of interviews he conducted with the players in his reportorial day jobs, eight years at Wired and another two at the NY Times. He conducted another hundred or so interviews just for the book.

Some personalities shine through. We meet Geoffrey Hinton in the prologue, as he auctions his services (and the services of his two assistants) off to the highest corporate bidder, the ultimate figure a bit startling. Hinton is the central figure in this AI history, a Zelig-like character who seems to pop up every time there is an advance in the technology. He is an interesting, complicated fellow, not just a leader in his field, but a creator of it and a mentor to many of the brightest minds who followed. It must have helped his recruiting that he had an actual sense of humor. He faced more than his share of challenges, suffering a back condition that made it virtually impossible for him to sit, which makes those cross-country and trans-oceanic trips by train and plane just a wee bit of a problem. He suffered in other ways as well, losing two wives to cancer, providing a vast incentive for him to look at AI and neural networking as tools to help develop early diagnostic measures for diverse medical maladies.

Marvin Minsky in a lab at M.I.T. in 1968. Credit: M.I.T. – image and caption from NY Times

Where there are big ideas there are big egos, and sometimes an absence of decency. At a 1966 conference, when a researcher presented a report that did not sit well with Marvin Minsky, he interrupted the proceedings from the floor at considerable personal volume.

“How can an intelligent young man like you,” he asked, “waste your time with something like this?”

This was not out of character for the guy, who enjoyed provoking controversy, and, clearly, pissing people off. He single-handedly short-circuited a promising direction in AI research with his strident opposition.

Skynet’s Employee of the month

One of the developmental areas on which Metz focuses is deep learning, namely, feeding vast amounts of data to neural networks that are programmed to analyze the incomings for commonalities, in order to then be able to recognize unfamiliar material. For instance, examine hundreds of thousands of images of ducks and the system is pretty likely to be able to recognize a duck when it sees one. Frankly, it does not seem all that deep, but it is broad. Feeding a neural net vast quantities of data in order to train it to recognize particular things is the basis for a lot of facial recognition software in use today. Of course, the data being fed into the system reflects the biases of those doing the feeding. Say, for instance, that you are looking to identify faces, and most of the images that have been fed in are of white people, particularly white men. In 2015, when Google’s photo recognition app misidentified a black person as a gorilla, Google’s response was not to rework its system ASAP, but to remove the word “gorilla” from its AI system. So, GIGO rules, fed by low representation of women and non-white techies. Metz addresses the existence of such inherent bias in the field, flowing from tech people into the data they use to feed neural net learning, but it is not a major focus of the book. He addresses it more directly in interviews.

Frank Rosenblatt and his Perceptron – image from Cornell University
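That learn-from-labeled-examples idea is, at heart, a descendant of Rosenblatt’s Perceptron: a device that nudges numeric weights whenever its guess on a labeled example is wrong. A toy sketch (illustrative Python, not code from the book) of a perceptron learning the logical AND function from its four labeled examples:

```python
# Toy perceptron: it "learns" a rule from labeled examples
# instead of being explicitly programmed with that rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += lr * err * x1         # nudge weights toward the right answer
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

def predict(weights, x):
    w1, w2, b = weights
    return 1 if (w1 * x[0] + w2 * x[1] + b) > 0 else 0

# Learn the logical AND function from its four labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(data)
print([predict(w, x) for x, _ in data])  # expect [0, 0, 0, 1]
```

After a few passes over the data, the weights settle on a boundary separating the lone positive example from the rest. Today’s deep networks use a different update rule (backpropagation) and billions of weights, but the loop of guess, compare, adjust is recognizably the same.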

On the other hand, by feeding systems vast amounts of information, it may be possible, for example, to recognize early indicators of public health or environmental problems that narrower examination of data would never unearth, and might even be able to give individuals a heads up that something might merit looking into.

He gives a lot of coverage to the bouncing back and forth of this, that, and the other head-honcho researcher from institution to institution, looking at why such changes were made. A few of these are of interest, like why Hinton crossed the Atlantic to work, or why he moved from the States to Canada, and then stayed where he was based once he settled, regardless of employer. But a lot of the personnel movement is there to illustrate how strongly individual corporations were committed to AI development. This sometimes leads to odd, but revealing, images, like researchers being recruited by a major company, only to find on arrival that the equipment they were expected to use was laughably inadequate to the project they were working on. When researchers realized that running neural networks would require vast numbers of Graphics Processing Units, or GPUs (comparable to the Central Processing Units, CPUs, at the heart of every computer, but dedicated to a narrower range of activities), some companies dove right in while others balked. This is the trench warfare I found most interesting, the specific command decisions that led to or impeded progress.
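Why GPUs rather than CPUs? Because a neural-network layer reduces largely to one big matrix multiplication, in which every output cell is computed independently of the others, exactly the kind of massively parallel arithmetic a GPU is built for. A minimal illustration (plain Python loops here; real systems hand this work to GPU-backed libraries):

```python
# A neural-network layer is mostly one big matrix multiplication:
#   out[i][j] = sum over k of inputs[i][k] * weights[k][j]
# Every (i, j) cell is independent of the others, which is why hardware
# that runs thousands of multiply-adds at once (a GPU) suits this work.

def layer_forward(inputs, weights):
    rows, inner, cols = len(inputs), len(weights), len(weights[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):  # each (i, j) cell could run in parallel
            out[i][j] = sum(inputs[i][k] * weights[k][j] for k in range(inner))
    return out

batch = [[1.0, 2.0]]                # one input example with two features
w = [[0.5, -1.0], [0.25, 0.0]]      # 2-in, 2-out weight matrix
print(layer_forward(batch, w))      # [[1.0, -1.0]]
```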

Rehoboam – the quantum supercomputer at the core of WestWorld – Image from The Sun

There are a lot of names in The Genius Makers. I would imagine that Metz and his editors pared quite a few out, but it can still be a bit daunting at times, trying to figure out which ones merit retaining, unless you already know that there is a manageable number of these folks. It can slow down reading. It would have been useful for Dutton to have provided a graphic of some sort, a timeline indicating this idea began here, that idea began then, and so on. It is indeed possible that such a welcome add-on is present in the final hardcover book. I was working from an e-ARE. Sometimes the jargon was just a bit too much. Overall, the book is definitely accessible for the general, non-technical, reader, if you are willing to skip over a phrase and a name here and there, or enjoy, as I do, looking up EVERYTHING.

The stories Metz tells of these pioneers, and their struggles are worth the price of admission, but you will also learn a bit about artificial intelligence (whatever that is) and the academic and corporate environments in which AI existed in the past, and is pursued today. You will not get a quick insight into what AI really is or how it works, but you will learn how what we call AI today began and evolved, and get a taste of how neural networking consumes vast volumes of data in a quest to amass enough knowledge to make AI at least somewhat…um…knowledgeable. Intelligence is a whole other thing, one of the dreams that has eluded developers and concerned the public. It is one of the ways in which AI has always been bedeviled by the curse of unrealistic expectations.

(left to right) Yann LeCun, Geoffrey Hinton, Yoshua Bengio – Image from Eyerys

Metz is a veteran reporter, so knows how to tell stories. It shows in his glee at telling us about this or that event. He includes a touch of humor here and there, a lightly sprinkled spice. Nothing that will make you shoot your coffee out your nose, but enough to make you smile. Here is an example.

…a colleague introduced [Geoff Hinton] at an academic conference as someone who had failed at physics, dropped out of psychology, and then joined a field with no standards at all: artificial intelligence. It was a story Hinton enjoyed repeating, with a caveat. “I didn’t fail at physics and drop out of psychology,” he would say. “I failed at psychology and dropped out of physics—which is far more reputable.”

The Genius Makers is a very readable bit of science history, aimed at a broad public, not the techie crowd, who would surely be demanding a lot more detail in the theoretical and implementation ends of decision-making and the construction of hardware and software. It will give you a clue as to what is going on in the AI world, and maybe open your mind a bit to what possibilities and perils we can all look forward to.

There are many elements involved in AI. But the one (promoted by Elon Musk) we tend to be most concerned about is that it will develop into the dark, all-powerful entity frighteningly portrayed in many sci-fi films and TV series, driven to subjugate weak humans. This is called AGI, Artificial General Intelligence, and it is something we do not know how to achieve. Bottom line: pass the popcorn and enjoy the show. Skynet may take over in one fictional future, but it ain’t gonna happen in our real one any time soon.

Review first posted – April 16, 2021

Publication dates
———-Hardcover – March 16, 2021
———-Trade Paperback – February 15, 2022

I received an e-book ARE from Dutton in return for…I’m gonna need a lot more data before I can answer that accurately.

This review has been cross-posted on GoodReads

=======================================EXTRA STUFF

Links to the author’s personal, FB, and Twitter pages

Interview
—–C-Span2 – Genius Makers with Daniela Hernandez – video – 1:28:17 – this one is terrific

Other Reviews
—–Forbes – The Mavericks Who Brought AI to the World – Review of “Genius Makers” by Cade Metz by Calum Chace
—–Fair Observer – The Unbearable Shallowness of “Deep AI” By William Softky • Mar 31, 2021
—– Christian Science Monitor – Machines that learn: The origin story of artificial intelligence By Seth Stern

Items of Interest from the author
—–A list of Metz’s New York Times articles
—–A list of Metz’s Wired articles
—–excerpt
—–NY Times – Can Humans Be Replaced by Machines? by James Fallows

Items of Interest
—–Public Integrity – Are we ready for weapons to have a mind of their own? by Zachary Fryer-Biggs
—–Wiki on Geoffrey Hinton
—–Wiki for Demis Hassabis
—–Cornell Chronicle – Professor’s perceptron paved the way for AI – 60 years too soon by Melanie Lefkowitz
—–The Scientist – Machine, Learning, 1951 by Jef Akst


Filed under AI, American history, Artificial Intelligence, business, computers, History, Non-fiction, programming