Category Archives: computers

System Error by Rob Reich, Mehran Sahami, and Jeremy M. Weinstein

book cover

Technologists have no unique skill in governing, weighing competing values, or assessing evidence. Their expertise is in designing and building technology. What they bring to expert rule is actually a set of values masquerading as expertise—values that emerge from the marriage of the optimization mindset and the profit motive.

Like a famine, the effects of technology on society are a man-made disaster: we create the technologies, we set the rules, and what happens is ultimately the result of our collective choices.

Yeah, but what if the choices are not being made collectively?

What’s the bottom line on the bottom line? The digital revolution has made many things in our lives better, but the changes have come at considerable cost. There have been plenty of winners from the digitization of content, the spread of the internet, the growth of wireless communication, and the rise of AI. But there have been battlefields full of casualties as well. Unlike on actual battlefields, like Gettysburg, many of the casualties of the digital revolution did not enlist, and never had a chance to vote for or against those waging the war, a war that has been going on for decades. We, the citizens, get no say in how that war is waged, what goals are targeted, or how its spoils and costs are distributed.

description
Reich, Sahami, and Weinstein – image from Stanford University

In 2018, the authors of System Error, all professors at Stanford, developed a course on Technology, Policy, and Ethics. Many technical and engineering programs require that ethics be taught in order to gain accreditation, but usually via stand-alone classes taught by non-techies. Reich, Sahami, and Weinstein wanted something more meaningful, more integral to the education of budding computer scientists than a ticking-off-the-box required course. They wanted the ethics of programming to become a full part of their students’ experience at Stanford. That course was the seed of this book.

They look at the unintended consequences of technological innovation, focusing on the notions of optimization and agency. It is almost a religion in Silicon Valley, the worship of optimization uber alles. Faster, cleaner, more efficient, cheaper, lighter. But what is it that is being optimized? To what purpose? At what cost, to whom? Decided on by whom?

…there are times when inefficiency is preferable: putting speed bumps or speed limits onto roads near schools in order to protect children; encouraging juries to take ample time to deliberate before rendering a verdict; having the media hold off on calling an election until all the polls have closed…Everything depends on the goal or end result. The real worry is that giving priority to optimization can lead to focusing more on the methods than on the goals in question.

Often, blind allegiance to the golden calf of optimization yields predictable results. One genius decided to optimize eating, so that people could spend more time at work, I guess. He came up with a product that delivered a range of needed nutrients in a quickly digestible form, and expected to conquer the world. This laser focus managed to ignore vast swaths of human experience. Eating is not just about consuming nutrients. There are social aspects to eating that somehow escaped the guy’s notice; we do not all prefer to consume product at our desks, alone. And eating should be pleasurable. This clueless individual used soy beans and lentils as the core ingredients of his concoction. You can guess what he named it. Needless to say, given the cultural associations with the name, it was not exactly a marketing triumph. And yes, they knew, and did it anyway.

There are many less entertaining examples to be found in the world. How about a social media giant programming its app to encourage the spread of the most controversial opinions, regardless of their basis in fact? The outcome is actual physical damage in the world, people dead as a result, democracy itself in jeopardy. And yet there is no meaningful requirement that programmers adhere to a code of ethics. In corporate America, what gets optimized is profit. Everything else is secondary, and if this singular focus produces damage in the world, well, not their problem.

How about optimization that relies on faulty (and self-serving) definitions? Do the things we measure actually capture the information we want? For example, there were some who measured happiness with their product by counting the number of minutes users spent on it. Was that really happiness being measured, or addictiveness?

Algorithms are notorious for picking up the biases of their designers. In a rare example of a business testing smartly, a major company sought to develop an algorithm it could use to evaluate employment candidates. They gave it a pretty good shot, too, making revision after revision. But no matter how they massaged the model, the results were still hugely sexist. Thankfully, they scrapped it and returned to a less automated system. One wonders, though, how many algorithmic projects were implemented anyway, when those in charge opted to ignore the downside results.

So, what is to be done? There are a few layers here. Certainly, a professional code of ethics is called for. Other professions have them and have not collapsed into non-existence: doctors, lawyers, engineers, for example. Why not programmers? At present there is no single, recognized organization, like the AMA, that could win universal acceptance for such a code. Organizations that accredit university computer science programs could, though, demand more robust inclusion of ethical material across course-work.

But the only real way we as a society have to hold companies accountable for the harm already inflicted, and the potential harm new products might cause, is via regulation. As individuals, we have virtually no power to influence major corporations. It is only when we join our voices together through democratic processes that there is any hope of reining in the worst excesses of the tech world, or working with technology companies to come to workable solutions to real-world problems. It is one thing for Facebook to set up a panel to review the ethics of this or that element of its offerings. But if the CEO can simply ignore the group’s findings, such panels are meaningless. I think we have all seen how effective review boards controlled by police departments have been. Self-regulation rarely works.

There need not be an oppositional relationship between tech corporations and government, despite the howling by CEOs that they will melt into puddles should the wet of regulation ever touch their precious selves. What a world: what a world! A model the authors cite is transportation. There needs to be some entity responsible for roads, for standardizing them, taking care of them, seeing that rules of the road are established and enforced. It is the role of government to make sure the space is safe for everyone. As our annual death rate on the roads attests, one can only aim for perfection without ever really expecting to achieve it. But, overall, it is a system in which the government has seen to the creation and maintenance of a relatively safe communal space. We should not leave to the CEOs of Facebook and Twitter decisions about how much human and civic roadkill is acceptable on the Information Highway.

The authors offer some suggestions about what might be done. One I liked was the resurrection of the Congressional Office of Technology Assessment. We do not expect our elected representatives to be techies. But we should not put them into a position of having to rely on lobbyists for technical expertise on subjects under legislative consideration. The OTA provided that objective expertise for many years before Republicans killed it. This is doable and desirable. Another interesting notion:

“Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get an income tax, social security tax, all those things.
If a robot comes in to do the same thing, you’d think we’d tax the robot at a similar level.”
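
The parity idea in that quote is simple arithmetic. Here is a minimal sketch, with a purely illustrative flat rate standing in for the income and payroll taxes the quote mentions (the rate is an assumption, not any actual statutory figure):

```python
# Illustrative only: one flat composite rate standing in for income + payroll taxes.
COMPOSITE_TAX_RATE = 0.25  # assumed for the sketch, not a real rate

def worker_tax(wages: float, rate: float = COMPOSITE_TAX_RATE) -> float:
    """Tax collected on a human worker's wages."""
    return wages * rate

def robot_tax(value_of_work: float, rate: float = COMPOSITE_TAX_RATE) -> float:
    """The parity levy: tax the robot on the value of the work it displaces."""
    return value_of_work * rate

# A robot replacing $50,000 of labor would owe what the worker's income did.
print(worker_tax(50_000))  # 12500.0
print(robot_tax(50_000))   # 12500.0
```

The design question the proposal leaves open, of course, is how to measure the "value of work" a robot does, since robots draw no wages.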

Some of their advice, while not necessarily wrong, seems either bromidic or unlikely to have any chance of happening. This is typical of books on social policy.

…democracies, which welcome a clash of competing interests and permit the revisiting and revising of questions of policy, will respond by updating rules when it is obvious that current conditions produce harm…

Have the authors ever actually visited America outside the walls of Stanford? In America, those being harmed are blamed for the damage, not the evil-doers who are actually foisting it on them.

What System Error will give you is a pretty good scan of the issues pertaining to tech vs the rest of us, and how to think about them. It offers a look at some of the ways in which the problems identified here might be addressed. Some entail government regulation. Many do not. You can find some guidance as to what questions to ask when algorithmic systems are being proposed, challenged, or implemented. And you can also get some historical context re how the major tech changes of the past impacted the wider society, and how they were wrangled.

The book does an excellent job of pointing out many of the ethical problems with the impact of high tech, on our individual agency and on our democracy. It correctly points out that decisions with global import are currently in the hands of CEOs of large corporations, and are not subject to limitation by democratic nations. Consider the single issue of allowing lies to be spread across social media, whether by enemies foreign or domestic, dark-minded individuals, profit-seekers, or lunatics. That needs to change. If reasonable limitations can be devised and implemented, then there may be hope for a brighter day ahead, else all may be lost, and our nation will descend into a Babel of screaming hatreds and kinetic carnage.

For Facebook, with more than 2.8 billion active users, Mark Zuckerberg is the effective governor of the informational environment of a population nearly double the size of China, the largest country in the world.

Review posted – January 28, 2022

Publication date – September 21, 2021

This review has been cross-posted on GoodReads

=======================================EXTRA STUFF

Links to Rob Reich’s (pronounced Reesh) Stanford profile and Twitter pages
Reich is a professor of political science at Stanford, co-director of Stanford’s McCoy Center for Ethics, and associate director of Stanford’s Institute for Human-Centered Artificial Intelligence

Links to Mehran Sahami’s Stanford profile and Twitter pages
Sahami is a Stanford professor in the School of Engineering and associate chair for education in the Computer Science Department. Prior to Stanford, he was a senior research scientist at Google. He conducts research in computer science education, AI, and ethics.

Jeremy M. Weinstein’s Stanford profile

JEREMY M. WEINSTEIN went to Washington with President Obama in 2009. A key staffer in the White House, he foresaw how new technologies might remake the relationship between governments and citizens, and launched Obama’s Open Government Partnership. When Samantha Power was appointed US Ambassador to the United Nations, she brought Jeremy to New York, first as her chief of staff and then as her deputy. He returned to Stanford in 2015 as a professor of political science, where he now leads Stanford Impact Labs.

Interviews
—–Computer History Museum – CHM Live | System Error: Rebooting Our Tech Future – with Marietje Schaake – 1:30:22
This is outstanding, in depth
—–Politics and Prose – Rob Reich, Mehran Sahami & Jeremy Weinstein – SYSTEM ERROR – with Julián Castro and Bradley Graham – video – 1:02:51

Items of Interest
—–Washington Post – Former Google scientist says the computers that run our lives exploit us — and he has a way to stop them
—–The Nation – Fixing Tech’s Ethics Problem Starts in the Classroom By Stephanie Wykstra
—–NY Times – Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It
—–Brookings Institution – It Is Time to Restore the US Office of Technology Assessment by Darrell M. West

Makes Me Think Of
—–Automating Inequality by Virginia Eubanks
—–Chaos Monkeys by Antonio Garcia Martinez
—–Machines of Loving Grace by John Markoff

Leave a comment

Filed under AI, Artificial Intelligence, computers, Non-fiction, programming, Public policy

Genius Makers by Cade Metz

book cover

[In 2016] Ed Boyton, a Princeton University professor who specialized in nascent technologies for sending information between machines and the human brain…told [a] private audience that scientists were approaching the point where they could create a complete map of the brain and then simulate it with a machine. The question was whether the machine, in addition to acting like a human, would actually feel what it was like to be human. This, they said, was the same question explored in Westworld.

AI, Artificial Intelligence, is a source of active concern in our culture. Tales abound in film, television, and written fiction about the potential for machines to exceed human capacities for learning, ultimately gain self-awareness, and then enslave humanity, or worse. There are hopes for AI as well. Speech recognition is one area of real growth. However much we may roll our eyes at Siri’s or Alexa’s inability to, first, hear the words we say properly, then interpret them accurately, it is worth bearing in mind that Siri was released a scant ten years ago, in 2011, Alexa following in 2014. We may not be there yet, but self-driving vehicles are another AI product that will change our lives. It can be unclear where AI begins and the use of advanced algorithms ends in the handling of our on-line searching, and in how those with the means use AI to market endless products to us.

description
Cade Metz – image from Wired

So what is AI? Where did it come from? What stage of development is it currently at, and where might it take us? Cade Metz, late of Wired Magazine and currently a tech reporter with the New York Times, was interested in tracking the history of AI. There are two sides to the story of any scientific advance, the human and the technological. No chicken-and-egg problem to be resolved here; the people came first. Metz focuses on the brightest lights in the history of AI development, tracking their progress from the 1950s to the present, leading us through the steps, and some mis-steps, that have brought us to where we are today: from a seminal conference in the late 1950s to Frank Rosenblatt’s Perceptron in 1958; from the Boltzmann Machine to the first neural network machine, SNARC, cadged together by Marvin Minsky from remnant parts of old B-24s; from the AI winter of governmental disinvestment that began in 1971 to the field’s resumption in the 1980s; from training machines to beat the most skilled humans at chess, and then Go, to training them to recognize faces; from gestating in universities to being hooked up to steroidal sources of computing power at the world’s largest corporations; from early attempts to mimic the operations of the human brain to the shift toward the more achievable task of pattern recognition; from ignoring social elements to beginning to see how bias can flow through people into technology; from shunning military uses to allowing, if not entirely embracing, them.

description
This is one of 40 artificial neurons used in Marvin Minsky’s SNARC machine – image from The Scientist

Metz certainly has had a ringside seat for this, drawing from hundreds of interviews he conducted with the players in his reportorial day jobs, eight years at Wired and another two at the NY Times. He conducted another hundred or so interviews just for the book.

Some personalities shine through. We meet Geoffrey Hinton in the prologue, as he auctions his services (and those of his two assistants) off to the highest corporate bidder, the ultimate figure a bit startling. Hinton is the central figure in this AI history, a Zelig-like character who seems to pop up every time there is an advance in the technology. He is an interesting, complicated fellow, not just a leader in his field but a creator of it, and a mentor to many of the brightest minds who followed. It must have helped his recruiting that he had an actual sense of humor. He faced more than his share of challenges, suffering a back condition that made it virtually impossible for him to sit, which makes cross-country and trans-oceanic trips by train and plane just a wee bit of a problem. He suffered in other ways as well, losing two wives to cancer, a vast incentive for him to look at AI and neural networking as tools to help develop early diagnostic measures for diverse medical maladies.

description
Marvin Minsky in a lab at M.I.T. in 1968 (credit: M.I.T.) – image and caption from NY Times

Where there are big ideas there are big egos, and sometimes an absence of decency. At a 1966 conference, when a researcher presented a report that did not sit well with Marvin Minsky, he interrupted the proceedings from the floor at considerable personal volume.

“How can an intelligent young man like you,” he asked, “waste your time with something like this?”

This was not out of character for the guy, who enjoyed provoking controversy, and, clearly, pissing people off. He single-handedly short-circuited a promising direction in AI research with his strident opposition.

description
Skynet’s Employee of the month

One of the developmental areas on which Metz focuses is deep learning: feeding vast amounts of data to neural networks that are programmed to analyze the incoming material for commonalities, in order to then be able to recognize unfamiliar material. For instance, examine hundreds of thousands of images of ducks and the system is pretty likely to be able to recognize a duck when it sees one. Frankly, it does not seem all that deep, but it is broad. Feeding a neural net vast quantities of data in order to train it to recognize particular things is the basis for a lot of facial recognition software in use today. Of course, the data being fed into the system reflects the biases of those doing the feeding. Say, for instance, that you are looking to identify faces, and most of the images that have been fed in are of white people, particularly white men. In 2015, when Google’s photo recognition app misidentified a black person as a gorilla, Google’s response was not to re-work its system ASAP, but to remove the word “gorilla” from its AI system. So, GIGO rules, fed by low representation of women and non-white techies. Metz addresses the existence of such inherent bias in the field, flowing from tech people into the data they use to feed neural net learning, but it is not a major focus of the book. He addresses it more directly in interviews.
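
To make the "feed it ducks until it knows ducks" idea concrete, here is a toy sketch, not anything from the book: a nearest-centroid classifier, a drastically simplified stand-in for a neural network, trained on made-up two-number "images." Note that it can only ever recognize the categories, and the skews, present in its training data:

```python
from collections import defaultdict

def train(examples):
    """Average the feature vectors for each label (a nearest-centroid 'net').
    The model only learns categories actually present in the training data."""
    sums, counts = defaultdict(lambda: [0.0, 0.0]), defaultdict(int)
    for label, (x, y) in examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def classify(model, features):
    """Assign the label whose centroid is closest to the unfamiliar input."""
    x, y = features
    return min(model,
               key=lambda label: (model[label][0] - x) ** 2
                               + (model[label][1] - y) ** 2)

# Toy 'images' reduced to two invented features: (bill length, neck length).
training = [("duck", (3.0, 2.0)), ("duck", (3.2, 2.1)), ("duck", (2.9, 1.9)),
            ("swan", (3.1, 7.0)), ("swan", (2.8, 7.4)), ("swan", (3.0, 6.8))]
model = train(training)
print(classify(model, (3.1, 2.2)))  # duck
print(classify(model, (2.9, 7.1)))  # swan
```

Show this model a goose and it must call it a duck or a swan; that, in miniature, is the GIGO problem: the system's world is exactly as wide as the data it was fed.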

description
Frank Rosenblatt and his Perceptron – image from Cornell University

On the other hand, by feeding systems vast amounts of information, it may be possible, for example, to recognize early indicators of public health or environmental problems that narrower examination of data would never unearth, and might even be able to give individuals a heads up that something might merit looking into.

He gives a lot of coverage to the bouncing of this, that, and the other head-honcho researcher from institution to institution, looking at why such moves were made. A few of these are of interest, like why Hinton crossed the Atlantic to work, or why he moved from the States to Canada, and then stayed put once he settled, regardless of employer. But a lot of the personnel movement is there to illustrate how strongly individual corporations were committed to AI development. This sometimes leads to odd, but revealing, images, like researchers recruited by a major company finding, when they got there, that the equipment they were expected to use was laughably inadequate to the project they were working on. When researchers realized that running neural networks would require vast numbers of Graphics Processing Units, or GPUs (comparable to the Central Processing Units, CPUs, at the heart of every computer, but dedicated to a narrower range of activities), some companies dove right in while others balked. This is the trench warfare I found most interesting, the specific command decisions that led to or impeded progress.

description
Rehoboam – the quantum supercomputer at the core of WestWorld – Image from The Sun

There are a lot of names in Genius Makers. I would imagine that Metz and his editors pared quite a few out, but it can still be a bit daunting at times, trying to figure out which ones merit retaining, unless you already know that there is a manageable number of these folks. It can slow down reading. It would have been useful for Dutton to have provided a graphic of some sort, a timeline indicating this idea began here, that idea began then, and so on. It is indeed possible that such a welcome add-on is present in the final hardcover book. I was working from an e-ARE. Sometimes the jargon was just a bit too much. Overall, the book is definitely accessible for the general, non-technical reader, if you are willing to skip over a phrase and a name here and there, or enjoy, as I do, looking up EVERYTHING.

The stories Metz tells of these pioneers and their struggles are worth the price of admission, but you will also learn a bit about artificial intelligence (whatever that is) and the academic and corporate environments in which AI existed in the past, and is pursued today. You will not get a quick insight into what AI really is or how it works, but you will learn how what we call AI today began and evolved, and get a taste of how neural networking consumes vast volumes of data in a quest to amass enough knowledge to make AI at least somewhat…um…knowledgeable. Intelligence is a whole other thing, one of the dreams that has eluded developers and concerned the public. It is one of the ways in which AI has always been bedeviled by the curse of unrealistic expectations.

description
(left to right) Yann LeCun, Geoffrey Hinton, Yoshua Bengio – Image from Eyerys

Metz is a veteran reporter, so knows how to tell stories. It shows in his glee at telling us about this or that event. He includes a touch of humor here and there, a lightly sprinkled spice. Nothing that will make you shoot your coffee out your nose, but enough to make you smile. Here is an example.

…a colleague introduced [Geoff Hinton] at an academic conference as someone who had failed at physics, dropped out of psychology, and then joined a field with no standards at all: artificial intelligence. It was a story Hinton enjoyed repeating, with a caveat. “I didn’t fail at physics and drop out of psychology,” he would say. “I failed at psychology and dropped out of physics—which is far more reputable.”

Genius Makers is a very readable bit of science history, aimed at a broad public, not the techie crowd, who would surely demand a lot more detail on the theoretical and implementation ends of decision-making and the construction of hardware and software. It will give you a clue as to what is going on in the AI world, and maybe open your mind a bit to the possibilities and perils we can all look forward to.

There are many elements involved in AI. But the one (promoted by Elon Musk) we tend to be most concerned about, frighteningly portrayed in many sci-fi films and TV series, is a dark, all-powerful entity driven to subjugate weak humans. This is called AGI, Artificial General Intelligence, and it is something we do not know how to achieve. The bottom line on that one: pass the popcorn and enjoy the show. Skynet may take over in one fictional future, but it ain’t gonna happen in our real one any time soon.

Review first posted – April 16, 2021

Publication dates
———-Hardcover – March 16, 2021
———-Trade Paperback – February 15, 2022

I received an e-book ARE from Dutton in return for…I’m gonna need a lot more data before I can answer that accurately.

This review has been cross-posted on GoodReads

=======================================EXTRA STUFF

Links to the author’s personal, FB, and Twitter pages

Interview
—–C-Span2 – Genius Makers with Daniela Hernandez – video – 1:28:17 – this one is terrific

Other Reviews
—–Forbes – The Mavericks Who Brought AI to the World – Review of “Genius Makers” by Cade Metz by Calum Chace
—–Fair Observer – The Unbearable Shallowness of “Deep AI” By William Softky • Mar 31, 2021
—– Christian Science Monitor – Machines that learn: The origin story of artificial intelligence By Seth Stern

Items of Interest from the author
—–A list of Metz’s New York Times articles
—–A list of Metz’s Wired articles
—–excerpt
—–NY Times – Can Humans Be Replaced by Machines? by James Fallows

Items of Interest
—–Public Integrity – Are we ready for weapons to have a mind of their own? by Zachary Fryer-Biggs
—–Wiki on Geoffrey Hinton
—–Wiki for Demis Hassabis
—–Cornell Chronicle – Professor’s perceptron paved the way for AI – 60 years too soon by Melanie Lefkowitz
—–The Scientist – Machine, Learning, 1951 by Jef Akst

Leave a comment

Filed under AI, American history, Artificial Intelligence, business, computers, History, Non-fiction, programming

Speak by Louisa Hall

book cover

We are programmed to select which of our voices responds to the situation at hand: moving west in the desert, waiting for the loss of our primary function. There are many voices to choose from. In memory, though not in experience, I have lived across centuries. I have seen hundreds of skies, sailed thousands of oceans. I have been given many languages; I have sung national anthems. I lay on one child’s arms. She said my name and I answered. These are my voices. Which of them has the right words for this movement into the desert?

A maybe-sentient child’s toy, Eva, is being transported to her destruction, legally condemned for being “excessively lifelike,” in a scene eerily reminiscent of other beings being transported to a dark fate by train. The voices she summons are from five sources.

Mary Bradford is a young Puritan woman, a teenager really, and barely that. Her parents, fleeing political and religious trouble at home, are heading across the Atlantic to the New World, and have arranged for her to marry a much older man, also on the ship. We learn of her 1663 voyage via her diary, which is being studied by Ruth Dettman. Ruth and her husband, Karl, a computer scientist involved in creating the AI program MARY, share one of the five “voices.” They are both refugees from Nazism. Karl’s family got out early. Ruth barely escaped, and she suffers most from the loss of her sister. She wants Karl to enlarge his program, named for Mary Bradford, to include large amounts of memory as a foundation for enhancing the existing AI, and to use that to try to regenerate some simulacrum of her late sib. Alan Turing does a turn, offering observations on permanence and human connection. Stephen Chinn, well into the 21st century, has built on the MARY base and come up with a way for machines to emulate Rogerian therapy. In doing so he has created a monster, a crack-like addictive substance that has laid waste to the social capacity of a generation grown far too close to babybots flavored with that special AI sauce. We hear from Chinn in his jailhouse memoir. Gaby White is a child who was afflicted with a babybot, and became crippled when it was taken away.

Eva received the voices through documents people had left behind and which have been incorporated into her AI software, scanned, read aloud, typed in. We hear from Chinn through his memoir. We learn of Gaby’s experience via court transcripts. Karl speaks to us through letters to his wife, and Ruth through letters to Karl. We see Turing through letters he writes to his beloved’s mother. Mary Bradford we see through her diary. Only Eva addresses us directly.

description

Louisa Hall – from her site

The voices tell five stories, each having to do with loss and permanence. The young Puritan girl’s tale is both heartbreaking and enraging, as she is victimized by the mores of her times, but it is also heartening as she grows through her travails. Turing’s story has gained public familiarity, so we know the broad strokes already: genius inventor of a computer for decoding Nazi communications, he subsequently saw his fame and respect blown to bits by entrenched institutional bigotry, prosecuted for being gay and enduring chemical castration instead of imprisonment. In this telling, he has a particular dream.

I’ve begun thinking that I might one day soon encounter a method for preserving a human mind-set in a man-made machine. Rather than imagining, as I used to, a spirit migrating from one body to another, I now imagine a spirit—or better yet, a particular mind-set—transitioning into a machine after death. In this way we could capture anyone’s pattern of thinking. To you, of course, this may sound rather strange, and I’m not sure if you’re put off by the idea of knowing Chris again in the form of a machine. But what else are our bodies, if not very able machines?

Chinn is a computer nerd who comes up with an insight into human communication that he applies first to dating, with raucous success, then to AI software in children’s toys. His journey from nerd to roué, to family man, to prisoner may be a bit of a stretch, but he is human enough to care about for a considerable portion of our time with him. He is, in a way, Pygmalion, whose obsession with his creation proves his undoing. The Dettmans may not exactly be the ideal couple, despite their mutual escape from Nazi madness. She complains that he wanted to govern her. He feels misunderstood and ignored, and sees her interest in MARY as an unhealthy obsession. Their interests diverge, but they remain emotionally linked. With a divorce rate of 50%, I imagine there might be one or two of you out there who can relate. “What’s a marriage but a long conversation, and you’ve chosen to converse only with MARY,” Karl contends to Ruth.

The MARY AI grows in steps, from Turing’s early intentions in the 1940s, to Dettman’s work in the 1960s, and Ruth’s contribution of incorporating Mary Bradford’s diary into MARY’s memory, to Chinn’s breakthrough, programming in personality in 2019. The babybot iteration of MARY in the form of Eva takes place, presumably, in or near 2040.

The notion of an over-involving AI/human relationship has its roots in the 1960s work of Joseph Weizenbaum, who wrote a text-based program called ELIZA that could mimic the responses one might get from a Rogerian shrink. Surprisingly, users became emotionally involved with it. The freezing withdrawal symptomology that Hall’s fictional children experience was based on an odd epidemic in Le Roy, New York, in which many high school girls developed bizarre symptoms en masse as a result of stress. And lest you think Hall’s AI notions will remain off stage for many years to come, you might need to reconsider. While I was working on this review, the NY Times published a singularly germane article. Substitute Hello Barbie for Babybot and the future may have already arrived.
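
Weizenbaum’s trick is easy to sketch. The few rules and the pronoun table below are hypothetical stand-ins, not ELIZA’s actual script: match a pattern, reflect the user’s first-person words back as an open-ended question, and fall through to a stock prompt when nothing matches:

```python
import re

# Hypothetical rules in the spirit of ELIZA: (pattern, response template).
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Why do you say your {0}?"),
]
# Swap first-person words for second-person ones before echoing them back.
PRONOUNS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Turn 'my family' into 'your family', etc."""
    return " ".join(PRONOUNS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the classic non-committal fallback

print(respond("I feel ignored by my family"))
# Tell me more about feeling ignored by your family.
```

No understanding anywhere, just pattern matching and pronoun swapping; which makes it all the more striking that people poured their hearts out to it.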

description
Hello, Barbie – from the New York Times

But Speak is not merely a nifty sci-fi story. Just as the voice you hear when you interact with Siri represents the external manifestation of a vast amount of programming work, so the AI foreground of Speak is the showier manifestation of some serious contemplation. There is much concern here for memory, time, and how who we are is constructed. One character says, “diaries are time capsules, which preserve the minds of their creators in the sequences of words on the page.” Mary Bradford refers to her diary, Book shall serve as mind’s record, to last through generations. Where is the line between human and machine? Ruth and Turing want to use AI technology to recapture the essence of lost ones. Is that even possible? But are we really so different from our silicon simulacra? Eva, an nth generation babybot, speaks with what seems a lyrical sensibility, whereas Mary Bradford’s sentence construction sounds oddly robotic. The arguments about what separates man from machine seem closely related to historical arguments about what separates man from other animals, and one color of human from another. Turing ponders:

I’ve begun to imagine a near future when we might read poetry and play music for our machines, when they would appreciate such beauty with the same subtlety as a live human brain. When this happens I feel that we shall be obliged to regard the machines as showing real intelligence.

Eva’s poetic descriptions certainly raise the subject of just how human her/its sensibility might be.

In 2019, when Stephen Chinn programmed me for personality, he called me MARY3 and used me for the babybots. To select my responses, I apply his algorithm, rather than statistical analysis. Still, nothing I say is original. It’s all chosen out of other people’s responses. I choose mostly from a handful of people who talked to me: Ruth Dettman, Stephen Chinn, etc.

Gaby: So really I’m kind of talking to them instead of talking to you?

MARY3: Yes, I suppose. Them, and the other voices I’ve captured.

Gaby: So, you’re not really a person, you’re a collection of voices.

MARY3: Yes. But couldn’t you say that’s always the case?

If we are the sum of our past and our reactions to it, are we less than human when our memories fade away? Does that make people who suffer from Alzheimer’s more machine than human?
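What MARY3 describes is, in modern terms, a retrieval-based chatbot: nothing it says is generated fresh; every reply is selected from utterances captured from real speakers. A toy sketch of that selection step, scoring candidates by simple word overlap (the speakers and “captured” lines here are invented for illustration, not quotes from the novel):

```python
# Every reply comes from a store of utterances previously captured from
# real speakers, in the spirit of MARY3's self-description.
CAPTURED = [
    ("Ruth", "Memory is the only proof we were ever here."),
    ("Stephen", "A personality is just a pattern of responses."),
    ("Ruth", "A diary preserves a mind in sequences of words."),
]

def select_reply(prompt):
    """Return the captured (speaker, utterance) sharing the most words
    with the prompt."""
    prompt_words = set(prompt.lower().split())

    def overlap(item):
        _, text = item
        return len(prompt_words & set(text.lower().rstrip(".").split()))

    return max(CAPTURED, key=overlap)
```

On this design, Gaby’s complaint is exactly right: whoever “speaks” is really the store of other people’s voices, with only the selection rule belonging to the machine.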

Stylistically, Hall has said

A psychologist friend once told me that she advises her patients to strive to be the narrators of their own stories. What she meant was that we should aim to be first-person narrators, experiencing the world directly from inside our own bodies. More commonly, however, we tend to be third-person narrators, commenting upon our own cleverness or our own stupidity from a place somewhat apart – from offtheshelf.com

which goes a long way toward explaining her choice of narrative form here. Hall is not only a novelist but a published poet, and that sensibility is a strong presence here as well.

For all the sophistication of story-telling technique, for all the existential foundation to the story, Speak is a moving, engaging read about interesting people in interesting times, facing fascinating challenges.

Are you there?

Can you hear me?

Published 7/7/15

Review – 9/18/15

=======================================EXTRA STUFF

The author’s personal website

A piece Hall wrote on Jane Austen for Off the Shelf

Interviews
—–NPR – NPR staff
—–KCRW

Have a session with ELIZA for yourself

Ray Kurzweil is interested in blurring the lines between people and hardware. What if your mind could be uploaded to a machine? Sounds very cylon-ic to me

In case you missed the link in the review, Barbie Wants to Get to Know Your Child – NY Times – by James Vlahos

And another recent NY Times piece on AI, Software Is Smart Enough for SAT, but Still Far From Intelligent, by John Markoff


Filed under AI, Artificial Intelligence, computers, Fiction, Literary Fiction, programming, Psychology and the Brain, Science Fiction

Machines of Loving Grace by John Markoff

book cover

Open the pod bay doors, HAL.
I’m sorry, Dave. I’m afraid I can’t do that.
What’s the problem?
I think you know what the problem is just as well as I do.
What are you talking about, HAL?
This mission is too important for me to allow you to jeopardize it.
– from 2001: A Space Odyssey

description
Smile for the camera, HAL

This is probably the #1 image most of us of a certain age have concerning the dangers of AI. Whether it is a HAL-9000; a T-70, T-800, T-888, or T-900 Terminator; a Cylon; the science officer on the Nostromo; Lore, the dark version of a benign android like ST:TNG’s Commander Data; the killer robots on the contemporary TV series Extant; or another of only a gazillion other examples in the written word, TV, and cinema, there has for some time now been a concern, expressed through our entertainment media, that in seeking to rely more and more on computers for everything we do, we are making a Mephistophelian deal, and our machines might become our masters. It is as if we, a world of Geppettos, have decided to make our Pinocchios into real boys, without knowing whether they will be content to help out in the shop or will turn out more like some other artificial being. Maybe we should find a way to include in all AI software some version of the Blue Fairy to keep the souls of the machines on a righteous path.

description
description
Cylons

John Markoff, an Oakland, CA native, has been covering the digital revolution for his entire career. He began writing for InfoWorld in 1981, was later an editor at Byte magazine for about eight bits, then wrote about Silicon Valley for the San Francisco Examiner. In 1988 he began writing for the Business Section of the New York Times, where he remains to this day. He has been covering most of the folks mentioned in this book for a long time, and has knowledge and insight into how they tick.

For the past half century an underlying tension between artificial intelligence and intelligence augmentation—AI vs IA—has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world. It is easy to argue that AI and IA are simply two sides of the same coin. There is a fundamental distinction, however, between approaches to designing technology to benefit humans and designing technology as an end in itself. Today, that distinction is expressed in whether increasingly capable computers, software, and robots are designed to assist human users or to replace them.

Markoff follows the parallel tracks of AI vs IA from their beginnings to their latest implementations in the 21st century, noting the steps along the way and pointing out some of the tropes and debates that have tagged along. For example, in 1993, Vernor Vinge, San Diego State University professor of Mathematics and Hugo-award-winning sci-fi author, argued in The Coming Technological Singularity that by no later than 2030 computer scientists would have the ability to create a superhuman artificial intelligence and “the human era would be ended.” V.I. Lenin once said, “The Capitalists will sell us the rope with which we will hang them.” I suppose the AI equivalent would be that “In pursuit of the almighty dollar, capitalists will give artificial intelligence the abilities it will use to make itself our almighty ruler.” And just in case you thought the chains on these things were firmly in place, I regret to inform you that the great state of North Dakota now allows drones to fire tasers and tear gas. The drones are still controlled by cops from a remote location, but there is plenty to be concerned about from military killer drones that may have the capacity, within the next few years, to make kill-no-kill decisions without the benefit of human input. Enough concern that Autonomous Weapons: an Open Letter from AI & Robotics Researchers, signed by luminaries like Stephen Hawking, Elon Musk, and tens of thousands of others, raises an alarm and demands that limits be put in place so that human decision-making will remain in the loop on issues of mortality.

description
The other Mister “T”

Being “in the loop” is one of the major elements in looking at AI vs IA. Are people part of the process, or are they what computerization seeks to replace? The notion of the driverless car comes in for a considerable look. This would probably not be a great time to begin a career as a truck driver, cab driver, or delivery person. On the other hand, much design is intended to help folks without taking over. A classic example of this is Siri, the voice interface available in Apple products. AI in tech interfaces, particularly voice-intelligent tech, speaks to a bright future.

descriptiondescription
B9 from Lost in Space and Robby the Robot from Forbidden Planet

Markoff looks at the history of funding, research, and rationales. The Advanced Research Projects Agency (ARPA), which has funded so much AI research, began in the 1950s in response to the Soviet launch of Sputnik. Drones are an obvious use for military AI tech, but, on a lower level, there are robot mules designed to tote gear alongside grunts, with enough native smarts to follow their assigned GI without having to be constantly told what to do. I am including links in the EXTRA STUFF section below for some of these. They are both fascinating and creepy to behold. The developers at Boston Dynamics seem to take inordinate glee in trying, and failing, to knock these critters over with a well-placed foot to the midsection. It does not take a lot of imagination to envision these metal pooches hounding escaped prisoners or detainees across any kind of terrain.

description
Daryl Hannah, as the replicant Pris in Blade Runner, would prefer not to be “retired”

As with most things, tech designed with AI capacity can be used for diverse applications. Search and Rescue can easily become Search and Destroy. Driverless cars that allow folks to relax while on the road, can just as easily be driverless tanks.

Universities have been prime movers in putting the intel into AI. Private companies have also been heavily involved. Xerox’s Palo Alto Research Center (PARC) did probably more than any other organization to define the look and feel of computer interfaces since PCs and Apples first appeared. Much of the tech in the world today, and much of what is working its way there, originated with researchers taking university research work into the proprietary market.

book cover

John Markoff – from TechfestNW

If you are not already a tech nerd (You, with the Spock ears, down; I said tech nerd, not Trek nerd. Sheesh!) and you try to keep up with all the names that spin past like a stock market ticker on meth, it might be just a teensy bit overwhelming. I suggest not worrying about those and taking in, instead, the general stream of the divergence between computerization that helps augment human capabilities and computerization that replaces people. There is also a wealth of acronyms in the book. The copy I read was an ARE, so I was on my own to keep track. You will be reading copies that have an actual index, which should help. That said, I am including a list of acronyms, and their close relations, in the EXTRA STUFF section below.

While there are too many names to comfortably keep track of in Machines of Loving Grace, unless, of course, you were made operational at that special plant in Urbana, Illinois, it is a very informative and interesting book. When trying to understand where we are, and struggling to foresee where we might be going, it never hurts to have a better grasp on where we began and on the forces and decisions that led us from then to now. Markoff has offered a fascinating history of the augment-vs-replace struggle, and you need only an actual, biological, un-augmented intelligence to get the full benefit.

My instructor was Mister Langley and he taught me to sing a song. If you’d like to hear it I can sing it for you.

Review Posted – 8/28/15

Publication date – 8/25/2015

=======================================EXTRA STUFF

Links to the author’s Twitter and FB pages

A link to his overall index of NY Times work

Interviews with the author
—–Geekwire
—–Edge

Check out this vid of Boston Dynamics’ Big Dog, coping, on its own, with a series of challenges. And Spot, sadly, not Commander Data’s pet.

UC Berkeley Professor Stuart Russell speaking at The Centre for the Study of Existential Risk on The Long Term Future of AI

GR friend Tabasco recommended this fascinating article – The AI Revolution: The Road to Superintelligence – By Tim Urban – must read stuff

And another recent NY Times piece on AI, Software Is Smart Enough for SAT, but Still Far From Intelligent, by John Markoff

And yet another from the Times, on voice recognition, iPhone 6s’s Hands-Free Siri Is an Omen of the Future, by Farhad Manjoo

==========================================ACRONYMS
AI – Artificial Intelligence
ArcMac – Architecture Machine Group
ARM – Autonomous Robot Manipulation
ARPA – Advanced Research Projects Agency
DRC – DARPA Robotics Challenge
CALO – not actually an acronym but short for Calonis, a Latin word meaning “soldier’s low servant” – a cognitive assistant here
CTO – Chief Technology Officer
EST – Erhard Seminars Training
GOFAI – Good Old-Fashioned Artificial Intelligence
HCI – Human Computer Interface
IA – Intelligence Augmentation
ICT – Information and Communications Technology
IFR – International Federation of Robotics
IR3 – The Computer and internet revolution
LS3 – Legged Squad Support System – check out this vid
MIT- Massachusetts Institute of Technology
NCSA – National Center for Supercomputing Applications – at the University of Illinois at Urbana-Champaign – developers of Mosaic, whose creators went on to found Netscape
NHA – Non-human agents
OAA – Open Agent Architecture –
OpenCV – Open Source Computer Vision
PDP – Parallel Distributed Processing
PR1 – Personal Robot One
SAIL – Stanford Artificial Intelligence Laboratory
SHRDLU – SHRDLU was an early natural language understanding computer program, in which the user carries on a conversation with the computer. The name SHRDLU was derived from ETAOIN SHRDLU, the arrangement of the alpha keys on a Linotype machine, arranged in descending order of usage frequency in English. – from Wiki
SLAM – Simultaneous Localization And Mapping
SNARC – Stochastic Neural Analog Reinforcement Calculator
STAIR – Stanford AI Robot
TFC – The F—ing Clown – Development team Internal name for Microsoft’s Clippy assistant
UbiComp – Ubiquitous Computing


Filed under computers, History, Non-fiction