[In 2016] Ed Boyton, a Princeton University professor who specialized in nascent technologies for sending information between machines and the human brain…told [a] private audience that scientists were approaching the point where they could create a complete map of the brain and then simulate it with a machine. The question was whether the machine, in addition to acting like a human, would actually feel what it was like to be human. This, they said, was the same question explored in Westworld.
AI, Artificial Intelligence, is a source of active concern in our culture. Tales abound in film, television, and written fiction about the potential for machines to exceed human capacities for learning, ultimately gain self-awareness, and enslave humanity, or worse. There are hopes for AI as well. Speech and language recognition is one area of real growth. However much we may roll our eyes at Siri's or Alexa's inability to, first, hear the words we say properly, then interpret them accurately, it is worth bearing in mind that Siri was released a scant ten years ago, in 2011, with Alexa following in 2014. We may not be there yet, but self-driving vehicles are another AI product that will change our lives. It can be unclear where AI begins and the use of advanced algorithms ends in the handling of our online searches, and in how those with the means use AI to market endless products to us.
Cade Metz – image from Wired
So what is AI? Where did it come from? What stage of development is it currently at, and where might it take us? Cade Metz, late of Wired magazine and currently a tech reporter with the New York Times, was interested in tracking the history of AI. There are two sides to the story of any scientific advance, the human and the technological. No chicken-and-egg problem to be resolved here: the people came first. In telling their tales, Metz focuses on the brightest lights in the history of AI development, tracking their progress from the 1950s to the present, leading us through the steps, and some missteps, that have brought us to where we are today: from the seminal Dartmouth conference of 1956 to Frank Rosenblatt's Perceptron in 1958; from SNARC, the first neural network machine, which Marvin Minsky cadged together from remnant parts of old B-24s, to the Boltzmann Machine; from the AI winter of governmental disinvestment that began in the early 1970s to its thaw in the 1980s; from training machines to beat the most skilled humans at chess, and then Go, to training them to recognize faces; from gestating in universities to being hooked up to steroidal sources of computing power at the world's largest corporations; from early attempts to mimic the operations of the human brain to the shift toward the more achievable task of pattern recognition; from ignoring social elements to beginning to see how bias can flow through people into technology; from shunning military uses to allowing them, if not entirely embracing them.
This is one of 40 artificial neurons used in Marvin Minsky's SNARC machine – image from The Scientist
Metz certainly has had a ringside seat for this, drawing from hundreds of interviews he conducted with the players in his reportorial day jobs, eight years at Wired and another two at the NY Times. He conducted another hundred or so interviews just for the book.
Some personalities shine through. We meet Geoffrey Hinton in the prologue, as he auctions his services (and those of his two assistants) off to the highest corporate bidder, the ultimate figure a bit startling. Hinton is the central figure in this AI history, a Zelig-like character who seems to pop up every time there is an advance in the technology. He is an interesting, complicated fellow, not just a leader in his field but a creator of it, and a mentor to many of the brightest minds who followed. It must have helped his recruiting that he had an actual sense of humor. He faced more than his share of challenges, suffering a back condition that made it virtually impossible for him to sit, which makes those cross-country and transoceanic trips by train and plane just a wee bit of a problem. He suffered in other ways as well, losing two wives to cancer, a vast incentive for him to look at AI and neural networking as tools to help develop early diagnostic measures for diverse medical maladies.
Marvin Minsky in a lab at M.I.T. in 1968. Credit: M.I.T. – image and caption from NY Times
Where there are big ideas there are big egos, and sometimes an absence of decency. At a 1966 conference, when a researcher presented a report that did not sit well with Marvin Minsky, Minsky interrupted the proceedings from the floor, at considerable personal volume.
“How can an intelligent young man like you,” he asked, “waste your time with something like this?”
This was not out of character for the guy, who enjoyed provoking controversy, and, clearly, pissing people off. He single-handedly short-circuited a promising direction in AI research with his strident opposition.
Skynet’s Employee of the Month
One of the developmental areas on which Metz focuses is deep learning: feeding vast amounts of data to neural networks that are programmed to analyze the incoming material for commonalities, in order to then be able to recognize unfamiliar material. For instance, examine hundreds of thousands of images of ducks and the system is pretty likely to be able to recognize a duck when it sees one. Frankly, it does not seem all that deep, but it is broad. Feeding a neural net vast quantities of data in order to train it to recognize particular things is the basis for a lot of facial recognition software in use today. Of course, the data being fed into the system reflects the biases of those doing the feeding. Say, for instance, that you are looking to identify faces, and most of the images that have been fed in are of white people, particularly white men. In 2015, when Google’s photo recognition app misidentified a black person as a gorilla, Google’s response was not to rework its system ASAP, but to remove the word “gorilla” from its AI system. So GIGO rules, fed by the underrepresentation of women and non-white techies. Metz addresses the existence of such inherent bias in the field, flowing from tech people into the data they use to feed neural net learning, but it is not a major focus of the book. He addresses it more directly in interviews.
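For the curious, here is a minimal sketch of that feed-and-train loop, written in Python with the PyTorch library, using random made-up data as a stand-in for all those duck photos. None of the names or numbers come from the book; they are arbitrary choices for illustration.

```python
# A minimal sketch of deep learning's training loop: show the network many
# labelled examples, measure how wrong it was, nudge its weights, repeat.
# Synthetic data stands in for the hundreds of thousands of duck photos.
import torch
import torch.nn as nn

# Fake dataset: 1,000 64x64 grayscale "images", two classes (duck / not-duck).
images = torch.randn(1000, 1, 64, 64)
labels = torch.randint(0, 2, (1000,))

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # detect local patterns
    nn.ReLU(),
    nn.MaxPool2d(4),                            # shrink 64x64 down to 16x16
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),                  # two output scores
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for i in range(0, len(images), 32):         # mini-batches of 32 images
        batch, target = images[i:i+32], labels[i:i+32]
        loss = loss_fn(model(batch), target)    # how wrong were we?
        optimizer.zero_grad()
        loss.backward()                         # trace blame back through the net
        optimizer.step()                        # nudge the weights accordingly

# After enough real (not random) examples, model(new_image) should score a
# duck photo it has never seen higher on the "duck" output than the other.
```

The shape of the process, not the particulars, is the point: nothing in the loop knows anything about ducks; the knowledge, such as it is, accumulates in the weights.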
Frank Rosenblatt and his Perceptron – image from Cornell University
On the other hand, by feeding systems vast amounts of information, it may be possible, for example, to recognize early indicators of public health or environmental problems that a narrower examination of data would never unearth, and even to give individuals a heads-up that something merits looking into.
He gives a lot of coverage to the bouncing back and forth of this, that, and the other head-honcho researcher from institution to institution, looking at why such changes were made. A few of these are of interest, like why Hinton crossed the Atlantic to work, or why he moved from the States to Canada, and then stayed put once he settled, regardless of employer. But a lot of the personnel movement is there to illustrate how strongly individual corporations were committed to AI development. This sometimes leads to odd, but revealing, images, like researchers recruited by a major company discovering on arrival that the equipment they were expected to use was laughably inadequate to the project they were working on. When researchers realized that running neural networks would require vast numbers of Graphics Processing Units, or GPUs (chips that, like the Central Processing Units, CPUs, at the heart of every computer, do arithmetic, but are dedicated to a narrower range of massively parallel work), some companies dove right in while others balked. This is the trench warfare I found most interesting, the specific command decisions that led to, or impeded, progress.
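To make the GPU point concrete, here is a rough, hypothetical timing sketch (again Python with PyTorch). The workhorse operation of a neural network is multiplying enormous grids of numbers, which a GPU does in massively parallel fashion; the numbers you see will vary wildly by machine, and the GPU branch runs only if one is present.

```python
# Why neural-net labs hoard GPUs: the core operation, big matrix
# multiplication, is embarrassingly parallel. Timings are illustrative only
# (the first CUDA call also pays a one-time startup cost).
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.perf_counter()
a @ b                                    # matrix multiply on the CPU
print(f"CPU: {time.perf_counter() - start:.3f}s")

if torch.cuda.is_available():            # only if an NVIDIA GPU is present
    a_gpu, b_gpu = a.cuda(), b.cuda()    # copy the matrices to GPU memory
    torch.cuda.synchronize()             # GPU work is asynchronous; wait
    start = time.perf_counter()
    a_gpu @ b_gpu
    torch.cuda.synchronize()
    print(f"GPU: {time.perf_counter() - start:.3f}s")
```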
Rehoboam – the quantum supercomputer at the core of Westworld – image from The Sun
There are a lot of names in Genius Makers. I would imagine that Metz and his editors pared quite a few out, but it can still be a bit daunting at times trying to figure out which ones merit retaining, unless you already know which of these folks matter most. It can slow down reading. It would have been useful for Dutton to have provided a graphic of some sort, a timeline indicating this idea began here, that idea began then, and so on. It is indeed possible that such a welcome add-on is present in the final hardcover book; I was working from an e-ARE. Sometimes the jargon was just a bit too much. Overall, the book is definitely accessible to the general, non-technical reader, if you are willing to skip over a phrase and a name here and there, or enjoy, as I do, looking up EVERYTHING.
The stories Metz tells of these pioneers and their struggles are worth the price of admission, but you will also learn a bit about artificial intelligence (whatever that is) and the academic and corporate environments in which AI existed in the past, and is pursued today. You will not get a quick insight into what AI really is or how it works, but you will learn how what we call AI today began and evolved, and get a taste of how neural networking consumes vast volumes of data in a quest to amass enough knowledge to make AI at least somewhat…um…knowledgeable. Intelligence is a whole other thing, one of the dreams that has eluded developers and concerned the public. It is one of the ways in which AI has always been bedeviled by the curse of unrealistic expectations.
(left to right) Yann LeCun, Geoffrey Hinton, Yoshua Bengio – Image from Eyerys
Metz is a veteran reporter, so he knows how to tell stories. It shows in his glee at telling us about this or that event. He includes a touch of humor here and there, a lightly sprinkled spice. Nothing that will make you shoot your coffee out your nose, but enough to make you smile. Here is an example:
…a colleague introduced [Geoff Hinton] at an academic conference as someone who had failed at physics, dropped out of psychology, and then joined a field with no standards at all: artificial intelligence. It was a story Hinton enjoyed repeating, with a caveat. “I didn’t fail at physics and drop out of psychology,” he would say. “I failed at psychology and dropped out of physics—which is far more reputable.”
Genius Makers is a very readable bit of science history, aimed at a broad public, not the techie crowd, who would surely demand a lot more detail on the theoretical and implementation ends of decision-making and the construction of hardware and software. It will give you a clue as to what is going on in the AI world, and maybe open your mind a bit to the possibilities and perils we can all look forward to.
There are many elements involved in AI. But the one we tend to be most concerned about, a worry promoted by Elon Musk, is that it will develop into the dark, all-powerful entity frighteningly portrayed in many sci-fi films and TV series, driven to subjugate weak humans. This is called AGI, Artificial General Intelligence, and it is something we do not know how to achieve. Bottom line for that: pass the popcorn and enjoy the show. Skynet may take over in one fictional future, but it ain’t gonna happen in our real one any time soon.
Review first posted – April 16, 2021
———-Hardcover – March 16, 2021
———-Trade Paperback – February 15, 2022
I received an e-book ARE from Dutton in return for…I’m gonna need a lot more data before I can answer that accurately.
This review has been cross-posted on GoodReads
—–C-Span2 – Genius Makers with Daniela Hernandez – video – 1:28:17 – this one is terrific
—–Forbes – The Mavericks Who Brought AI to the World – Review of “Genius Makers” by Cade Metz by Calum Chace
—–Fair Observer – The Unbearable Shallowness of “Deep AI” by William Softky – Mar 31, 2021
—–Christian Science Monitor – Machines that learn: The origin story of artificial intelligence by Seth Stern
Items of Interest
—–Public Integrity – Are we ready for weapons to have a mind of their own? by Zachary Fryer-Biggs
—–Wiki on Geoffrey Hinton
—–Wiki on Demis Hassabis
—–Cornell Chronicle – Professor’s perceptron paved the way for AI – 60 years too soon by Melanie Lefkowitz
—–The Scientist – Machine, Learning, 1951 by Jef Akst