By Stuart J. Russell
Grade Levels: 9+ and Adults
Elon Musk has been a darling of the tech scene for many years now, at least since he began serving as Tesla’s CEO in 2008. And he’s been in the news recently with his attempt (likely stymied) to buy a controlling share of Twitter. However, much more has been said about what Musk writes and says (often controversial) than about what he reads. But Musk has clearly been influenced by other tech thinkers of the day and has his own opinions about what a forward-looking person should be reading. One book that Musk recommended back in 2019 as worth reading is Human Compatible by British computer scientist and UC Berkeley professor Stuart Russell.
Russell’s book focuses on the potential problems of unrestrained artificial intelligence research. As a society, we’ve been exposed to the idea of an AI apocalypse for decades - as early as the 1960s with Ellison’s haunting “I Have No Mouth and I Must Scream,” into the age of cinema with Schwarzenegger’s Terminator, and the Keanu Reeves vehicle The Matrix. But Russell sees the 21st century as the first time that humanity has had both the capacity and the economic incentive to create real superintelligences - a term for a mind surpassing the human brain, so far unobserved in nature or technology. In fact, he claims that the economic incentives to develop such a technology are so strong that its eventual creation is inevitable. His book takes a deep, at times extremely granular, dive into the problem of superintelligences and potential solutions.
Russell argues that the danger in AI and machine learning lies in how systems are built: they are programmed with limited, rigid, human-defined goals that include no respect for broader human values. Eventually, he claims, such an AI will present solutions that violate those values, or produce secondary effects harmful to humanity. His book outlines ways in which AI could instead learn more gradually, inferring human preferences by observing human actions, with its uncertainty about our objectives narrowing over time. He also criticizes the computer science community for resisting any acknowledgment of the dangers of AI research, out of an assumption that such criticism will be channeled into reduced funding or restrictive laws limiting what can be developed. Instead, he makes the case that it is important to begin working now on research to make AI, as the title suggests, compatible with human life, preferences, and objectives.
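For the curious reader, the core idea of preference learning can be sketched in a few lines of Python. This is my own toy illustration, not code from the book: an assistant starts maximally uncertain about what a person wants (here, a stand-in choice between coffee and tea) and sharpens its belief with each choice it observes, assuming the person is "noisily rational" - they usually, but not always, pick what they truly prefer.

```python
def update_belief(prior, observed_choice, preferred_choice, rationality=0.9):
    """Bayesian update of one hypothesis after seeing one human choice.

    prior: current probability that this hypothesis matches the human's goal
    observed_choice: what the human actually picked
    preferred_choice: what the human would pick IF this hypothesis were true
    rationality: chance the human picks their truly preferred option
    """
    likelihood = rationality if observed_choice == preferred_choice else 1 - rationality
    return prior * likelihood

def infer_preference(observations):
    """Given a list of observed choices ('coffee' or 'tea'),
    return the assistant's normalized belief over the human's preference."""
    belief = {"coffee": 0.5, "tea": 0.5}  # start maximally uncertain
    for choice in observations:
        for hypothesis in belief:
            belief[hypothesis] = update_belief(belief[hypothesis], choice, hypothesis)
        total = sum(belief.values())
        belief = {h: p / total for h, p in belief.items()}  # renormalize
    return belief

belief = infer_preference(["coffee", "coffee", "tea", "coffee"])
# Belief in "coffee" grows with each consistent observation, but never
# reaches certainty - the assistant remains open to correction.
```

The point of the design, and of Russell's argument as I read it, is that the machine never becomes certain of its objective: a single contrary observation still shifts its belief, so it stays responsive to what people actually want.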
Whether you fall on the AI-is-dangerous side of the debate or the AI-is-benign side (and it’s fairly clear that Musk supports the former view), this book is a cogent look at the dangers of AI, and at possible solutions for the future, presented by a qualified expert. Although at times a bit jargon-y, the book is generally witty and approachable for the educated reader. If you’re curious about the future of AI, looking at a career in computer science, or just really love sci-fi and want to learn more about the scientific background of a potential robot apocalypse - look no further!