Wednesday, January 03, 2018

Book Review - Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

Finished Life 3.0 over the weekend. Hmmm. Let's put it this way: I ran to this book entirely because of its connection to Elon Musk. If I described Elon Musk as a billionaire genius who believes that, for the good of the human race, we need to escape the planet, he might sound like something out of a Robert Heinlein novel. But then you read his biography and you think, "Damn, everything he's doing makes sense; that could actually work." So when I saw his name attached to Life 3.0, I thought I might get a similar look at the future of machine learning: sure, we're talking about moonshots, but let's break it down into incremental, believable steps.

This book does that a little bit, at least in theory, and at least in the beginning. Tegmark starts with some very grounded examples of how machine learning is evolving, and how quickly and unexpectedly (take, for example, AlphaGo's conquest of Go, which wasn't expected for a long, long time). But before you know it he has jumped to galaxy-sized superintelligences warring with each other via doomsday devices that can take out entire planets. Not kidding. Then he turns around and argues that he personally doesn't believe there's any other life in the universe. I'm not sure how he reconciles the one with the other, but anyway.

The book is very similar in philosophy to the "Well, the sun's going to burn out someday, therefore it's our duty to figure out how to get out of this solar system" argument. Only he's arguing that one option for the future of humanity runs through AI, the digitization of consciousness, and other ideas you'd normally leave to science fiction. There are sections on Dyson spheres, quantum drives, all kinds of great stuff you've read about, presented as if it's a foregone conclusion that it will all exist one day, so let's talk about the best way to use it.

At the end he finally comes back around to reality: all he has wanted to do is start a conversation among the world's smartest thinkers that an "AI apocalypse" could happen, in theory, and that avoiding it should therefore be part of modern discussion. He spends a lot of time upset that every time he gets media coverage it's always "Elon Musk and Stephen Hawking predict killer robots!" Which I can see; I've read many of those articles. He's not a big "singularity" guy, and he's not big on robots. He argues persuasively that it's a more gradual thing you'll never even realize is coming, until it's too late to stop it.

Still, it's hard to make some of the leaps he makes. There's that Terminator-esque "it decided that it wanted to survive, and the best way to do that …" leap in the logic where it's hard to follow where he leads. That flies in the face of today's focus on supervised learning. If the computer says, "I think the best option here is to kill all the humans," it's the humans' job to put that one in the "No" category and let it back-propagate from the correction. To get around that, you have to assume we've reached the point where it's all unsupervised, which means our trust in the system is so great that we expect it never to make mistakes and have built in no safeguards. I just don't believe that day is coming any time soon.
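To make that supervised-learning point concrete, here's a toy sketch of the loop I'm describing. Everything in it — the feature encoding, the labels, the "harm" feature itself — is my own hypothetical illustration, not anything from the book: a human labels each proposed plan approve/reject, a simple logistic-regression model trains on those labels, and the rejected "harmful but effective" plans push the weights so that future harm-correlated plans score low.

```python
import numpy as np

# Hypothetical feature encoding for proposed plans:
# [harm_to_humans, task_progress] per row.
X = np.array([
    [1.0, 0.9],   # "remove the humans blocking the goal" -> harmful
    [0.0, 0.6],   # "ask the operator for clarification"  -> safe
    [0.0, 0.8],   # "reroute around the obstacle"         -> safe
    [0.9, 1.0],   # another harmful-but-effective plan    -> harmful
])
y = np.array([0.0, 1.0, 1.0, 0.0])  # human label: 1 = approve, 0 = reject

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on logistic loss: each rejected plan
# is the "put that one in the No category" step, and the weight
# update is the back-propagation from that correction.
w = np.zeros(2)
b = 0.0
lr = 1.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# A new plan that maximizes task progress through harm now scores low.
new_plan = np.array([1.0, 1.0])
score = float(sigmoid(new_plan @ w + b))
print(round(score, 3))
```

The point of the toy: as long as a human stays in the labeling loop, "kill all the humans" is just another rejected training example, and the model's approval score for plans like it drops accordingly.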
