Review: Superintelligence: Paths, Dangers, Strategies

Last Christmas my uncle gave me a little book: Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, and it captivated me from the very first pages. As I kept reading, my understanding of "the big picture", as it should be called, kept expanding. There is no other way to describe this book than as an eye-opening experience about the imminent technical, moral and philosophical struggles that AI development will face over the coming years (centuries?), and here's my personal review of it.

Why would an undergrad student care about AI?

In 2016, HBO launched a new show called Westworld. If you've been watching it, as I have, you'll agree on the magnificent and seductive nature of developing consciousness and/or other intelligent life forms with perfected skills. The idea of perfection is, by itself, impossible, but by keeping strict control over every cognitive aspect of an individual we might avoid the horrible scenario in which a dominant AI wipes out the human race, or the one in which humans become slaves to a superintelligent AI that values us about as much as we value simple life forms like flies, fleas or mice.

We become passionate about the design process, about giving our creations a bit (pun intended) of cognitive value in the work they do. As undergrad students, all of this pulls us into a deep sea of imagination-powered ideas, and the outcome is a deep sense of respect, appreciation and excitement about the AI courses.

The simple beginnings, and more

As one reaches the point in one's college career when advanced and specialized courses become unlocked, the AI ones are the first to run out of available spots. But a misconception about the scope of those courses is really the reason behind the massive number of drop-outs by the end of the semester. Two or three classes in, students ask: "Wait a second, how in the world is any of this related to my goal of writing a general intelligence like Iron Man's Jarvis?", and they run out the door once they see the astounding amount of complexity required for even a simple natural language processing program such as an ELIZA chatbot.
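To give a sense of what even that "simple" program involves, here is a minimal sketch of an ELIZA-style chatbot: ordered regex rules paired with response templates that echo back part of the user's input. The patterns and responses below are illustrative placeholders, not Weizenbaum's original script, and a real chatbot would need far more rules plus pronoun reflection ("my" → "your") to feel convincing, which is exactly where the complexity explodes.

```python
import re
import random

# Ordered list of (pattern, response templates). Earlier rules win;
# the final catch-all guarantees the bot always has something to say.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)"),
     ["Please tell me more.", "I see. Go on."]),
]

def respond(message: str) -> str:
    """Return the first matching rule's response, filled with the captured text."""
    for pattern, responses in RULES:
        match = pattern.match(message.strip())
        if match:
            # Extra captured groups are simply ignored by templates without {0}.
            return random.choice(responses).format(*match.groups())
    return "Please tell me more."

print(respond("I need a vacation"))
```

Even this toy version only parrots surface patterns; it has no model of meaning at all, which is the reality check that sends many students out the door.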

But those of us who endure the reality checks of the cosmic complexity and elegance required in the AI field may start to really appreciate publicly available AI-powered software and the intricate relationship between the field and many other disciplines such as Astronomy, Biology, Engineering, Physics and Mathematics.

The missing part

Acknowledging that a career in AI is a long, fun, deceptive, intriguing and hard journey is no simple task. Often filled with promising solutions that lead to disappointing outcomes, AI has nevertheless become one of today's most thriving tech industries, conquering thousands of hearts with the promise of supreme automation and human-like interaction capabilities.

The scientific community tends to dismiss philosophical and utopian predictions about the future of technology. Over the past century, several scientific celebrities made the mistake of issuing technological and scientific forecasts, handing the public nothing but delusional expectations about what could be done. That hype eventually froze over and pushed attention toward other, more tangible areas. Falling into the prophet's trap is easy, and AI is one of the areas where the impact has been palpable around the world. From The Jetsons to Westworld, and as far back as the early works of Isaac Asimov (which I totally recommend you read if you haven't already!), theoretical and philosophical analyses of AI's impact on our daily lives, as dystopian as they may seem, are very important for assessing the different risks and possible scenarios in which we may (or may not) end up.

Written carefully, with the "don't fall into prophecy writing" premise in mind, this book dives into the current state of AI from multiple points of view. It gives us a deeper look at the world, and here are my personal favorite quotes and facts from the book.

The book

As the book begins, Nick Bostrom gives us a brief look into AI's history: from the early days when a handful of computer scientists began developing mathematical models that could simulate some level of cognition, through the different "AI winters", as he calls them, and every big leap that has been made, to the current state of the art in each of AI's many branches.

Reading about AI's history allowed me to connect some missing dots in my mental map of the institutions, scientists, companies, governments, motivations and results behind different AI programs. It also made me understand that thousands of things have to co-exist for scientific progress to be made.

Defining superintelligence is no easy task. Even for seasoned programmers and AI scientists, pinning down what a superintelligent being would actually mean in terms of capabilities and reach is difficult. Nick gives a great definition:

“… any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

This definition allows us to think of superintelligence in a more abstract way, as it doesn't fall into technicalities or other philosophical matters. Should a superintelligence be able to play chess? Sure. Should it also know how to drive? Yeah. Would it be able to make macroeconomic inferences and take stands on global decisions? Sure. Predict cosmic events? Yep. Play football? Probably, yes. Staying as open as possible to the astonishing number of fields in which a superintelligence would be useful is one of the fundamental skills needed to really understand and appreciate this book.

Now we know a superintelligence would be able to perform a series of tasks of great interest to us humans, but how are we going to achieve this so-called superintelligence?

Right now we can look at AI and think of it as a "hands-free" automation process. But reaching a superintelligent level of development is hard, really hard, so multiple approaches exist, from whole brain emulation to brain-computer interfaces, each with its own pros and cons. Are they viable? Most of them depend on each other to make real progress and, in my humble opinion, biological cognitive enhancement seems like the most feasible option. Such a creation could manifest in various ways, from fast learning to a collective intelligence of small nodes working together (much like a microservice architecture), feeding each other the relevant information that a specific process needs in order to be completed.

As for the book itself, the way it discusses the implications of a superintelligence gives the reader great insight into the opportunities, dangers and paths its imminent creation will open up for all of us.

Imminent doom

As the book goes on, it explains most of the complications that may arise. Here I'd like to be honest and say that, even though Nick gives a plethora of solutions and viable workarounds for most of the scenarios, the extinction of the human race remains entirely possible if such an intelligence ever becomes a reality.

There have been thousands of explorations of this matter, some with justifications for their predictions, and most focusing on a dystopian outcome for the instantiation of a new superintelligent AI. Ever since Isaac Asimov we've been worried about the consequences of creating intelligent beings (things?). It raises many philosophical questions and dilemmas, but this book in particular tackles them one by one with such proficiency and scientific grounding that it allows the reader to choose the most plausible outcome based on the evidence provided.

From implementation details like the inputs and outputs of such a system, to the will of a superintelligent being, each page reflects the authenticity of our existential fear of our own creations. Would they, like us, value the intrinsic nature of life? If they don't, should we encapsulate or imprison such a thing? And if they do, in fact, value the life of all beings (including us), will we be able to take the chance and trust it?

Such questions become tangible as the book goes on. It is genuinely inspiring to read such great insights on each of the subjects behind each question. As one keeps reading, the need for regulations, for warnings and, as a last resort, for an international security enforcement body becomes ever more apparent, to prevent an intelligence with the wrong moral values from breaking out into the wild (the book clearly explains how that could turn really, really bad).

The economic and social impacts of superintelligence are also main characters in Nick's book. From governments being overtaken by a superintelligence, to wars, social rebellions, religious stands and the positions of current AI developers, the actors in this complex system will define the future of our own existence through the courage and values we each bring to building the next superintelligent AI.

This book is probably the one that made popular figures such as Elon Musk and Bill Gates seem "paranoid" about AI; at least, that's how people tend to see these guys after comments made in interviews and other public appearances. To those who think they're not being reasonable, I kindly suggest reading this book, as it will surely change the way you think.

As an engineering student, I am amazed at how easily my (mostly) technical brain digested these subjects. Sometimes thinking in philosophical terms is appropriate, and for this particular subject it won't hurt to think in ways we are not used to. Being open-minded about a fatal outcome allows us to do something: to keep learning, creating and developing AI while taking proper precautions. It is our responsibility.


This book is a must-read for everyone. It does contain quite a few technicalities, but a reader without that background should still be able to understand the vast majority of the ideas without a problem. It is one of my favorite books I've read in my whole life, and I will surely carry its ideas (and its warnings) with me forever.

As always, thanks for reading. For comments and feedback, you can find me on Twitter as: @humbertowoody.