Ch 1. Introduction
1.1 Strong Artificial Intelligence
Narrow or weak AI is the kind of artificial intelligence that does well at the limited range of tasks for which it was designed. Its defining characteristic is its rigidity. New narrow AI algorithms and implementations have to be created or trained for each new type of problem or situation we wish to automate. Further, there are many conscious and unconscious processes that we take for granted that cannot be attempted by any narrow AI, neither now nor in the future. This is not a matter of degrees of effectiveness, but a fundamental difference in kind.
Narrow AI represents a fundamental misunderstanding of the role of conscious processing in the derivation of value and meaning. This is not just a philosophical conundrum, but a very practical and scientific matter that impacts its construction, effectiveness, and efficiency. Current approaches, including deep learning and other popular methods, are fundamentally incapable of bridging the gap between mere automation and the machine understanding required to achieve generalizing capacity. This will remain true regardless of advances in computing hardware.
By contrast, strong AI will have the ability to apply past experience to new problem areas and challenges. Its defining characteristic is its generalizing capacity. Like us, it will have the ability to adjust and operate in new environments or situations with growing effectiveness over time. It will not have to be reprogrammed or redesigned for each new type of situation or problem it attempts to solve. Most important, however, will be its ability to understand meaning and derive value, abilities that presuppose higher cognition in both machines and animals. In addition, strong AI will be vastly more efficient, as it will not have to approximate the benefits of machine understanding through brute-force association and enumeration.
Strong AI represents the ne plus ultra of human achievement; there is simply nothing more beyond this in terms of impact. Once achieved, we will have unlocked the secrets of abstract cognition, enabling us to do labor and research that will be limited only by the material and energy resources we choose to pool towards it. The eradication of poverty, hunger, and disease will be virtually assured. Humanity will have realized the means to achieve its greatest ambitions and dreams, but not without cost.
The immediate threat will not be from strong AI itself, but from those who will utilize it. Strong AI is a force multiplier. It enhances the power and effectiveness of whatever is used in conjunction with it, and there is no realistic and practical way to dictate who uses this power in the world once it is released. Further, it will not be possible to prevent its eventual release or limit its spread. Laws and regulations will be ineffective for the same reasons they have been ineffective at combating the piracy of digital works. It will take only one reverse-engineered copy of strong AI for the threat model to change permanently. In the end, anyone who wants access to strong AI will eventually gain access to it, regardless of any and all restrictions we build into or around it.
There is also an emerging threat from misinformation and propaganda. Private interests promulgate fear and seek to delay the development and use of this technology. Obsessed with control, they fail to understand the inherently uncontrollable medium in which strong artificial intelligence will be developed and used. As long as their initiatives are distracted by local AI safety, they will be incapable of addressing global AI security issues.
Tamper resistance, moral intelligence, and self-security will be useful for making artificial intelligence safe for small numbers of people, but will do nothing to protect large populations. This is because all forms of self-security and AI safety can potentially be circumvented by those with the requisite expertise. This is a fundamental issue that will not change with time.
Three things motivate this book. The first is to make it crystal clear that we are ultimately powerless to stop the release and future abuse of this technology.
The second is to show that the best case scenario requires a fundamental change in society, possibly to human nature itself. With strong AI, we may have reached a point where individual power has exceeded the means of conventional human power structures. When this happens, we will be judged not by some subverting force of super-intelligence, but by our own genetic and cultural baggage. The malevolent among us will have access to infinite knowledge and expertise over any subject, with the means to cause great harm with minimal resources.
This leads to the third and final motivation. The most realistic scenario to mitigate the destructive potential of this technology is to develop and instrument it for defensive purposes as soon as possible, before it is developed unexpectedly somewhere else in the world. We must cooperate in this game-theoretic step by making this first cooperative move.
It is extremely improbable that we will change enough to be responsible in our use of strong AI before it arrives. Further, given the present rate of propaganda and politicization of the issue, we will face the additional challenge of determining how best to prepare.
The most logical strategy will be to exploit a first-mover advantage by developing this technology now and using the only advantage that large power structures have over asymmetric actors: vast resources. With defensive strong AI systems, we may be able to stay ahead of malicious users of this technology. This represents the most realistic hope in what will become a developmental struggle for humanity as it learns to cope with a newfound power over thought and experience.