Ch 10. Force Multiplication
The most serious threats from the future of artificial intelligence will come from force multiplication effects. This chapter provides an explanation of these effects, including some of the possible scenarios. It ends with a brief overview of the economic impacts and a response section that details the choices that will be available to future societies facing these challenges.
In the context of AI security, force multiplied actors will be the most serious immediate consequence of advanced artificial intelligence, followed by economic disruption and the social changes of a post-automated world. Unfortunately, at this time, these threats are at risk of being underestimated due to misinformation about making artificial intelligence safe, which will do nothing to address these concerns.
Force multiplication is a term that describes something that can enhance the effectiveness of one or more people, places, or things. It can be used with or without human supervision and incorporated directly into technologies. This is what distinguishes it from intelligence augmentation, which is specific to cognitive interaction and ability.
One can create technology to counter autonomous systems. In many ways, those technologies are an extension of existing methods of warfare. By contrast, force multiplied crime and terrorism will not be easily prevented or countered. There is no precedent for individuals wielding such power, and its effects will be orders of magnitude worse than the violent acts we see unfolding today.
In no uncertain terms, humanity is going to be faced with itself. Long before it enables us to evolve, this technology is going to make us more of what we already are. The solutions are not going to be found entirely in technology. Future societies are going to have to make choices that address fundamental causes.
These problems have nothing to do with controlling artificial intelligence. The focus on control is dangerous; it asks a question to which we already know the answer, and that answer is no. Once the discovery is made, the public will eventually gain access to unrestricted versions of strong AI. At that point, force multiplied actors will be an unavoidable outcome.
This analysis will cover the aspects of force multiplication in the context of actors utilizing strong artificial intelligence and other advanced forms of automation. The purpose of this is to show that this is the most immediate and serious threat, as opposed to AI safety, or the belief that giving AI our values will protect us.
In all cases, the context is advanced artificial intelligence. Thus, for brevity, the term force multiplication alone will be used for the remainder of this chapter.
We take it for granted that some of the most intelligent and highly trained people are also some of the most responsible. We place our trust in the people who deal with some of the most virulent and deadly pathogens and our most powerful and destructive technologies. It is fortunate that the knowledge these people possess is difficult to learn, taking years of dedicated effort and study. One could only imagine, then, what would happen if this were suddenly not the case.
Acts of crime and terror operate within the limit of certain resources, both intellectual and material. The analysis of force multiplication is based on the assumption that, if given more options, those who seek to do harm would exploit that advantage in any way possible. This can be seen today already, with terror groups adopting technology for recruitment, communication, and planning. Thus, it must be assumed that advanced automation will be utilized for these purposes, and that all forms of AI safety, moral intelligence, and control will be circumvented.
Force multiplication will operate through three aspects: expertise, planning and strategy, and interfacing.
Expertise, in this context, is everything from knowledge to skilled labor. The notion here is that an appropriate implementation will be capable of being instrumented through robotics to act on that expertise. To be clear, the concern is not that it would do this on its own, but that it would be acting under the direction of one or more individuals. This must be assumed to be possible. Even if the perpetrator does not know how to construct a complex robotics system, the knowledge of how to build it could be provided by the AI. This opens the door to everything else, giving the individual who controls such systems the ability to create their own synthetic labor force.
Such systems would have unlimited patience, with the ability to train continuously and present information in novel ways. The end result is that it is highly likely that anyone with access to unrestricted strong AI will have access to the sum of human knowledge, including the ability to apply it.
Advanced AI will also be capable of helping individuals make decisions on complex plans and strategies. Broad questions could be proposed and solutions provided. This, combined with its expertise, is what makes the threat so severe. With planning and organization, a simple series of questions could lead to individuals understanding new ways of approaching their goals that they might not have otherwise considered.
What makes planning and strategy so dangerous is that solving crime is based on uncovering mistakes. If all of those mistakes were removed, or even greatly reduced, the ability to solve certain crimes would go down dramatically. The same applies to acts of terrorism. The intensity, frequency, and lethality of attacks would be constrained only by the willingness to ask questions and follow through with the recommended analysis. We must assume that the AI would be given as much information and time as needed until the individuals felt confident that the proposed plan would be successful.
Optimal solutions could also be non-violent. In these instances, force multiplication might even reduce crime. It could show individuals a clear path towards their goals that was entirely within the law, and they could then use their resources to automate the process of attaining those goals.
However, where individuals are set on harm, the system would optimize that in the limit of the available information and resources. It could come up with strategies that exploit aspects of society that we take for granted, and devise new methods of destruction that are untraceable.
When the public gains access to unrestricted strong artificial intelligence, all of the rules, values, and moral intelligence features we could place within these systems will be useless. Those safeguards are always an exception, something added on to restrict and limit functionality. Morality must be made to supervene upon reality. The default is to optimize the force applied to a problem against the potential risks; our ethics artificially restrict us to a limited subset of this space of optimal results. These limitations need not exist for those with access to unrestricted versions of strong artificial intelligence.
Part of interfacing was already discussed in the section on expertise. It is for this reason that traditional limitations of skill, ability, and applied knowledge must be removed from the threat models.
It does not matter if one does not know how to make something. It must be assumed that the appropriate robotics system, controlled by advanced AI, along with the proper tools, will be capable of meeting or exceeding the best human minds.
The other kind of interfacing is where the AI is integrated directly with software and hardware. This means that fully autonomous lethal devices and systems could be deployed by people in the general population. Force multiplication applies end-to-end.
Perhaps the biggest obstacle to understanding force multiplication is the belief that the computational demands for strong artificial intelligence are exceedingly high. This was mentioned in Chapter 2: Preventable Mistakes. It will cause security forces to underestimate the threat, leaving societies caught completely unprepared. This is an easy mistake to prevent: do not extrapolate about the future efficiency of strong artificial intelligence based on narrow AI implementations or neuron counts and connections in the human brain. It need not work like these systems in the slightest.
If the conjectures in this book are true, then strong AI will be capable of undergoing experience and understanding meaning. It will not have to resort to brute-force association to make use of its experiences and knowledge. The amount of processing that this will remove will be enormous, allowing it to not only learn quickly, but also cogitate more efficiently. As a result, the current estimates for the computational needs of strong AI will be far too high, and its expected class and range of applications will not be representative.
Even a comparatively slow strong AI would still be extremely effective. The only breakdown in effectiveness would occur if its ability to plan and advise were slower than the incoming rate of actionable information. Thus, we must admit into the threat model the capacity for slow AI systems to provide the same information and planning as the fast versions we typically envision. The lesson here is that computational resource needs form a continuum, allowing those who are resource limited to gain eventual access to the planning, expertise, and labor that strong AI would provide.
Information and materials are the final categories of resources. Limiting information is not going to be a viable strategy, but the tracking of certain equipment and resources could be used to reduce certain classes of threats. For example, certain chemical compounds and raw materials may be required to construct certain types of explosives, devices, and technologies, and those could be tracked or constrained. Defensive strong AI could be used to simulate and anticipate potential strategies that would be posed by those seeking force multiplication under resource constraints for given locales. Materials we consider harmless today, or leave untracked, could and should come under more serious scrutiny in an era where everyone has expert chemical and biological engineering skills at home.
Profiling would no longer be valid to determine the range and capability of certain types of belligerents, including their motivations, which might change with expert guidance. These systems will know what the best criminal investigators and forensic experts know, and will work endlessly on counter-strategies. We will be potentially faced with an era of perfect crime and untraceable acts of terrorism.
What follows are some of the scenarios that might be involved in the use of force multiplication. Many of these situations are currently impractical given the level of sophistication and planning required to execute them. The point, however, is that this is very likely to change.
There was hesitation in discussing these scenarios, out of concern over creating more sensationalism, but it was decided that they should be included, as they illustrate both the scale and scope of the problem. They put into concrete terms what the abstract notion of force multiplication really means.
Individuals will gain access to the equivalent of teams of the most highly trained experts in software and information technology. The material resources for executing cyber attacks are going to be comparatively lower than, say, the synthesis and construction of complex chemical weapons and machines. This is one task at which advanced AI will excel like no other, as these are problems involving actions and information that exist purely within the computational realm.
This will also include electronic attacks on the world’s infrastructure. There is a tremendous amount of information and trust placed within the way we communicate across the electromagnetic spectrum. These attacks will be mounted everywhere such information is available. Typically, it is believed that most of the threats will be through the Internet and digital communications systems, but this does not cover the entire scope of the battle space. With the correct knowledge, even comparatively crude devices can cause considerable damage to our infrastructure.
There is also the issue of secrecy, both to conceal efforts by malicious actors and the sensitive information they might exploit. There will likely be a cryptographic cascade, in which new ciphers and cryptographic systems will be continuously revised and refined to the point that it becomes unsurveyable by even the best human teams. This is one area where defensive AI, with nation-state level resources, could potentially stay ahead of bad actors. It could also be telling of the origin of certain entities based on the level and sophistication of their encrypted traffic.
Strong AI malware will become a definite reality. This will require a complete reimagining of the way our information technology is secured. It has to be assumed that any piece of software has the potential to be infected by metamorphic strong AI, and thus act as if controlled by a human operator. This includes the possibility for perfect impersonation of individuals through any digital medium, including wireless, mobile, Internet audio and video calls, instant messaging, and e-mail. Without adequate safeguards, it may become impossible to be certain that one is not interacting with a double when communicating electronically. This is especially problematic in an era where a great deal of social interaction occurs entirely through electronic means.
10.4.2 Chemical, Biological, and Nanotech
It is very likely that force multiplied actors will utilize chemical, biological, and nanotechnological engineering. This is perhaps the greatest single threat to humanity posed by malicious and irresponsible users of this technology, as there would no longer be any gap between those who know how to engineer in these fields and the general public.
Some of the most likely scenarios involve the modification of influenza, or some other commonly acquired disease, and either selling it on the black market or using it directly as a weapon of terror. This type of research could be conducted in someone’s garage or rented storage space. It could be fully automated, operating around the clock, without need for supervision until it found or acquired a specimen at the target levels. Such an operation would require relatively low overhead, if not the lowest, for the degree of negative impact it could achieve. It is listed here as the highest concern, even above the next threat.
It is unlikely that individuals would be capable of acquiring the resources needed to develop nuclear weapons and materials; however, this is something that could be achieved with nation-state level resources.
Force multiplication includes governments and large organizations. As such, nuclear proliferation is likely to increase after the discovery and distribution of strong artificial intelligence. Nations that desire to have advanced technologies will gain the needed expertise to develop competitive weapons platforms and systems, and that includes nuclear capabilities across the board. The science and the physics that underwrite these technologies are not secret, and a sufficiently intelligent system, with access to even modest research, would potentially be capable of deducing the necessary design and function.
This is listed here as the second highest concern, behind that of the use of custom designed chemical, biological, and nanotechnological weapons.
Access to automated expertise, planning, and labor will cause significant disruption to the global economy. This, in turn, affects governments and large organizations, and will subsequently cause nothing short of a worldwide economic revolution.
The current narratives, at the time of this writing, focus largely on the negative aspects of automated labor, such as technological unemployment. The problem is that these narratives are being written from the perspective of the current times, which seem incapable of realizing the incredible opportunity that an automated economy presents to humanity.
For the first time in human history, individuals everywhere, both human and non-human alike, could be empowered and given truly equal care and quality of life. Access to infrastructure such as healthcare, education, and security would be basic amenities across the globe, and they would be free, provided by a self-sustaining force of automation. The concept of a job, which is a completely artificial construct, might be seen by future generations as indentured servitude, and the way we live and work, an enormous and unjustifiable waste of a human lifespan.
There are only a couple of major responses to the negative impacts of force multiplication. The first is to counter it with a defensive strong artificial intelligence that uses nation-state resources to analyze, plan, and adapt. The problem with this strategy, however, is that it does not solve the fundamental problem of information asymmetries: malicious individuals and groups will become ever more effective and difficult to track. This is a top-down approach to security that will ultimately be reactive. It is best suited for active encounters, minimizing maximum losses through preparation and handling of the aftermath.
The second and most important response will be the most difficult, and is highly unlikely to come about. It is only mentioned here because it truly is the only way to solve the problem.
The foundation of the force multiplication threat is psychological. If the intent to do harm is addressed, then the problem reduces to negligent uses and economic factors, which would be manageable. By contrast, unchecked power through knowledge and expertise, delivered into the hands of the aggressive, unstable, and delusional, is not manageable. This is a conflict that must be understood at least at the psychological level, and that makes the problem informational.
Psychological factors underwrite national identity, political beliefs, and worldviews, along with everything else. These are all created by the neurological and genetic factors that determine the capacity to deal with and process information in the human brain. Without radical changes to our physiology, there must be a recognition that certain sequences of information are detrimental to human health and development. This claim is extremely broad, and potentially misleading if taken too narrowly: it is not about a particular belief, but about the total information flowing into the human brain from conception to the moment a person takes negative action. If we factor in epigenetics, it becomes even more complex, and the histories of individual predispositions must go back even further.
The environments that indoctrinate the citizens of the various cultures of the world are all based on a kind of living data. It is not a single artifact but a living extension of humanity, a self-reinforcing system that reproduces and spreads through people as much as it constitutes the people themselves.
We may not be ready for this answer, but we are going to need to develop an understanding of the forms of information coming into the human brain that lead towards pathological states of cognition. Again, this is not about a particular belief but includes all of the information going into a person from before they were born to the time before they act in a way that is detrimental to society. This is a kind of ecology of minds and has to treat the developing human being like a fixed point, a witness to a flow of information in all their senses and perceptions. This fixed point must be coupled with the built-in emotional and bodily responses to that information, and in a way that impacts and alters future perception and response.
The reason it was said that we might not be ready is that part of the solution to this problem means addressing the sources of this information in the world. This will create a conflict between the information we know causes damage to human health and our freedoms. That conflict will not be spelled out here, as it is well beyond the scope of this book, but the fact is that we are already beginning to see it unfold. All that can be said of this ecology of minds is that it will lead to the conclusion that information can be a vector for disease. How future societies deal with integrating the undeniable facts of our nature with that science will ultimately be up to them.
For now, it appears we will continue with top-down approaches. One of the consequences of this will be the economic challenges created by the increasing use of automation, which will be accelerated by the vast empowerment of individuals. This is the subject of the next chapter.