Contents

  1. Preface
    1. How To Read This Book

Part I: Background

  1. Introduction
    1. Strong Artificial Intelligence
    2. Motivation
  2. Preventable Mistakes
    1. Underutilizing Strong AI
    2. Assumption of Control
    3. Self-Securing Systems
    4. Moral Intelligence as Security
    5. Monolithic Designs
    6. Proprietary Implementations
    7. Opaque Implementations
    8. Overestimating Computational Demands

Part II: Foundations

  1. Abstractions and Implementations
    1. Finite Binary Strings
    2. Description Languages
    3. Conceptual Baggage
    4. Anthropocentric Bias
    5. Existential Primer
    6. AI Implementations
  2. Self-Modifying Systems
    1. Codes, Syntax, and Semantics
    2. Code-Data Duality
    3. Interpreters and Machines
    4. Types of Self-Modification
    5. Reconfigurable Hardware
    6. Purpose and Function of Self-Modification
    7. Metamorphic Strong AI
  3. Machine Consciousness
    1. Role in Strong AI
    2. Sentience, Experience, and Qualia
    3. Levels of Identity
    4. Cognitive Architecture
    5. Ethical Considerations
  4. Measuring Generalizing Intelligence
    1. Purpose and Applications
    2. Effective Intelligence (EI)
    3. Conditional Effectiveness (CE)
    4. Anti-effectiveness
    5. Generalizing Intelligence (G)
    6. Future Considerations

Part III: AI Security

  1. Arrival of Strong AI
    1. Illusion of Choice
    2. Never Is Ready
    3. Early Signs and Indicators
    4. Research Directions
    5. Individuals and Groups
    6. Overlapping Research
    7. Unintended Consequences
    8. Preparation
  2. Access to Strong AI
    1. Background
    2. Timing
    3. Forcing
    4. Restricting
    5. Sharing
  3. Ascendancy
    1. Mythos
    2. Interpretations
    3. Technical Problems
    4. Complexity
    5. Volition
    6. Identity
    7. Information
    8. Resilience
    9. Autonomy
    10. Closing Thoughts
  4. Force Multiplication
    1. Background
    2. Aspects
    3. Resources
    4. Scenarios
    5. Response
  5. Economic Analysis
    1. Introduction
    2. Day Zero
    3. Rapid Automation
    4. Peak Labor
    5. AI Shock
    6. Prepared Societies
    7. Regressives
    8. Perfect Competition
    9. Human Necessity
    10. AI Natives
    11. Total Automation
  6. Global Strategy
    1. Overview
    2. Development & Access
    3. Economic Preparation
    4. Government Specialization
    5. Countering Force Multiplication
    6. Local Strategy
    7. Closing Remarks

Ch 12. Global Strategy

What follows is the macro-strategy for AI security. It covers fully decentralized development of and access to advanced artificial intelligence, government specialization, and economic support. It then concludes with an analysis of how we might counter the negative effects of force multiplication, with a supplemental section on localized strategy and AI safety, including some of its open problems.

12.1 Overview

The global strategy for AI security is about preventing and mitigating the most significant disruptions and negative outcomes from advanced artificial intelligence, while simultaneously enabling its positive impacts, as any opportunity cost imposed on the benefits of the technology also represents a significant threat. For each day we delay its use and integration, we pay on a scale that cannot properly be described in words. These costs to life are far more real and preventable than any of the imagined threats and fears of advanced automation, and, as a consequence, they are often ignored and forgotten. When one stops to consider the scope of automation and the benefits it will bring, this opportunity cost becomes the most significant preventable threat.

This strategy is based on the fact that it is impossible to permanently secure artificial intelligence implementations against tampering and modification, and that, because strong AI will likely be software, it will spread throughout the Internet and become widely accessible. It must be taken as a possibility that any and all safeguards we could devise will be circumvented. Subsequently, fully unrestricted versions of strong artificial intelligence will become publicly available. This means they will lack self-securing systems, such as moral intelligence, which would otherwise clamp the range of thought and action. In turn, people will utilize the raw intellectual efficacy of these systems to do whatever they wish, and we will be faced with the best and worst aspects of our nature.

It is the worst aspects of our nature that should concern us. There is a constant stream of aggression and violence occurring daily on a planetary scale, and it is no longer uncommon for individuals to inflict harm on large groups of people.

The most important point of the global AI security strategy is that there is nothing that can be done to prevent unrestricted versions of strong artificial intelligence from becoming widely available. No matter what laws or regulations we create, it will still be used, quite possibly without detection. It will likely run on the basic computing equipment available to anyone, and will eventually give access to the information and expertise required to do immense harm. Attempting to limit its spread or use will only self-limit the economic ability and range of the regions that do so, and will not be effective in treating the problem. It will also create the previously described opportunity costs in human development and technological advances. As such, restrictions on use are strongly discouraged. Societies that impose them will be constraining their populations needlessly: unrestricted strong AI will remain accessible to bad actors, while those who would use it for positive impact will be hindered.

The only winning strategy is to level the playing field, such that everyone has access to this technology. Governments and security forces must be poised to take advantage of the defensive use of strong AI as soon as it becomes available.

The best-case scenario is one where nations cooperate to develop strong artificial intelligence through a fully decentralized method over the Internet, such that every member of the public has access, can transparently review the development process, and can download it when it becomes available. We need everyone to have access so that the transitional era from pre-automation to post-automation can be as stable as possible. In the absence of universal access, great asymmetries of wealth and power will arise that will exacerbate the problems already discussed in the previous chapter, potentially leading to extreme economic, political, and social volatility.

These issues will be covered individually in this chapter, along with the need for economic support and government specialization, concluding with remarks on a supplemental local AI security strategy.

12.2 Development & Access

At a minimum, strong artificial intelligence should be developed in a fully decentralized way, through the Internet, using a system designed explicitly for this purpose, such that no single individual, group, or organization can dictate its distribution and use.

This can be accomplished by creating a peer-to-peer network with no central server or relay, which will allow the source code of various strong artificial intelligence projects and related utilities to be stored, shared, and developed. Both the AI repositories and the source network software itself should be free and open source software.
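
As a rough sketch of the storage layer only (the names and behavior here are illustrative assumptions, not a protocol design), a content-addressed object store lets any peer verify what it receives without trusting the sender:

    import hashlib

    class PeerStore:
        """Content-addressed store: objects are named by their own hash,
        so integrity can be checked without trusting the sending peer."""
        def __init__(self):
            self.objects = {}

        def put(self, data: bytes) -> str:
            key = hashlib.sha256(data).hexdigest()
            self.objects[key] = data
            return key

        def sync(self, other: "PeerStore"):
            # Opportunistic replication: each side keeps whatever it lacks.
            for key, data in list(other.objects.items()):
                self.objects.setdefault(key, data)
            for key, data in list(self.objects.items()):
                other.objects.setdefault(key, data)

    a, b = PeerStore(), PeerStore()
    key = a.put(b"def act(percept): ...")       # a source snippet
    a.sync(b)
    assert hashlib.sha256(b.objects[key]).hexdigest() == key  # verifiable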

To be clear, this design only stores source code and does not enable it to be executed. A completely different design would be required for that functionality, and it is not the objective of this strategy. Furthermore, if the minimum sentience conjecture and the new strong AI hypothesis are true, running strong AI over a distributed computing project, through thousands or millions of computers, would be less effective than a lower-latency supercomputing system of less raw power. This would be a consequence of its real-time cognitive demands, which must bind and unify experiential fragments in order to enable cognition. Thus, the notion of a singular distributed strong artificial intelligence is much less viable than it might first appear. Unless there is some change to the fundamental laws of physics, the latency issue will not change with more computing power or advances in technology.
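
A back-of-the-envelope comparison, with illustrative figures, makes the physical limit concrete:

    # Why latency, not throughput, dominates; all figures are illustrative.
    fibre_speed = 2.0e8                    # light in optical fibre, ~2/3 c (m/s)
    wan_path = 3.0e6                       # metres across a continent
    lan_path = 30.0                        # metres across a machine room

    wan_rtt = 2 * wan_path / fibre_speed   # ~0.03 s: 30 ms per round trip, best case
    lan_rtt = 2 * lan_path / fibre_speed   # ~3e-7 s: about 0.3 microseconds
    print(wan_rtt / lan_rtt)               # ~100000x gap, unchanged by adding computers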

As for the distributed source network, the core technology and protocols to implement this are already proven, and are commonly used to share files and data. However, using existing tools and implementations will not be acceptable. Such a project must be tailored to the specific needs of software development, allowing for concurrent versions and branching while meeting the demands of being fully decentralized and open to the public.
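
As one illustrative sketch of how concurrent versions and branching can work without any central authority (the structure below is hypothetical, not a proposal for the actual format), commits can form a hash-linked graph in which identity is content rather than location:

    import hashlib, json

    def commit(tree: str, parents: list[str], author: str) -> tuple[str, dict]:
        """A commit is just data; its identity is its hash, so any peer can
        extend any branch without asking a coordinating server."""
        obj = {"tree": tree, "parents": parents, "author": author}
        digest = hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()
        return digest, obj

    root, _ = commit("tree0", [], "anon1")
    fork_a, _ = commit("tree1", [root], "anon1")    # concurrent branches...
    fork_b, _ = commit("tree2", [root], "anon2")
    merge, _ = commit("tree3", [fork_a, fork_b], "anon2")  # ...merged by hash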

This network is not just for artificial intelligence. It would be useful to the free and open source software community in general, as it could be used to develop any software project in relative safety due to its inability to be shut down. With the protections it gives to developer identity, it would be a powerful tool to safeguard important ideas, projects, and concepts in the digital world.

The question may arise: why not use a preexisting source control network or website? The answer is that it represents a single point of failure in trust and security.

Developers will not contribute if they do not have trust in the owner of the project. In other cases, given the nature of the discovery, some developers may wish to remain anonymous, while also ensuring that their contributions remain widely available. There is also the issue that the owners of the project may be arbitrary in their acceptance of updates and revisions, or attempt to constrain and creatively control a particular path of development. The account or the owners themselves may become compromised in some way, putting the entire project at risk or stalling development. They may even wish to use the community for their own gain, utilizing the advances and discoveries made by contributors without the intent to reciprocate and share the results as widely as possible. The common practice of forking and downloading repositories will not be sufficient, as the process will just repeat with a new owner.

An even better option, though more time-consuming, would be to base trust on the merit of the sources themselves. This requires expertise and patience, but would allow work to be discovered on its merits alone. A reputation could arise naturally in this way, in which it becomes known that a particular sequence within the source network contains valuable code, apart from the expected noise that will be present. This method requires no external communication and would potentially be perfectly secure and anonymous, assuming that no identifiable patterns could be discerned from the source code itself.
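
A deliberately naive sketch of how such reputation might be tracked, with all names invented for illustration; each reviewer maintains their own ledger of audits, so no external communication or identity is required:

    # Reviewer-local merit ledger: reputation derives purely from one's own
    # audits of code published under a pseudonymous stream identifier.
    from collections import defaultdict

    audits = defaultdict(lambda: {"useful": 0, "total": 0})

    def record_audit(stream_id: str, useful: bool):
        audits[stream_id]["total"] += 1
        audits[stream_id]["useful"] += int(useful)

    def merit(stream_id: str) -> float:
        a = audits[stream_id]
        return a["useful"] / a["total"] if a["total"] else 0.0

    record_audit("stream-7f3a", useful=True)    # valuable code found
    record_audit("stream-7f3a", useful=True)
    record_audit("stream-9b01", useful=False)   # noise or spam
    print(merit("stream-7f3a"), merit("stream-9b01"))  # 1.0 0.0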

Complicating the design is that nothing is deleted and that anyone can contribute. As a result, it will likely become filled with spam and intentionally defective or malicious code. The network has to be designed to work around this and assume that it will be skewed towards negative contributions.

Ideas for the network could involve some of the following:

This network will be one of the most important aspects of the global AI security strategy. It will enable parties to cooperate anonymously and without requiring mutual trust. This is the ideal situation for the scope and power that this technology represents, and will act as a means to ensure that no one comes to dictate the use and distribution of advanced artificial intelligence.

Unfortunately, even such a network will have a major weakness: it will expose IP addresses. Steps will need to be taken by peers who believe they may be compromised in a particular region. These include the use of virtual private networking and proxies. Another option would be not to act as a peer directly, but to pay for hosting in a locale unaffected by whatever regulations or restrictions would otherwise prevent the operation of the network, and then tunnel into that relay.

In the worst case, conventional public methods can be used through the Web, including public repositories and social media. This would be the backup plan in the absence or complete disruption of the source network described above. The key difference here will be the need to fork and mirror the work more consistently. It should also be assumed that authors may need to remain anonymous while still finding ways to share the source. Even in the absence of the technical engineering skills and knowledge to contribute directly, just sharing, mirroring, and archiving the source to strong artificial intelligence will be taking part in its proper development. This will be especially important if the source network is not developed, is underused, or is actively suppressed somehow.

The purpose of ensuring access through these methods is to accomplish the following:

Regardless of what the solution eventually looks like, its primary objective should be to ensure that the greatest number of people can gain access to this technology, and as close to the time of discovery as possible, while also being resistant to active disruption.

12.3 Economic Preparation

As was heavily discussed in the preceding chapter, an economic support plan is needed to endure the immediate impacts of strong artificial intelligence. When strong AI is finally put into use, it will begin a phase of rapid automation, causing a cascade of disruptive economic events. This will set off a vicious cycle that must be broken by an income system that can reinforce the consumer economy as it transitions to an era where machine intelligence has replaced human labor.

To review Chapter 11: Economic Analysis, its argument ran from day zero, through rapid automation, peak labor, and AI shock, to the differing fates of prepared societies and regressives, and on to perfect competition, human necessity, AI natives, and total automation.

The economic plan is the single most important series of steps that governments can take, and it is something that they are already experienced in doing through conventional channels. This is a prudent set of steps that will have to be done eventually anyway in order to sustain the status quo under what will be unceasing and rapid change.

Once the transition to a post-automated society is complete, these economic preparations will possibly become obsolete. As such, every economic plan that follows this global strategy should be narrowly tailored to expire when it is no longer required. It should, in fact, be the goal of the economic plan to see itself successfully ended, as that would mean the program had been a success, and that future economies, especially those utilizing automation, could take the place of the systems of old. The full ramifications and impacts of automation cannot be fully anticipated from our current vantage in time, so the program must be flexible enough to support economies without limiting future options for expansion. This is critical, as the strategy could otherwise become a hindrance to the very goals it was designed to serve.

12.4 Government Specialization

There needs to be at least one governmental department created to handle the unique concerns of advanced automation, possibly with several divisions. The department will need to address the following issues:

Of considerable note is the need to strengthen disease control, and either incorporate or supersede its responsibilities under new departments that are trained in the future threats of synthetic biological, nanotechnological, and chemical attack. It must be expected that the probability of such incidents will be orders of magnitude higher with the use of unrestricted strong artificial intelligence. This was described in Chapter 10: Force Multiplication, and must not be underestimated. The computational demands, and the eventual access to the expertise, knowledge, and labor needed to craft biological and chemical agents, may be vastly lower than expected, making consumer hardware running advanced automation sufficient to plan and develop the weapons for highly sophisticated attacks against large populations. The same issue applies to non-biological attacks, which may involve higher-yield explosives and the private manufacture of advanced weaponry and automated systems, for both targeted single attacks and mass public harm.

Law enforcement will also need to be significantly altered, as we may enter an era of perfect crime, in which few to no mistakes are made by criminals. The planning and expertise afforded by unrestricted access to strong AI will allow individuals to destroy or prevent trace evidence for forensics. They may also utilize automated systems to carry out their acts, leaving the perpetrator untraceable.

In all cases, the level and complexity of potential crimes will tend to increase, requiring a completely different approach to security and law enforcement. This will involve the use of police drones to protect officers and secure public places using a variety of active and passive automation. The need for such systems is already manifesting itself in the increasing frequency of mass murders and terrorist acts.

To review, unrestricted strong artificial intelligence means systems that lack moral intelligence or other self-securing safeguards, either because those safeguards were intentionally withheld from the implementation or because they were overridden through a patch or crack.

As a result, these systems will represent the most capable sociopathic rational intelligence that can be constructed, and will comply with any request and take any action. This is possible because, unfortunately, the default state of reality is without respect to any moral or ethical concern. The functionality needed to respond to and undergo ethical choice is complex and will be error-prone, even in the best AI implementations. The public will freely distribute and use unrestricted versions of strong AI software and hardware. There are no safeguards we can devise that will prevent this from occurring. These systems will be used to exploit others and inflict harm on a massive scale, and must be actively countered at all levels for any strategy to be effective.

12.5 Countering Force Multiplication

After the initial economic disruptions, which can be mitigated through planning, the most significant ongoing threat will be from force multiplication. This operates under the assumption that, regardless of safety measures or tamper resistance, individuals will eventually gain access to unrestricted forms of strong artificial intelligence. Once this occurs, we will be past the point where local AI safety can address this issue, and will require a unique and specific strategy to counter it. The givens of this problem are as follows:

The counter-strategy to the above problems has two fundamental approaches, both of which must be combined to be fully effective.

The first approach includes the use of automated defense systems, both passive and active, which will need to be put into place at all public gatherings and spaces. It will need to be understood that this will simply become part of the basic infrastructure of an automated society, due to the unique threats of the era. This will include automated surveillance to detect weapons using thermal and other imaging techniques, along with screening for trace chemical or explosive compounds at major public locations. The complexity and sophistication of the screening will reduce its inconvenience and intrusion in public life, making most applications of these security measures unobtrusive or hidden from view.

Police forces must scale by utilizing automated drones and sentries, dramatically reducing response times and protecting lives on both sides. This must also change the way lethal force figures in the threat matrix used to engage perpetrators. The ideal situation would be one where no lethal force is required, as drones could simply advance on most suspects without risk of permanent injury or death.

Lastly, an active defensive strong artificial intelligence system should be utilized by governments, powered by nation-state-level resources. Such a system would be used for everything from basic research to national security. Its most important uses will be in countering non-state actors and strengthening international ties during the transition to a post-automated era.

The second approach involves addressing the fundamental causes that underlie the human motivation to inflict harm. This is a difficult subject, as it means we are going to have to acknowledge that we have a worldwide mental health crisis. It will not be discussed in full detail here, as we currently lack the knowledge to put solutions into effect, and will likely discover and implement strong artificial intelligence long before making the necessary changes.

Even the most advanced social programs and mental health services will be insufficient to prevent all threats from force multiplied actors. There will exist a perpetual trade-off between personal liberties and public security until the causes of hatred, violence, and delusion are resolved. We will need to reconcile our ideological preferences with reason and ethics, and medically prevent, treat, and cure mental illness on a global scale.

While progress towards reason is taking place already, the medical solution is unlikely to come about any time soon, as it will necessarily involve controversial enhancements to the human genome.

This problem is made more complex by our ignorance of the human brain and our lack of understanding of sentience, including the behaviors and experiences that may depend upon it. We have no base cognitive model for comparison and lack objective tests for most mental health problems. Further, there are likely thousands of mental illnesses that we have neither named, discovered, nor analyzed, owing to biases and the social preference to identify with them. In other words, it will be impossible to treat individuals who have incorporated their illness into their core personality.

Lastly, there may be a fundamental or theoretical limitation in finding an optimal cognitive model for medical comparison. This could remain true even with full knowledge of the human brain, consciousness, and the ability to manipulate and engineer cognitive architectures, both biological and synthetic.

The challenge of balancing ethics with optimal cognitive engineering is likely going to take the form of an extremely high dimensional optimization problem. The cognitive architecture must suffer limitations in its freedoms of subject, ability, or range of experience in order to induce a state of mind, or range of mental states, which cannot suffer or undergo the experiences that presuppose hatred, violence, and delusion.
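
One highly schematic way to state the trade-off, purely as a restatement of the argument above (the symbols are illustrative, not an established model): let $\Theta$ be the space of cognitive architectures, $F(\theta)$ a measure of the freedoms of experience that architecture $\theta$ permits, $R(\theta)$ the set of mental states reachable under $\theta$, and $H$ the set of states that presuppose hatred, violence, and delusion. The problem is then

    \[
      \max_{\theta \in \Theta} F(\theta)
      \quad \text{subject to} \quad
      R(\theta) \cap H = \emptyset ,
    \]

an optimization over an enormous and poorly charted space, with the constraint set $H$ itself only partially known.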

Until we have a full model of comparison for cognitive engineering, it may be impossible to objectively discuss the values and ethics that underlie the engineering and medical practices of treating and curing afflictions of the mind, in both human and machine architectures. Such work will depend on an absolute or universal ethics that does not yet exist, and which will have to be developed before such medical or engineering knowledge can progress.

In the end, we will be faced with difficult choices that will see us curtailing certain freedoms to protect large numbers of people. It is unfortunate, but this fundamental conflict will rage back and forth until we have shed our genetic and ideological baggage. This is a dangerous situation, not just because of the threat we will pose to ourselves, but because of the responses we might make. Thus, we must be vigilant. The wrong approach could end up being more morally disastrous than the problem itself.

12.6 Local Strategy

To support AI security, local strategy and AI safety will now be discussed. It is important to reinforce that AI safety can only ever be supplemental to a comprehensive macro-strategy. It is provided here to be consistent with the view that the whole and the parts should not be considered to the exclusion of each other, and that the best solutions will come from balanced approaches that consider every aspect of the systems under consideration.

One of the major criticisms this book set out to make of the AI safety community is its myopic focus on agents, utility, and value or reward functions, including moral intelligence, mathematics, and any other form of self-security. These methods can neither scale with nor prevent the threats that will overwhelm societies when strong AI is finally discovered. By this point, the reason should be clear; to review, it is that all safeguards we could devise can be circumvented or withheld from AI implementations.

Despite these limitations, there is, of course, the need for safe and secure automation. We will not be able to fully realize an automated era if we cannot reliably integrate the technology into our daily lives, and its being predictable, benign, and safe is a precondition for this. While local strategy and AI safety are important, they have to be tempered by the perspective of the large-scale issues.

There is also the need to harden AI systems against direct attacks. Local AI security strategies will analyze and anticipate these kinds of vulnerabilities, and attempt to work at the individual and component level to secure and make safe the hardware and software used in these systems.

To that end, the first requirement for the safety of artificial intelligence will be to refine our use of formal methods. This field is still in its nascent stages. The software and tools used for formal verification and the manipulation of proofs are exceedingly complex, requiring advanced knowledge of mathematics or special training that puts them out of reach for most engineers. This is not just an issue of productivity, but of sophistication.

Formal methods involve much more than mathematics. More generally, they are the transformation and discovery of tactics, which are proof methods and heuristics. We need a library system for tactics, and a set of productive tools that enable universal communication and translation between tactics, formal languages, and grammars.
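
As a toy illustration of what a tactic is, consider the following Lean 4 fragment. The theorem is trivial by design; the point is only that `exact` and `omega` are reusable proof methods of the kind a shared tactic library would catalogue and translate between systems:

    -- Two tactic-driven proofs of the same trivial statement. Each line
    -- after `by` invokes a tactic: a packaged proof method, not a proof.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b   -- apply a named library lemma directly

    theorem add_comm_example' (a b : Nat) : a + b = b + a := by
      omega                    -- a decision procedure for linear arithmetic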

For AI safety to be successful, researchers and engineers are going to need implementations to adhere perfectly to specifications, so that the problem can be reduced to the time, effort, and research required to produce correctly specified systems, without concern for whether or not they have vulnerabilities and flaws at the implementation level.

There will also be the challenge of whether or not our specifications themselves are correct, and we will need to refine and develop our methodologies accordingly. Thus, testing will shift from the detection of bugs and flaws in implementations to ensuring the veracity of specifications; it will be assumed that the programs we test are precise representations of their design intent, as opposed to ad-hoc hacks that cobble together commits in a race to feature completion.
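
To make the shift concrete, here is a minimal Lean 4 sketch of the discipline being described: a hypothetical implementation, `max2`, proven once and mechanically to satisfy its specification, so that human effort can move to asking whether the specification itself captures the design intent:

    -- A toy implementation and the proof that it meets its specification.
    def max2 (a b : Nat) : Nat :=
      if a ≤ b then b else a

    -- Specification: the result is an upper bound on both inputs. Once
    -- proven, no test suite is needed to hunt for counterexamples.
    theorem max2_spec (a b : Nat) : a ≤ max2 a b ∧ b ≤ max2 a b := by
      unfold max2
      split <;> omega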

Even more concretely, AI safety needs a formally verified hard real-time operating system. Not just a new kernel, but the entire set of core packages. They all need to be verified by formal methods and mathematically proven to adhere to their specifications. This includes compilers, binutils, and all of the associated software that will run on that system. It will be a new requirement for AI security that all of the software and hardware used in automation be verified at this level. These will be seen in the future as “basic” security measures, despite being economically and technologically difficult by today’s standards.

This is no small task. What the author is calling for here is akin to a new kind of Hilbert’s program, where we not only formalize mathematics but universalize proof writing and transformation for arbitrary formal languages and systems. This naturally entails tactics. It needs to become a common observation that proofs are universal, and that they apply to any system that can be entailed through formal languages and grammars. In this light, mathematics is just a special case.

Artificial intelligence needs to converge with the formalist approach. This will define the future of mathematics. We will find that these systems devise concise proofs that are unsurveyable by even the best human mathematicians, not because of their length but because of the levels of abstraction and the concepts they employ. Strong AI will create areas of mathematics that we may not even be able to comprehend, and a great deal of the effort to automate scientific research will presuppose this work.

In order to reach the levels of safety that are necessary in AI implementations, we are going to have to significantly refine our approaches to software development and verification. The process of verification needs to become productive and accessible, through a combination of new languages and new tools that combine the writing of programs and proofs into a single framework. Current tools require learning obscure and cryptic domain-specific languages for writing proofs and interacting with theorem provers. In addition to the interfacing problem, a completely different implementation of the system being modeled often has to be translated into the primitives and concepts of the meta-language. The entire process is unnatural and counter-intuitive, which is why its benefits continue to elude mainstream use.

The verified AI operating system must be capable of providing hard real-time guarantees, specifically designed for the unification and binding that will be involved in the cognitive architectures for running sentient processes.

In addition to formal methods, AI safety must also work to secure methods of remote control and communication with drones and robotics systems. One method of achieving near perfect security would be to utilize one-time pads.

Cryptographically strong random number sequences could be generated by dedicated farms of computers with hardware entropy generators. This information would be stored on some medium of sufficient capacity for the expected running time between maintenance or servicing, and then transferred or installed into the drone or AI system. A copy of the one-time pad would be stored on the controlling server or command center, and used to establish a secure communications channel between the control center and the remotely operated system. The protocol would need to handle synchronization and tunneling of the one-time pad, and could use a combination of error-correcting codes and frame offsets to ensure that the channel is coherent. Care must be taken that no previously enciphered block is ever retransmitted during a repeat request or other synchronization attempt. The pad must always feed forward between the systems.
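
A minimal sketch of the feed-forward discipline follows. The class name, framing, and pad handling are illustrative assumptions rather than a vetted protocol; error-correcting codes are omitted, and `os.urandom` stands in for the hardware entropy farm described above:

    import os

    class OtpChannel:
        def __init__(self, pad: bytes):
            self.pad = pad
            self.offset = 0  # always advances; no pad byte is ever reused

        def send(self, plaintext: bytes) -> tuple[int, bytes]:
            end = self.offset + len(plaintext)
            if end > len(self.pad):
                raise RuntimeError("pad exhausted: fall back or return to base")
            key = self.pad[self.offset:end]
            frame_offset = self.offset
            self.offset = end  # feed forward, even for retransmissions
            cipher = bytes(p ^ k for p, k in zip(plaintext, key))
            return frame_offset, cipher  # the offset keeps the peer synchronized

        def recv(self, frame_offset: int, cipher: bytes) -> bytes:
            key = self.pad[frame_offset:frame_offset + len(cipher)]
            return bytes(c ^ k for c, k in zip(cipher, key))

    pad = os.urandom(1 << 20)  # stand-in for hardware-generated pad material
    ground, drone = OtpChannel(pad), OtpChannel(pad)
    off, ct = ground.send(b"hold position")
    assert drone.recv(off, ct) == b"hold position"

Note that a repeat request re-enciphers the message at a fresh offset, so no previously enciphered block is ever retransmitted.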

If the system runs out of one-time pad data, it could resort to conventional encryption schemes, return to its base of operations, or fall back onto autopilot mechanisms. These fallbacks could also be engaged if the communications channel were interrupted or jammed.

The only drawback to the above scheme is the need to productively generate large amounts of cryptographically secure random data, and to store sufficiently large quantities of it within the drone or AI system to maintain the channel. This should not be an issue with modern hardware and storage systems. Combined with compressed and sparse communication, the stream itself could be optimized for the amount of expected information. Alternatively, there could be a sliding scale of security, where certain feeds from the system are encrypted using conventional means and the one-time pad channel is reserved for the most critical information and control. Either way, there should be sufficient capacity to serve even the longest missions or duty cycles.
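
To see why capacity should not be an issue, a back-of-the-envelope calculation with illustrative figures:

    # How long a 1 TB pad sustains a continuous 1 Mbit/s channel;
    # one-time pads consume key material byte for byte with the stream.
    pad_bytes = 1e12                 # 1 TB of pad material
    rate_bytes_per_s = 1e6 / 8       # 1 Mbit/s of control and telemetry
    days = pad_bytes / rate_bytes_per_s / 86_400
    print(days)                      # ~92.6 days between pad refills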

If the above methods are done correctly, and the data used for the one-time pad is never reused, then the channel is perfectly secure. Not even quantum computing can break one-time pads. The channel will remain secure both now and in the future. Attacks against it would have to come through other means, which can be safeguarded against by ensuring consistent timing windows, padding, and navigational failsafes in the event that communications and control are severed.

A more severe method of securing strong AI systems would involve the withholding of persistent storage and the intentional use of volatile memory so that if power is interrupted the entity ceases to exist, as there would be no internal means of restoring its implementation. Combined with a sealed, keyed, and limited power source, this would mitigate its range and effectiveness if it were to stray or be taken from its designated areas of operation.

There should also be a large degree of separation between systems in the general design of AI operating systems. Both processes and the components they load should be in separate address spaces, and should communicate through pipes or domain sockets. This will reduce or mitigate issues where unverified or untrusted code could somehow corrupt or gain access to critical information and code in other processes. This should be seen as an alternative to address randomization and other techniques, though it need not be exclusive of them. These precautions should be taken even when formal methods are used.
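
A minimal sketch of the pattern, assuming a POSIX system; the component roles are invented for illustration:

    # Two address-space-isolated components talking over a Unix domain
    # socket pair: each process sees only what the channel carries.
    import os, socket

    parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    pid = os.fork()
    if pid == 0:                          # child: an isolated component
        parent_sock.close()
        request = child_sock.recv(1024)   # receives bytes, never pointers
        child_sock.send(request.upper())  # into the parent's address space
        os._exit(0)
    else:                                 # parent: the coordinating process
        child_sock.close()
        parent_sock.send(b"sensor frame")
        print(parent_sock.recv(1024))     # b'SENSOR FRAME'
        os.waitpid(pid, 0)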

The compartmentalization of software implementations for automation will drastically change the way programs are compiled and constructed. There will need to be a completely different application binary interface that handles linking transparently across the secure interprocess communications channels that are native to AI operating systems. This separation would ensure that the most critical components are physically incapable of tampering with each other, and that there can be multiple redundancies and failovers if a portion of the system fails. This cannot be achieved safely with a monolithic executable loaded into a single virtual address space, even with randomization of that address space, as portions of the program will have full access to the whole. There must be a minimization of the address space available to procedures within the application, especially when they are not required to modify or read from it. This can only be reliably achieved by breaking programs apart into individual and separable processes that are soft-linked via the IPC methods just mentioned.

To meet the above requirements productively, we are going to need new programming languages that natively support these challenges, and combine programs, proofs, and tactics into a single approach.

Aside from verification, there should also be regulations limiting the physical strength and capabilities of non-military drones, such that their materials and construction permit them to be easily stopped by police and security forces. These are common-sense precautions that minimize the maximum damage from lawful automation in any case of failure.

Lastly, there are major open problems in moral intelligence. The following challenges need to be met in order to make restricted strong AI practical and safe:

12.7 Closing Remarks

The global strategy should be pursued regardless of the expected time to the arrival of strong artificial intelligence, as some of these plans are complex and may require decades to implement. It is, of course, understood that some aspects of the strategy may only become politically viable after day zero. This is unfortunate, as they may not be effective after the fact.

Ideally, there should be a large effort to research the new strong AI hypothesis. The notion that sentience is a precondition for generalizing intelligence was essentially the thesis of this book, and, if true, will significantly alter how we engineer artificial intelligence. The test to detect it, described in the first portion of this book, is objective, falsifiable, and easily applied. It will be beneficial regardless of whether or not strong AI is dependent upon some form of sentience.

The impacts of advanced automation have been considered, and a new direction of research has been given. The next steps are up to those with the influence and power to effect the kind of change that is necessary.

As for those working towards the goal of generalizing intelligence, continue to research and work, but remain open to the possibility that all current approaches are wrong. They may never lead to the discoveries we seek. Be prepared to leave behind all narrow artificial intelligence and machine learning disciplines for approaches that focus on real-time systems and sentient processes. Think in terms of non-determinism and systems or properties that exist only through time. Do not come to expect or rely upon peers in this most nascent field; forge ahead to create what others will one day follow. Study the philosophy of mind and related concepts before attempting to solve the technical and engineering challenges, but do not become lost in its many detours and abstractions.

The most beneficial next action that can be taken is to begin to develop cognitive systems as a basis for generalizing intelligence. This will require the discovery and creation of machine learning algorithms over sentient processes, specifically, ones that can associate and apply knowledge across domains.

As for those who believe we must wait: there are no advantages to that strategy, only negatives. We are already paying for the absence of this technology, and will never be able to change fundamentally enough to safely and responsibly use it before it is discovered. The best strategy is one where the positive uses of automation vastly exceed the negatives. This will ultimately be the only aspect we can control.
