Contents

  1. Preface
    1. How To Read This Book

Part I: Background

  1. Introduction
    1. Strong Artificial Intelligence
    2. Motivation
  2. Preventable Mistakes
    1. Underutilizing Strong AI
    2. Assumption of Control
    3. Self-Securing Systems
    4. Moral Intelligence as Security
    5. Monolithic Designs
    6. Proprietary Implementations
    7. Opaque Implementations
    8. Overestimating Computational Demands

Part II: Foundations

  3. Abstractions and Implementations
    1. Finite Binary Strings
    2. Description Languages
    3. Conceptual Baggage
    4. Anthropocentric Bias
    5. Existential Primer
    6. AI Implementations
  4. Self-Modifying Systems
    1. Codes, Syntax, and Semantics
    2. Code-Data Duality
    3. Interpreters and Machines
    4. Types of Self-Modification
    5. Reconfigurable Hardware
    6. Purpose and Function of Self-Modification
    7. Metamorphic Strong AI
  5. Machine Consciousness
    1. Role in Strong AI
    2. Sentience, Experience, and Qualia
    3. Levels of Identity
    4. Cognitive Architecture
    5. Ethical Considerations
  6. Measuring Generalizing Intelligence
    1. Purpose and Applications
    2. Effective Intelligence (EI)
    3. Conditional Effectiveness (CE)
    4. Anti-effectiveness
    5. Generalizing Intelligence (G)
    6. Future Considerations

Part III: AI Security

  7. Arrival of Strong AI
    1. Illusion of Choice
    2. Never Is Ready
    3. Early Signs and Indicators
    4. Research Directions
    5. Individuals and Groups
    6. Overlapping Research
    7. Unintended Consequences
    8. Preparation
  8. Access to Strong AI
    1. Background
    2. Timing
    3. Forcing
    4. Restricting
    5. Sharing
  9. Ascendancy
    1. Mythos
    2. Interpretations
    3. Technical Problems
    4. Complexity
    5. Volition
    6. Identity
    7. Information
    8. Resilience
    9. Autonomy
    10. Closing Thoughts
  10. Force Multiplication
    1. Background
    2. Aspects
    3. Resources
    4. Scenarios
    5. Response
  11. Economic Analysis
    1. Introduction
    2. Day Zero
    3. Rapid Automation
    4. Peak Labor
    5. AI Shock
    6. Prepared Societies
    7. Regressives
    8. Perfect Competition
    9. Human Necessity
    10. AI Natives
    11. Total Automation
  12. Global Strategy
    1. Overview
    2. Development & Access
    3. Economic Preparation
    4. Government Specialization
    5. Countering Force Multiplication
    6. Local Strategy
    7. Closing Remarks

Preface

The public will increasingly come to rely upon AI researchers, and our ideas and philosophies must presuppose that responsibility. It is therefore important to point out that AI security is not merely a difference of opinion; it rests upon a technical basis.

We cannot control the flow of information, and the implementation of these advanced artificial intelligence systems will be exactly that: software that anyone can use, modify, and share. This is not a long-term issue to be set aside for later; its consequences require planning today for an inevitable future in which everyone has access.

Complicating matters are the facts that we have not had a research direction for strong artificial intelligence and that some in the machine learning community have claimed that deep learning is “general”. What they are referring to are narrow AI systems that use reinforcement learning to adjust to new applications while failing to exhibit cross-domain transfer of knowledge.

Both issues are addressed in this text, which provides an entirely new research direction and a way to test claims of generality. True generalizing intelligence is falsifiable in artificial systems: it involves the enhancement of effectiveness based on prior learning in subject areas different from the one being attempted. This distinction is critical, as it is part of what makes strong artificial intelligence unique; the most difficult problems in automation are believed to require this capacity.
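The comparison underlying that falsifiability claim can be sketched as a simple experiment: measure an agent's effectiveness on a target task with and without prior learning in an unrelated domain. The sketch below is an illustrative assumption on my part, not the book's formal test; every name in it (cross_domain_gain, make_agent, train, evaluate) and the toy scoring rule are hypothetical placeholders standing in for the mathematics developed in Chapter 6.

```python
def cross_domain_gain(make_agent, train, evaluate, source_task, target_task):
    """Effectiveness gain on target_task attributable to prior
    learning on an unrelated source_task (hypothetical protocol)."""
    baseline = make_agent()                 # no prior learning
    transfer = make_agent()
    train(transfer, source_task)            # prior learning in a different domain
    return evaluate(transfer, target_task) - evaluate(baseline, target_task)

# Toy stand-ins so the sketch runs end to end:
def make_agent():
    return {"knowledge": set()}

def train(agent, task):
    agent["knowledge"].add(task)

def evaluate(agent, task):
    # Illustrative scoring rule: any prior domain knowledge adds effectiveness.
    return 0.5 + 0.25 * len(agent["knowledge"])

gain = cross_domain_gain(make_agent, train, evaluate, "chess", "chemistry")
# → 0.25 under this toy scoring rule
```

The falsifiable claim, stated in these terms: a generalizing system yields a positive gain across genuinely unrelated domain pairs, whereas a narrow system yields a gain of approximately zero.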

The mathematics behind that test is provided in Chapter 6: Measuring Generalizing Intelligence, and it was one of the most surprising discoveries made while writing this book.

The underlying thesis of this work is the falsifiable hypothesis that generalizing intelligence, in both natural and artificial individuals, requires sentience. This claim creates a unique perspective on AI security and sets up many of its theories. However, regardless of whether or not that hypothesis is true, the consequences of advanced automation will remain; the global problems will stem from how easily it is distributed, modified, and used, not necessarily from the exact way in which it is implemented.

Though counterintuitive, the most important first step we can take is to begin research and development of strong artificial intelligence as soon as possible. We are already paying for the absence of this technology: delays in its creation correspond to daily loss of life and suffering on a planetary scale. This claim is based on the projection that it would yield medical and economic breakthroughs that would uplift our entire species, which defines a moral imperative to develop this technology and motivates its research. Whether or not it is acknowledged, we are caught in a struggle between our present level of development and our future, better selves.

How To Read This Book

After reading Part I: Background, it may be helpful to skip ahead to Part III: AI Security, which begins at Chapter 7: Arrival of Strong AI. This is because of the technical detail contained in Part II: Foundations, which may be time-consuming and arduous for some readers, as it covers many interrelated topics in strong artificial intelligence research.
