Artificial intelligence safety and security / edited by Roman V. Yampolskiy.

Contributor(s): Yampolskiy, Roman V. [editor]
Material type: Text
Series: Chapman & Hall/CRC artificial intelligence and robotics series
Publisher: Boca Raton, FL : Chapman and Hall/CRC, an imprint of Taylor and Francis, 2018
Edition: First edition
Description: 1 online resource (474 pages) : 45 illustrations
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9781351251389
Subject(s):
Additional physical formats: Print version: No title
DDC classification:
  • 006.3 23
LOC classification:
  • T59.5 .A76 2018
Online resources:
Contents:
PART I: Concerns of Luminaries
  Chapter 1. Why the Future Doesn't Need Us
  Chapter 2. The Deeply Intertwined Promise and Peril of GNR
  Chapter 3. The Basic AI Drives
  Chapter 4. The Ethics of Artificial Intelligence
  Chapter 5. Friendly Artificial Intelligence: The Physics Challenge
  Chapter 6. MDL Intelligence Distillation: Exploring Strategies for Safe Access to Superintelligent Problem-Solving Capabilities
  Chapter 7. The Value Learning Problem
  Chapter 8. Adversarial Examples in the Physical World
  Chapter 9. How Might AI Come About?: Different Approaches and Their Implications for Life in the Universe
  Chapter 10. The MADCOM Future: How Artificial Intelligence Will Enhance Computational Propaganda, Reprogram Human Culture, and Threaten Democracy ... and What Can Be Done About It
  Chapter 11. Strategic Implications of Openness in AI Development
PART II: Responses of Scholars
  Chapter 12. Using Human History, Psychology, and Biology to Make AI Safe for Humans
  Chapter 13. AI Safety: A First-Person Perspective
  Chapter 14. Strategies for an Unfriendly Oracle AI with Reset Button
  Chapter 15. Goal Changes in Intelligent Agents
  Chapter 16. Limits to Verification and Validation of Agentic Behavior
  Chapter 17. Adversarial Machine Learning
  Chapter 18. Value Alignment via Tractable Preference Distance
  Chapter 19. A Rationally Addicted Artificial Superintelligence
  Chapter 20. On the Security of Robotic Applications Using ROS
  Chapter 21. Social Choice and the Value Alignment Problem
  Chapter 22. Disjunctive Scenarios of Catastrophic AI Risk
  Chapter 23. Offensive Realism and the Insecure Structure of the International System: Artificial Intelligence and Global Hegemony
  Chapter 24. Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History
  Chapter 25. Military AI as a Convergent Goal of Self-Improving AI
  Chapter 26. A Value-Sensitive Design Approach to Intelligent Agents
  Chapter 27. Consequentialism, Deontology, and Artificial Intelligence Safety
  Chapter 28. Smart Machines ARE a Threat to Humanity
Abstract: The history of robotics and artificial intelligence is, in many ways, also the history of humanity's attempts to control these technologies. From the Golem of Prague to the military robots of today, the debate continues over how much independence such entities should have and how to ensure that they do not turn on us, their inventors. Recent advances in the research, development, and deployment of intelligent systems are widely publicized, but the safety and security issues they raise are rarely addressed. This book aims to remedy that fundamental gap. It comprises chapters by leading AI safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. It is the first edited volume dedicated to the challenges of constructing safe and secure advanced machine intelligence. The chapters vary in length and technical content, from broad-interest opinion essays to highly formalized algorithmic approaches to specific problems. All chapters are self-contained and can be read in any order, or skipped, without loss of comprehension.
No physical items for this record
