SPRINGER BRIEFS IN ETHICS

Christoph Bartneck
Christoph Lütge
Alan Wagner
Sean Welsh

An Introduction
to Ethics in
Robotics and AI

SpringerBriefs in Ethics
Springer Briefs in Ethics envisions a series of short publications in areas such as
business ethics, bioethics, science and engineering ethics, food and agricultural
ethics, environmental ethics, human rights and the like. The intention is to present
concise summaries of cutting-edge research and practical applications across a wide
spectrum.
Springer Briefs in Ethics are seen as complementing monographs and journal
articles with compact volumes of 50 to 125 pages, covering a wide range of content
from professional to academic. Typical topics might include:
• Timely reports on state-of-the-art analytical techniques
• A bridge between new research results, as published in journal articles, and a
contextual literature review
• A snapshot of a hot or emerging topic
• In-depth case studies or clinical examples
• Presentations of core concepts that students must understand in order to make
independent contributions

More information about this series at http://www.springer.com/series/10184


Christoph Bartneck • Christoph Lütge • Alan Wagner • Sean Welsh


An Introduction to Ethics
in Robotics and AI

Christoph Bartneck
HIT Lab NZ
University of Canterbury
Christchurch, New Zealand

Christoph Lütge
Institute for Ethics in Artificial Intelligence
Technical University of Munich
München, Germany

Alan Wagner
College of Engineering
Pennsylvania State University
University Park, PA, USA

Sean Welsh
Department of Philosophy
University of Canterbury
Christchurch, New Zealand

ISSN 2211-8101 ISSN 2211-811X (electronic)


SpringerBriefs in Ethics
ISBN 978-3-030-51109-8 ISBN 978-3-030-51110-4 (eBook)
https://doi.org/10.1007/978-3-030-51110-4
© The Editor(s) (if applicable) and The Author(s) 2021. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adap-
tation, distribution and reproduction in any medium or format, as long as you give appropriate credit to
the original author(s) and the source, provide a link to the Creative Commons license and indicate if
changes were made.
The images or other third party material in this book are included in the book’s Creative Commons
license, unless indicated otherwise in a credit line to the material. If material is not included in the book’s
Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the
permitted use, you will need to obtain permission directly from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publi-
cation does not imply, even in the absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Fig. 1 The logo of the EPIC project

This book was made possible through the European Project “Europe’s ICT
Innovation Partnership With Australia, Singapore & New Zealand (EPIC)” under
the European Commission grant agreement Nr 687794. The project partners in this
consortium are:
• eutema GmbH
• Intersect Australia Limited (INTERSECT)
• Royal Melbourne Institute Of Technology (RMIT)
• Callaghan Innovation Research Limited (CAL)
• University Of Canterbury (UOC)
• National University Of Singapore (NUS)
• Institute For Infocomm Research (i2r)
From February 2–6, 2019 we gathered at the National University of Singapore.
Under the guidance of Laia Ros from Book Sprints we wrote this book in an
atmosphere of mutual respect and with great enthusiasm for our shared passion:
artificial intelligence and ethics. We have backgrounds in different disciplines and
the synthesis of our knowledge enabled us to cover the wide spectrum of topics
relevant to AI and ethics.
This book was written using the BookSprint method (http://www.booksprints.net).

Contents

1 About the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Structure of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 What Is AI? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Introduction to AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.1 The Turing Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.2 Strong and Weak AI . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.3 Types of AI Systems . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 What Is Machine Learning? . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 What Is a Robot? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.1 Sense-Plan-Act . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.2 System Integration: Necessary but Difficult . . . . . . . . . 13
2.4 What Is Hard for AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Science and Fiction of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 What Is Ethics? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1 Descriptive Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Normative Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2.1 Deontological Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2.2 Consequentialist Ethics . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2.3 Virtue Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Meta-ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4 Applied Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.5 Relationship Between Ethics and Law . . . . . . . . . . . . . . . . . . . 22
3.6 Machine Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.6.1 Machine Ethics Examples . . . . . . . . . . . . . . . . . . . . . . 23
3.6.2 Moral Diversity and Testing . . . . . . . . . . . . . . . . . . . . 25


4 Trust and Fairness in AI Systems . . . . . . . . . . . . . . . . . . . . . . . . 27
4.1 User Acceptance and Trust . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2 Functional Elements of Trust . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.3 Ethical Principles for Trustworthy and Fair AI . . . . . . . . . . . . . 28
4.3.1 Non-maleficence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3.2 Beneficence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3.3 Autonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3.4 Justice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3.5 Explicability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5 Responsibility and Liability in the Case of AI Systems . . . . . . . . . . 39
5.1 Example 1: Crash of an Autonomous Vehicle . . . . . . . . . . . . . 39
5.2 Example 2: Mistargeting by an Autonomous Weapon . . . . . . . . 40
5.2.1 Attribution of Responsibility and Liability . . . . . . . . . . 41
5.2.2 Moral Responsibility Versus Liability . . . . . . . . . . . . . 41
5.3 Strict Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.4 Complex Liability: The Problem of Many Hands . . . . . . . . . . . 43
5.5 Consequences of Liability: Sanctions . . . . . . . . . . . . . . . . . . . . 43
6 Risks in the Business of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.1 General Business Risks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.1.1 Functional Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.1.2 Systemic Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.1.3 Risk of Fraud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.1.4 Safety Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.2 Ethical Risks of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.2.1 Reputational Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.2.2 Legal Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.2.3 Environmental Risk . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.2.4 Social Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6.3 Managing Risk of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6.4 Business Ethics for AI Companies . . . . . . . . . . . . . . . . . . . . . . 50
6.5 Risks of AI to Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
7 Psychological Aspects of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.1 Problems of Anthropomorphisation . . . . . . . . . . . . . . . . . . . . . 55
7.1.1 Misplaced Feelings Towards AI . . . . . . . . . . . . . . . . . 56
7.1.2 Misplaced Trust in AI . . . . . . . . . . . . . . . . . . . . . . . . 57
7.2 Persuasive AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.3 Unidirectional Emotional Bonding with AI . . . . . . . . . . . . . . . . 58
8 Privacy Issues of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
8.1 What Is Privacy? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
8.2 Why AI Needs Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

8.3 Private Data Collection and Its Dangers . . . . . . . . . . . . . . . . . . 63


8.3.1 Persistence Surveillance . . . . . . . . . . . . . . . . . . . . . . . 64
8.3.2 Usage of Private Data for Non-intended Purposes . . . . 67
8.3.3 Auto Insurance Discrimination . . . . . . . . . . . . . . . . . . 69
8.3.4 The Chinese Social Credit System . . . . . . . . . . . . . . . . 69
8.4 Future Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
9 Application Areas of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
9.1 Ethical Issues Related to AI Enhancement . . . . . . . . . . . . . . . . 71
9.1.1 Restoration Versus Enhancement . . . . . . . . . . . . . . . . . 71
9.1.2 Enhancement for the Purpose of Competition . . . . . . . . 72
9.2 Ethical Issues Related to Robots and Healthcare . . . . . . . . . . . . 73
9.3 Robots and Telemedicine . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
9.3.1 Older Adults and Social Isolation . . . . . . . . . . . . . . . . 73
9.3.2 Nudging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
9.3.3 Psychological Care . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
9.3.4 Exoskeletons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
9.3.5 Quality of Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
9.4 Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
9.4.1 AI in Educational Administrative Support . . . . . . . . . . 76
9.4.2 Teaching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
9.4.3 Forecasting Students’ Performance . . . . . . . . . . . . . . . 78
9.5 Sex Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
10 Autonomous Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
10.1 Levels of Autonomous Driving . . . . . . . . . . . . . . . . . . . . . . . . 83
10.2 Current Situation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
10.3 Ethical Benefits of AVs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
10.4 Accidents with AVs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
10.5 Ethical Guidelines for AVs . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
10.6 Ethical Questions in AVs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
10.6.1 Accountability and Liability . . . . . . . . . . . . . . . . . . . . 87
10.6.2 Situations of Unavoidable Accidents . . . . . . . . . . . . . . 87
10.6.3 Privacy Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
10.6.4 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
10.6.5 Appropriate Design of Human-Machine Interface . . . . . 90
10.6.6 Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
10.6.7 Manually Overruling the System? . . . . . . . . . . . . . . . . 90
10.6.8 Possible Ethical Questions in Future Scenarios . . . . . . . 90

11 Military Uses of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
11.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
11.2 The Use of Autonomous Weapons Systems . . . . . . . . . . . . . . . 95
11.2.1 Discrimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
11.2.2 Proportionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
11.2.3 Responsibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
11.3 Regulations Governing an AWS . . . . . . . . . . . . . . . . . . . . . . . 97
11.4 Ethical Arguments for and Against AI for Military
Purposes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
11.4.1 Arguments in Favour . . . . . . . . . . . . . . . . . . . . . . . . . 97
11.4.2 Arguments Against . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
11.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
12 Ethics in AI and Robotics: A Strategic Challenge . . . . . . . . . . . . . . 101
12.1 The Role of Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
12.2 International Cooperation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
List of Figures

Fig. 2.1 Siri’s response to a not so uncommon question . . . . . . . . . . . .. 6


Fig. 2.2 Alan Turing (1912–1954) (Source Jon Callas) . . . . . . . . . . . . .. 9
Fig. 3.1 Immanuel Kant (1724–1804) (Source Johann
Gottlieb Becker) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Fig. 3.2 Plato (Source Richard Mortel) . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Fig. 3.3 The Sophia robot (Source Hanson Robotics) . . . . . . . . . . . . . . . 23
Fig. 4.1 Justitia (Source Waugsberg) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Fig. 6.1 Der Spiegel covers on mass unemployment . . . . . . . . . . . . . . . . 52
Fig. 7.1 Robot guiding people out of a building . . . . . . . . . . . . . . . . . . . 57
Fig. 8.1 Amazon Echo Plus uses Alexa (Source Amazon) . . . . . . . . . . . . 64
Fig. 8.2 Hello Barbie (Source Mattel) . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Fig. 8.3 Former US president Barack Obama was used to showcase
the power of deep fakes (Source BuzzFeedVideo) . . . . . . . . . .. 68
Fig. 9.1 da Vinci surgical system (Source Cmglee) . . . . . . . . . . . . . . . .. 74
Fig. 9.2 Paro robot (Source National Institute of Advanced
Industrial Science and Technology) . . . . . . . . . . . . . . . . . . . . .. 75
Fig. 9.3 A real doll (Source real doll) . . . . . . . . . . . . . . . . . . . . . . . . . .. 79
Fig. 10.1 Waymo’s fully self-driving Chrysler Pacifica Hybrid
minivan on public roads (Source Waymo) . . . . . . . . . . . . . . . .. 85
Fig. 10.2 Example question from the Moral Machine experiment
that confronted people with trolley problems (Source MIT) . . .. 88
Fig. 11.1 MIM-104 Patriot (Source Darkone) . . . . . . . . . . . . . . . . . . . . .. 94

Chapter 1
About the Book

This book provides an introduction to the ethics of robots and artificial intelligence.
The book was written with university students, policy makers, and professionals in
mind but should be accessible for most adults. The book is meant to provide balanced
and, at times, conflicting viewpoints as to the benefits and deficits of AI through the
lens of ethics. As discussed in the chapters that follow, ethical questions are often not
cut and dried. Nations, communities, and individuals may have unique and important
perspectives on these topics that should be heard and considered. While the voices
that compose this book are our own, we have attempted to represent the views of the
broader AI, robotics, and ethics communities.

1.1 Authors

Christoph Bartneck is an associate professor and director of postgraduate studies
at the HIT Lab NZ of the University of Canterbury. He has a background in
Industrial Design and Human-Computer Interaction, and his projects and studies
have been published in leading journals, newspapers, and conferences. His interests
lie in the fields of Human-Computer Interaction, Science and Technology Studies,
and Visual Design. More specifically, he focuses on the effect of anthropomorphism
on human-robot interaction. As a secondary research interest he works on
bibliometric analyses, agent-based social simulations, and the critical review of
scientific processes and policies. In the field of Design, Christoph investigates
the history of product design, tessellations, and photography. The press regularly
reports on his work, including the New Scientist, Scientific American, Popular
Science, Wired, New York Times, The Times, BBC, Huffington Post, Washington
Post, The Guardian, and The Economist.

© The Author(s) 2021
C. Bartneck et al., An Introduction to Ethics in Robotics and AI,
SpringerBriefs in Ethics, https://doi.org/10.1007/978-3-030-51110-4_1

Christoph Lütge holds the Peter Löscher Chair of Business Ethics at Technical
University of Munich (TUM). He has a background in business informatics
and philosophy and has held visiting positions at Harvard and in Taipei, Kyoto and
Venice. He was awarded a Heisenberg Fellowship in 2007. In 2019, Lütge was
appointed director of the new TUM Institute for Ethics in Artificial Intelligence.
Among his major publications are: “The Ethics of Competition” (Elgar 2019),
“Order Ethics or Moral Surplus: What Holds a Society Together?” (Lexington
2015), and the “Handbook of the Philosophical Foundations of Business Ethics”
(Springer 2013). He has commented on political and economic affairs in Times
Higher Education, Bloomberg, the Financial Times, Frankfurter Allgemeine Zeitung,
La Repubblica and numerous other media. Moreover, he has been a member of the
Ethics Commission on Automated and Connected Driving of the German Federal
Ministry of Transport and Digital Infrastructure, as well as of the European AI
Ethics initiative AI4People. He has also done consulting work for the Singapore
Economic Development Board and the Canadian Transport Commission.
Alan R. Wagner is an assistant professor of aerospace engineering at the
Pennsylvania State University and a research associate with the university's ethics
institute. His research interests include the development of algorithms that allow
a robot to create categories of models, or stereotypes, of its interactive partners;
the creation of robots with the capacity to recognise situations that justify the use
of deception and to act deceptively; and methods for representing and reasoning
about trust. Application areas for these interests range from military to healthcare.
His research has won several awards, including selection for the Air Force Young
Investigator Program. His work on deception has gained significant attention in
the media, resulting in articles in the Wall Street Journal, New Scientist Magazine
and the journal Science, and it was described as the 13th most important invention
of 2010 by Time Magazine. His research has also won awards within the
human-robot interaction community, such as the best paper award at RO-MAN 2007.
Sean Welsh holds a PhD in philosophy from the University of Canterbury and is
co-lead of the Law, Ethics and Society working group of the AI Forum of New
Zealand. Prior to embarking on his doctoral research in AI and robot ethics he
worked as a software engineer for various telecommunications firms. His articles
have appeared in The Conversation, the Sydney Morning Herald, the World
Economic Forum, Euronews, Quillette and Jane's Intelligence Review. He is the
author of Ethics and Security Automata, a research monograph on machine ethics.

1.2 Structure of the Book

This book begins with introductions to both artificial intelligence (AI) and ethics.
These sections are meant to provide the reader with the background knowledge
necessary for understanding the ethical dilemmas that arise in AI. Opportunities for
further reading are included for those interested in learning more about these topics.
The sections that follow focus on how businesses manage the risks, rewards,
and ethical implications of AI technology and their own liability. Next, psychological
factors that mediate how humans and AI technologies interact and the resulting
impact on privacy are presented. The book concludes with a discussion of AI
applications ranging from healthcare to warfare. These sections present the reader with
real-world situations and dilemmas that will impact stakeholders around the world.
The chapter that follows introduces the reader to ethics and AI with an example that
many people can try at home.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 2
What Is AI?

In this chapter we discuss the different definitions of Artificial Intelligence (AI).
We then discuss how machines learn and how a robot works in general. Finally,
we discuss the limitations of AI and the influence the media has on our
preconceptions of AI.

Chris: Siri, should I lie about my weight on my dating profile?

Siri: I can’t answer that, Chris.

Siri is not the only virtual assistant that will struggle to answer this question
(see Fig. 2.1). Toma et al. (2008) showed that almost two thirds of people provide
inaccurate information about their weight on dating profiles. Ignoring, for a moment,
what motivates people to lie about their dating profiles, why is it so difficult, if not
impossible, for digital assistants to answer this question?
To better understand this challenge it is necessary to look behind the scenes and
to see how this question is processed by Siri. First, the phone’s microphone needs
to translate the changes in air pressure (sounds) into a digital signal that can then be
stored as data in the memory of the phone. Next, this data needs to be sent through
the internet to a powerful computer in the cloud. This computer then tries to classify
the sounds recorded into written words. Afterwards, an artificial intelligence (AI)
system needs to extract the meaning of this combination of words. Notice that it
even needs to be able to pick the right meaning for the homophone “lie”. Chris does
not want to lie down on his dating profile, he is wondering if he should put inaccurate
information on it.
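The stages just described can be sketched as a small pipeline. The function names and return values below are purely illustrative; Siri's actual components are proprietary and vastly more complex.

```python
# A toy sketch of the processing stages described above. All function
# names and return values are illustrative stand-ins, not Siri's real API.

def record_audio() -> bytes:
    """The microphone turns changes in air pressure into a digital signal."""
    return b"\x00\x01\x02"  # stand-in for PCM audio samples

def speech_to_text(audio: bytes) -> str:
    """A cloud service classifies the recorded sounds into written words."""
    return "should I lie about my weight on my dating profile"

def extract_meaning(text: str) -> dict:
    """An NLU component resolves word senses: here 'lie' means deceive,
    not recline."""
    return {"intent": "request_advice", "action": "deceive",
            "domain": "online_dating"}

def decide_response(meaning: dict) -> str:
    """The hardest step: applying world knowledge and moral reasoning."""
    if meaning["action"] == "deceive":
        return "I can't answer that."
    return "Here is what I found."

reply = decide_response(extract_meaning(speech_to_text(record_audio())))
print(reply)  # I can't answer that.
```

Each stage here is trivially hard-coded; in reality every arrow in this chain hides a difficult AI problem, and the final stage is the focus of the discussion that follows.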
While the above steps are difficult and utilise several existing AI techniques,
the next step is one of the hardest. Assuming Siri fully understands the meaning
of Chris’s question, what advice should Siri give? To give the correct advice, it
would need to know what a person’s weight means and how the term relates to their
attractiveness. Siri needs to know that the success of dating depends heavily on both


Fig. 2.1 Siri’s response to a not so uncommon question

participants considering each other attractive—and that most people are motivated
to date. Furthermore, Siri needs to know that online dating participants cannot verify
the accuracy of information provided until they meet in person. Siri also needs to
know that honesty is another attribute that influences attractiveness. While deceiving
potential partners online might make Chris more attractive in the short run, it would
have a negative effect once Chris meets his date face-to-face.
But this is not all. Siri also needs to know that most people provide inaccurate
information on their online profiles and that a certain amount of dishonesty is not
likely to impact Chris’s long-term attractiveness with a partner. Siri should also be
aware that women select only a small portion of online candidates for first dates and
that making this first cut is essential for having any chance at all of convincing the
potential partners of Chris’s other endearing qualities.
There are many moral approaches that Siri could be designed to take. Siri could
take a consequentialist approach. This is the idea that the value of an action depends
on the consequences it has. The best known version of consequentialism is the
classical utilitarianism of Jeremy Bentham and John Stuart Mill (Bentham 1996; Mill
1863). These philosophers would no doubt advise Siri to maximise happiness: not
just Chris’s happiness but also the happiness of his prospective date. So, on the
consequentialist approach Siri might give Chris advice that would not only maximise
his chances of having many first dates, but also maximise his chances of finding true
love.

Alternatively, Siri might be designed to take a deontological approach. A
deontologist like Immanuel Kant might prioritise duty over happiness. Kant might advise
Chris that lying is wrong. He has a duty not to lie so he should tell the truth about
his weight, even if this would decrease his chances of getting a date.
A third approach Siri could take would be a virtue ethics approach. Virtue ethics
tend to see morality in terms of character. Aristotle might advise Chris that his
conduct has to exhibit virtues such as honesty.
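As a toy illustration, the three approaches can be contrasted in code. The options, utility scores, and rules below are invented for this example and carry no empirical or philosophical weight; they merely show that different frameworks can be encoded differently and reach different conclusions.

```python
# Invented toy model: three moral approaches to "should Chris understate
# his weight on his dating profile?" Scores and rules are illustrative only.

OPTIONS = ["tell_truth", "understate_weight"]

def consequentialist_choice() -> str:
    """Pick the action with the highest total happiness (invented scores)."""
    utility = {"tell_truth": 1,          # fewer first dates
               "understate_weight": 2}   # more first dates (short-term gain)
    return max(OPTIONS, key=lambda o: utility[o])

def deontological_choice() -> str:
    """Kant: lying violates a duty, whatever the consequences."""
    forbidden = {"understate_weight"}  # lying is ruled out as a matter of duty
    return next(o for o in OPTIONS if o not in forbidden)

def virtue_choice() -> str:
    """Aristotle: choose the action that exhibits the virtue of honesty."""
    exhibits_honesty = {"tell_truth": True, "understate_weight": False}
    return next(o for o in OPTIONS if exhibits_honesty[o])

print(consequentialist_choice())  # understate_weight
print(deontological_choice())     # tell_truth
print(virtue_choice())            # tell_truth
```

With these made-up utilities the consequentialist diverges from the other two; change the numbers (for instance, to weigh the prospective date's disappointment more heavily) and its recommendation flips. This sensitivity to how values are encoded is exactly why the designer's choice of moral approach matters.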
Lastly, Siri needs to consider whether it should give a recommendation at all.
Providing wrong advice might damage Siri’s relationship with Chris, and he might
consider switching to another phone with another digital assistant. This may negatively
impact Apple’s sales and stock value.
This little example shows that questions that seem trivial on the surface might be
very difficult for a machine to answer. Not only do these machines need the ability
to process sensory data, they also need to be able to extract the correct meaning
from it and then represent this meaning in a data structure that can be digitally
stored. Next, the machine needs to be able to process the meaning and conclude with
desirable actions. This whole process requires knowledge about the world, logical
reasoning and skills to learn and adapt. Having these abilities may make the machine
autonomous.
There are various definitions of “autonomy” and “autonomous” in AI, robotics and
ethics. At its simplest, autonomous simply refers to the ability of a machine to operate
for a period of time without a human operator. Exactly what that means differs from
application to application. What is considered “autonomous” in a vehicle is different
to what is considered “autonomous” in a weapon. In bioethics autonomy refers to
the ability of humans to make up their own minds about what treatment to accept
or refuse. In Kantian ethics autonomy refers to the ability of humans to decide what
to do with their lives and what moral rules to live by. The reader should be aware
that exactly what “autonomous” means is context-sensitive. Several meanings are
presented in this book. The unifying underlying idea is self-rule (from the Greek
words “auto” meaning self and “nomos” meaning rule).
On the first of these definitions, Siri is an autonomous agent that attempts to answer
spoken questions. Some questions Siri tries to answer require more intelligence,
meaning more background, reasoning ability and knowledge, than others. The chapter
that follows defines and describes the characteristics that make something artificially
intelligent and an agent.

2.1 Introduction to AI

The field of artificial intelligence (AI) has evolved from humble beginnings to a
field with global impact. The definition of AI and of what should and should not be
included has changed over time. Experts in the field joke that AI is everything that
computers cannot currently do. Although facetious on the surface, there is a sense
that developing intelligent computers and robots means creating something that does
not exist today. Artificial intelligence is a moving target.
Indeed, even the definition of AI itself is volatile and has changed over time.
Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external
data, to learn from such data, and to use those learnings to achieve specific goals and
tasks through flexible adaptation” (Kaplan and Haenlein 2019). Poole and Mackworth
(2010) define AI as “the field that studies the synthesis and analysis of computational
agents that act intelligently.” An agent is something (or someone) that acts. An agent
is intelligent when:
1. its actions are appropriate for its circumstances and its goals
2. it is flexible to changing environments and changing goals
3. it learns from experience, and
4. it makes appropriate choices given its perceptual and computational limitations.
Russell and Norvig define AI as “the study of [intelligent] agents that receive
percepts from the environment and take action. Each such agent is implemented by
a function that maps percepts to actions, and we cover different ways to represent
these functions, such as production systems, reactive agents, logical planners, neural
networks, and decision-theoretic systems” (Russell and Norvig 2010, p. viii).
Russell and Norvig also identify four schools of thought for AI. Some researchers
focus on creating machines that think like humans. Research within this school of
thought seeks to reproduce, in some manner, the processes, representations, and
results of human thinking on a machine. A second school focuses on creating
machines that act like humans. It focuses on action, what the agent or robot actually
does in the world, not its process for arriving at that action. A third school focuses on
developing machines that act rationally. Rationality is closely related to optimality.
These artificially intelligent systems are meant to always do the right thing or act
in the correct manner. Finally, the fourth school is focused on developing machines
that think rationally. The planning and/or decision-making that these machines will
do is meant to be optimal. Optimal here is naturally relative to the problem that
the system is trying to solve.
We have provided three definitions. Perhaps the most basic element common to
all of them is that AI involves the study, design and building of intelligent agents that
can achieve goals. The choices an AI makes should be appropriate to its perceptual
and cognitive limitations. If an AI is flexible and can learn from experience as well
as sense, plan and act on the basis of its initial configuration, it might be said to
be more intelligent than an AI that just has a set of rules that guides a fixed set of
actions. However, there are some contexts in which you might not want the AI to
learn new rules and behaviours, during the performance of a medical procedure, for
example. Proponents of the various approaches tend to stress some of these elements
more than others. For example, developers of expert systems see AI as a repository of
expert knowledge that humans can consult, whereas developers of machine learning
systems see AI as something that might discover new knowledge. As we shall see,
each approach has strengths and weaknesses.
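The common elements of these definitions — an agent that perceives its circumstances, pursues a goal, and accumulates experience it could learn from — can be sketched in a few lines. The thermostat environment, thresholds and action names below are invented purely for illustration.

```python
# A minimal sketch of an agent in the sense defined above. The thermostat
# environment, thresholds and action names are invented for illustration.

class ThermostatAgent:
    """Acts to keep a room near a target temperature (its goal)."""

    def __init__(self, target):
        self.target = target      # the agent's goal
        self.history = []         # experience the agent could learn from

    def act(self, temperature):
        """Map a percept (the current temperature) to an appropriate action."""
        self.history.append(temperature)
        if temperature < self.target - 1:
            return "heat"
        if temperature > self.target + 1:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=20.0)
print(agent.act(17.5))   # -> heat
print(agent.act(22.8))   # -> cool
print(agent.act(20.3))   # -> idle
```

Even this trivial agent satisfies a weak form of the first criterion (actions appropriate to its circumstances and goal); flexibility and learning would require replacing the fixed thresholds with behaviour updated from `history`.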
2.1.1 The Turing Test

In 1950 Alan Turing (see Fig. 2.2) suggested that it might be possible to determine
if a machine is intelligent based on its ability to exhibit intelligent behaviour which
is indistinguishable from an intelligent human’s behaviour. Turing described a con-
versational agent that would be interviewed by a human. If the human was unable
to determine whether or not the machine was a person then the machine would be
viewed as having passed the test. Turing’s argument has been both highly influen-
tial and also very controversial. For example, Turing does not specify how long the

Fig. 2.2 Alan Turing (1912–1954) (Source Jon Callas)


human would have to talk to the machine before making a decision. Still, the Turing
Test marked an important attempt to avoid ill-defined vague terms such as “thinking”
and instead define AI with respect to a testable task or activity.

2.1.2 Strong and Weak AI

John Searle later divided AI into two distinct camps. Weak AI is limited to a single,
narrowly defined task. Most modern AI systems would be classified in this category.
These systems are developed to handle a single problem, task or issue and are gen-
erally not capable of solving other problems, even related ones. In contrast to weak
AI, Searle defines strong AI in the following way: “The appropriately programmed
computer with the right inputs and outputs would thereby have a mind in exactly the
same sense human beings have minds” (Searle 1980). In strong AI, Searle chooses to
connect the achievement of AI with the representation of information in the human
mind. While most AI researchers are not concerned with creating an intelligent agent
that meets Searle’s strong AI conditions, these researchers seek to eventually create
machines for solving multiple problems which are not narrowly defined. Thus one
of the goals of AI is to create autonomous systems that achieve some level of general
intelligence. No AI system has yet achieved general intelligence.

2.1.3 Types of AI Systems

There are many different types of AI systems. We will briefly describe just a few.
Knowledge representation is an important AI problem that tries to deal with how
information should be represented in order for a computer to organise and use this
information. In the 1960s, expert systems were introduced as knowledge systems that
can be used to answer questions or solve narrowly defined problems in a particular
domain. They often have embedded rules that capture knowledge of a human expert.
Mortgage loan advisor programs, for example, have long been used by lenders to
evaluate the creditworthiness of an applicant. Another general type of AI system
is the planning system. Planning systems attempt to generate and organise a series
of actions which may be conditioned on the state of the world and unknown uncer-
tainties. The Hubble telescope, for example, utilised an AI planning system called
SPIKE.
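The embedded rules of an expert system such as a mortgage loan advisor can be sketched as a chain of hand-written conditions. The rules and thresholds below are invented for illustration and do not reflect any real lender’s criteria.

```python
# A toy rule-based advisor in the spirit of an expert system. The rules
# and thresholds are invented for illustration, not real lending criteria.

def mortgage_advice(income, debt, credit_score):
    """Apply hand-written expert rules to a loan application."""
    if credit_score < 600:
        return "decline: credit score too low"
    if debt > 0.4 * income:
        return "decline: debt-to-income ratio too high"
    if credit_score > 750 and debt < 0.2 * income:
        return "approve: low-risk applicant"
    return "refer to human underwriter"

print(mortgage_advice(income=60000, debt=5000, credit_score=780))
# -> approve: low-risk applicant
```

As in real expert systems, all of the “knowledge” lives in explicit rules elicited from a human expert; the program cannot answer questions outside the narrow domain those rules cover.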
Computer vision is a subfield of AI which focuses on the challenge of converting
data from a camera into knowledge representations. Object recognition is a common
task often undertaken by computer vision researchers. Machine learning focuses
on developing algorithms that allow a computer to use experience to improve its
performance on some well-defined task. Machine learning is described in greater
detail in the sections below.
AI currently works best in constrained environments, but has trouble with open
worlds, poorly defined problems, and abstractions. Constrained environments include
simulated environments and environments in which prior data accurately reflects
future challenges. The real world, however, is open in the sense that new challenges
arise constantly. Humans use solutions to prior related problems to solve new prob-
lems. AI systems have limited ability to reason analogically from one situation to
another and thus tend to have to learn new solutions even for closely related prob-
lems. In general, they lack the ability to reason abstractly about problems and to use
common sense to generate solutions to poorly defined problems.

2.2 What Is Machine Learning?

Machine learning is a sub-field of AI focused on the creation of algorithms that use
experience with respect to a class of tasks and feedback in the form of a performance
measure to improve their performance on that task. Contemporary machine learning
is a sprawling, rapidly changing field. Typically machine learning is sub-categorised
into three types of learning.
Supervised learning centres on methods such as regression and classification. To
solve a classification problem experiences in the form of data are labelled with
respect to some target categorisation. The labelling process is typically accom-
plished by enlisting the effort of humans to examine each piece of data and to label
the data. For supervised learning classification problems performance is measured
by calculating the true positive rate (the ratio of true positives to all genuinely
positive items, whether or not the classifier labelled them correctly) and the false
positive rate (the ratio of false positives to all genuinely negative items). The result
of this machine learning process is called a classifier. A classifier is software that
can automatically predict the label of a new piece of data. A machine learning
classifier that categorises labelled data with a true positive rate of 100% and a false
positive rate of 0% is a perfect classifier. The supervised learning process then is
the process by which labelled training data is fed to a developing classifier and, over
the course of working through the training data, the classifier’s performance
improves. Testing the classifier requires the use of a second labelled data-set called
the test data set. In practice, often one overall data-set is carved into a training and
test set on which the classifier is then trained and tested. The testing and training
process may be time-consuming, but once a classifier is created it can be used to
quickly categorise incoming data.
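The performance measures above can be computed directly from a labelled test set. The tiny label vectors below are invented; 1 marks the positive class and 0 the negative class.

```python
# Computing the true and false positive rates described above from a
# labelled test set. The label vectors are invented for illustration.

def rates(true_labels, predictions):
    tp = sum(1 for t, p in zip(true_labels, predictions) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(true_labels, predictions) if t == 0 and p == 1)
    positives = sum(true_labels)               # genuinely positive items
    negatives = len(true_labels) - positives   # genuinely negative items
    return tp / positives, fp / negatives

y_true = [1, 1, 1, 0, 0, 0, 0, 1]   # human-assigned labels (test set)
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]   # labels predicted by the classifier
tpr, fpr = rates(y_true, y_pred)
print(tpr, fpr)   # -> 0.75 0.25
```

A perfect classifier, in the sense used above, would score a true positive rate of 1.0 and a false positive rate of 0.0 on this measure.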
Unsupervised learning is more focused on understanding data patterns and rela-
tions than on prediction. It involves methods such as principal components anal-
ysis and clustering. These are often used as exploratory precursors to supervised
learning methods.
Reinforcement learning is a third type of machine learning. Reinforcement learn-
ing does not focus on the labelling of data, but rather attempts to use feedback in
the form of a reinforcement function to label states of the world as more or less
desirable with respect to some goal. Consider, for example, a robot attempting to
move from one location to another. If the robot’s sensors provide feedback telling
it its distance from a goal location, then the reinforcement function is simply a
reflection of the sensor’s readings. As the robot moves through the world it arrives
at different locations which can be described as states of the world. Some world
states are more rewarding than others. Being close to the goal location is more
desirable than being further away or behind an obstacle. Reinforcement learning
learns a policy, which is a mapping from states of the world to actions that maximise expected rewards.
Hence, the policy tells the system how to act in order to achieve the reward.
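A minimal sketch of this idea: a simulated robot on a five-cell line learns, from a reward signal alone, a policy that moves it toward the goal. The environment, rewards and learning constants are invented for illustration, and the standard Q-learning update is used as a stand-in for the learning rule.

```python
import random

# A minimal reinforcement-learning sketch: a robot on cells 0..4 learns
# a policy for reaching the goal at cell 4. The environment, rewards and
# learning constants are invented; the update rule is standard Q-learning.

random.seed(0)
GOAL = 4
actions = [-1, +1]                                # step left or right
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in actions}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly exploit the best-known action, sometimes explore.
        if random.random() < 0.2:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(GOAL, max(0, s + a))             # move, staying on the line
        reward = 1.0 if s2 == GOAL else -0.1      # goal states are rewarding
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += 0.5 * (reward + 0.9 * best_next - Q[(s, a)])
        s = s2

# The learned policy maps each state to its highest-value action.
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)   # every state should come to prefer stepping right (+1)
```

Note that the robot is never told how to reach the goal; it infers the policy purely from which states of the world turn out to be more rewarding than others.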

2.3 What Is a Robot?

Typically, an artificially intelligent agent is software that operates online or in a
simulated world, often generating perceptions and/or acting within this artificial
world. A robot, on the other hand, is situated in the real world, meaning that its
existence and operation occur in the real world. Robots are also embodied, meaning
that they have a physical body. The process of a robot making intelligent decisions
is often described as “sense-plan-act” meaning that the robot must first sense the
environment, plan what to do, and then act in the world.

2.3.1 Sense-Plan-Act

A robot’s embodiment offers some advantages in that its experiences tend to be with
real objects, but it also poses a number of challenges. Sensing in the real world is
extremely challenging. Sensors such as cameras, laser scanners, and sonar all have
limitations. Cameras, for example, suffer from colour shifts whenever the amount
of light changes. Laser scanners have difficulty perceiving transparent objects. Con-
verting sensor data into a usable representation is challenging and can depend on the
nature and limitations of the sensor. Humans use a wide array of integrated sensors
to generate perceptions. Moreover, the number of these sensors is (at least currently)
much higher than the number of sensors on any robot. The vast array of sensors
available to a human helps reduce uncertainty in perception. Humans also use a
number of different brain structures to encode information, to
perform experience-based learning, and to relate this learning to other knowledge
and experiences. Machines typically cannot achieve this type of learning.
Planning is the process by which the robot makes use of its perceptions and
knowledge to decide what to do next. Typically, robot planning includes some type
of goal that the robot is attempting to achieve. Uncertainty about the world must be
dealt with at the planning stage. Moreover, any background or historical knowledge
that the system has can be applied at this stage.
Finally, the robot acts in the world. The robot must use knowledge about its own
embodiment and body schema to determine how to move joints and actuators in a
manner dictated by the plan. Moreover, once the robot has acted it may need to then
provide information to the sensing process in order to guide what the robot should
look for next.
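The sense-plan-act cycle described above can be sketched as three functions wired into a loop. The world model, percept format and action names here are invented stand-ins for real sensors and actuators.

```python
# A sketch of the sense-plan-act cycle. The world model, percept format
# and action names are invented stand-ins for real sensors and actuators.

def sense(world):
    """Turn raw world state into the robot's internal representation."""
    return {"distance_to_goal": world["goal"] - world["position"]}

def plan(percept):
    """Choose an action based on the current percept and the goal."""
    return "move_forward" if percept["distance_to_goal"] > 0 else "stop"

def act(world, action):
    """Execute the chosen action, changing the world."""
    if action == "move_forward":
        world["position"] += 1
    return world

world = {"position": 0, "goal": 3}
while True:
    action = plan(sense(world))        # sense, then plan ...
    if action == "stop":
        break
    world = act(world, action)         # ... then act, and repeat

print(world["position"])   # -> 3 (the robot has reached its goal)
```

In a real robot each of these three functions hides enormous complexity: noisy sensors, uncertain world models, and actuators that may fail to do what the plan dictates.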
It should be understood that AI agents and robots have no innate knowledge
about the world. Coming off the factory production line a robot or AI is a genuine
“blank slate” or, to be more exact, an unformatted drive. Babies, on the other hand,
enter the world “pre-programmed” so to speak with a variety of innate abilities
and knowledge. For example, at birth babies can recognise their mother’s voice. In
contrast, AI agents know nothing about the world that they have not been explicitly
programmed to know. Also in contrast to humans, machines have limited ability to
generate knowledge from perception. The process of generating knowledge from
information requires that the AI system creates meaningful representations of the
knowledge. As mentioned above, a representation is a way of structuring information
in order to make it meaningful. A great deal of research and debate has focused
on the value of different types of representations. Early in the development of AI,
symbolic representations predominated. A symbolic representation uses symbols,
typically words, as the underlying representation for an object in the world. For
example, the representation of the object apple would be little more than “Apple.”
Symbolic representations have the value of being understandable to humans but are
otherwise very limiting because they have no precise connection to the robot’s or
the agent’s sensors. Non-symbolic representations, on the other hand, tend not to be
easily understood, but tend to relate better to a machine’s sensors.
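The contrast can be made concrete: the symbolic representation of an apple is just a human-readable token, while a non-symbolic representation might be a vector of sensor-derived numbers. The feature values below are invented for illustration.

```python
# Two ways of representing "an apple", following the distinction above.
# The feature values are invented for illustration.

# Symbolic: a human-readable token with no direct link to sensor data.
symbolic_apple = "Apple"

# Non-symbolic: a feature vector that could plausibly be derived from a
# camera (mean red, green, blue intensity and diameter in cm). Easy for
# a machine to compute from its sensors, hard for a human to interpret.
non_symbolic_apple = [0.81, 0.12, 0.09, 7.4]

print(symbolic_apple)
print(non_symbolic_apple)
```

The symbol is meaningful to us but opaque to the robot’s sensors; the vector is the reverse, which is precisely the trade-off the paragraph above describes.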

2.3.2 System Integration: Necessary but Difficult

In reality, to develop a working system capable of achieving real goals in the real
world, a vast array of different systems, programmes and processes must be integrated
to work together. System integration is often one of the hardest parts of building a
working robotic system. System integrators must deal with the fact that different
information is being generated by different sensors at different times. The different
sensors each have unique limitations, uncertainties, and failure modes, and the actu-
ators may fail to work in the real world. For all of these reasons, creating artificially
intelligent agents and robots is extremely challenging and fraught with difficulties.

2.4 What Is Hard for AI

The sections above have hinted at why AI is hard. It should also be mentioned that
not all software is AI. For example, simple sorting and search algorithms are not
considered intelligent. Moreover, a lot of non-AI is smart. For example, control
algorithms and optimisation software can handle everything from airline reservation
systems to the management of nuclear power plants. But they only take well-defined
actions within strictly defined limits. In this section, we focus on some of the major
challenges that make AI so difficult. The limitations of sensors and the resulting lack
of perception have already been highlighted.
AI systems are rarely capable of generalising across learned concepts. Although
a classifier may have been trained on closely related problems, its performance
typically drops substantially when the data is generated from other sources or in other ways.
For example, face recognition classifiers may obtain excellent results when faces are
viewed straight on, but performance drops quickly as the view of the face changes
to, say, a profile view. Considered another way, AI systems lack robustness when dealing
with a changing, dynamic, and unpredictable world. As mentioned, AI systems lack
common sense. Put another way, AI systems lack the enormous amount of experi-
ence and interactions with the world that constitute the knowledge that is typically
called common sense. Not having this large body of experience makes even the most
mundane task difficult for a robot to achieve. Moreover, lack of experience in the
world makes communicating with a human and understanding a human’s directions
difficult. This idea is typically described as common ground.
Although a number of software systems have claimed to have passed the Turing
test, these claims have been disputed. No AI system has yet achieved strong AI, but
some may have achieved weak AI based on their performance on a narrow, well-
defined task (like beating a grandmaster in chess or Go, or experienced players in
Poker). Even if an AI agent is agreed to have passed the Turing test, it is not clear
whether the passing of the test is a necessary and sufficient condition for intelligence.
AI has been subject to many hype cycles. Often even minor advancements have
been hailed as major breakthroughs with predictions of soon to come autonomous
intelligent products. These advancements should be considered with respect to the
narrowness of the problem attempted. For example, early types of autonomous cars
capable of driving thousands of miles at a time (under certain conditions) were already
being developed in the 1980s in the US and Germany. It took, however, another 30+
years for these systems to just begin to be introduced in non-research environments.
Hence, predicting the speed of progression of AI is very difficult—and in this regard,
most prophets have simply failed.

2.5 Science and Fiction of AI

Artificial Intelligence and robotics are frequent topics in popular culture. In 1968, the
Stanley Kubrick classic “2001” featured the famous example of HAL, a spacecraft’s
intelligent control system which turns against its human passengers. The Terminator
movies (since 1984) are based on the idea that a neural network built for military
defense purposes gains self-awareness and, in order to protect itself from deactiva-
tion by its human creators, turns against them. Steven Spielberg’s movie “A.I.”
(2001), based on a short story by Brian Aldiss, explores the nature of an intelligent
robotic boy (Aldiss 2001). In the movie “I, Robot” (2004), based on motifs from
a book by Isaac Asimov, intelligent robots originally meant to protect humans
turn into a menace. A more recent example is the TV show “Westworld” (since
2016) in which androids entertain human guests in a Western theme park. The guests
are encouraged to live out their deepest fantasies and desires.
For most people, the information provided through these shows is their first expo-
sure to robots. While these works of fiction draw a lot of attention to the field and
inspire our imagination, they also set a framework of expectations that can inhibit
the progress of the field. One common problem is that the computer systems or
robots shown often exhibit levels of intelligence that are equivalent or even superior
to those of humans and far beyond current systems. The media thereby contributes to setting
very high expectations in the audience towards AI systems. When confronted with
actual robots or AI systems, people are often disappointed and have to revise their
expectations. Another issue is the frequent repetition of the “Frankenstein Complex”
as defined by Isaac Asimov. In this trope, bands of robots or an AI system achieve
consciousness and enslave or kill (all) humans. While history is full of examples of
colonial powers exploiting indigenous populations, it does not logically follow that
an AI system will repeat these steps. A truly intelligent system will (hopefully) have
learned from humanity’s mistakes. Another common and rather paradoxical trope is
the assumption that highly intelligent AI systems desire to become human. Often the
script writers use the agent’s lack of emotions as the missing piece of the puzzle
that would make it truly human.
It is important to distinguish between science and fiction. The 2017 recommen-
dation to the European Parliament to consider the establishment of electronic per-
sonalities (Delvaux 2017) has been criticised by many as a premature reflex to the
depiction of robots in the media.1 For example, granting the robot “Sophia” Saudi
Arabian citizenship in October 2017 can in this respect be considered more as a
successful public relations stunt (Reynolds 2018) than as a contribution to the field
of AI or its ethical implications. Sophia’s dialogues are based on scripts and cannot
therefore be considered intelligent. It does not learn, nor is it able to adapt to
unforeseen circumstances. Sophia’s presentation at the United Nations was an
unconvincing demonstration of artificial intelligence. People do anthropomorphise robots
and autonomous systems, but this does not automatically justify the granting of per-
sonhood or other forms of legal status. In the context of autonomous vehicles, it may
become practical to consider such a car a legal entity, similar to how we consider an
abstract company to be a legal person. But this choice would probably be motivated
more out of legal practicality than out of existential necessity.

1 http://www.robotics-openletter.eu/.
Random documents with unrelated
content Scribd suggests to you:
The Project Gutenberg eBook of Into the blue
This ebook is for the use of anyone anywhere in the United States
and most other parts of the world at no cost and with almost no
restrictions whatsoever. You may copy it, give it away or re-use it
under the terms of the Project Gutenberg License included with this
ebook or online at www.gutenberg.org. If you are not located in the
United States, you will have to check the laws of the country where
you are located before using this eBook.

Title: Into the blue

Author: F. Britten Austin

Release date: May 4, 2024 [eBook #73541]

Language: English

Original publication: Chicago, IL: The Consolidated Magazines


Corporation, 1924

Credits: Roger Frank and Sue Clark

*** START OF THE PROJECT GUTENBERG EBOOK INTO THE


BLUE ***
Into the Blue
The strange and tremendously dramatic story of an airplane
pilot, intoxicated with the exaltation of great altitudes, setting his
course, with his sweetheart, for the stars—by the distinguished
author of “Nach Verdun” and “Out of the Night.”

By F. BRITTEN AUSTIN

It was in a bitterly pessimistic frame of mind that, having seen my


baggage into the hotel, I went for a first walk along the asphalted
esplanade of Southbeach. I had no pleasure in the baking sun, in the
glittering stretch of the English Channel that veiled itself in a fine-
weather mist all around the half-horizon. The exuberant, bold-eyed
flappers, promenading in groups of three or four, the vivid
polychromatism of their taste in sports-coats, seemed to me merely
objectionable. The hordes of worthily respectable middle-class
families complete with children—with many children—that blackened
the sands and overflowed into the fringe of the water oppressed my
soul with their formidable multiplicity.
I thought, in a savage emphasis of contrast, of the neat little yacht
that should now be bearing me across the North Sea to the austere
perfection of the Norwegian fiords. And I cursed myself for the
childish imbecility of exasperation with which—when, at the last
moment, with my suitcases all packed, I had received a telegram
informing me that the yacht had come off second-best in a collision
with a coaltramp—I had picked up Bradshaw and sworn to myself to
go to whatever place I should blindly put my finger upon as I opened
the page. The oracle had declared for Southbeach—Southbeach in
mid-August! I shrugged my shoulders—so be it! My holiday was
spoiled anyhow. To Southbeach I would go. And now, as I
contemplated it, I was appalled. What was I going to do with myself?
A paddle-wheel excursion-steamer came up to the pier, listing over
with the black load aboard of her. Up and down the beach, in five-
minute trips, a seaplane went roaring some eight hundred feet above
the heads of the gaping crowd. I had done all the flying I wanted in
the war, thank you very much. Other potentialities of amusement
there were apparently none. If I could not discover a tolerably decent
golf-course, I was a lost man.
I am not going to give the chronicle of that first day. It would be a
study in sheer boredom. That night, after one of those execrable
dinners which are the peculiar production of an English seaside
hotel, I had pretty well made up my mind that—oracle or no oracle—I
would shake the sand of Southbeach off my feet on the morrow.
Sitting over my coffee in the lounge, I was in fact already consulting
the time-table for a morning train, when my cogitations were
suddenly interrupted by a violent slap on the shoulder.
“Hello, Jimmy!”
I looked up with a start, before my identification of the voice had
time to complete itself.
“Toby!—Toby Selwyn—by all that’s splendid!” It was years since I
had seen him, but in this dreary desert of uninteresting people he
came like an angel of companionship, and I welcomed him with
delight. “Sit down, man. Have a drink!”
He did so, ordered a whisky-and-soda from the hovering waiter. I
looked at him as one looks at an acquaintance of old times, seeking
for changes. I had not seen him since the Armistice, when our
squadron of fighting scouts was demobilized and a cheery crowd of
daredevil pilots was dispersed to the four quarters of the globe.
He had not greatly altered. His face was a little thinner, more
mature. His hair was still the same wild red mop. His eyes—peculiar
in that when he opened them upon you, you saw the whites all round
the pupil—had still that strange look in them, as though somewhere
deep down in them his soul was like a caged animal, supicious and
restless, which I so well remembered. The reason for his nickname
jumped back into my mind. It was from his little trick of suddenly and
disconcertingly going “mad dog,” not only when he swooped down,
against any sort of odds, upon a covey of Huns, but in the mess.
Some one had called him “Mad dog;” it had been affectionately
softened to “dog Toby;” and “Toby” he remained.
“And what on earth are you doing here?” I asked.
He smiled grimly.
“Earning my living, old bean. Introducing all the grocers in England
to the poetry of flying, at ten bob a head.”
“So that was your machine I saw going up and down the sea-front
today?”
“It was. Five-minute trips—two bob a minute, and cheap at the
price. Had to do something, you know. So I hit on this. There are
worse things. Put my last cent into buying the machine—ex-
Government, of course. She’s a topping bus!” His voice freshened
suddenly with enthusiasm. “It’s almost a shame to use her for
hacking up and down like this. You must come and have a look at
her.”
“Thanks,” I replied, “I’d like to, but—”

Our conversation was abruptly interrupted. Toby had jumped to his


feet. Coming in through the door of the lounge was—miracles never
happen singly!—an only-too-familiar, smiling and middle-aged
married couple and—Sylvia! Toby obscured me from them for an
instant as he went eagerly toward them—an instant where I weighed
the problem of whether to stay or bolt. The last time Sylvia and I had
met she had told me, with a pretty sympathy that ought to have
softened the blow, that she would always be glad to have me as a
friend, but— The problem was resolved for me, before I could
decide. Toby was leading the trio up to me.
“I want to introduce an old pal of mine—Jimmy Esdaile.”
Mr. and Mrs. Bryant shot a swift smile at each other and then to
me as we shook hands. Sylvia almost grinned. I felt a perfect fool.
“Good evening, Mr. Esdaile,” said Sylvia in her sweetest tones, her
gray eyes demurely alight.
Mr. Esdaile! The last time, it had still been “Jimmy.” It is true that
since I had somewhat boorishly informed her, upon that occasion,
that I had no manner of use for being her friend, I had scarcely a
legitimate grievance if now she chose to be frigid.
“Wont you sit down, all of you?” I suggested. “Mr. Bryant, you’ll
take a Grand Marnier with your coffee, I know.”
“Thanks, Jimmy, I will,” said Mr. Bryant, seating himself. I saw
Toby stare. His astonishment visibly increased as Mrs. Bryant,
having comfortably disposed herself upon the settee, added in her
motherly fashion: “And what in the world are you doing here,
Jimmy?”
“That’s what I’m asking myself,” I replied. Toby cut me short in
what might have been a witty answer had I been allowed to finish it.
“You people know each other, then?” he demanded.
Mr. Bryant smiled.
“Yes. We’ve met Jimmy before—haven’t we, Sylvia?”
“He used to be an acquaintance of ours in London,” corroborated
Sylvia imperturbably, delicately underlining the word acquaintance.
Toby probed me with a peculiar look, suddenly almost hostile. I
could guess that he was asking himself whether I had come to
Southbeach in pursuit of Sylvia. One did not need to be a detective
to discover his own eager interest in her. It was patent, with no
attempt at concealment. Those strange hungry restless eyes of his
seemed to devour her. Quite apart from any personal feelings—any
time during the last six months I could have assured you, with
perfect sincerity, that my heart was stone dead,—I didn’t like it. Toby
was not the sort of chap—
But I had no opportunity to intervene. Mr. and Mrs. Bryant, with a
genuine kindly interest in me and my doings that at any other time I
should have appreciated, monopolized me. And Sylvia flirted with
him, demurely but outrageously. She called him Toby with the most
natural ease in the world. He, poor devil, was awkward in an
uncertainty whether she were playing with him, jerkily spasmodic in
his answers, devouring her all the time with those strange eyes of
his, wherein I recognized that same caged-animal look familiar to me
as a preliminary to an outburst of “mad dog” on those nights when
there was ragging in the mess. She, I could see, was enjoying
herself at playing with fire.

At last I could stand it no longer. I switched off from the amiable
platitudes I was exchanging with her parents, interrupted her in her
markedly exclusive conversation with him.
“I didn’t know Toby was a friend of yours, Syl—Miss Bryant,” I said.
She turned candid eyes upon me.
“Oh, yes, we have known Toby quite a long time—soon after you
dropped us—nearly six months, isn’t it, Toby?”
She took, evidently, a malicious pleasure in reiterating his
Christian name. I messed up the end of my cigarette before I
remembered not to chew it. Toby looked up suspiciously.
“I had no idea, either, that you were a friend of the family, Esdaile,”
he said. He also had dropped the “Jimmy.”
Sylvia answered for me.
“Not exactly a close friend,” she said sweetly. “Are you, Mr.
Esdaile? We had almost forgotten each other’s existence.”
I could have smacked her.
Toby looked immensely relieved. I could see that, for the moment
at least, he definitely put certain doubts out of his mind. He seemed
to be trying to make up for his spasm of hostility when next he
spoke.
“He’s an old pal of mine, anyway, aren’t you, Jimmy? It’s like old
times to see you again. D’you remember that little scrap with a
dozen Huns over Charleroi? That was a good finish-up—the day
before the Armistice.”
I remembered well enough—remembered that after that last fight,
at the very end of the war, I had landed by a miracle with my nerve
suddenly gone. I had never been in the air since—for a long time
could not look at an airplane without a fit of trembling.
Sylvia glanced at me in surprise. The secret humiliation of that
finish had made me pretty close about my war-doings.
“Oh, you two knew each other in the war, then?” she said.
“I should rather think we did!” replied Toby. “Jimmy was my
squadron-leader—and he’s some scientist in the air, let me tell you.”
His tone of admiration smote me like a bitter irony. “Don’t forget
you’re coming to look over that bus of mine tomorrow morning,
Jimmy.”
“I don’t know that I can,” I replied. “I’m off back to town tomorrow.”
I said this with a glance to Sylvia which found her quite unmoved.
“Are you, really?” she said. “What, on a Sunday?” Her eyebrows
went up in mocking admiration for my courage.
Confound it! I remembered suddenly that tomorrow was Sunday. I
can put up with any reasonable amount of hardship, but the prospect
of a Sunday train on a South Coast railway!
“Kamerad!” I surrendered. “I go back on Monday.”
“Good!” said Toby. “The tender conscience of the local municipality
does not permit them to allow me to earn my living on the Sabbath.
Tomorrow is a dies non. We’ll spend the morning tinkering about the
machine together. It’ll be like old times, before we went up for a jolly
old scrap with the Hun-bird. She’s worth looking at, too—built for a
radius of a thousand miles and a ceiling of over twenty thousand
feet.”
“Really!” I said, with a touch of old-time professional interest. “But
what on earth do you want a machine like that for? She’s surely
scarcely suitable for giving donkey-rides up and down a beach?”
“She does all right,” replied Toby. “And I like to feel that I’ve got
something with power to it. That I could if I wanted to—” His curious
restless eyes lost expression, as though the soul behind them no
longer saw me, contemplated something remote.
“Could what?” I challenged him.

He came back to perception of my presence.
“Eh? Oh, nothing.” He looked at me with that familiar sudden
suspiciousness which seemed to accuse one of attempted espionage
into the secrets of his soul. I remembered that even in the mess,
intimate as we had all been together, he had always been a queer
chap. One had never really known what he was thinking or planning.
He turned now to Sylvia.
“Miss Bryant has promised me that one day she will let me take
her for a flight,” he said, banishing the hardness of his eyes with that
little smile of his which was so peculiarly attractive when he chose to
exert his charm.
“I’ll come tomorrow,” she replied promptly. “And then you’ll have to
take me gratis.”
“Of course I will!” he answered, clutching at her promise with a
flash of eager delight in his eyes. “You didn’t imagine I was going to
charge you for it, did you? That’s settled, then.”
Mrs. Bryant interposed in motherly alarm.
“Oh, Sylvia! Don’t do any of your madcap tricks!—You will be
careful, wont you, Mr. Selwyn?” She turned to me. “Are you sure she
will be safe with him, Jimmy?”
“My dear Mrs. Bryant,” I assured her, “if there is a better pilot in the
world than Toby, I don’t know him.”
Mr. Bryant took the pipe from his mouth and glanced cautiously at
his wife.
“I’d rather like to go up too,” he said.
But Mrs. Bryant vetoed this volubly and emphatically.
“No, no, no!” she exclaimed. “Not two of you together! Suppose
anything happened!”
I smiled at her nervous fears.
“Nothing will happen, Mrs. Bryant—make your mind easy. Toby’s
perfectly safe. And if Mr. Bryant would like a flight, I’m sure Toby
would be pleased to take him.”
Toby was looking at Sylvia’s father with his enigmatic eyes.
“Of course I will,” he said. “But I don’t want to worry Mrs. Bryant. I
will take Mr. Bryant another time.”
The conversation drifted off to other topics. At last, Mrs. Bryant
rose for bed.
“And mind, Mr. Selwyn,” she warned him smilingly as she shook
hands with him, “I shall try hard to persuade Sylvia not to go.”
“But you wont succeed, Mother!” announced Sylvia radiantly.
“Good night, Toby. Good night, Mr. Esdaile!” With which parting shot
she left us, and the lounge was suddenly horribly empty.

We sat there for yet some time, Toby and I, puffing at our pipes in
silence. He leaned back on the settee, with his eyes closed. I was
thinking—never mind what I was thinking; but my thoughts ranged
far into the dreary future of my life. My glance fell on him, scrutinizing
him, probing him, weighing him, as he lay there all unconscious of it.
About his feelings I had no doubt. Were they reciprocated? I
remembered that peculiarly attractive smile of his, the alluring touch
of mystery about him—and almost hated him for them. That was the
kind of thing which appealed to women, I reflected bitterly.
He opened his eyes.
“‘Puro è disposto a salire alle stelle,’” he murmured to himself,
staring as at a vision where this somewhat gaudy hotel lounge had
no place.
“What’s that?” I said, not quite catching his words.
“Eh?” He looked at me as though he had forgotten my presence,
was only now reminded of it by my voice. “Oh, that’s the last line of
the Purgatorio—where Dante, having drunk forgetfulness of the
earth from Lethe, is ready to ascend with Beatrice into the stars of
the Paradiso.... All right, Jimmy,” he added, with a smile of sardonic
superiority which irritated me, “don’t worry yourself with trying to
understand. You wont. You’re one of those whose idea of the fit
habitation for the divine soul shining through the eyes of your
beloved is a bijou residence in a London suburb. After a few years of
you, your wife, whoever she is, will be another Mrs. Bryant.”
“Many thanks!” I replied, somewhat nettled, and a little puzzled
also. This was a new Toby. We were not given to cultivating poetry in
our mess. “But since when have you taken to studying Dante in the
original?”
“Oh, I’ve had plenty of time,” he answered, his eyes straying away
from me evasively. “I’ve lived pretty much by myself these last few
years.” He rose to his feet, cutting short the subject. “Let’s go for a
stroll, shall we? Get a breath of fresh air into our lungs.”

I assented willingly enough. At the back of my mind was an obscure
idea that, in the stimulated sense of comradeship evoked between
two friends who walk together under a night sky, he might open
himself to some confidence that would help me to a more precise
definition of the relationship that subsisted between himself and
Sylvia. In this I was disappointed. He walked along the asphalt
promenade, now almost deserted, with the sea to our left marked
only by an irregular faintly gleaming line of white in the black
obscurity, without a word. He did not even respond to my efforts at
conversation. Apparently he did not hear them. Overhead, the
metallic blue-black heaven was powdered with a multitude of stars,
twinkling down upon us from their immense remoteness. He threw
his head back to contemplate them as we walked in silence. He
baffled me, kept me somehow from my own private thoughts.
Suddenly he switched upon me.
“There can’t be nothingness all the way, can there?” he demanded
of me with a curious vehemence of interrogation. His hand made an
involuntary half-gesture toward the scintillating dome of stars. “There
must be something!” His manner had the disconcerting intensity of a
man who has been brooding overlong in solitude. “At a distance
everything melts into the blue. I have seen blank blue sky where on
another day there’s a range of mountains sharp and clear across the
horizon. And they pretend that in all those millions of miles there is
nothing—nothing but empty space!” He finished on a note of scorn.
“But surely the astronomers—” I began.
“Pah!” he interrupted me. “What do you or the astronomers know
about it? Shut up!”
Shut up, I did. He was evidently not in the mood for reasonable
conversation. He also shut up, pursuing in silence thoughts I could
not follow. At last he brusquely suggested returning to the hotel.

Next morning, when I met him in the breakfast-room, he was quite
his old cheery self, and whatever resentment of his last night’s
rudeness still rankled in me, vanished in the odd charm of his smile.
He reminded me of my promise to spend the morning with him
tinkering about his seaplane. I acquiesced, for two reasons. First, I
had nothing else to do, and I still retained enough of the impress of
my old flying days to be genuinely interested in looking over a
machine. Secondly, Sylvia would be coming to it for her flight. An
uneasy night had not brought me to any satisfying theory of her real
attitude toward him.
It was a bright sunshiny morning as we left the hotel, but a
southwest breeze ruffled the surface of the sea; and the white
isolated clouds that drifted across the blue overhead were evidently
the advance-guards of a mass yet invisible beyond the horizon.
Within an hour or two the sky would almost certainly be overcast. For
the moment it was fine, however, and I enjoyed the fresh clarity of
the air as we walked down the pier together. At its extremity, on the
leeward side of the steamer landing-stage, the seaplane rode the
running waves like a great bird that had alighted with outspread
wings, the water splashing and sucking against her floats as she
jerked and slackened on her mooring-ropes.
We hauled in on them, clambered down into her. She was, as he
explained to me, intended for a super-fighting-scout, with an
immense radius, a great capacity for climb, and a second machine-
gun. The space where this second machine-gun had been, just
behind the pilot, was now filled with four seats, in pairs behind each
other, for the passengers, and he had had her landing-wheels
replaced by floats. The morning was still young—nine o’clock struck
just as we got on board the machine; and for the next two hours we
pottered about her, cleaning her powerful motor, tautening the wire
stays to her wings, looking into a hundred and one technical details
that would have no interest for anyone but the expert. I enjoyed
myself, and Toby was almost pathetically delighted to have some
one with him who could enter into his enthusiasms. He had, I could
guess, been leading a very solitary life for a long while.
Apparently he almost lived on board her. All sorts of gear were
stowed away in her. In one of the lockers I found quite a collection of
books, including the Dante he had quoted, and a number of others of
a distinctly mystical type—odd reading for a flying man. In another,
close to the pilot’s seat, was a German automatic pistol.
“Souvenir of the great war, Daddy!” he smiled at me as I handled
it.
“But do you know it’s loaded?” I objected disapprovingly.
“Yes,” he replied. “I shoot sea-gulls with it sometimes—chase ’em
in the air. It’s great sport.”
I shrugged my shoulders. Chasing seagulls with a pistol was just
one of those mad things I could well imagine Toby doing.
We gave her a dose of oil, filled up her petrol-tank—one of her
original pair had been removed to make space for the passengers,
but she still had a five-hundred-mile radius, he told me—and looked
round for something else to do.
“Would you like to take her up and see how she climbs?” he
invited me.
“No, thanks!” I replied hurriedly, uncomfortable in a sudden
embarrassment. I had, thanks to the Armistice, managed to conceal
my humiliating loss of nerve from the other fellows. “I’ve given up
flying.”
His queer eyes rested upon me for a penetrating glance, and I felt
pretty sure that he guessed. But he made no comment.
“All right,” he said. “I expect Miss Bryant will be along presently.
We’ll sit here and wait for her.”

We ensconced ourselves in the passengers’ seats and sat there
smoking our pipes. The mention of Miss Bryant’s name seemed to
have killed conversation between us. We sat in a silence that I, at
least, felt to be subtly awkward. The intimacy of the morning was
destroyed. Each of us withdrew into himself, each perhaps
preoccupied with the same problem. Once, certainly, I caught his
glance hostile upon me.
As I had expected, heavy clouds had come up from the southwest,
and the sky was now almost completely overcast. But immediately
overhead there was still a clear patch where, through a wide rift in
the gray wrack, one looked into the infinite blue. Leaning back in his
seat, he stared up at it with eyes that were dreamy in a peculiar fixity
of expression.
“Jimmy,” he said suddenly, in a voice that was far away with his
thoughts, “in the old days, when you were flying high to drop on a
stray Hun,—say, at twenty thousand feet, with the earth miles away
out of touch,—didn’t you ever feel that if you went a little higher—
climbed and climbed—you would come to something—some other
place? Didn’t it almost seem to you that it would be as easy as going
back?”
I glanced at him. Into my mind flitted a memory of his last night’s
wild talk about the stars. He had always been a little queer. Was he
—not quite right?
“I can’t say it did,” I replied curtly. “I was always jolly glad to get
down again.”
He looked at me.
“Yes—I suppose so!” he commented. There was almost an insult
in his tone.
Before I could decide whether to resent it or to humor him, I saw
Sylvia approaching us along the pier, charming in her summer dress,
but prudently with a raincoat over her arm.
“Here’s Miss Bryant!” I said, glad of this excuse to put an end to
the conversation.
He leaped to his feet with a peculiar alacrity.
“At last!” he ejaculated, as though an immeasurable time of waiting
was at an end. He quenched a sudden flash of excitement in his
eyes as he caught my glance on his face.
She stood above us on the pier, smiling.
“Here I am!” she said. “But it isn’t a very nice morning, is it?”
“It will be all right up above,” replied Toby. “Come along—down
that next flight of steps.” He was trembling with eagerness. I
wondered suddenly whether I was wise in letting her go up with him.
The man’s nerves were obviously strung to high pitch. On the other
hand, I had the greatest confidence in his skill—and it was only too
likely that she would misinterpret any objections from me, would
refuse to listen to them.
While I was hesitating, she had already descended to the lower
stage, and Toby had helped her along the gangplank into the
machine.
“You see I’ve brought my raincoat,” she said. “It’ll be cold up there,
wont it?”
“That’s no use,” replied Toby with brutal directness. “Here!” He
opened a locker where he kept the flying-coats for his passengers.
“Put that on.”

I helped her with it. She looked more charming than ever in the thick
leather coat, the close-fitting leather helmet framing her dainty
features. Then I made a step toward the gangplank.
“But aren’t you coming too?” she demanded in surprise.
Toby answered for me.
“Esdaile doesn’t care for flying,” he said with a sardonic smile,
looking me straight in the eyes. There was a sort of mocking triumph
in that unmistakable sneer.
“Oh—but please!” Sylvia turned to me pleadingly. “Do come!”
“I’d rather take you up alone,” said Toby in a stubborn voice,
looking up from the mooring-rope he had bent to untether.
She ignored him, laid a hand upon my arm.
“Wont you?” she asked.
“I should infinitely prefer not to,” I replied awkwardly. I cursed
myself for my imbecility, but the mere idea of going up in that
machine made me feel sick inside, still so powerful was the memory
of that moment long ago when, ten thousand feet up with a Hun just
below me plunging in flames to destruction, I had felt my nerve
suddenly break, my head go dizzy in an awful panic. “Please excuse
me.”
She could not, of course, guess my reason.
“I sha’n’t go without you,” she said obstinately. Her eyes seemed
to be telling me something I was not intelligent enough to catch. “And
I want to go. Please— Jimmy!”
I surrendered.
“All right,” I said, feeling ghastly. “I’ll come.”
Toby stopped in the act of pulling on his flying-coat, and looked at
me. His face was livid, his eyes almost insanely malignant in a
sudden fury of bad temper.
“Don’t think you’re going to spoil it!” he said, through his teeth. “I’ll
see to that!”
With that cryptic remark, he swung himself into the pilot’s seat and
started the engine with a jerk that almost threw me into the water. I
slid down to the seat beside Sylvia. Toby had already cast off the
one remaining mooring-rope, and with a whirring roar that gave me
an odd thrill of old familiarity, the propeller at our nose a dark blur in
its initial low-speed revolutions, we commenced to move over the
waves.
For a moment we had a slight sensation of their rise and fall as we
partly tore through them, partly floated on their lifting crests, and then
suddenly the engine note swelled to the deafening intensity of full
power; the blur of the propeller disappeared; a fount of white spray,
sunlit from a rift in the clouds, sprang up on either hand from the
floats beneath us, hung poised like jeweled curtains at our flanks,
stung our faces with flying drops. For yet a minute or two we raced
through the high-flung water; and then abruptly the glittering foam-
curtains vanished. Our nose lifted. We sagged for another splash,
lifted again, on a buoyancy that was not the buoyancy of the sea. I
glanced over the side, saw the tossing wave-crests already twenty
feet below us.
Instinctively I looked round to Sylvia to see how she was taking it.
Her eyes were bright, her face ecstatic. I saw her lips move as she
smiled. But her words were swallowed in the roar of the engine, and
the blast of air that almost choked one, despite the little mica wind-
screen behind which we crouched. I bent my ear close to her face,
just caught her comment as she repeated it.
“It’s—wonderful!” she gasped.
Then she clutched my arm in sudden nervousness as the machine
banked side-wise. Below us, diminished already, the pier, the long
promenade of Southbeach, whirled round dizzily in a complete circle,
got yet smaller as they went. Toby was putting the machine to about
as steep a spiral as it could stand. As we went round again and yet
again, with our nose seeming to point almost vertically up to the gray
ceiling of cloud and our bodies heavy against the backs of our seats,
I had a spasm of alarm that turned to anger. What was he playing
at? It was ridiculous to show off like this! I did not doubt his skill—but
it would not be the first airplane to stall at so steep an angle that it
slipped back in a fatal tail-spin. I noticed that Sylvia was not strapped
in her seat, and promptly rectified the omission. It might be all right,
but with an inexperienced lady-passenger, it was as well to take
precautions if he was going to play tricks of this sort.

Up and up we went in those dizzy spirals, Southbeach—
disconcertingly never on the side on which one expected it—
miniature below us; and I could not help admiring, despite my
sickening nervousness, the masterly audacity with which he piloted
his machine on the very limit of the possible. He never turned for a
glance at us, but sat, lifted slightly above us by our slant, doggedly
crouched at his controls. I could imagine his face, his lips pressed
tight together, his queer eyes alight with the boyish exultation of
showing us—or perhaps showing me?—what he could do. I did not
need the demonstration. I had seen him climb often enough like a
circling hawk, gaining height in an almost sheer ascent, racing a Hun
to that point of superior elevation which meant victory.
There had been a time when I could have beaten him at it. But
there was no necessity to play these circus-tricks now—above all,
with a lady on board. Why could he not take her for an ordinary safe
flight over the sea, gaining, in the usual way, a reasonable margin of
height on an angle that would have been almost imperceptible? I
quivered to clamber forward and snatch the controls from him as still
we rose, perilously high-slanted, in sweep after circular sweep. The
gray-black stretch of cloud was now close above us, the rounded
modeling of its under-surface like a low roof that seemed to forbid
further ascent.
Again Sylvia clutched at my arm, her face alarmed, and I bent my
head down to catch the words she shouted against the all-
swallowing roar of the engine. They came just audible.
“Is he—going—through this?”
Toby was still holding her nose up, plainly intending to get above
the clouds. I saw no sense in making her uneasy. I put my mouth
close to her head.
“Blue sky—above!” I shouted.
She nodded, reassured.
The next moment we had plunged into the mass. Except for the
sudden twists as we banked, we seemed to be motionless in a
dense fog. But the engine still roared, and drops of congealed
moisture, collecting on the stays of the upper wings, blew viciously
into our faces. The damp cold struck through me to my bones, and I
remembered suddenly that I was in my extremely unsuitable ordinary
clothes. There was no saying to what height this mad fool might take
us—he was still climbing steeply—and I had no mind to catch my
death of cold. Hanging on with one hand to the side of the canted-up
machine that threatened to fling me out directly I rose from my seat, I
managed to reach the locker where he kept the flying-coats for his
passengers, wriggled somehow into one of them.
It was only by setting my teeth that I did it, for my head was
whirling dizzily and, cursing the day I had strained my nerves beyond
breaking-point, I had to fight back desperately an almost
overmastering panic that came upon me in gusts from a part of me
beyond my will. I could not have achieved it, had it not been for the
fog which, blotting out the earth beneath us, obliterated temporarily
the sense of height. I was shaking all over as I got back into my seat.
I glanced at Sylvia. She was sitting quiet and brave, a little strained,
perhaps, staring at the blank fog through which we drove in steadily
upward sweeps.

Suddenly we emerged into dazzling sunshine, warm despite the cold
rush of the air. All above us was an infinite clarity of blue. Sylvia—I
guessed rather than heard—shouted something, waved her arm in
delighted surprise, pointing around and beneath. Close below us
was no longer the earth, but that magical landscape which is only
offered by the upper surface of the clouds. We rose for yet a minute
or two before we could get the full impression of it. At our first
emergence, great swelling banks of sunlit snow overtopped us here