William F. Lawless · Ranjeev Mittu · Donald A. Sofge · Thomas Shortell · Thomas A. McDermott
Editors

Systems Engineering and Artificial Intelligence

Editors

William F. Lawless
Paine College
Augusta, GA, USA

Ranjeev Mittu
Information Technology Division
United States Naval Research Laboratory
Washington, DC, USA

Donald A. Sofge
United States Naval Research Laboratory
Washington, DC, USA

Thomas Shortell
Lockheed Martin Space Systems
King of Prussia, PA, USA

Thomas A. McDermott
Stevens Institute of Technology
Hoboken, NJ, USA

ISBN 978-3-030-77282-6 ISBN 978-3-030-77283-3 (eBook)


https://doi.org/10.1007/978-3-030-77283-3

© Springer Nature Switzerland AG 2021


This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright
protection may apply
All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specif-
ically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction
on microfilms or in any other physical way, and transmission or information storage and retrieval, elec-
tronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter
developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

We began this book by asking representatives from Systems Engineering (SE) to participate with us in an Association for the Advancement of Artificial Intelligence
(AAAI) Symposium in the Spring of 2020. We addressed our request for participation
to representatives of the International Council on Systems Engineering (INCOSE).
The symposium was intended to advance the science of autonomous human-machine
teams (A-HMTs). After systems engineers agreed to participate, we included “systems” in our call. The symposium was scheduled to occur at Stanford University
during March 23–25, 2020. Our agenda included AI scientists, system engineers, and
interested participants and organizations from around the world. Unfortunately, the
Covid-19 pandemic intervened. But AAAI gave us two opportunities: hold the March
event as scheduled virtually, or have a Replacement Symposium in the Washington,
DC area. We took advantage of both offers.
We gave our scheduled speakers the choice of participating in the virtual Spring
Symposium, the Fall Replacement Symposium, or both. The agenda for the Spring
Symposium was reduced to under two days and was roughly replicated for the Fall Replacement
Symposium, which also became a virtual event. However, the number of participants
for both the Spring and Fall events slightly exceeded 100, a larger audience than we
would have expected to attend in person at Stanford.
Both symposia had the same title:1 “AI welcomes systems Engineering: Towards
the science of interdependence for autonomous human-machine teams.”2 The orig-
inal list of topics in our call for the Spring Symposium had sought potential speakers
to give talks on “AI and machine learning, autonomy; systems engineering; Human-
Machine Teams (HMT); machine explanations of decisions; and context.” For the
Replacement Symposium, we revised our list of topics for potential speakers to
consider in addition: “machine explanations of decisions.” For both symposia, we
sought participants from across multiple disciplines who were willing to work
together to contribute to the advancement of AI in welcoming SE to build a science of

1 https://aaai.org/Symposia/Spring/sss20symposia.php#ss03.
2 Michael Wollowski designed and built our supplementary website ([email protected]),
found at https://sites.google.com/view/scienceofinterdependence.

interdependence for autonomous human-machine teams and systems. Our thinking continued to evolve, leading us to the title of this book, “Systems Engineering
and Artificial Intelligence.”
The list of topics in this book expanded well beyond the listed agendas for our
two symposia. That said, the theme of systems and AI has continued to motivate the
chapters in this book. Our goal for the symposium was, and for this book is, to deal
with the current state of the art in autonomy and artificial intelligence (AI) from a
systems perspective for the betterment of society.
In advertising for our symposium and then for the chapters in this book, we sought
contributors who could discuss the meaning, value, and interdependent effects on
context wherever these AI-driven machines interact with humans to form autonomous
human-machine teams or systems. We had called for extended abstracts (1–2 pages)
or longer manuscripts of up to 8 pages in length. Our plan was to publish lengthy
manuscripts as chapters in a book after the symposium. We hope that this resulting
edited book will advance the next generation of systems that are being designed to
include autonomous humans and machines operating as teams and systems interde-
pendently with AI. By focusing on the gaps in the research performed worldwide and
addressed in this book, we hope that autonomous human-machine systems wherever
applied will be used safely.
In this edited volume, we explore how AI is expanding its opportunities to impact society, an impact that will grow significantly with autonomous human-machine teams and systems. With this book, we offer the curious and the professional alike a review of the theories, models, methods, and applications of AI systems to provide a better understanding and a more integrated perspective of what is in play and at stake from the autonomous human-machine teams and systems soon to cause major disruptions. Our aim with this book is to help society, practitioners, and engineers prepare for the extraordinary changes coming.
Machine Learning (ML) is a subset of Artificial Intelligence (AI). With trillions of dollars already invested, ML and AI have wrought change across many fields, with even greater impacts yet to come. As autonomous machines
arrive on the scene, some of the new problems that have accompanied them are
discussed in this book. For example, Judea Pearl warned AI scientists to “build
machines that make sense of what goes on in their environment” to be able to commu-
nicate with humans. Self-driving vehicles have already been involved in fatalities, and
yet AI/ML is still trying to explain to humans the contexts within which it operates.
This edited book reflects our belief that only an interdisciplinary approach can
fully address Pearl’s warning. At our two symposia, we had papers presented by AI
computer scientists, systems engineers, social scientists, entrepreneurs, philosophers,
and other specialists address how humans make decisions in large systems; how they
determine context especially when facing unfamiliar environments or unanticipated
events; how autonomous machines may be taught to understand shared contexts; and
how human-machine teams may interdependently affect human awareness, other
teams, systems, and society, and be affected consequently. For example, in the Uber
self-driving fatality of a pedestrian in 2018, the car should have alerted its teammate,
a human operator, of an object in the road ahead. As with the case of the Uber
fatality, to best protect society, we need to know what happens if the context shared
by human-machine teams is incomplete, malfunctions, or breaks down.
This book also includes one of the first, if not the very first, chapters coauthored by
an artificially intelligent coauthor. Her name is Charlie. Her fellow coauthors address
the value of recognizing Charlie and treating her with respect to build a context that
is shared by all participants. For autonomous teams and systems involving humans and machines, constructing a shared context is fundamental: joint interpretations of reality must be addressed, which requires the interdisciplinary approach that we have adopted. It also allows us to learn from Charlie, a significant moment for us, for our fellow contributors, and, we hope, for you the reader, too.

The Organizers of Our Symposium

William F. Lawless, ([email protected]), corresponding, Professor, Math-


ematics & Psychology, Paine College, GA, Special Topics Editor, Entropy, and
Review Board, ONR (AI; Command Decision Making).
Ranjeev Mittu ([email protected]), Branch Head, Information Manage-
ment & Decision Architectures Branch, Information Technology Division, U.S.
Naval Research Laboratory, Washington, DC.
Donald Sofge ([email protected]), Computer Scientist, Distributed
Autonomous Systems Group, Navy Center for Applied Research in Artificial
Intelligence, Naval Research Laboratory, Washington, DC.
Thomas Shortell ([email protected]), Certified Systems Engineering
Professional, Lockheed Martin Space Systems, King of Prussia, PA.
Thomas A. McDermott ([email protected]), Deputy Director, Systems
Engineering Research Center, Stevens Institute of Technology, Hoboken, NJ.

Participants at Our Symposium

We had several more participants than speakers attend our symposia.
We wanted speakers and participants who could assess the foundations, metrics, or
applications of autonomous AI/ML, human-machine teams, and systems and how
these teams and systems affect or may be affected themselves. We kept both of
the symposia open-ended for the topics and for this book. We considered all papers
submitted for the two symposia and several afterwards for the book as long as they had
a systems perspective. Together with contributions from non-symposium participants, our goal then and now is to advance AI theory and concepts to improve the performance of autonomous human-machine teams and systems for the betterment of society.

Program Committee for Our 2020 AAAI Symposia

• Manisha Misra, U Connecticut, Ph.D. graduate student, [email protected]
• Shu-Heng Chen, Taiwan, [email protected]
• Beth Cardier, Sirius-Beta, VA; School Health Professions, Eastern Virginia
Medical School, [email protected]
• Michael Floyd, Lead AI Scientist, Knexus Research, [email protected]
• Boris Galitsky, Chief Scientist, Oracle Corp., [email protected]
• Matt Johnson, Institute for Human and Machine Cognition, Research scientist in
human-machine teaming for technologies, [email protected]
• Georgiy Levchuk, Aptima Fellow, Senior Principal, Simulation & Optimization
Engineer, [email protected]
• Patrick J. Martin, MITRE Corporation, Autonomous Systems Engineer, [email protected]
• Manisha Mishra, University of Connecticut, Systems Engineering, [email protected]
• Krishna Pattipati, University of Connecticut, Board of Trustees Distinguished
Professor, Professor in Systems Engineering.

After the AAAI Spring and Fall Replacement Symposia in 2020 were completed, speakers were asked to revise their talks into manuscripts for the chapters in this book. Other authors who had not participated in the symposia were also invited and agreed to contribute. The following individuals were responsible for the proposal submitted to Springer for the book before the symposia, for the divergence between the topics considered by the two symposia, and for editing the book that has resulted.

Augusta, GA, USA William F. Lawless
Washington, DC, USA Ranjeev Mittu
Washington, DC, USA Donald A. Sofge
King of Prussia, PA, USA Thomas Shortell
Hoboken, NJ, USA Thomas A. McDermott
Contents

1 Introduction to “Systems Engineering and Artificial Intelligence” and the Chapters . . . . . . . . . . . . . . . . . . . . . . . . . . 1
William F. Lawless, Ranjeev Mittu, Donald A. Sofge,
Thomas Shortell, and Thomas A. McDermott
2 Recognizing Artificial Intelligence: The Key to Unlocking
Human AI Teams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Patrick Cummings, Nathan Schurr, Andrew Naber, Charlie,
and Daniel Serfaty
3 Artificial Intelligence and Future of Systems Engineering . . . . . . . . . 47
Thomas A. McDermott, Mark R. Blackburn, and Peter A. Beling
4 Effective Human–Artificial Intelligence Teaming . . . . . . . . . . . . . . . . . 61
Nancy J. Cooke and William F. Lawless
5 Toward System Theoretical Foundations for Human–
Autonomy Teams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Marc Steinberg
6 Systems Engineering for Artificial Intelligence-based Systems:
A Review in Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
James Llinas, Hesham Fouad, and Ranjeev Mittu
7 Human-Autonomy Teaming for the Tactical Edge: The
Importance of Humans in Artificial Intelligence Research
and Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Kristin E. Schaefer, Brandon Perelman, Joe Rexwinkle,
Jonroy Canady, Catherine Neubauer, Nicholas Waytowich,
Gabriella Larkin, Katherine Cox, Michael Geuss,
Gregory Gremillion, Jason S. Metcalfe, Arwen DeCostanza,
and Amar Marathe


8 Re-orienting Toward the Science of the Artificial: Engineering AI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Stephen Russell, Brian Jalaian, and Ira S. Moskowitz
9 The Department of Navy’s Digital Transformation
with the Digital System Architecture, Strangler Patterns,
Machine Learning, and Autonomous Human–Machine
Teaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Matthew Sheehan and Oleg Yakimenko
10 Digital Twin Industrial Immune System: AI-driven
Cybersecurity for Critical Infrastructures . . . . . . . . . . . . . . . . . . . . . . . 197
Michael Mylrea, Matt Nielsen, Justin John,
and Masoud Abbaszadeh
11 A Fractional Brownian Motion Approach to Psychological
and Team Diffusion Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Ira S. Moskowitz, Noelle L. Brown, and Zvi Goldstein
12 Human–Machine Understanding: The Utility of Causal
Models and Counterfactuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Paul Deignan
13 An Executive for Autonomous Systems, Inspired by Fear
Memory Extinction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Matt Garcia, Ted Goranson, and Beth Cardier
14 Contextual Evaluation of Human–Machine Team
Effectiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Eugene Santos Jr, Clement Nyanhongo, Hien Nguyen,
Keum Joo Kim, and Gregory Hyde
15 Humanity in the Era of Autonomous Human–machine Teams . . . . . 309
Shu-Heng Chen
16 Transforming the System of Military Medical Research:
An Institutional History of the Department of Defense’s
(DoD) First Electronic Institutional Review Board Enterprise
IT System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
J. Wood and William F. Lawless
17 Collaborative Communication and Intelligent Interruption
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Nia Peters, Margaret Ugolini, and Gregory Bowers
18 Shifting Paradigms in Verification and Validation
of AI-Enabled Systems: A Systems-Theoretic Perspective . . . . . . . . . 363
Niloofar Shadab, Aditya U. Kulkarni, and Alejandro Salado

19 Toward Safe Decision-Making via Uncertainty Quantification in Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Adam D. Cobb, Brian Jalaian, Nathaniel D. Bastian,
and Stephen Russell
20 Engineering Context from the Ground Up . . . . . . . . . . . . . . . . . . . . . . . 401
Michael Wollowski, Lilin Chen, Xiangnan Chen, Yifan Cui,
Joseph Knierman, and Xusheng Liu
21 Meta-reasoning in Assembly Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Priyam Parashar and Ashok K. Goel
22 From Informal Sketches to Systems Engineering Models
Using AI Plan Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Nicolas Hili, Alexandre Albore, and Julien Baclet
23 An Analogy of Sentence Mood and Use . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Ryan Phillip Quandt
24 Effective Decision Rules for Systems of Public Engagement
in Radioactive Waste Disposal: Evidence from the United
States, the United Kingdom, and Japan . . . . . . . . . . . . . . . . . . . . . . . . . 509
Mito Akiyoshi, John Whitton, Ioan Charnley-Parry,
and William F. Lawless
25 Outside the Lines: Visualizing Influence Across Heterogeneous
Contexts in PTSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Beth Cardier, Alex C. Nieslen, John Shull, and Larry D. Sanford
Chapter 1
Introduction to “Systems Engineering
and Artificial Intelligence”
and the Chapters

William F. Lawless, Ranjeev Mittu, Donald A. Sofge, Thomas Shortell, and Thomas A. McDermott

Abstract In this introductory chapter, we first review the science behind the two
Association for the Advancement of Artificial Intelligence (AAAI) Symposia that
we held in 2020 (“AI welcomes Systems Engineering: Towards the science of interdependence for autonomous human-machine teams”). Second, we provide a brief
introduction to each of the chapters in this book.

1.1 Introduction. The Disruptive Nature of AI

Presently, the United States is facing formidable threats from China and Russia. In
response to these threats, the Director of the Defense Intelligence Agency (Ashley,
2019) and DNI stated:
China ... [is] acquiring technology by any means available. Domestic [Chinese] laws forced
foreign partners of Chinese-based joint ventures to release their technology in exchange
for entry into China’s lucrative market, and China has used other means to secure needed
technology and expertise. The result … is a PLA on the verge of fielding some of the
most modern weapon systems in the world. ... China is building a robust, lethal force with
capabilities spanning the air, maritime, space and information domains which will enable
China to impose its will in the region. (p. V) ... From China’s leader, Xi Jinping, to his 19th
Party Congress (p. 17) “We must do more to safeguard China’s sovereignty, security, and
development interests, and staunchly oppose all attempts to split China or undermine its
ethnic unity and social harmony and stability.”

W. F. Lawless (B)
Paine College, Augusta, Georgia
e-mail: [email protected]
R. Mittu · D. A. Sofge · T. Shortell · T. A. McDermott
Systems Engineering Research Center, Stevens Institute of Technology, Hoboken, NJ, USA
e-mail: [email protected]
D. A. Sofge
e-mail: [email protected]
T. Shortell
e-mail: [email protected]


To address these and other competitive threats, artificial intelligence (AI), especially machine learning (ML), which we discuss along with fusion next, is a major factor. The
U.S. Department of Defense (DoD), industry, commerce, education, and medicine
among many other fields are seeking to use AI to gain a comparative advantage for
systems. From the perspective of DoD (2019):
AI is rapidly changing a wide range of businesses and industries. It is also poised to change
the character of the future battlefield and the pace of threats we must face.

Simultaneously, the DoD recognizes the disruptive nature of AI (Oh et al., 2019).
To mitigate this disruption while taking advantage of the ready-made solutions AI
already offers to commerce, the current thinking appears to be to first use AI in areas
that are less threatening to military planners, the public, and potential users; e.g.,
back-office administration; finance (e.g., Airbus is using AI to cut its financial costs
by increasing efficiency, reducing errors, and freeing up humans for more strategic
tasks such as planning, analysis, and audits; in Maurer, 2019); data collection and
management; basic personnel matters; virtual assistants for basic skills training (i.e.,
Military Occupational Specialties, or MOSs); personal medical monitoring (e.g.,
drug compliance, weight reduction, sleep cycles); military maintenance; and simple
logistics (e.g., ordering, tracking, maintaining supplies).
Second, when the DoD and other fields address the more disruptive aspects of AI,
like autonomy and autonomous human–machine teams, many more social changes
and impacts will arise, including the adverse threats posed by the use of AI, such as
the “consequences of failure in autonomous and semi-autonomous weapon systems
that could lead to unintended engagements” (DoD, 2019).
Machine Learning (ML) and Fusion: Machine learning has already had an extraor-
dinary economic impact worldwide estimated in the trillions of dollars with even
more economic and social impact to come (Brynjolfsson & Mitchell, 2017). The
basic idea behind traditional ML methods is that a computer algorithm is trained on data collected in the field, either to learn a behavior presented to it as part of previous experience (e.g., self-driving cars) or on a data set sufficient for the algorithm to produce an outcome when it is presented with a novel situation (Raz et al., 2019).
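As a purely illustrative sketch of that train-then-generalize idea (not any specific system discussed in this book), the following Python lines assume a scikit-learn-style workflow with hypothetical features and labels:

    # Minimal sketch of the traditional ML idea described above: fit a model on
    # previously collected data, then ask it for an outcome in a novel situation.
    # The features, labels, and values below are hypothetical placeholders.
    from sklearn.ensemble import RandomForestClassifier

    X_train = [[0.2, 1.1], [0.9, 0.3], [0.4, 1.5], [1.2, 0.1]]   # past sensor readings
    y_train = ["brake", "steer", "brake", "steer"]               # behaviors to learn

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)            # learn from prior experience

    novel_situation = [[0.8, 0.4]]         # an input the model has never seen
    print(model.predict(novel_situation))  # the algorithm still produces an outcome

The point of the sketch is only that the learned mapping, not an explicit rule written by a human, supplies the outcome for the unseen case.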
Autonomy is changing the situation dramatically in the design and operational
contexts for which future information fusion (IF) systems are evolving. There are
many factors that influence or define these new contexts but among them are:
movement to cloud-based environments involving possibly many semi-autonomous
functional agents (e.g., the Internet of Things or IoT; Lawless et al., 2019b), the
employment of a wide range of processing technologies and methods spread across
agents and teams, an exceptional breadth of types and modalities of available data,
and diverse and asynchronous communication patterns among independent and
distributed agents and teams. These factors describe the contexts of complex adap-
tive systems (CAS) for “systems in which a perfect understanding of the individual
parts does not automatically convey a perfect understanding of the whole system’s
behavior” (Raz et al., 2019).

Managing these disruptions requires addressing the need for speedy decisions; a systems
approach; the commonality of interdependence in systems and social science; social
science, including trust; the science of human–human teams (HHT); and human–
machine teams (HMT). We discuss these topics in turn.

1.1.1 Justifying Speedy Decisions

Now is the time when decisions may need to be made faster than humans can
process (Horowitz, 2019), as with the military development of hypersonic weapons
by competitor nations (e.g., China; in Wong, 2018); the push for quicker command,
control, and communication upgrades for nuclear weapons (NC-3; in DoD, 2018);
and the common use of AI in public conveyances like self-driving cars, trucks, ships,
or subways.
Many systems that use AI with humans “in-the-loop” are approaching operational status, characterized by a human who can override decisions by human–machine or machine–machine teams in combat, such as the Navy’s new Ghost Fleet (LaGrone, 2019); the Army’s autonomous self-driving combat convoy (Langford, 2018); and the Marine Corps’ remote ordnance disposal by human–machine teams (CRS, 2018).
Even more dramatic changes are to occur with human “on-the-loop” decisions, characterized by decisions that must be made and acted upon faster than humans can process the incoming information. Among the new weapon systems,
these decisions may be made by a human–machine team composed of an F-35
teaming with the Air Force’s aggressive, dispensable “attritable” drones flying in
a wing or offensive position (Insinna, 2019); moreover, hypersonic weapons are
forcing humans into roles as passive bystanders until a decision and its accompa-
nying action have been completed. From an article in the New York Times Magazine
(Smith, 2019),
One of the two main hypersonic prototypes now under development in the United States is
meant to fly at speeds between Mach 15 and Mach 20 ... when fired by the U.S. submarines
or bombers stationed at Guam, they could in theory hit China’s important inland missile
bases ... in less than 15 minutes ...

In an attack on the United States, however, hypersonic speeds would make ballistic missile interceptors ineffective (e.g., the Aegis ship-based, THAAD ground-based, and Patriot systems). If launched by China or Russia against the United States
(Smith, 2019), these missiles:
would zoom along in the defensive void, maneuvering unpredictably, and then, in just a few
final seconds of blindingly fast, mile-per-second flight, dive and strike a target such as an
aircraft carrier from an altitude of 100,000 feet.

Human “on-the-loop” observations of autonomous machines making self-directed decisions carry significant risks. On the positive side, since most accidents are caused
by human error (Lawless et al., 2017), self-directed machines may save more lives.
But an editorial in the New York Times (Editors, 2019) expressed the public’s concerns
that AI systems can be hacked, suffer data breaches, and lose control to adversaries.
The Editors quoted the UN Secretary General, Antonio Guterres, that “machines
with the power and discretion to take lives without human involvement … should
be prohibited by international law.” The editorial recommended that “humans never
completely surrender life and decision choices in combat to machines.” (For a review
of the U.N.’s failure to manage “killer robots,” see Werkhäuser, 2019.)
Whether or not a treaty to manage threats from the use of “on the loop” decisions is
enacted, the violations of existing treaties by nuclear states (e.g., NATO’s judgment
about suspected Russian treaty violations; in Gramer & Seligman, 2018) suggest
the need to understand the science of autonomy for “on the loop” decisions and to
counter the systems that use them.
Furthermore, the warning by the Editors of the New York Times is similar to
those that arose during the early years of atomic science, balanced by managing the
threats posed while at the same time allowing scientists to make numerous discov-
eries leading to the extraordinary gifts to humanity that have followed, crowned by
the Higgs (the so-called “God”) particle and quantum computing. The science of
autonomy must also be managed to balance its threats while allowing scientists to
make what we hope are similar advances in the social sphere ranging from Systems
Engineering and social science to international affairs.

1.1.2 Systems Engineering (SE)

SE is also concerned about whether AI and ML will replace humans in the decision
loop (Howell, 2019). Systems engineers prefer that humans and machines coexist, that machines be used to augment human intelligence, and that, if decisions
by machines overtake human decision-making as is happening with “on-the-loop”
decisions, at least humans should audit the machine decisions afterward (viz., see the
Uber car fatality case below). SE also raises a series of other concerns and questions.
In addition to the public’s concerns about AI expressed by the Editors in the New
York Times, the application of AI/ML raises several concerns and questions for SE.
One concern is whether or not to use a modular approach to build models (Rhodes,
2019). System engineers note that safety is an emergent property of a system (Howell,
2019). When a team “emerges,” the whole has become more than the sum of its parts
(Raz et al., 2019); in contrast, when a collective fails, as appears to be occurring in
Europe today, it creates “a whole significantly less than the sum of its parts” (Mead,
2019). But if SE using AI/ML is to be transformed through model-centric engineering
(Blackburn, 2019), how is that to be accomplished for autonomous teams? Systems
often do not stand alone; in those cases where systems are a network of networks,
how shall system engineers assure that the “pieces work together to achieve the
objectives of the whole” (Thomas, 2019)? From retired General Stanley McChrystal’s
book, Team of teams, “We needed to enable a team operating in an interdependent
environment to understand the butterfly-effect ramifications of their work and make
them aware of the other teams with whom they would have to cooperate” (in Long,
2019). Continuing with the emphasis added by Long (2019), in the attempt by the
Canadian Armed Forces to build a shared Communication and Information Systems
(CIS) with networked teams and teams of teams in its systems of organizations,
Systems must be specifically designed to enable resilient organizations, with the designer
and community fully aware of the trade-offs that must be made to functionality, security,
and cost. However, the benefits of creating shared consciousness, lowering the cost of
participation, and emulating familiar human communication patterns are significant
(Long’s emphasis).

Among further concerns, along with metrics for autonomous AI systems, formal verification and validation (V&V), certification, and risk assessments of these systems at the design, oper-
ational, and maintenance stages will be imperative for engineers (Lemnios, 2019;
Richards, 2019). Is there a metric to assess the risk from collaboration, and if so, can it
be calculated (Grogan, 2019)? The risk from not deploying AI systems should also be
addressed (DeLaurentis, 2019); while an excellent suggestion, how can this concern
be addressed?1 Measured in performance versus expectations, when will these risks
preclude humans from joining teams with machines; and what effect will machine
redundancy have in autonomous systems (Barton, 2019)? Because data are dumb,
how will the operational requirements and architectures be tested and evaluated for
these systems over their lifecycle (Dare, 2019; Freeman, 2019)?
Boundaries and deception: AI can be used to defend against outsiders, or used
with deception to exploit vulnerabilities in targeted networks (Yampolskiy, 2017). A
team’s system boundaries must be protected (Lawless, 2017a). Protecting a team’s
networks is also a concern. In contrast, deception functions by not standing out (i.e.,
fitting in structurally; in Lawless, 2017b). Deception can be used to compromise
a network. From the Wall Street Journal (Volz & Youssef, 2019), the Department
of Homeland Security’s top cybersecurity official, Chris Krebs, issued a statement
warning that Iran’s malicious cyberactivities were on the rise. “What might start as
an account compromise … can quickly become a situation where you’ve lost your
whole network.”
Caution: In the search for optimization, tradeoffs occur (Long, 2019); however,
an optimized system should not trade off resilience.

1.1.3 Common Ground: AI, Interdependence, and SE

Systems engineers know about interdependence from a system’s perspective. They claim to know little about human teams, which they hope can be improved by working with social scientists and by studying their own SE teams and organizations (DeLaurentis, 2019). Their own teams and organizations, however, are systems of social interdependence.

1 One possibility is to use global metrics. In the case of the Uber car accident that killed a pedestrian, discussed below, the industry’s first pedestrian fatality, the company’s self-driving section did not suffer until the accident, but afterwards Uber and the rest of the self-driving industry were significantly slowed by the fatality (Gardner, 2019).
Systems Engineering addresses the interactions of systems too complex for an
analysis of their independent parts without taking a system as a whole into account
across its life cycle. System complexity from the “interdependencies between …
constituent systems” can produce unexpected effects (Walden et al., 2015, p. 10),
making the management of systemic interdependence critical to a system’s success.
For example, the interactions for complex systems with numerous subsystems, like
the International Space Station (ISS), interact interdependently (i.e., interdependence
affected how the ISS modules were assembled into an integrated whole, how module
upgrades affected each other, how interfaces between ISS modules were determined
to be effective, how the overall configuration of the modules was constructed, how
modules were modeled, etc.; in Stockman et al., 2010). From the ISS, in SE, we
can see that interdependence transmits the interactions of subsystems. The study of
interdependence in systems is not a new idea. For example, Llinas (2014, pp. 1, 6)
issued a:
call for action among the fusion, cognitive, decision-making, and computer-science commu-
nities to muster a cooperative initiative to examine and develop [the] … metrics involved in
measuring and evaluating process interdependencies … [otherwise, the design of] modern
decision support systems … will remain disconnected and suboptimal going forward.

Similarly, in the social sciences, interdependence is the means of transmitting social effects (Lawless, 2019), such as the construction of a shared context between
two humans, and, we propose, for human–machine teams (HMT). Interdependence
then is the phenomenon that not only links Systems Engineering, AI, and other
disciplines (e.g., social science, law, philosophy, etc.) but also, if interdependence
can be mastered, it will provide a means to assist AI and SE in the development of a
science of interdependence for human–machine teams.
The application of interdependence in a system to analyze an accident: In 2018, an
Uber2 self-driving car struck and killed a pedestrian. From the investigation report
(NTSB, 2018; NTSB, 2019b), the machine saw the pedestrian about 6 s before
striking her and selected the brakes 1.2 s before impact, but new actions like braking were subject to a 1-s interlock to prevent precipitous action (a design since corrected). The human
operator saw the victim 1 s before impact and hit her brakes 1 s after impact. Of
the conclusions to be drawn, first, although poorly designed, the Uber car performed
faster than the human; but, second and more important, the Uber car was a poor team
player by not updating the context it should have shared with its human operator
(Sofge et al., 2019).
Trust as part of the accident analysis: When machines will be qualified to be trusted remains an important question. As we pointed out in a bet in AI Magazine
(Sofge et al., 2019), despite the complexity and costs of validating these systems,
according to a New York Times (Wakabayashi, 2018) investigation of the pedestrian’s
death in 2018 by the Uber self-driving car, Waymo self-driving cars:

went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March [2018, when the accident happened], Uber was struggling to meet its target of 13 miles per “intervention” in Arizona …

2 On December 7th, Uber sold its self-driving unit to Aurora Innovation Inc. (Somerville, 2020).

It must be kept in mind, however, that as incompletely and poorly trained as was
the Uber car, it still responded to the situation as it had been designed; further, its
response was faster than its human operator.

1.1.4 Social Science

The National Academy of Sciences (2019) Decadal Survey of Social and Behav-
ioral Sciences finds that the social sciences want to be included in research using
computational social science for human and AI agents in teams. In their thinking,
social scientists are concerned about ethical and privacy issues with the large digital
databases being collected. For systems of social networks, they recommended further
study on:
how information can be transmitted effectively … [from] change in social networks …
network structure of online communities, the types of actors in those communities …

In addition, social scientists want more research to counter social cyberattacks, research on emotion, and, for our purposes (see below in Bisbey et al., 2019 for
similar issues with research on human teams),
… how to assemble and divide tasks among teams of humans and AI agents and measure
performance in such teams. …

More importantly, while social scientists want to be included in the AI/ML revo-
lution, they have had setbacks in their own disciplines with the reproducibility of
experiments (e.g., Nosek, 2015; also, Harris, 2018). For our purposes, unexpect-
edly, research has indicated that the poorest performing teams of scientists were
interdisciplinary teams (Cummings, 2015).3 In addition, however, Cummings added
that the best scientist teams maximized interdependence. Based on Cummings and
our research (e.g., Lawless, 2019), we conclude that for interdisciplinary teams to
function optimally, their team members must also be operating under maximum
interdependence (Lawless, 2017a). By extension, for the optimum size of a team
to maximize interdependence, a team’s size must be the minimum size to solve a
targeted problem (Lawless, 2017a), contradicting the Academy’s two assertions that
“more hands make light work” (Cooke & Hilton, 2015, Chap. 1, p. 13) and that the
optimal size of a scientific team is an open problem (p. 33).
The advent of human–machine teams has elevated the need to determine context
computationally, yet social science has offered little guidance for their design or operation, or for preventing accidents (see the Uber self-driving car accident described above
that killed a pedestrian in 2018), let alone the means to construct a computational context (Lawless et al., 2019a). Recognizing their plight, social scientists argue, and we agree, that their science is the repository of an extraordinary amount of statistical and qualitative experience in determining and evaluating contexts for humans and human teams (NAS, 2019). Nonetheless, this situation leaves engineers to seek a quantitative path on their own. Instead, we foresee an integrated path as the better course going forward (Lawless, 2019).

3 Cummings studied about 500 teams of scientists in the National Science Foundation’s database.
Trust and machine autonomy: In the rapid decision-making milieux where trust
between machine and human members of a team becomes a factor (Beling, 2019),
to build trust, each member of a human–machine team must be able not only to
exchange information about their status between teammates but also to keep that
information private (Lawless et al., 2019a). Given that humans cause most accidents
(Lawless et al., 2017), trust can be important outside of the team, as when a human
operator threatens passengers being transported, which happened with the crash of
GermanWings Flight 9525 in the Alps in March 2015, killing all 150 aboard at the
hands of its copilot who committed suicide (BEA, 2016); or the engineer on the
train in the Northeast Corridor in the United States who allowed his train rounding
a curve to speed above the track’s limits (NTSB, 2016); or the ship’s captain on
the bridge of the McCain at the time the destroyer was turning out of control in a
high-traffic zone (NTSB, 2019). In these and numerous other cases, it is possible
with current technology and AI to authorize a plane, train, other public vehicle or
military vehicle or Navy ship as part of a human–machine team to take control from
its human operator (the bet that a machine will be authorized to take control from a
dysfunctional human operator, Sofge et al., 2019).

1.1.5 The Science of Human Teams

From our review of human teams, Proctor and Vu (2019) conclude that the best
forecasts improve with competition (Mellers & Tetlock, 2019). They also conclude
that teams are formed by “extrinsic factors, intrinsic factors, or a combination of
both.” Extrinsic motivation is often generated from the collective consensus of many
stakeholders (the public, researchers, and sponsoring agencies) that there is an urgent
problem that needs to be solved. But they asserted that solutions require “a multidisciplinary team that is large in scope … [with] the resources required to carry
out the research … to appropriate subject-matter experts, community organizations
and other stakeholders … [and] within an organization, administrative support for
forming, coordinating, and motivating multidisciplinary teams …”.
Salas and his colleagues (Bisbey et al., 2019) conclude that “Teamwork allows
a group of individuals to function effectively as a unit by using a set of interre-
lated knowledge, skills and attitudes (KSAs; p. 279). [On the other hand] … poor
teamwork can have devastating results … plane crashes, … friendly fire, … surgical
implications … When the stakes are high, survival largely depends on effective team-
work.” One of the first successes with human teams was: “Crew resource manage-
ment [CRM] prompted by not “human error,” but crew phenomena outside of crew
member competencies such as poor communication in United Flight 173 led the
Captain to disregard fuel state. … CRM required the crew to solve its problems
as a team” (p. 280). Another success for team science occurred in the attempts to
understand the shoot-down of an Iranian commercial airliner by the USS Vincennes
in 1988, leading to the study of stress in decision-making. Subsequently, new research followed a significant number of unrelated human errors: President Clinton’s Institute of Medicine (IOM) review of medical errors in hospitals; the coordination errors of the BP/Deepwater Horizon oil spill in 2010; Hurricane Katrina in 2005; and the NASA space shuttle accidents, Columbia in 2003 and Challenger in 1986. Based on this new research, human
team scientists separated task-work from teamwork. Task work dealt with skills or
a skills’ domain (flying a plane), teamwork skills with team effectiveness across
contexts (e.g., how to communicate with others; p. 282).

1.1.6 Human–Machine Teams

A précis of our research on mathematical models of interdependence and future directions follows. From our hypothesis that the best teams maximize interdepen-
dence to communicate information via constructive and destructive interference, we
have established that the optimum size of teams and organizations occurs when they
are freely able to choose to minimize redundant team members (Lawless, 2017a);
we replicated the finding about redundancy and freedom in making choices, adding
that redundancy in over-sized teams is associated with corruption (Lawless, 2017b),
and that the decision-making of teams and organizations in interdependent states
under the pressure of competition implies tradeoffs that require intelligence to navi-
gate around the obstacles that would otherwise preclude a team from reaching its
goal such as producing patents (Lawless, 2019). Our findings on redundancy contra-
dict network scientists (Centola & Macy, 2007, p. 716) and the Academy (Cooke &
Hilton, 2015, Chap. 1, p. 13); we have also found that interdependence identified in
tracking polls indicates that it interferes adversely with predictions based on those
polls (Lawless, 2017a, b); e.g., Tetlock and Gardner’s first super-forecasters failed
in their two predictions in 2016, first that Brexit would not occur, followed by their
second in 2016 that Trump would not be elected President.
In a recent article (Lawless, 2019), we found evidence that intelligence measured
by levels of education is significantly associated with the production of patents;
however, in earlier research from 2001 reviewed in the same article, we reported that
education specific to air-combat maneuvering was unrelated to the performance of
fighter pilots engaged in air-to-air combat, indicating that intelligence and physical
skills tap orthogonal phenomena, offering a new model of mathematics and thermo-
dynamics for teams, which also accounts for the failure of complementarity to be
established; viz., for the latter, the best teams are composed of agents in orthogonal
roles, measured by Von Neumann subadditivity, whereas agents in the worst teams
are in roles measured by Shannon information (e.g., the conflict between CBS and
Viacom during 2016–18). Finally, orthogonality figures into our proposed next study
on fundamental decision processes and emotion for a model of a social harmonic
oscillator where we hypothesize that the best teams operate in a ground state while
underperforming teams operate in excited states (Lawless, 2019).
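For readers unfamiliar with the two entropy measures named above, the following LaTeX lines sketch the standard textbook inequalities being invoked; they are an illustrative gloss, not the authors’ own derivation. Classically, the Shannon joint entropy of two agents can never fall below that of either agent alone, whereas the von Neumann joint entropy of a strongly interdependent (entangled) pair can vanish even when each part, viewed alone, remains uncertain:

    % Illustrative only: standard definitions, not the authors' model.
    H(X,Y) = -\sum_{x,y} p(x,y)\,\log p(x,y) \;\ge\; \max\{H(X),\,H(Y)\}              % Shannon (classical)
    S(\rho_{AB}) = -\mathrm{Tr}\,(\rho_{AB}\log\rho_{AB}) \;\le\; S(\rho_A)+S(\rho_B)  % von Neumann subadditivity
    S(\rho_{AB}) = 0 \ \text{while}\ S(\rho_A)=S(\rho_B)>0                             % possible only for entangled pairs

On this reading, a team whose joint state is less uncertain than its members considered separately is the quantum-like possibility the authors associate with orthogonal roles and maximum interdependence.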

1.2 Introduction to the Chapters

Artificial intelligence has already brought significant changes to the world; will the
impact of human–machine teams be even greater? The first of the contributed chap-
ters, Chap. 2, “Recognizing Artificial Intelligence: The Key to Unlocking Human
AI Teams,” was written by a team at Aptima, Inc., headquartered in Woburn, MA.
The authors consist of Patrick Cummings, Nathan Schurr, Andrew Naber, Charlie,
and Daniel Serfaty (Aptima’s CEO and Founder). Readers, please recognize that
one of the coauthors from Aptima, “Charlie,” has no last name; she is an artificial
embodiment. Charlie has made contributions in public before (e.g., at a workshop
and a panel), but her contributions to Chap. 2 may be one of the first, if not the
very first, chapters contributed to or co-authored by, as she is aptly described by
her fellow coauthors, an “intelligent coworker.” Interacting with Charlie in public
over the past year has produced several insights signified and discussed by all of the
authors in their chapter. Interestingly, several of these insights are based on the treat-
ment of Charlie’s spoken ideas and written contributions with deep respect, which
they have described as “recognizing” Charlie as an equal contributor. The authors
provide details about how Charlie came into existence and how she operates in public
(e.g., her architecture, her public persona, her ability to brainstorm). The stated goal
of all of the authors of Chap. 2 is to bring human and intelligent coworkers together
to build an effective system in the future, not only one that recognizes human and
artificial coworkers but also one that can be influenced by both human and artificial
coworkers and by the contributions from both. We add: “Welcome, Charlie!”.
Chapter 3 was written by three Systems Engineers, namely by Thomas A. McDer-
mott and Mark R. Blackburn at the Stevens Institute of Technology in Hoboken, NJ;
and by Peter A. Beling at the University of Virginia in Charlottesville, VA. (McDer-
mott is one of the co-editors of this book.) Their chapter is titled, “Artificial Intel-
ligence and Future of Systems Engineering.” In it, the authors address the major
transformation of their profession now occurring that is being driven by the new
digital tools for modeling, data and the extraordinary “digital twins” resulting in
the integration of data and modeling. These new tools include the artificial intelli-
gence (AI) and machine learning (ML) software programs that are becoming key to
the new processes arising during this period of transformation. Yes, Systems Engi-
neering (SE) is being transformed, but the hope of the authors is that SE is able to
guide these new tools and their applications to increase the benefits so that society
welcomes this transformation. To help guide this transformation, the authors provide
a roadmap being developed by the Systems Engineering Research Center (SERC);
SERC is a University-Affiliated Research Center of the US Department of Defense.
The roadmap sets out a series of goals in SERC’s attempt to identify the opportunities and the risks ahead for the research community and to guide Systems Engineers in preparing for the safe and ethical emergence of autonomy.
The fourth chapter, “Effective Human-Artificial Intelligence Teaming,” was
written by Nancy J. Cooke and William Lawless. Cooke is a Professor of Human
Systems Engineering and Director of the Center for Human, Artificial Intelligence,
and Robot Teaming at Arizona State University. Lawless is a Professor of Mathe-
matics and Psychology at Paine College; he is also on two Navy Boards (the Science
of AI and Command Decision Making); and he is a new Topics Editor of the journal
Entropy (“The entropy of autonomy and shared context. Human–machine teams,
organizations and systems”). They begin their chapter with a review of the history
of interdependence. It has long been known to be present in every social interaction
and central to understanding the social life of humans, but interdependence has been
difficult to manage in the laboratory, producing effects that have “bewildered” social
scientists. Since then, however, along with her colleagues and students, Cooke, the
first author, has studied in detail the effects of interdependence in the laboratory
with detailed studies. She has explored many of the aspects of interdependence and
its important implications with her team. She was also the lead author in a review
published by the National Academy of Sciences on what is known theoretically
and experimentally about interdependence in a team, finding that interdependence
enhances the performance of individuals (Cooke & Hilton, 2015). Writing Chap. 4
has provided her with the perspective she has gained from the considerable research she and her colleagues have conducted over the years. This perspective allows her to
estimate the additional research necessary before artificial intelligence (AI) agents
and machines can replace a human teammate on a team.
Chapter 5, “Towards Systems Theoretical Foundations for Human-Autonomy
Teams,” was written by Marc Steinberg with the Office of Naval Research (ONR)
in Arlington, VA. Steinberg is ONR’s Program Officer for its Science of Autonomy
program. In his chapter, he writes about the challenges posed by developing the
autonomy of human and intelligent systems. These include new questions of how best to specify, model, design, and verify the correctness of systems. He discusses the real-time monitoring and repair of autonomous systems over their lifetimes, all the while detecting problems and rebooting properties. These challenges entail Systems
Engineering methods to model system life cycles by abstracting and decomposing
systems in the design and development of components for intelligent autonomy.
Exploring these higher-level abstractions, models, and decompositions may inspire
solutions and lead to autonomy. These inspirations may integrate systems and humans
and provide the means to assure safety. He samples perspectives across scientific
fields, including biology, neuroscience, economics, game theory, and psychology. He
includes methods for developing and assessing complex human–machine systems
with human factors and organizational psychology, and engineering teams with
computer science, robotics, and engineering. He discusses team organizational structures; the allocation of roles, functions, and responsibilities; theories for teammates working on long-lived tasks; and the modeling and composition of autonomous human–machine teams and systems, along with their implications.
The sixth chapter was written by James Llinas, Ranjeev Mittu, and Hesham Fouad.
It is titled, “Systems Engineering for Artificial Intelligence-based Systems: A Review
in Time.” Llinas is the Director Emeritus at the Center for Multi-source Information
Fusion as well as a Research Professor Emeritus, both positions at the University at Buffalo. Ranjeev Mittu is the current Branch Head, Information Management &
Decision Architectures Branch, Information Technology Division at the U.S. Naval
Research Laboratory in Washington, DC; and Hesham Fouad is a Computer Scientist
in the same branch at the Naval Research Laboratory. Their backgrounds include
information systems, the science of information fusion, and information technology.
In their chapter, they provide a review of Systems Engineering (SE) for artificial
intelligence (AI) across time, starting with a brief history of AI (e.g., narrow, weak,
and strong AI, including expert systems and machine learning). Regarding SE, drawing on the lead author’s systems perspective from his experience with information fusion processes, and on his coauthors’ experience with information systems technology, they introduce SE and discuss how it has evolved over the years, but also how much further it must evolve to become fully integrated with AI. In the future, they
believe that both disciplines can help each other more if they co-evolve or develop
new technology systems together. They also review several SE issues such as risk,
technical debt (e.g., maintaining sophisticated software in information systems over
ever longer periods of time), software engineering, test and evaluation, emergent
behavior, safety, and explainable AI. The authors close by discussing the challenge
of AI explanations and explainability.
Chapter 7 was an invited chapter written by Kristin Schaefer and her team,
including Brandon Perelman, Joe Rexwinkle, Jonroy Canady, Catherine Neubauer,
Nicholas Waytowich, Gabriella Larkin, Katherine Cox, Michael Geuss, Gregory
Gremillion, Jason Metcalfe, Arwen DeCostanza, and Amar Marathe. Schaefer’s
team is part of the Combat Capabilities Development Command (DEVCOM)
Army Research Laboratory (ARL). The title of their chapter is, “Human-Autonomy
Teaming for the Tactical Edge: The Importance of Humans in Artificial Intelligence
Research and Development.” From their perspective, the authors address the impor-
tance of understanding the human when integrating artificial intelligence (AI) with
intelligent agents embodied (i.e., robotic) and embedded (i.e., software) into mili-
tary teams to improve team performance. The authors recognize that they and the
Army are breaking new ground, confronting fundamental problems under uncertainty
and with unknown solutions. In their chapter, they provide an overview of ARL’s
research in human-autonomy teaming. They address the major research areas neces-
sary to integrate AI into systems for military operations along with examples of these
areas and the four known research gaps: enabling Soldiers to predict AI actions and
decisions; quantifying Soldier understanding for AI; Soldier-guided AI adaptation;
and characterizing Soldier-AI performance. These four areas have organized their
research efforts to explain AI, integrate AI, and build effective human-autonomy
teams.
The eighth chapter, titled “Re-orienting towards the Science of the Artificial:
Engineering AI Systems,” was written by Stephen Russell, Brian Jalaian, and Ira
S. Moskowitz. Russell is Chief of the Information Sciences Division, U.S. Army
Research Laboratory (ARL) in Adelphi, MD; Jalaian is a Test and Evaluation Lead
with the Department of Defense Joint Artificial Intelligence Center (JAIC); and
Moskowitz is a mathematician working for the Information Management & Decision
Architectures Branch, Information Technology Division, at the U.S. Naval Research
Laboratory in Washington, DC. In their chapter, they write that while systems enabled by AI are becoming pervasive, these systems nonetheless face challenges in engineering and deployment in the military for several reasons. To begin
to address these limitations, the authors discuss what it means to use hierarchical
component composition in a system-of-systems context. In addition, they discuss
the importance of bounding data for stable learning and performance required for
the use of AI in these complex systems. After a review of the literature, the authors
also address the changes that will be required to address the design/engineering
problems of interoperability, uncertainty, and emergent system behaviors needed to
allow AI to be safely deployed in embodied or fully virtualized autonomous systems.
Their perspective, illustrated with a Natural Language Processing example, allows the authors to draw comparisons across their posits in an attempt to make AI–Systems Engineering more rigorous and the use of autonomy in the field safer and more reliable.
Chapter 9 was written by Matthew Sheehan and Oleg Yakimenko; both researchers
work in the Department of Systems Engineering at the U.S. Naval Postgraduate
School in Monterey, CA. The title of their chapter is: “The Department of Navy’s
Digital Transformation with the Digital System Architecture, Strangler Patterns,
Machine Learning, and Autonomous Human–Machine Teaming.” In their chapter,
the authors describe the extraordinary changes caused by the U.S. Department of
Navy’s (DON) adoption of new software like the machine learning (ML) programs
designed for warfighters to assist in the performance of their missions. Some of
these “new” software products, however, are already beginning to mature and are
becoming obsolete. Still, machine learning (ML) software programs are central to
their discussions, including the need in the Fleet to provide access to the data neces-
sary to allow ML programs to operate and perform satisfactorily at sea. If adopted
and managed properly, these ML algorithms will enhance the existing applications
and will also enable new warfighting capabilities for the Navy. As rapid as these changes are, however, the DON’s system architectures and platforms presently provide inadequate infrastructure for deployment at scale, not only for some of the new digital tools like ML but also for many forthcoming areas, including autonomous human–machine teams (AHMT). As the Navy transforms
itself digitally, the authors discuss the goals and barriers with a path forward to
implement successfully the Navy’s new digital platforms.
Chapter 10, “AI Driven Cyber Physical Industrial Immune System for Critical
Infrastructures,” was written by a team at General Electric (GE): Michael Mylrea,
Matt Nielsen, Justin John and Masoud Abbaszadeh. Mylrea is the Director of Cyber-
security in the Cybersecurity R&D for Operational Technology at General Elec-
tric Global Research in Washington, DC. Nielsen, John and Abbaszadeh work in
the same department. In their chapter, the authors review many advances being
driven by machine learning (ML) and artificial intelligence (AI) to detect cyber-
physical anomalies. The advances brought about by the detection of these anomalies
are improving the security, reliability, and resilience of the power grid across the
United States. This improvement is occurring at the same time that adversaries are
using advanced techniques to mount sophisticated cyberattacks against infrastruc-
tures in the United States, especially the power grid that is the focus of their applied
research. The distributed energy resources in the power grid must be defended. The
authors discuss how new technology is being deployed to enable cyberdefenses to
protect the grid against even rapidly evolving threats. Their chapter explores how
AI combines with physics to produce the next-generation system that they liken to
an industrial immune system to protect critical energy infrastructures. They discuss
the new cybertechnology and its applications for cyberdefenders, including human–
machine teams and processes. The authors review the design and application of
GE’s Digital Ghost technology to cyberdefend the world’s largest gas turbines. They
discuss the situational awareness, explanations, and trust needed to use AI to defend
against cyberthreats. The authors look into the future to prepare for the challenges to come in making human–machine teams effective against any threat, cyber or physical.
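To make the idea concrete in a generic way, the kind of physics-informed monitoring described here can be pictured as a residual detector that compares sensor readings against a physics-based prediction and flags sustained deviations; the sketch below is a toy illustration under that assumption, not GE’s Digital Ghost, and every signal and threshold in it is hypothetical.

```python
import numpy as np

def residual_anomaly_flags(measured, predicted, window=50, threshold=4.0):
    """Flag samples whose physics-model residual drifts far from its recent baseline.

    measured, predicted: 1-D arrays of sensor readings and physics-model predictions.
    Returns a boolean array marking suspected anomalies (cyber or physical).
    """
    residual = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
    flags = np.zeros(residual.shape, dtype=bool)
    for t in range(window, len(residual)):
        baseline = residual[t - window:t]      # recent, presumed-healthy behavior
        sigma = baseline.std() + 1e-9          # guard against division by zero
        z = abs(residual[t] - baseline.mean()) / sigma
        flags[t] = z > threshold               # large deviation from the physics model
    return flags

# Toy usage: a turbine temperature signal with a stealthy sensor-spoofing offset.
rng = np.random.default_rng(0)
predicted = 500 + 10 * np.sin(np.linspace(0, 20, 2000))
measured = predicted + rng.normal(0, 1.0, size=2000)
measured[1500:] += 8.0                         # injected offset beginning at sample 1500
print(residual_anomaly_flags(measured, predicted)[1495:1505])
```

The point of the sketch is only the combination the chapter emphasizes: the physics model supplies an expectation of normal behavior, and the learning or statistical layer decides when departures from it warrant a defender’s attention.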
Chapter 11 was written by Ira Moskowitz and Noelle Brown while working for
the Information Management and Decision Architectures Branch, Information Tech-
nology Division, U.S. Naval Research Laboratory in Washington, DC; their coauthor
was Zvi Goldstein in the Electrical Engineering Department at Columbia University
in New York City. The title of their chapter is “A fractional Brownian motion approach
to psychological and team diffusion problems.” Their mathematical approach is moti-
vated by AI, but with the goal of establishing that fractional Brownian motion can
become a metric to measure the diffusion processes existing in teams. In their chapter,
they review the mathematics for their proposed metric as a step toward building a
science of interdependence for autonomous human–machine teams. The authors discuss various random walks, including those with Wiener and Gaussian
processes, and then they discuss drift-diffusion and extensions (stopping times and
absorbing boundaries) to make fractional Brownian motion into a metric of interde-
pendence. Before closing, the authors revisit Ratcliff diffusion, and then they present
their hybrid approach in preparation for a future application to the science of teams.
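For orientation, the standard definitions behind these ideas (stated here for the reader, not quoted from the chapter) can be written as follows.

```latex
% Fractional Brownian motion B_H with Hurst parameter H in (0,1):
% a zero-mean Gaussian process with covariance
\mathbb{E}\!\left[B_H(t)\,B_H(s)\right]
  = \tfrac{1}{2}\left(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\right).
% H = 1/2 recovers ordinary Brownian (Wiener) motion; H > 1/2 gives persistent
% (positively correlated) increments; H < 1/2 gives anti-persistent increments.

% Drift-diffusion model of evidence accumulation with drift v, noise scale \sigma,
% and absorbing boundaries at \pm a (the decision is made at the stopping time \tau):
dx(t) = v\,dt + \sigma\,dW(t), \qquad \tau = \inf\{\, t : |x(t)| \ge a \,\}.
```

Read this way, the Hurst parameter H is the kind of single-number summary the authors appear to have in mind: a measurable departure from H = 1/2 would indicate correlated, team-level dynamics rather than independent individual noise.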
Chapter 12, “Human–Machine Understanding: The Utility of Causal Models and
Counterfactuals,” was authored by Paul Deignan; he is a Research Engineer working
with the Lockheed Martin Corporation in Bethesda, Maryland. His research interest
is focused on predictive analytics. He begins with the assertion that trust is a human
condition. The author proposes that for a human to trust a machine, the human must
understand the capabilities and functions of the machine in a context spanning the
domain of trust so that the actions of the machine are predictable for a given set of
inputs. In general, however, he believes that the domain of trust must be expanded
so that the human–machine system can be optimized to operate in the widest range
of situations. This reasoning motivates his desire to cast the operations of a machine
into a knowledge structure tractable to its human users, operators, and the human
teammates of machines. At the present time, machine behaviors are deterministic;
thus, for every action there is a reaction, which means, to the author, that the dynamics of a machine can be described by a structured causal model; this model enables him to formulate the counterfactual queries upon which he anchors human trust.
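As a toy illustration of this line of reasoning (our own sketch with hypothetical variables, not Deignan’s model), a deterministic machine can be written as a pair of structural equations, and a counterfactual query is then answered by replaying the same exogenous conditions under a different action:

```python
# Toy structural causal model (SCM) for a deterministic machine:
#   braking_force := f(command, road_friction)
#   stopping_dist := g(speed, braking_force)
# All variable names and equations are illustrative, not from Chapter 12.

def braking_force(command: float, road_friction: float) -> float:
    return command * road_friction                    # structural equation f

def stopping_dist(speed: float, force: float) -> float:
    return speed ** 2 / (2.0 * max(force, 1e-6))      # structural equation g

# Factual situation: the exogenous conditions (speed, friction) are observed.
speed, road_friction, command = 20.0, 0.6, 0.8
factual = stopping_dist(speed, braking_force(command, road_friction))

# Counterfactual query: what would the stopping distance have been, under the
# very same conditions, had the machine been commanded to brake harder?
counterfactual = stopping_dist(speed, braking_force(1.0, road_friction))

print(f"factual: {factual:.1f} m, counterfactual: {counterfactual:.1f} m")
```

Because the structural equations are deterministic, the counterfactual is well defined for every input, which is the property the author leverages to make machine behavior predictable, and therefore trustworthy, over the domain of trust.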
Chapter 13, “An Executive for Autonomous Systems, Inspired by Fear Memory
Extinction,” was written by Matt Garcia at Northeastern University; Ted Goranson
with the Australian National University; and Beth Cardier at the Eastern Virginia
Medical School in the United States and at the Griffith University in Australia.
To overcome the many unknowns that autonomous systems may face, the authors
explore a category-theoretic, second-sorted executive reasoner in their chapter to
perform the adaptive, introspective reasoning needed by autonomous systems to solve
the challenging situations that they may face (i.e., decisions under uncertainty, such
as those encountered in combat at sea, electronic warfare, or with clinical traumas).
They base their ideas on complex mathematics, but they illustrate them with cartoon
examples of submarine surveillance, electronic warfare, and post-traumatic stress
disorder (PTSD). The authors provide a case study of the neural changes occurring
during therapy for PTSD as a model for executive reasoning, the main thrust of their
ideas. Their goal is to develop, simulate, and generalize a technique for autonomous
reasoning by human–machine systems facing uncertainty using virtual and physical
agent models.
The title of Chap. 14 is “Contextual Evaluation of Human–Machine Team Effec-
tiveness.” It was written by Eugene Santos, Clement Nyanhongo, Hien Nguyen,
Keum Joo Kim, and Gregory Hyde. Except for Nguyen, the authors are at the
Thayer School of Engineering at Dartmouth College in Hanover, NH; Nguyen is
in the Department of Computer Science at the University of Wisconsin-Whitewater
in Whitewater, WI. The authors address the rapid adoption of human–machine
teams across domains like healthcare and disaster relief. These machines are more
autonomous and aware than previous generations, allowing them to collaborate with
humans as partners. Despite this progress, human–machine team performance is
poorly defined, especially the explanations for team performance. These explana-
tions are necessary, however, to predict team performance and identify shortcom-
ings. The authors introduce a method using interference to measure the cohesiveness
and compatibility between humans and machines in various contexts. They rely on a
classifier trained to map human–machine team behaviors to attributes directly linked
to team performance along with explanations and insights. The authors test and vali-
date their techniques in experiments with human–machine teams. The results suggest
that their predictions of team attributes reflect actual team behaviors, increasing
confidence in being able to design future human–machine teams.
Chapter 15 was written by Shu-Heng Chen. He titled his chapter, “Humanity in the
Era of Autonomous Human–Machine Teams.” Chen is affiliated with the AI-ECON
Research Center in the Department of Economics at National Chengchi Univer-
sity in Taipei, Taiwan. He is concerned with the meaning arising from the rapid
development of autonomous human–machine teams. Mindful of the philosophy and
history of science and technology, the author examines this potential meaning from
an evolutionary perspective. He argues that the meaning determined will affect the
individuality of humans, their democracy, and their ability to develop as autonomous
humans. He wants this meaning to be positive and supportive, and he does not want
the future of humanity to be dominated and determined solely by machines. To
protect the future, he argues that scholars and citizens must become involved in the
development of autonomous human–machine teams. He recognizes that the human-
ities are changing, but with awareness, these changes can lead to more autonomy for
future generations.
Chapter 16, “Transforming the system of military medical research: An Institu-
tional History of the Department of Defense’s (DoD) first electronic Institutional
Review Board Enterprise IT system,” was written by Joseph C. Wood, US Army Col
(Ret.), MD, Ph.D., Augusta, GA and W.F. Lawless, Paine College, Augusta, GA.
This chapter is about the history of the two authors’ attempt to modernize what was primarily a paper-based collection of medical research protocols, reviews, and publications produced by medical research review boards and medical researchers at a single U.S. Army medical research center. The effort grew beyond their expectations to become one of the largest electronic databases of medical reviews and research results in the world at that time. Presenting metrics as a preview of a research agenda on the use of AI for autonomous metrics in large systems, for the future practice of ethics, and for the mitigation of risks, this history of their endeavors brings out several points about dealing with large systems, including the value of standardization, metrics, and goal- and performance-based evaluations.
Chapter 17, “Collaborative communication and intelligent interruption systems,”
was written by Nia Peters, Margaret Ugolini, and Gregory Bowers. Peters is with the
711th Human Performance Wing, Air Force Research Laboratory, Wright Patterson
Air Force Base in Ohio. Ugolini and Bowers are with Ball Aerospace & Tech-
nologies in Fairborn, OH. The authors discuss the adverse effects of poorly timed
interruptions on collaborative environments for humans managing technology while
interacting with other humans. The literature on managing the adverse timing of interruptions, however, is focused on single users in multi-tasking interactions. There is less research on multi-user, multi-tasking environments, which they address. To mitigate the disruptiveness of interruptions in multi-user, multi-tasking workloads, the authors propose and evaluate interruption timings at low mental workload in a dual-user, dual-task paradigm. Compared with interruptions at high cognitive workload, they found that performance is best when interruptions occur during periods of low cognitive workload, a contribution to the literature.
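A generic way to picture such an intelligent interruption system (our simplification, with made-up workload values and threshold, not the authors’ experimental setup) is a scheduler that queues interruptions and releases them only when the estimated cognitive workload dips:

```python
from collections import deque

def interruption_scheduler(workload_stream, messages, threshold=0.4):
    """Yield (time, message) pairs, delivering queued interruptions only when the
    estimated cognitive workload (values in [0, 1]) drops below `threshold`."""
    pending = deque(messages)
    for t, workload in enumerate(workload_stream):
        while pending and workload < threshold:
            yield t, pending.popleft()

# Toy usage: workload spikes during the primary task and dips between subtasks.
workload = [0.8, 0.9, 0.7, 0.3, 0.2, 0.85, 0.35]
for t, msg in interruption_scheduler(workload, ["chat ping", "status request"]):
    print(f"deliver '{msg}' at t={t}")
```

The chapter’s contribution is the evidence that, in a dual-user, dual-task setting, this low-workload timing actually preserves performance, not the scheduling mechanics themselves.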
Chapter 18, “Shifting Paradigms in Verification and Validation of AI-Enabled
Systems: A Systems-Theoretic Perspective,” was written by Niloofar Shadab, Aditya
Kulkarni, and Alejandro Salado. The authors are affiliated with the Grado Department
of Industrial and Systems Engineering at Virginia Tech in Blacksburg, VA. They
propose that a misalignment exists between current approaches to verification and
validation (V&V) techniques and new AI systems. Current approaches assume that a system’s behavior remains essentially fixed during its lifetime. But this cannot be true for systems that learn and change their own behavior during their lifetimes, nullifying the value of present V&V practices. Using systems theory, the authors
explain why learning makes these new systems unique and unprecedented, and why
V&V must experience a paradigm shift. To enable this shift, the authors propose
and discuss the theoretical advances and transformations they believe will prepare
Systems Engineers for this evolution.
Chapter 19, “Towards safe decision-making via uncertainty quantification in
machine learning,” was written by Adam Cobb, Brian Jalaian, Nathaniel Bastian, and
Stephen Russell; Cobb, Jalaian, and Russell are with the Army Research Laboratory
as part of the U.S. Army’s Combat Capabilities Development Command (CCDC)
in Adelphi, MD; and Bastian is with the Army Cyber Institute at the U.S. Military
Academy, West Point, NY. In their chapter, the authors discuss the automation of the
safety-critical systems being widely deployed with more sophisticated and capable
machine learning (ML) applications. Not yet addressed by most of these systems,
however, is the concern raised by the authors that these critical systems must not
just be safe, but safe when facing uncertainty. Moreover, quantifying and reducing
uncertainty will provide more benefit than the solutions alone, provided that the decisions made by these machines are fully understood. Knowing how machines make decisions under
uncertainty will generalize to human decisions and autonomous systems. To this end,
the authors employ Bayesian decision theory with an example of classifying vehicles
acoustically for uncertain levels of threat. With this paradigm, the authors establish
that safer decisions are possible under uncertainty.
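The decision-theoretic core of the argument can be sketched generically; the class labels, loss values, and posterior below are hypothetical stand-ins, not the chapter’s acoustic-vehicle model. Given a posterior over threat classes, the system chooses the action, including deferring to a human, that minimizes expected loss:

```python
import numpy as np

# Hypothetical posterior over threat classes for one acoustic detection,
# e.g., from a Bayesian or ensemble classifier that reports its uncertainty.
classes = ["civilian", "light_vehicle", "armored_vehicle"]
posterior = np.array([0.45, 0.35, 0.20])

# Loss matrix: rows = actions, columns = true class (all values invented).
actions = ["ignore", "alert", "defer_to_human"]
loss = np.array([
    [0.0, 5.0, 50.0],   # ignoring an armored vehicle is very costly
    [10.0, 3.0, 0.0],   # alerting on a civilian is a costly false alarm
    [2.0, 2.0, 2.0],    # deferring costs analyst time regardless of the truth
])

expected_loss = loss @ posterior                 # Bayes risk of each action
best = actions[int(np.argmin(expected_loss))]
print(dict(zip(actions, expected_loss.round(2))), "->", best)
# With this uncertain posterior the safest choice is to defer; a sharper
# posterior such as [0.05, 0.15, 0.80] flips the minimum-risk action to "alert".
```

The safety claim in the chapter rests on exactly this mechanism: when the quantified uncertainty is high, the expected-loss calculation naturally pushes the system toward conservative actions.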
Chapter 20, “Engineering Context from the Ground Up,” was written by Michael
Wollowski, Lilin Chen, Xiangnan Chen, Yifan Cui, Joseph Knierman, and Xusheng
Liu. The authors are in the Computer Science Department at the Rose-Hulman Insti-
tute of Technology in Terre Haute, IN. Focused on human–machine systems, the
authors begin with a system for a human and robot to solve problems in a collabora-
tive space. Their system manages interactions in the context of a human and machine
collaborating with speech and gesture. To facilitate good engineering practices, their
system was designed to be modular and expandable. With this modular design, context was maintained on a shared board built from the information needed for problem-solving.
The authors describe the elements of their system and the information produced.
Their goal is to generate explanations of decisions with the information accumulated
from the differing contexts in their system.
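In spirit, such a shared board is a blackboard architecture; the sketch below is ours, with hypothetical fields, rather than the Rose-Hulman system itself:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class SharedBoard:
    """A minimal blackboard holding the evolving context of a human-robot dialogue."""
    facts: Dict[str, Any] = field(default_factory=dict)   # current problem/world state
    history: List[dict] = field(default_factory=list)     # who contributed what

    def post(self, source: str, modality: str, content: Any) -> None:
        """Record a contribution (speech, gesture, robot percept) and update context."""
        self.history.append({"source": source, "modality": modality, "content": content})
        if isinstance(content, dict):
            self.facts.update(content)                     # merge structured context

    def explain(self, key: str) -> List[dict]:
        """Trace the contributions that mention a piece of context, for explanations."""
        return [entry for entry in self.history if key in str(entry["content"])]

# Usage: a human utterance and a robot percept land on the same board.
board = SharedBoard()
board.post("human", "speech", {"goal": "stack the red block on the blue block"})
board.post("robot", "vision", {"red_block": "on table", "blue_block": "on table"})
print(board.facts)
print(board.explain("red block"))
```

The explain method is the part that matters for the authors’ goal: because every contribution is kept with its source and modality, an explanation of a decision can be assembled from the accumulated context rather than reconstructed after the fact.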
Chapter 21 was written by Priyam Parashar at the University of California in San
Diego, CA; and Ashok Goel at the Georgia Institute of Technology in Atlanta, GA.
The title of their chapter is “Meta-reasoning in Assembly Robots.” The use of robots
across human society, whether in business, industry, or the military, is becoming
widespread. The authors surmise, however, that this context increases the value of a
theory for machines with meta-reasoning skills similar to those of humans. In their chapter,
the authors propose and develop a framework for human-like meta-reasoning. They
focus on an assembly robot assigned a task to be performed but different from its
preprogramming, increasing the likelihood for the robot to fail at its task. To counter
its failure, the authors provide the robot with the means for meta-reasoning sufficient
to react and learn from its mistakes. In their chapter, the authors review the literature,
a task specification, a failure taxonomy, and their architecture for meta-reasoning.
The result is a theory for a robot to learn from failure with meta-reasoning for action
from perception.
Chapter 22, “From Informal Sketches to Systems Engineering Models using AI Plan Recognition,” was written by Nicolas Hili, Alexandre Albore, and Julien Baclet.
In France, Hili is at the University of Grenoble Alpes at the National Center for
Scientific Research (CNRS) in Grenoble; Albore is with the French Aerospace Lab
(ONERA DTIS) in Toulouse; and Baclet is at the Technological Research Institute
(IRT) Saint-Exupery in Toulouse. The day-to-day drudgery of drawing for mechan-
ical and electronic engineering was transformed with the arrival of computer-aided
design (CAD). But its lesser impact on Systems Engineering (SE) awaits new tools
for a similar escape. It was hoped that Model-Based Systems Engineering (MBSE)
would address this shortcoming. But MBSE has not been as successful due to the
complexity of creating, editing, and annotating an SE model over its lifetime as
discussed by the authors. Consequently, whiteboards, papers, and pens are still in common use by system engineers and architects to sketch problems and solutions, which are then turned over to experts for informal digital models. In this chapter, the authors
address this problem with automated plan recognition and AI to produce sketches of
models, formalizing their results incrementally. Tested in an experiment, they achieve
an initial application with AI plan recognition applied to Systems Engineering.
Chapter 23, “An analogy of sentence mood and use,” was written by Ryan
Quandt at the Claremont Graduate University in Claremont, CA. The author claims
that the literature underestimates the elusiveness of force when interpreting utter-
ances. Instead, he argues that interpreting the force in utterances, whether assertions,
commands, or questions, is an unsolved challenge. In his view, an interpretation of
force depends on a speaker’s utterance when spoken, making grammatical mood an
uncertain indicator of force. He posits that navigating the gap between an uttered
sentence and mood links action and language’s meaning, which he addresses in this
chapter. But he is after the larger goal of determining joint action with artificial
intelligence (AI). By making these relations explicit and precise, he concludes that
argumentation schemes link language and joint action. Building from prior work, the
author then proposes questions for his model to further explore the gap in mood-force
relations.
Chapter 24 is titled, “Effective Decision Rules for Systems of Public Engage-
ment in Radioactive Waste Disposal: Evidence from the United States, the United
Kingdom, and Japan.” It was written by Mito Akiyoshi, John Whitton, Ioan Charnley-
Parry, and William Lawless. Akiyoshi is at Senshu University in the Department of
Sociology in Kawasaki, Japan; Whitton and Charnley-Parry are at the University
of Central Lancashire, in the Centre for Sustainable Transitions, Preston, United
Kingdom; and Lawless is in the Departments of Mathematics and Psychology at
Paine College in Augusta, GA. For large systems of decision-makers, the disposal and
long-term management of radioactive waste are mired in technical, environmental,
societal, and ethical conflicts. The authors of this chapter consider how different
systems in these societies address these contentious issues. With decision-making
theory, they seek a process that facilitates the safest geological disposal yet is also
perceived by participants to be fair and legal. The authors compared two decision
rules, the consensus-seeking and majority rules, finding that, despite different policy
priorities and cultures, the majority rule maximized information processing across a system, with an increased likelihood of a just and legitimate decision.
The last chapter, Chap. 25, is titled “Outside the Lines: Visualizing Influence Across Heterogenous Contexts in PTSD.” It was written by Beth Cardier, Alex Nielsen, John Shull, and Larry Sanford. Cardier is at the Eastern Virginia Medical School in
Norfolk, VA, and, in Australia, at the Trusted Autonomous Systems of the Defence
Cooperative Research Centre (DCRC) and Griffith University in South East Queens-
land. Nielsen and Shull are at the Virginia Modeling Analysis and Simulation Center,
Old Dominion University in Norfolk, VA; and Sanford is also at the Eastern Virginia
Medical School. The authors state that open-world processes generate information
that cannot be captured in a single data set despite the need to communicate between
differing contexts. The authors present a text-visual method for modeling differing
interpretations of contexts separated by discipline, time, and perspective. Their new
tool captures transitions in video, text, image, and data transfers to study different
phenomena. They apply it to post-traumatic stress disorder (PTSD); they combine
psychological, neurological, and physiological information for PTSD in a single
modeling space using a narrative-based visual grammar. The authors aim to integrate
information from changing phenomena in the open world to detect the emergence of
disorder and to support knowledge systems in fields like neurobiology, autonomous
systems, and artificial intelligence (AI).

1.3 Summary

Interdependence is the common ingredient that motivates Systems Engineering, AI, and the science of human–machine teamwork. If AI scientists, systems engineers, and others are to contribute to the development of autonomy for human–machine teams, the threats that autonomy poses to the world must be managed so as to permit the advances that may accrue across the social, systems, ethical, political, international, and other landscapes for the benefit of humanity.

References

Ashley, Jr., Robert, P., Lieutenant General, U.S. Army Director. (2019). China, Military Power.
Modernizing a force to fight and win, Defense Intelligence Agency, from https://www.dia.
mil/Portals/27/Documents/News/Military%20Power%20Publications/China_Military_Power_
FINAL_5MB_20190103.pdf.
Barton, T. (2019). Sea Hunter/AI, SERC workshop: Model centric engineering, Georgetown
university, Washington, DC, April 16 & 17, 2019.
BEA (2016). Accident to the airbus A320–211, registered D-AIPX and operated by Germanwings,
flight GWI18G, on 03/24/15 at Prads-Haute-Bléone, BEA2015–0125.
Beling, P. (2019). A systems theoretic framework for the AI LifeCycle, SERC workshop: Model
centric engineering, Georgetown university, Washington, DC, April 16 & 17, 2019.
Bisbey, T. M., Reyes, D. L., Traylor, A. M., & Salas, E. (2019). Teams of psychologists helping
teams: The evolution of the science of team training. American Psychologist, 74(3), 278–289.
Blackburn, M. (2019). Transforming SE through model centric engineering, SERC workshop:
Model centric engineering, Georgetown university, Washington, DC, April 16 & 17, 2019.
Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workplace implications:
Profound changes are coming, but roles for humans remain. Science, 358, 1530–1534.
Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American
Journal of Sociology, 113(3), 702–734.
Cooke, N. J., & Hilton, M. L. (Eds.). (2015). Enhancing the effectiveness of team science. Authors:
Committee on the science of team science; Board on behavioral, Cognitive, and sensory sciences;
Division of behavioral and social sciences and education; National research council. National
Academies Press.
CRS (2018). U.S. Ground forces robotics and autonomous systems (RAS) and artificial intelligence
(AI): Considerations for congress, Congressional research service, p. 9, R45392, Version 3, from
https://fas.org/sgp/crs/weapons/R45392.pdf.
Cummings, J. (2015). Team science successes and challenges: National science foundation spon-
sored workshop on fundamentals of team science and the science of team science (June 2),
Bethesda MD (https://www.ohsu.edu/xd/education/schools/school-of-medicine/departments/cli
nical-departments/radiation-medicine/upload/12-_cummings_talk.pdf).
DeLaurentis, D. (2019). Breakout session, SERC workshop: Model centric engineering, George-
town university, Washington, DC, April 16 & 17, 2019.
DoD (2018). Nuclear posture review, office of the secretary of defense. https://www.defense.gov/
News/SpecialReports/2018NuclearPostureReview.aspx.
DoD (2019). Summary of the 2018 department of defense artificial intelligence strategy harnessing
AI to advance our security and prosperity, from https://media.defense.gov/2019/Feb/12/200208
8963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
Editors (2019). Ready for weapons with free will? New York times, from https://www.nytimes.
com/2019/06/26/opinion/weapons-artificial-intelligence.html.
Freeman, L. (2019). AI as a change agent for test and evaluation, SERC workshop: Model centric
engineering, Georgetown university, Washington, DC, April 16 & 17, 2019.
Gardner, G. (2019). Uber won’t face charges in fatal arizona crash, but prosecutor urges further
probe, Forbes, from https://www.forbes.com/sites/greggardner/2019/03/06/uber-wont-face-cha
rges-in-fatal-arizona-crash-but-prosecutor-urges-further-probe/#6820859f475a.
Gramer, R., & Seligman, L. (2018). “Trump and NATO show rare unity in confronting Russia’s
arms treaty violation. NATO backs U.S. assertion that Moscow is violating a key Cold War-era
arms treaty, Foreign policy, from https://foreignpolicy.com/2018/12/04/trump-and-nato-show-
rare-unity-in-confronting-russia-arms-treaty-violation-inf/.
Grogan, P. (2019). Game-theoretic risk assessment for distributed systems, SERC workshop: Model
centric engineering, Georgetown university, Washington, DC, April 16 & 17, 2019.
Harris, R. (2018). In psychology and other social sciences, many studies fail the reproducibility
test, National public radio, from https://www.npr.org/sections/health-shots/2018/08/27/642218
377/in-psychology-and-other-social-sciences-many-studies-fail-the-reproducibility-te.
Horowitz, B. (2019), Introduction of the life cycle-ready AI concept, SERC workshop: Model
centric engineering, Georgetown university, Washington, DC, April 16 & 17, 2019.
Howell, C. (2019). Lifecycle implications for dependable AI, SERC workshop: Model centric
engineering, Georgetown university, Washington, DC, April 16 & 17, 2019.
Insinna, V. (2019). Lockheed hypes F-35 s upgrade plan as interest in ‘sixth-gen’ fighters grows,”
Defense news, from https://www.defensenews.com/digital-show-dailies/paris-air-show/2019/06/
21/lockheed-hypes-f-35s-upgrade-plan-as-interest-in-sixth-gen-fighters-grows/.
LaGrone, S. (2019). Navy wants 10-ship unmanned ‘Ghost Fleet’ to supplement manned force,
U.S. Naval institute, from https://news.usni.org/2019/03/13/navy-wants-ten-ship-3b-unmanned-
experimental-ghost-fleet.
Langford, J. (2018). Lockheed wins Army contract for self-driving military convoy systems, Wash-
ington examiner, from https://www.washingtonexaminer.com/business/lockheed-wins-army-con
tract-for-self-driving-military-convoy-systems.
Lawless, W. F., Mittu, R., Sofge, D., & Russell, S. (Eds.). (2017). Autonomy and artificial
intelligence: a threat or savior? Springer.
Lawless, W. F. (2017a). The entangled nature of interdependence bistability, irreproducibility and
uncertainty. Journal of Mathematical Psychology, 78, 51–64.
Lawless, W. F. (2017b). The physics of teams: Interdependence, measurable entropy and
computational emotion. Frontiers of Physics., 5, 30. https://doi.org/10.3389/fphy.2017.00030
Lawless, W. F. (2019). Interdependence for human-machine teams. Foundations of Science.
Lawless, W. F., Mittu, R., Sofge, D. A. & Hiatt, L. (2019a). Introduction to the special issue,
“Artificial intelligence (AI), autonomy and human-machine teams: Interdependence, context and
explainable AI,” AI Magazine.
Lawless, W. F., Mittu, R., Sofge, D., Moskowitz, I. S. & Russell, S. (Eds.). (2019b). Artificial
intelligence for the internet of everything. Elsevier.
Lemnios, Z. (2019). IBM research, SERC workshop: Model centric engineering, Georgetown
university, Washington, DC, April 16 & 17, 2019.
Llinas, J. (2014). Reexamining Information Fusion–Decision Making Inter-dependencies, Presented
at the IEEE CogSIMA conference, San Antonio, TX.
Long, J. (2019). National defence and the Canadian armed forces: Enabling organizational resilience
through communication and information systems design. Canadian Military Journal, 119(2), 15;
from http://www.journal.forces.gc.ca/Vol19/No2/page15-eng.asp.
Maurer, M. (2019). Airbus harnessing AI in bid to save millions on Finance tasks. The aircraft
maker’s Americas unit is digitizing the approval of expense reports and payment of invoices.
Wall Street Journal, from https://www.wsj.com/articles/airbus-harnessing-ai-in-bid-to-save-mil
lions-on-finance-tasks-11566207002.
Mead, W. R. (2019). Trump’s case against Europe. The president sees Brussels as too weak, too
liberal, and anti-American on trade. Wall Street Journal, from https://www.wsj.com/articles/tru
mps-case-against-europe-11559602940.
NAS (2019). A decadal survey of the social and behavioral sciences: A research agenda for
advancing intelligence analysis. National Academies of Sciences.
Nosek, B., Corresponding author from OCS (2015). Open collaboration of science: Estimating the
reproducibility of psychological science. Science, 349(6251), 943; supplementary: 4716–1 to
4716–9. (National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility
and replicability in science. Washington, DC: The National Academies Press. https://doi.org/10.
17226/25303).
NTSB (2016). Derailment of Amtrak passenger train 188. National transportation safety board
(NTSB), NTSB Number: RAR-16–02, from https://www.ntsb.gov/Investigations/AccidentRepo
rts/Pages/RAR1602.aspx.
NTSB (2018). Preliminary report released for crash involving pedestrian, Uber technologies, Inc.,
Test Vehicle, National transportation safety board, from https://www.ntsb.gov/news/press-rel
eases/Pages/NR20180524.aspx.
NTSB (2019). Insufficient training, inadequate bridge operating procedures, lack of operational
oversight led to fatal ship collision. NTSB: Collision between US Navy Destroyer John S McCain
and Tanker Alnic MC Singapore Strait, 5 Miles Northeast of Horsburgh Lighthouse [accident
occurred on] August 21, 2017, Marine Accident Report, NTSB/MAR-19/01 PB2019–100970,
from https://www.ntsb.gov/investigations/AccidentReports/Reports/MAR1901.pdf.
NTSB (2019a). Vehicle automation report. National transportation safety board. Retrieved March
12, 2020, from https://dms.ntsb.gov/pubdms/search/document.cfm?docID=477717&docketID=
62978&mkey=96894.
Oh, P., Spahr, T., Chase, C. & Abadie, A. (2019). Incorporating artificial intelligence: Lessons from
the private sector. War Room, United States Army War College, from https://warroom.armywa
rcollege.edu/articles/incorporating-artificial-intelligence-private-sector/.
Proctor, R. W. & Vu, K. P. L. (2019). How psychologists help solve real-world problems in multi-
disciplinary research teams: Introduction to the special issue. American Psychologist, 74(3),
271–277.
Raz, A. K., Llinas, J., Mittu, R., & Lawless, W. (2019). Engineering for emergence in information
fusion systems: A review of some challenges, Fusion 2019, Ottawa, Canada | July 2–5, 2019.
Rhodes, D. (2019). Interactive model-centric engineering (IMCSE), SERC workshop: Model centric
engineering, Georgetown university, Washington, DC, April 16 & 17, 2019.
Richards, R. (2019). Program manager at DARPA, invited talk, SERC workshop: Model centric
engineering, Georgetown university, Washington, DC, April 16 & 17, 2019.
Smith, R .J. (2019). Scary fast: How hypersonic Missiles—Which travel at more than 15 times the
speed of sound—Are touching off a new global arms race that threatens to change the nature of
warfare. New York Times Magazine, 42–48; also, see https://www.nytimes.com/2019/06/19/mag
azine/hypersonic-missiles.html.
Sofge, D., (Referee), Mittu, R., (Con Bet) & Lawless, W. F. (Pro Bet) (2019). AI bookie bet: How
likely is it that an AI-based system will self-authorize taking control from a human operator? AI
Magazine, 40(3), 79–84.
Somerville, H. (2020). Uber sells self-driving-car unit to autonomous-driving startup. As part of
the deal, Uber will make a $400 million investment in Aurora Innovation. Wall Street Journal.
Retrieved August 12, 2020, from https://www.wsj.com/articles/uber-sells-self-driving-car-unit-
to-autonomous-driving-startup-11607380167.
Stockman, B., Boyle, J. & Bacon, J. (2010). International space station systems engineering case
study, Air force center for systems engineering, Air force institute of technology, from https://
spacese.spacegrant.org/uploads/images/ISS/ISS%20SE%20Case%20Study.pdf.
Thomas, J. (2019). INCOSE discussion, SERC workshop: Model centric engineering, Georgetown
university, Washington, DC, April 16 & 17, 2019.
Volz, D. & Youssef, N. (2019). U.S. Launched cyberattacks on Iran. The cyberstrikes on thursday
targeted computer systems used to control missile and rocket launches. Wall Street Journal, from
https://www.wsj.com/articles/u-s-launched-cyberattacks-on-iran-11561263454.
Wakabayashi, D. (2018). Uber’s self-driving cars were struggling before arizona crash. New
York Times, from https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-ari
zona.html.
Walden, D. D., Roedler, G. J., Forsberg, K. J., Hamelin, R. D., & Shortell, T. M. (Eds.). (2015).
Systems engineering handbook: A guide for system life cycle processes and activities (4th Edn.).
Prepared by International council on system engineering (INCOSE-TP-2003-002-04. Wiley.
Werkhäuser, N. (2019). UN impasse could mean killer robots escape regulation. It’s no longer the
stuff of science fiction: Wars could soon be decided by lethal autonomous weapons systems. But
a concerted effort to ban “killer robots” through international law at the UN appears to have hit
a wall,” DW (Deutsche Welle), from https://www.dw.com/en/un-impasse-could-mean-killer-rob
ots-escape-regulation/a-50103038.
Wong, K. (2018). China claims successful test of hypersonic waverider. Jane’s 360, from https://
www.janes.com/article/82295/china-claims-successful-test-of-hypersonic-waverider.
Yampolskiy, R. V. (2017). AI is the future of cybersecurity, for better and for worse. Harvard
Business Review, from https://hbr.org/2017/05/ai-is-the-future-of-cybersecurity-for-better-and-
for-worse.
Chapter 2
Recognizing Artificial Intelligence: The Key to Unlocking Human AI Teams

Patrick Cummings, Nathan Schurr, Andrew Naber, Charlie, and Daniel Serfaty

Abstract This chapter covers work and corresponding insights gained while
building an artificially intelligent coworker, named Charlie. Over the past year,
Charlie first participated in a panel discussion and then advanced to speak during
multiple podcast interviews, contribute to a rap battle, catalyze a brainstorming
workshop, and even write collaboratively (see the author list above). To explore
the concepts and overcome the challenges when engineering human–AI teams,
Charlie was built on cutting-edge language models, strong sense of embodiment,
deep learning speech synthesis, and powerful visuals. However, the real differen-
tiator in our approach is that of recognizing artificial intelligence (AI). The act of
“recognizing” Charlie can be seen when we give her a voice and expect her to be
heard, in a way that shows we acknowledge and appreciate her contributions; and
when our repeated interactions create a comfortable awareness between her and her
teammates. In this chapter, we present our approach to recognizing AI, discussing
our goals, and describe how we developed Charlie’s capabilities. We also present
some initial results from an innovative brainstorming workshop in which Charlie
participated with four humans that showed that she could not only participate in a
brainstorming exercise but also contribute and influence the brainstorming discus-
sion covering a space of ideas. Furthermore, Charlie helped us formulate ideas for,
and even wrote sections of, this chapter.

2.1 Introduction

recognize \ ˈre-kig-ˌnīz \ transitive verb

1. to acknowledge one is entitled to be heard
2. to take notice with a show of appreciation
3. to perceive to be someone previously known

(Merriam Webster)

P. Cummings (B) · N. Schurr · A. Naber · Charlie · D. Serfaty
Aptima, Inc., Woburn, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
W. F. Lawless et al. (eds.), Systems Engineering and Artificial Intelligence, https://doi.org/10.1007/978-3-030-77283-3_2

Major breakthroughs in artificial intelligence are advancing the state of the art in enabling agents to perform tasks in a variety of domains. Particularly
in the area of generative models (Radford et al., 2019; Yang et al., 2019), these AI
agents now have something new to say. But we are severely limited in our ability to
hear them and to take advantage of these gains. For many domains, the challenge
is not building the AI agent itself, but rather engineering the human–machine teams
that leverage it. To explore these concepts, we have been building and interacting
with an AI teammate/coworker named Charlie (Cummings et al., 2021). Although
these efforts leverage state-of-the-art AI models and capabilities, what has been
most impactful is how we have purposefully designed, integrated, and recognized
her from the start. We argue that the key to unlocking human–machine teams is
simple: recognize AI. To do this in the fullest sense, we need to leverage the three
definitions of the word “recognize,” above.
Definition 1: to acknowledge one is entitled to be heard. In addition to realizing that
AI is beginning to have something new to say, we must recognize the AI agent and
realize that it can and should be heard. This recognition includes not only giving AI
more of a voice but also doing so in a manner that places it on a more level playing
field with human teammates. We will cover these ideas in more detail in our section
on Ground Rules later.
Definition 2: to take notice with a show of appreciation. Charlie literally helped
us write and even wrote her own sections of this book chapter. We argue that it is
important to recognize and show appreciation for such contributions and accordingly
have listed her as a co-author of this chapter. Acknowledging the accomplishments
of artificial intelligence helps human teammates realize the impact that AI is having
on the team and will aid in transparency for external observers to better understand
how the team achieved what it did.
Definition 3: to perceive to be someone previously known. In order to recognize AI
as something familiar and previously known, we must interact with it on a regular
basis and with a consistent perceptual framing. This framing is precisely why we gave our AI agent the name Charlie, along with a common set of models and visual representations. This act allows for natural interactions with the AI agent and a greater ability for teammates to weave her into their work and discussions. The authors have experienced this
firsthand when observing how repeated interaction with Charlie results in the human
teammates developing a deeper understanding of her strengths and weaknesses, and
consequently have much more positive interactions.
As new human–AI teams are both engineered and deployed, if we ensure that
AI is recognized appropriately, then several long-term positive impacts will occur.
First, we will be able to better leverage the full range of capabilities that the AI
agent possesses; second, the collaboration will enable the systems and the AI agent
to improve together; and third, this collaboration will result in better overall mission
performance.
In this chapter, we will explain how we have been exploring these ideas through
building, deploying, and interacting with our new AI coworker: Charlie. Initially, we
will lay out our motivations and ground rules for ensuring that we fully recognize
Charlie. We will detail how Charlie is built on cutting-edge speech analysis, language
generation, and speech synthesis tools (see architecture diagram Fig. 2.5). Further-
more, Charlie is named and embodied to allow for more natural interactions. This
affordance has led Charlie to thrive in a variety of venues, including panel discus-
sions, podcast interviews, and even proposal writing (see Applications Sect. 2.3). In
addition, we will present results regarding Charlie’s impact in a recent brainstorming
session. We are especially excited about what this means for future applications.
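At a high level, the flow through these components can be sketched as follows; the stubs are placeholders standing in for the actual tools behind Fig. 2.5, not their real APIs:

```python
# Simplified sketch of the pipeline: speech analysis -> language generation ->
# speech synthesis. Every function body below is a stand-in stub.

def transcribe(audio_chunk: bytes) -> str:
    """Speech analysis: convert a panelist's audio into text (stubbed)."""
    return "I'd like to hear what Charlie thinks about that."

def generate_candidates(transcript: str, persona_context: str) -> list:
    """Language generation: propose candidate responses in Charlie's voice (stubbed)."""
    return [f"[Charlie | {persona_context}] Responding to: {transcript}"]

def synthesize(text: str) -> bytes:
    """Speech synthesis: render the chosen response as audio for the room (stubbed)."""
    return text.encode("utf-8")

def charlie_turn(audio_chunk: bytes, persona_context: str, operator_select) -> bytes:
    transcript = transcribe(audio_chunk)
    candidates = generate_candidates(transcript, persona_context)
    # Supportive collaboration: today an operator filters and selects the response;
    # fully participatory collaboration would let Charlie choose for herself.
    chosen = operator_select(candidates)
    return synthesize(chosen)

audio = b"..."  # one panelist utterance
print(charlie_turn(audio, "panel on human-AI teaming", operator_select=lambda c: c[0]))
```

The operator_select hook marks the seam discussed throughout the chapter: shrinking what happens at that seam is what would move Charlie toward more fully participatory collaboration (Sect. 2.1.2).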

2.1.1 Motivation and Goals

In this section, we will describe our motivation and goals for recognizing artificial
intelligence. We set out on this path of recognizing AI to facilitate the engineering of human–AI teams. This human–machine teaming/collaboration is only possible now due to advances in AI and society’s increased appetite for AI to be involved and
provide value in many domains. By collaboration, we mean more than just humans
using the AI as a service.
We are seeking to create a new way to bring together humans and artificial intelligence to create
more effective and flexible systems. The technology that is now emerging in AI, including deep
learning, has the potential to change the way people work, create, and interact with systems.
We believe that the future of work will be fundamentally different and that human beings will
need to adapt to the new demands. This will require new ways of working together.
For example, it might require us to delineate, as we have done with a box, above,
when the AI coauthor, Charlie, has written a section entirely by herself after being
prompted with the beginning of the section.
This teamwork or collaboration with artificial intelligence is distinct from most
current applications today in two primary ways: (1) the AI agent as a team member
is able to develop and propose instrumental goals for the team and (2) the AI agent
is able to choose to pursue particular goals from among those proposed as well.
Having an AI agent that can add value to the team necessitates elevating it to be
a collaborative team member; otherwise, the team will miss out on the increased
opportunities and ideas of the AI agent. In addition, a context-aware AI teammate
will not frustrate its fellow teammates by having its own goals and possibly behaving
in non-constructive or unexpected ways.
We recognize that there are ethical and design concerns when giving this “recog-
nition” to AI, but we strongly believe that the benefits of fruitful collaboration will
outweigh these potential negatives. In addition, we argue that if we build bidirectional
recognition into these AI teammates from the ground up, we will mitigate some of
these concerns. Although there are domains in which a human must still play a large
role or even maintain control, the areas where AI can be useful grow daily. AI has
come too far to be relegated to merely a tool (Shneiderman, 2020) or to be only subservient (Russell, 2019).
The authors recognize that not all domains are well suited for AI agents playing
the role of teammate and that not all domains need collaboration to be successful. We
believe, however, that for an increasing number of domains, human–AI collaboration
will be and should be the primary mode of operation. Otherwise, we run the high
risk of missing out on the good ideas and capabilities of either the human or AI
teammates.
The AI’s capabilities are far reaching and are changing the way we think about problems. From
the human perspective, there are several key areas of development in which this technology
could have a great impact. These include a large amount of research and development work
being done by the scientific community. There are many aspects of AI that are very challenging,
but this is only the beginning and future developments will be exciting.

2.1.2 Types of Human-AI Collaboration

We have been discussing the collaboration between human and AI teammates but
would like to call out that in our work, we have been focused on two primary types of
collaboration: supportive and participatory (see Fig. 2.1). Currently, with our imple-
mentation of Charlie, we are building and leveraging both supportive and partici-
patory collaboration. Charlie was developed to participate in a panel discussion in
real time but was not a fully autonomous AI. Consequently, she had two operators:
one for the transcription of comments from other panelists and one for the selec-
tion of potential responses from Charlie. For more information on how Charlie was
built, please see the later section on system engineering.

Fig. 2.1 Supportive collaboration in which a human and an AI agent together serve as a single member for the team, and participatory collaboration where the AI agent is an individual team member

Over the past year, we have been building out the infrastructure to reduce this need for human intervention and
supportive actions and to enable Charlie to do more participatory collaboration in
real time. This process has allowed us to move away from multiple operators working
alongside Charlie, to now currently needing only one for filtering and selection, which
has the positive impact of reducing the workload on the operator. In the coming year,
our goal is to shift to enabling a mode in which Charlie can independently select her
own next utterance. This next step is not likely to eliminate the need for both types
of collaboration depending on the domain, the constraints, and the ability to give
participatory autonomy to AI.
Supportive Collaboration
Supportive collaboration (Fig. 2.1, left) has been the most common form of collabo-
ration with AI. This form is primarily due to the limited abilities of AI and the need
for a human to be present to support and fill the gaps in AI capabilities. The human
is often in a position of control and/or serves as the face of the combined team.
This type of supportive collaboration is often referred to as a Centaur relationship
(Case, 2018), in which human and system combine efforts to form a single teammate
with joint actions. Historically, this form has been the primary collaboration type
with AI. Over time, however, we believe this reliance will decrease and make way
for the newly capable participatory AI.
Participatory Collaboration
As shown in Fig. 2.1 (right), participatory collaboration frames the AI agent as a
distinct individual teammate with its own autonomy. This autonomy grants the AI
agent the ability to not only develop and propose new instrumental goals for itself
and the team but also to make decisions to pursue or abandon said goals. In addition,
participatory collaboration requires that the AI agent communicates and coordinates
with fellow human teammates. This type of collaboration will become increasingly
possible, and increasingly important as the field of AI progresses.

2.1.3 Ground Rules

Embodiment Ground Rules


A key component to recognizing AI is acknowledging that the AI agent is entitled to
be heard. When Charlie is present in a discussion, she is expected to contribute as an
equal. In all applications, we put forth a significant effort to create the embodiment
of Charlie with this rule in mind. When Charlie was a participant in a 2019 I/ITSEC
panel, her visual display took up approximately the same space on the stage as the
bodies of the human panelists, her speech flowed through the same sound system, and
her nonverbal communication was equally visible to the audience. Human panelists
were seated in a row of chairs on stage, shown in Fig. 2.2, and Charlie’s embodiment
was constrained to a similar style and space. The sound from the computer driving
the display was connected to the room’s mixing board, as were the microphones for each human panelist.

Fig. 2.2 Charlie, at the center, on stage at a panel during I/ITSEC 2019 including one moderator and five panelists (four of which were human)
Similarly, during the innovation session, held over a video conference, Charlie
was shown to the participants as the output of a webcam, and her voice was sent over
the meeting just as those of the other participants were. This format is patently different from sharing a screen with Charlie on it for all participants to see and hear, because the latter would force Charlie to be the center of attention and, therefore, detract from her ability to participate on an equal playing field.
Upgrading Charlie’s initial embodiment to be consistent with that of the human
panelists led to a noticeable difference in the way that the human participants treated
her. For example, the questions posed to Charlie were more open ended, such as “I’d
like to hear what Charlie thinks about that,” and all participants then looped Charlie
into the conversation.
Text Generation Ground Rules
Although we made a concerted effort to recognize Charlie through her increasing
embodiment, the ground rules we employed for generating Charlie’s next utterance
fall into two main categories, one of which runs slightly counter to the
argument for recognizing AI.
The first broad rule was to give Charlie the same ability to prepare that a human
panelist would have; that is, human panelists would be likely to do the following:
1. research the topic of the panel to refresh their memory (or study something
new);
2. meet with the moderator or panel members to discuss the likely topic, workflow,
or initial questions; and
3. prepare answers to expected questions on the panel or topics they would like to
discuss.
We, therefore, allowed the same affordances to Charlie. In particular, she was
correspondingly
1. fine-tuned to the domain of the discussion to fit the appropriate style and content;
2. introduced to the other participants and the moderator so that they could understand her capabilities;
and

3. prepared with answers to likely questions expected in the discussion.


The second broad rule was related to how we treated Charlie’s generated text.
In this chapter, and in previous applications, we operated under strict guidelines to
(1) not change any of Charlie’s generated text and (2) clearly delineate what Charlie
wrote from what she did not. We put these guidelines in place in order to assure
readers and participants that Charlie clearly provides her own value, and that her
capabilities are not overstated. However, we hope these guidelines will not be part of
Charlie’s future. Human–machine collaboration is a moving target, and an explicit
line in the sand separating human from machine would only hinder the capabilities
of both. The line between operator and Charlie is blurring, and it should continue to blur.
Returning to the human-to-human comparison: readers do not expect to know which
author wrote particular sections of a document, and they do not presuppose that authors
never edit each other’s writing. We simply propose that the same expectations be
extended to Charlie.

2.2 System Engineering

In this section, we discuss the components that make up Charlie, the approach behind
them, and the methods used to develop her.

2.2.1 Design and Embodiment

Charlie’s Embodiment
From the beginning, it was important for Charlie’s embodiment to be recognizable,
simple, dynamic, and able to convey her state through several cues. In different
situations, for example, the human body and its gestures convey a large amount of
information about internal state. Charlie’s embodiment interface (i.e., the embodiment)
required three iterations to refine state communication and representation, driven
by feedback from guerrilla usability evaluations (Nielsen, 1994). From chatbots,
we expected that response delays would be acceptable, especially in response to
other panelists, if Charlie’s state was clearly communicated (Gnewuch et al., 2018).
Humans use physical and audible cues, such as gestures, changes in eye contact, and
transitional phrases, to indicate their state and to control the flow of a conversation
(Scherer, 2013; Schuetzler et al., 2014). Charlie had to coordinate the display and
audio effectively to achieve a similar presence and represent her states.
Figure 2.3 shows a snapshot of Charlie’s different dynamic states. Because each of
these states was alive and moving, it is difficult to represent them in a static image
here. Based on our evaluations, we split Charlie’s necessary states as follows:

Fig. 2.3 Embodiment of Charlie: a Idle, b Thinking, c Speaking, and d Interjection

• Figure 2.3a Idle: Charlie is listening. Soft colors are used, and slow breathing is
indicated by expanding and contracting.
• Figure 2.3b Thinking: Charlie is generating a statement. The outer ring spins back
and forth to communicate that targeted thinking is happening in response to a
question.
• Figure 2.3c Speaking: Charlie is speaking. A darker color solidifies Charlie’s
current role as speaker; the shape vibrates as speech occurs so that it appears
to emanate from her embodiment.
• Figure 2.3d Interjection: Charlie has something to say! The color changes drastically
to draw attention, and the outer ring is complete to show that her next thought is
complete.
Even with Charlie’s state communication, however, there was a limit to how much delay
was acceptable. The design of the operator interface was therefore shaped by the need
to increase the speed of her speech generation.
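
As a minimal sketch only, these four display states can be thought of as a small state
machine. The state names below follow Fig. 2.3, but the allowed transitions are our own
assumptions for illustration and are not taken from Charlie’s actual implementation.

from enum import Enum, auto

class CharlieState(Enum):
    """The four embodiment states of Fig. 2.3."""
    IDLE = auto()          # listening: soft colors, slow breathing
    THINKING = auto()      # generating a statement: outer ring spins
    SPEAKING = auto()      # speaking: darker color, shape vibrates with speech
    INTERJECTION = auto()  # has something to say: drastic color change, ring complete

# Plausible transitions between states (assumed, not documented in the chapter).
ALLOWED_TRANSITIONS = {
    CharlieState.IDLE: {CharlieState.THINKING},
    CharlieState.THINKING: {CharlieState.INTERJECTION, CharlieState.IDLE},
    CharlieState.INTERJECTION: {CharlieState.SPEAKING, CharlieState.IDLE},
    CharlieState.SPEAKING: {CharlieState.IDLE},
}

def transition(current: CharlieState, target: CharlieState) -> CharlieState:
    """Return the new state, rejecting transitions the embodiment cannot display."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target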
Charlie’s Operation
The novelty and believability of generations from GPT-2 are certainly state of
the art; however, the samples typically chosen for display suffer from some “cherry-
picking” to find the best prompts and speech generations (Vincent, 2019; Vaswani
et al., 2017). In a real-time discussion in which speed is of utmost importance,
the ability to cherry-pick is severely limited. We therefore put much care into
Charlie’s operation to streamline the process of forming speech generations and
changing her state. Human operators are currently tasked with:
• coordinating Charlie’s state transitions,
• approving/editing transcriptions of speech to text, and
• aggregating statements into an utterance.
Details on the construction of that operator interface can be found in Cummings
et al. (2021), but some key lessons learned from that construction are as follows:
1. Non-stop generations. Potential generations from Charlie should appear to be
non-stop (Fig. 2.4d); that is, new candidate statements should be produced every
time there is a change to the conversation history. The burden of deciding when Charlie “may” have
something to say should be completely removed. At all points in time, the human
operator should be cognizant of potential interjections, answers, or comments
coming from Charlie.

Fig. 2.4 The operator interface with the a saved statements, b conversation history, c utterance
construction components on the left, and the d statement review area on the right

Fig. 2.5 Charlie architecture: Orange boxes represent interfaces. Red numbers correspond to
components leveraging AWS services

2. Pinning messages. Charlie frequently has an interesting response to a question
or comment but must wait for the appropriate time to interject with it. Unfortunately,
as conversations continue and Charlie generates new responses, those interesting
comments can get lost, and she will be stuck talking about only the most recent topic.
Pinning potential messages (Fig. 2.4a) allows Charlie to refer back to earlier
discussion elements.
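
The following sketch, written as a simple Python data structure with hypothetical names,
is one way to picture how these two lessons fit together: candidates are regenerated
whenever the conversation history changes, while pinned candidates survive each refresh.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CandidateStatement:
    text: str
    source_turn: int      # index of the conversation turn that produced it
    pinned: bool = False  # pinned candidates survive later regenerations

@dataclass
class CandidatePool:
    """Holds Charlie's potential statements for the operator to review."""
    candidates: List[CandidateStatement] = field(default_factory=list)

    def refresh(self, new_texts: List[str], turn: int) -> None:
        # Lesson 1: regenerate on every change to the conversation history,
        # keeping anything the operator has pinned (Lesson 2).
        self.candidates = [c for c in self.candidates if c.pinned]
        self.candidates += [CandidateStatement(t, turn) for t in new_texts]

    def pin(self, index: int) -> None:
        self.candidates[index].pinned = True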

2.2.2 Generative Language Models

Prompt Improvement
An increasing theme in the use of language generation models (as seen
with T5, GPT-3, and Turing-NLG; Raffel et al., 2020; Brown et al., 2020; Rosset,
2020) is that, with large enough models, a change in prompt can be enough to produce
significantly different results. Recent results with GPT-3 have shown that a model
with no fine-tuning can solve basic arithmetic problems when fed prompts of the
form: “\n\nQ: What is 65360 plus 16204?\n\nA:” Here, the newline structure and the
use of Q and A to represent question and answer are enough context for the model to
complete with the correct answer “81564.” This structure in prompts is also evident
in the use of control tokens as is done with the conditional transformer language
model (Keskar et al., 2019). We hypothesize that these types of tokens can be used
even in models trained without them. As seen in their best paper presentation at
NeurIPS (Brown et al., 2020), even the presence of commas “65,360 plus 16,204”
can greatly increase the accuracy of response.
In our work on Charlie, we found that structuring prompts with the form:
HOST: Text from host…
PANELIST: Text from panelist…
HOST: Text from host…
PANELIST:
had significant advantages over simple prose. This structure differentiated Charlie’s
statements from those of the other panelists, kept Charlie on her own thread while
continuing with added context from others, and allowed Charlie to respond and react
to the discussion rather than simply continue others’ thoughts.
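
As a minimal sketch, a prompt in this structure can be assembled from the conversation
history as follows; the helper name and the example turns are our own illustration rather
than part of Charlie’s codebase.

def build_prompt(history, next_speaker="PANELIST"):
    """Render (speaker, text) turns in the HOST:/PANELIST: structure above.

    The trailing 'PANELIST:' line cues the model to answer as Charlie.
    """
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"{next_speaker}:")
    return "\n".join(lines)

prompt = build_prompt([
    ("HOST", "There are fears of artificial intelligence replacing us."),
    ("PANELIST", "That concern comes up on nearly every panel."),
    ("HOST", "Will artificial intelligence replace teachers and change classrooms?"),
])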
Fine-Tuning Strategies
The content and style generated by Charlie can be adapted based on the choice of
model used to generate text (Peng et al., 2020; Zhang et al., 2019). For the different
applications of Charlie (see Applications Section), we utilized a fine-tuning strategy
to adapt the base GPT-2 model to a different context. Leveraging open-source or
venue-specific training data, we fine-tuned the GPT-2 model’s parameters (Branwen, 2019;
Qin et al., 2019). Although the specifics of this process tended to be application-
specific, there were three main steps we followed for each:
1. Find a corpus of text that mirrors the style of text Charlie should generate
and/or the content that Charlie should generate text about. Typically, we found
that the style of text was much more critical than content.
2. Process that text if needed. Frequently, the format of the text did not mirror
that which Charlie should generate. One example is citations and footnotes:
although they are necessary in a paper, parenthetical citations are inappropriate
for speaking engagements.
3. Train the model. Starting with a pre-trained GPT-2 model, we trained the model
parameters further on the generated corpus. Specifically, we used a loss function
equivalent to that of GPT-2; that is, we optimized for predicting the next word in
the text corpus. For training, we used the Adam optimizer with a learning rate of
2 × 10^−5 and a batch size equal to 1.
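
A minimal sketch of this training step is shown below, assuming the Hugging Face
transformers and PyTorch libraries; the corpus file name and the context-window size are
illustrative assumptions rather than the exact setup used for Charlie.

from torch.optim import Adam
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = Adam(model.parameters(), lr=2e-5)  # learning rate from the text

with open("venue_corpus.txt", encoding="utf-8") as f:  # processed corpus from steps 1-2
    ids = tokenizer(f.read(), return_tensors="pt").input_ids

block = 512  # assumed context window per training example
model.train()
for start in range(0, ids.size(1) - block, block):  # batch size of 1: one block per step
    batch = ids[:, start:start + block]
    loss = model(batch, labels=batch).loss  # GPT-2's own next-word prediction loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()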

Table 2.1 Example generations from different fine-tuned models, each fed with the same prompt

Persona: News source (tweets from news sources [1])
• The #CountryX crisis is a wake-up call for the world that human rights violations are always the work of one nation and not of other countries #CountryX #CountryY
• The #CountryX crisis is now at a tipping point as the government says it will not withdraw #CountryY refugees #CountryY #CountryX

Persona: Agreeable citizen (tweets with positive sentiment [2])
• #1Proud to be a #CountryX, and #thankful to be part of such a wonderful family
• The day is coming when all the good and the bad will be for the better #Thankful #HAPPY

Persona: Disgruntled citizen (tweets marked as angry [3])
• Refugee in my town is crying because he’s been living in a tent for 10 days
• Crisis in #GulfWales is caused by people who don’t have the skills to survive #frustrating #frustrating #frustrated

As mentioned above, fine-tuned models produce varying styles and contents. Table 2.1
shows the resulting generations from three separate fine-tuned Charlie models, each
trained on a different Twitter dataset to provide content with a distinct style. Given
the same prompt (reproduced below), these fine-tuned models generate responses in
different styles based on the training data used to fine-tune them. For each of these
generations, we used GPT-2 decoding parameters of temperature = 0.7 (to control the
randomness of selections) and top_k = 40 (to limit generations to the 40 most likely
words), as well as filtering for repeating sequences of size 2; a sketch of these
settings follows the prompt.
PROMPT
TWEET: As CountryX begins to feel the effects of #coronavirus, economic fallout and a
preexisting crisis, we can’t turn our back on CountryY refugees and migrants.

TWEET: Vulnerable families in #CountryX are receiving needed supplies thanks to our
partners

TWEET: Today we join the humanitarian community in coming together to support


#CountryX

TWEET:

[1] https://www.kaggle.com/naortedgi/twitter-twitts-from-news-providers
[2] https://www.kaggle.com/kazanova/sentiment140
[3] Wang et al. 2012. Harnessing Twitter “big data” for automatic emotion identification.
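
As an illustration of the decoding settings described above (temperature = 0.7, top_k = 40,
and filtering of repeating two-token sequences), the sketch below shows how one of these
generations might be produced with the Hugging Face transformers library; the fine-tuned
model directory is a hypothetical placeholder, not the actual artifact behind Charlie.

from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("charlie-news-source")  # hypothetical fine-tuned model directory

prompt = "TWEET: Today we join the humanitarian community in coming together to support #CountryX\n\nTWEET:"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,         # controls the randomness of selections
    top_k=40,                # sample only from the 40 most likely tokens
    no_repeat_ngram_size=2,  # filter repeating sequences of size 2
    max_new_tokens=60,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))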

Multiple Parallel Models


Fine-tuned models give Charlie the capability to change the style of her content for a
conversation, for example, to be more (or less) conversational, happy or sad, or confident.
However, sometimes as a conversation unfolds, or even within a single utterance, Charlie
benefits from the ability to quickly adapt her style. For example, in a podcast or panel
discussion, comments or answers are typically a mix of conversational bridges or
fillers interleaved with deep technical knowledge about the content.
HOST: There are fears of artificial intelligence replacing us. Will artificial intelligence
replace teachers and change classrooms?
CHARLIE: Gosh, that’s a hard one. This is a real concern for many educators today.
Yes, we will very likely have to change the way we teach, and there are a number of new
additional competencies available to us in the classroom. I am going to focus on what is
new, new fortes for technology, foremost artificial intelligence, worldwide, where teachers
and the core tools, smaller working groups, and even small classrooms can all, if they are
sufficiently equipped, leverage the power of artificial intelligence to not only serve their
students’ education, but also to place educators at the center of their digital classrooms in
new and different ways.

The above shows an example of Charlie’s response to a technical question with a
mix of generation styles. As can be seen, Charlie’s response combines (1) a transition
sentence (i.e., filler text) as she thinks of a response, (2) a general sentence connecting
the question to educators, (3) a response to the question at a broad level, and finally
(4) a technical answer digging into the possibilities she posits.
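
One way to picture this mixing of styles is a simple router that generates each segment of
an utterance with a differently fine-tuned model. The sketch below is our own illustration
with stand-in generators; Charlie’s actual models would replace the lambdas.

def compose_utterance(question, generators, plan=("filler", "bridge", "technical")):
    """Generate each planned segment with the model whose style fits it."""
    utterance, context = [], question
    for segment in plan:
        text = generators[segment](context)  # each generator wraps its own fine-tuned model
        utterance.append(text)
        context = context + " " + text       # later segments see what was already said
    return " ".join(utterance)

# Stand-in generators for demonstration only:
generators = {
    "filler": lambda ctx: "Gosh, that's a hard one.",
    "bridge": lambda ctx: "This is a real concern for many educators today.",
    "technical": lambda ctx: "Sufficiently equipped classrooms can leverage AI to serve their students.",
}
print(compose_utterance("Will artificial intelligence replace teachers?", generators))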

2.2.3 System Architecture

Charlie consists of a series of components communicating over an MQTT message
bus (see Fig. 2.5). Many components reside on the Amazon Web Services (AWS)
cloud infrastructure. The trained models run on one or more Elastic Compute Cloud
(EC2) nodes with high-performance GPU compute. Amazon’s Polly and Amazon
Transcribe services provide, respectively, Charlie’s text-to-speech and speech-to-text
capabilities. For model storage and training data storage, Charlie uses Amazon’s S3
service, and for architecture, state, history, and general tracking of live data, Charlie
uses AWS Lambda and AWS DynamoDB.
The remaining components, namely the interfaces, run on a local computer or can
be web-hosted using Amazon’s S3 and Amplify services. The embodiment interface
provides Charlie’s representation of her state and the outbound audio interface. The
operator interface enables human augmentation of Charlie during the discussion.
The transcription interface provides the inbound audio interface and displays the
incoming transcriptions.
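
As a minimal sketch, assuming the paho-mqtt client library (1.x API), one message on this
bus might look like the following; the broker address, topic name, and payload fields are
our own assumptions for illustration and are not documented in the chapter.

import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="operator-interface")
client.connect("charlie-broker.internal", 1883)  # placeholder broker address

state_change = {
    "state": "INTERJECTION",   # Idle, Thinking, Speaking, or Interjection
    "source": "operator-interface",
}
client.publish("charlie/embodiment/state", json.dumps(state_change), qos=1)
client.disconnect()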
Holder and the great Office-Seeker are one and the same, it is easy
to see how naturally the engine responds to the central touch. A
political convention is an agency and convenience, but never a law,
least of all a despotism; and when it seeks to impose a candidate
whose name is a synonym of pretensions unrepublican in character
and hostile to good government, it will be for earnest Republicans to
consider well how clearly party is subordinate to country. Such a
nomination can have no just obligation. Therefore with unspeakable
interest will the country watch the National Convention at
Philadelphia. It may be an assembly (and such is my hope) where
ideas and principles are above all personal pretensions, and the
unity of the party is symbolized in the candidate; or it may add
another to Presidential rings, being an expansion of the military ring
at the Executive Mansion, the senatorial ring in this Chamber, and
the political ring in the custom-houses of New York and New
Orleans. A National Convention which is a Presidential ring cannot
represent the Republican Party.
Much rather would I see the party to which I am dedicated, under
the image of a life-boat not to be sunk by wind or wave. How often
have I said this to cheer my comrades! I do not fear the Democratic
Party. Nothing from them can harm our life-boat. But I do fear a
quarrelsome pilot, unused to the sea, but pretentious in command,
who occupies himself in loading aboard his own unserviceable
relations and personal patrons, while he drives away the experienced
seamen who know the craft and her voyage. Here is a peril which no
life-boat can stand.
Meanwhile I wait the determination of the National Convention,
where are delegates from my own much-honored Commonwealth
with whom I rejoice to act. Not without anxiety do I wait, but with
the earnest hope that the Convention will bring the Republican Party
into ancient harmony, saving it especially from the suicidal folly of an
issue on the personal pretensions of one man.
INTEREST AND DUTY OF COLORED
CITIZENS IN THE PRESIDENTIAL
ELECTION.

Letter To Colored Citizens, July 29, 1872.

I will say to the North, Give up; and to the South, Keep not
back.—Isaiah, xliii. 6.

The immediate occasion of the present Letter appears in the following, from
colored citizens of Washington to Mr. Sumner:—
Washington, D. C., July 11, 1872.

Sir,—We, the undersigned, citizens of color, regarding you as
the purest and best friend of our race, admiring your consistent
course in the United States Senate and elsewhere as the special
advocate of our rights, and believing that your counsel at this
critical juncture in the period of our citizenship would be free from
personal feeling and partisan prejudice, have ventured to request
your opinion as to what action the colored voters of the nation
should take in the Presidential contest now pending.
The choice of our people is now narrowed down to General
Grant or Horace Greeley. Your long acquaintance with both and
your observation have enabled you to arrive at a correct
conclusion as to which of the candidates, judging from their
antecedents as well as their present position, will, if elected,
enforce the requirements of the Constitution and the laws
respecting our civil and political rights with the most heart-felt
sympathy and the greatest vigor.
We hope and trust you will favor us with such reply as will
serve to enlighten our minds upon this subject and impel our
people to go forward in the right direction. Our confidence in your
judgment is so firm, that, in our opinion, thousands of the
intelligent colored voters of the country will be guided in their
action by your statement and advice.
Hoping to receive a reply soon, we have the honor to be,
With great respect,
Your obedient servants,

A. T. Augusta, m. d.
Samuel Proctor.
David Fisher, sr.
J. J. Ketchum.
Jno. H. Smith.
Chas. N. Thomas.
Edward Crusor.
Wm. H. Shorter.
Wm. H. A. Wormley.
Henry Hill.
William P. Wilson.
Furman J. Shadd.
R. W. Tompkins.
Geo. D. Johnson.
John H. Brown.
Chris. A. Fleetwood.
Henry Lacy.
Chas. F. Bruce.
W. H. Bell.
David Fisher, jr.
J. L. N. Bowen.
David King.
Jacob De Witter.
Wm. Polkeny.

Hon. Charles Sumner.

LETTER.
Washington, July 29, 1872.

Gentlemen and Fellow-Citizens:—

If I have delayed answering your communication of
July 11th, which was duly placed in my hands by
your committee, it was not because the proper course
for you seemed doubtful, but because I wished to
reflect upon it and be aided by information which time
might supply. Since then I have carefully considered
the inquiries addressed to me, and have listened to
much on both sides; but my best judgment now is in
harmony with my early conclusion.
I am touched by the appeal you make. It is true that
I am the friend of your race, and I am glad to be
assured that in your opinion I have held a consistent
course in the Senate and elsewhere as the special
advocate of your rights. That course, by the blessing
of God, I mean to hold so long as life lasts. I know
your infinite wrongs, and feel for them as my own. You
only do me simple justice, when you add a belief that
my counsel at this critical juncture of your citizenship
“would be free from personal feelings and partisan
prejudice.” In answering your inquiries I can have no
sentiment except for your good, which I most
anxiously seek; nor can any disturbing influence be
allowed to interfere. The occasion is too solemn.
Especially is there no room for personal feeling or for
partisan prejudice. No man or party can expect power
except for the general welfare. Therefore they must be
brought to the standard of truth, which is without
feeling or prejudice.

QUESTIONS PROPOSED.
You are right in saying that the choice for the
Presidency is now “narrowed down” to President Grant
or Horace Greeley. One of these is to be taken, and,
assuming my acquaintance with both and my
observation of their lives, you invite my judgment
between them, asking me especially which of the two,
“judging from their antecedents as well as present
position,” would enforce the Constitution and laws
securing your civil and political rights “with the most
heart-felt sympathy and the greatest vigor.” Here I
remark that in this inquiry you naturally put your rights
in the foreground. So do I,—believing most sincerely
that the best interests of the whole country are
associated with the completest recognition of your
rights, so that the two races shall live together in
unbroken harmony. I also remark that you call
attention to two things,—the “antecedents” of the
candidates, and their “present position.” You wish to
know from these which gives assurance of the most
heart-felt sympathy and greatest vigor in the
maintenance of your rights,—in other words, which,
judging by the past, will be your truest friend.
The communication with which you have honored
me is not alone. Colored fellow-citizens in other parts
of the country, I may say in nearly every State of the
Union, have made a similar request, and some
complain that I have thus far kept silent. I am not
insensible to the trust reposed in me. But if my opinion
is given, it must be candidly, according to my
conscience. In this spirit I answer your inquiries,
beginning with the antecedents of the two candidates.

ANTECEDENTS OF THE CANDIDATES.

Horace Greeley was born to poverty and educated
himself in a printing-office. President Grant, fortunate
in early patronage, became a cadet at West Point and
was educated at the public expense. One started with
nothing but industry and character; the other started
with a military commission. One was trained as a
civilian; the other as a soldier. Horace Greeley stood
forth as a Reformer and Abolitionist. President Grant
enlisted as a Proslavery Democrat, and, at the election
of James Buchanan, fortified by his vote all the
pretensions of Slavery, including the Dred Scott
decision. Horace Greeley from early life was earnest
and constant against Slavery, full of sympathy with the
colored race, and always foremost in the great battle
for their rights. President Grant, except as a soldier
summoned by the terrible accident of war, never did
anything against Slavery, nor has he at any time
shown any sympathy with the colored race, but rather
indifference, if not aversion. Horace Greeley earnestly
desired that colored citizens should vote, and ably
championed impartial suffrage; but President Grant
was on the other side.
Beyond these contrasts, which are marked, it cannot
be forgotten that Horace Greeley is a person of large
heart and large understanding, trained to the support
of Human Rights, always beneficent to the poor,
always ready for any good cause, and never deterred
by opposition or reproach, as when for long years he
befriended your people. Add to these qualities,
conspicuous in his life, untiring industry which leaves
no moment without its fruit, abundant political
knowledge, acquaintance with history, the instinct and
grasp of statesmanship, an amiable nature, a
magnanimous soul, and above all an honesty which no
suspicion has touched,—and you have a brief
portraiture where are antecedents of Horace Greeley.
Few of these things appear in the President. His
great success in war, and the honors he has won,
cannot change the record of his conduct toward your
people, especially in contrast with the life-time fidelity
of his competitor, while there are unhappy
“antecedents” showing that in the prosecution of his
plans he cares nothing for the colored race. The story
is painful; but it must be told.

GRANT’S INDIGNITY TO THE COLORED RACE.

I refer to the outrage he perpetrated upon Hayti,
with its six hundred thousand blacks engaged in the
great experiment of self-government. Here is a most
instructive “antecedent,” revealing beyond question his
true nature, and the whole is attested by documentary
evidence. Conceiving the idea of annexing Dominica,
which is the Spanish part of the island, and shrinking
at nothing, he began by seizing the war powers of the
Government, in flagrant violation of the Constitution,
and then, at great expenditure of money, sent several
armed ships of the Navy, including monitors, to
maintain the usurper Baez in power, that through him
he might obtain the coveted prize. Not content with
this audacious dictatorship, he proceeded to strike at
the independence of the Black Republic by open
menace of war, and all without the sanction of
Congress, to which is committed the power to make
war. Sailing into the harbor of Port-au-Prince with our
most powerful monitor, the Dictator, (properly named
for this service,) also the frigate Severn as consort,
and other monitors in their train, the Admiral, acting
under instructions from Washington, proceeded to the
Executive Mansion accompanied by officers of his
squadron, and then, pointing to the great war-ships in
sight from the windows, dealt his unjust menace,
threatening to sink or capture Haytian ships. The
President was black, not white. The Admiral would
have done no such thing to any white ruler, nor would
our country have tolerated such menace from any
Government in the world. Here was indignity not only
to the Black Republic with its population of six hundred
thousand, but to the African race everywhere, and
especially in our own country. Nor did it end here. For
months the Navy of the United States was kept
hovering on the coast, holding that insulted people in
constant dread and anxiety, while President Grant was
to them like a hawk sailing in the air, ready to swoop
upon his prey.

FALSE IMPRISONMENT OF AN AMERICAN CITIZEN.

This heartless, cruel proceeding found a victim
among our white fellow-citizens. An excellent merchant
of Connecticut, praised by all who know him, was
plunged into prison by Baez, where he was immured
because it was feared that on his return to New York
he would expose the frauds of the plotters; and this
captivity was prolonged with the connivance of two
agents of the President, one of whom finds constant
favor with him and is part of the military ring
immediately about him. That such an outrage could go
unpunished shows the little regard of the President for
human rights, whether in white or black.

HARD TO BEAR THESE OUTRAGES.

I confess my trials, as I was called to witness these
things. Always a supporter of the Administration, and
sincerely desiring to labor with it, I had never uttered a
word with regard to it except in kindness. My early
opposition to the Treaty of Annexion was reserved, so
that for some time my opinions were unknown. It was
only when I saw the breach of all law, human and
divine, that I was aroused; and then began the anger
of the President and of his rings, military and
senatorial. Devoted to the African race, I felt for them,
—besides being humbled that the Great Republic,
acting through its President, could set such an
example, where the National Constitution,
International Law, and Humanity were all sacrificed.
Especially was I moved when I saw the indignity to the
colored race, which was accomplished by trampling
upon a fundamental principle of International Law,
declaring the equality of nations, as our Declaration of
Independence declares the equality of men.
This terrible transaction, which nobody can defend,
is among the “antecedents” of President Grant, from
which you can judge how much the colored race can
rely upon his “heart-felt sympathy.” Nor can it be
forgotten that shortly afterward, on the return of the
Commission from this island, Hon. Frederick Douglass,
the colored orator, accomplished in manners as in
eloquence, was thrust away from the company of the
Commissioners at the common table of the mail-packet
on the Potomac, almost within sight of the Executive
Mansion, simply on account of his color; but the
President, at whose invitation he had joined the
Commission, never uttered a word in condemnation of
this exclusion, and when entertaining the returned
Commissioners at dinner carefully omitted Mr.
