Airmen’s Definition of Autonomy

Editor’s Note: This is the second installment in a series addressing the future of autonomous aerial systems training and acquisition. It sets a clear framework for the conversation about the Air Force’s future in autonomous air vehicles.

By Nicholas J. Helms

Last week, we framed systems training as a complement to the current way the Air Force designs and tests air vehicles. Systems training capitalizes on a long-term, developmental approach to machine autonomy similar to our approach with human wingmen. It demands agile roles for lab researchers, testers, program managers, and operators so that the Air Force can manage its own data (and destiny) regarding autonomous air vehicle combat. In this way, systems training affords improved synergy between Airmen and defense industry contractors, where each entity can trust that the other is maximizing its own organizational strengths. On more than one level, this vision depends on trust.

Trust in autonomous air vehicles depends on how you define autonomy. Formal definitions of the word autonomous describe the capability of “existing independently” and “responding, reacting, or developing independently of the whole.” Without a doubt, humans are autonomous. Defining machine autonomy, on the other hand, is more nuanced. Definitions commonly describe autonomous machines as being capable of making decisions and taking action without human intervention. As a result, the definition of autonomy is often argued in terms of environmental complexity, task, and reliability. However, Airmen should depart from this common understanding and widen their perspective. Airmen should define autonomy as a gradient of human-machine relationships, where increases in machine participation give the human-machine entity a greater capacity to make decisions and take action. This collective human-machine concept of autonomy is depicted in Figure 1 later in this section. By including the human and machine as a single team entity, the word “autonomous” now combines the independent contributions of both team participants and highlights the potential to increase overall capacity for action.

Decision-making capability without human input characterizes the current, commonly assumed definition of machine autonomy. Paul Scharre and Michael Horowitz, authors for the Center for a New American Security and contributors to the Defense Science Board study on autonomy, defined machine autonomy as “the ability to perform a task without human intervention.” Similarly, the Defense Science Board said that “autonomy results from delegation of a decision to an authorized entity to take action within specific boundaries.” The consensus is that when a machine makes decisions and takes actions independent of human intervention, it is called autonomy. That is where consensus ends.

Beyond machine-made decisions and actions, some people reserve the autonomous label for those machines that meet a higher standard of complexity. For some, autonomous systems cannot, by definition, be deterministic. Deterministic outputs are repeatable and exact. Humans are non-deterministic; even identical twins will not behave as exact replicas of each other. For those who see machine autonomy in the context of human behavior, a complex but solely rule-based and deterministic machine is automated, but not autonomous. Danette Allen, a machine autonomy expert at NASA Langley, says that autonomous systems are defined by non-deterministic, emergent behavior. This distinction carries implications for how to develop such systems. In particular, it implies a requirement to allow room for uncertainty when testing autonomous machines.
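To make the deterministic versus non-deterministic distinction concrete, the Python sketch below contrasts a rule-based controller with one that samples a stochastic threshold. The scenario, function names, and numbers are invented for illustration; the point is simply that the second controller can only be tested statistically, with room for uncertainty, rather than checked against exact expected outputs.

```python
import random

def deterministic_controller(altitude_ft: float, floor_ft: float = 500.0) -> str:
    """Rule-based: identical inputs always yield identical outputs."""
    return "PULL_UP" if altitude_ft < floor_ft else "CONTINUE"

def nondeterministic_controller(altitude_ft: float, floor_ft: float = 500.0) -> str:
    """Samples a stochastic margin (a stand-in for learned or emergent
    behavior), so repeated runs near the boundary can diverge."""
    margin_ft = altitude_ft - floor_ft
    if margin_ft < random.gauss(mu=100.0, sigma=25.0):
        return "PULL_UP"
    return "CONTINUE"

# The deterministic controller is exactly repeatable and testable as such...
assert all(deterministic_controller(450.0) == "PULL_UP" for _ in range(100))

# ...while the non-deterministic one must be characterized over many trials.
trials = [nondeterministic_controller(575.0) for _ in range(10_000)]
print("PULL_UP rate near the boundary:", trials.count("PULL_UP") / len(trials))
```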

For some, the combination of task complexity and environmental complexity helps define machine autonomy. Phillip Durst and Wendell Gray authored a report for the US Army Test and Evaluation Command on autonomous system performance. They included the human-machine relationship, task, and environment as three dimensions that characterize machine autonomy. Scharre and Horowitz agree with these three dimensions of autonomy, and even suggest that machine complexity plays a role in the definition, but they take issue with parsing machine autonomy into subsets labeled automatic, automated, autonomous, or intelligent. To them, “it is meaningless to refer to a machine as ‘autonomous’ or ‘semi-autonomous’ without specifying the task or function being automated.” This is an important insight. Scharre and Horowitz put extra emphasis on the tasks that machine autonomy fulfills, which leads them to describe the USAF F-16 Automatic Ground Collision Avoidance System (Auto-GCAS) as “autonomous” where others would be quick to point out that Auto-GCAS is deterministic. Whether a machine is deterministic or complex, the tasks it fulfills will ultimately delineate how the human uses it.
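As a toy illustration of how a deterministic machine can still act autonomously within a narrow task boundary, consider the sketch below. This is not the actual Auto-GCAS algorithm, which fuses terrain data, aircraft state, and a modeled recovery maneuver; the inputs and threshold here are invented. It shows only how a fully repeatable rule can still decide and act, commanding a fly-up, without human intervention.

```python
def should_command_flyup(altitude_agl_ft: float,
                         descent_rate_fps: float,
                         recovery_margin_s: float = 5.0) -> bool:
    """Toy ground-collision check: same inputs, same decision, every time.

    Deterministic, yet within its task boundary it decides and acts
    (commands a recovery maneuver) with no human in the loop.
    """
    if descent_rate_fps <= 0:  # level or climbing: no collision threat
        return False
    time_to_impact_s = altitude_agl_ft / descent_rate_fps
    return time_to_impact_s < recovery_margin_s

# 800 ft above ground, descending at 200 ft/s -> impact in 4 s -> recover.
print(should_command_flyup(altitude_agl_ft=800.0, descent_rate_fps=200.0))   # True
# Same descent rate at 2,000 ft -> impact in 10 s -> no action needed yet.
print(should_command_flyup(altitude_agl_ft=2000.0, descent_rate_fps=200.0))  # False
```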

There is a pattern that describes how humans use machine capabilities to accomplish tasks. First, a novel machine capability is adopted to complete a task. Any flaws in the machine are identified as more people gain experience using it. Humans then develop compensations for those flaws and move on. For Airmen, this pattern describes the adoption of radar, inertial navigation and global positioning systems, fly-by-wire flight controls, and unmanned aerial vehicle tele-operated flight controls. For all of these airpower capabilities, decisions and actions were delegated to machines, and the consequences of imperfect machines led to adjustments in tactical human behavior. While each of these examples demanded adaptation by the human component of the team, the human-machine team ultimately gained a better capacity to sense, navigate, or control. Early technology adopters aside, humans resist habit change. As time passes, however, the human-machine team coevolves toward greater capacity for action, and humans move on toward new technological possibilities.

Over time, novel machine capabilities no longer provoke fresh enthusiasm; rather, the machines become accepted into doctrine and assumed as entitlements of the human-machine relationship. Dr. Raja Parasuraman and Dr. Victor Riley described this phenomenon in a seminal paper on humans and automation: “What is considered automation will therefore change with time. When the reallocation of a function from human to machine is complete and permanent, then the function will tend to be seen simply as a machine operation, not as automation.” Parasuraman and Riley used the term “automation” in 1997, but their point reaches twenty years into the future: today’s fully autonomous system is tomorrow’s semi-autonomous system, and tomorrow’s automation is the next day’s simple machine. With time, humans have a way of accommodating machine capabilities with decreasing skepticism, to the point of interdependency.

Thus far, machine autonomy has been defined by decisions and actions independent of human intervention. Non-deterministic machines point toward a testing approach that allows room for uncertainty, but excluding deterministic machines from the definition of autonomy emphasizes the wrong point. Machine autonomy begins and ends with tasks that humans need help with. Humans will therefore compensate for imperfect machine autonomy to support the decisions and actions that fulfill any given task. Over time, the human-machine team becomes interdependent, and enthusiasm for novel machine capabilities turns into mere expectation of machine operation. Thus, the Airman’s definition of autonomy should focus on the dynamic exchange of decisions and actions between a human and a machine.


Figure 1: Airmen’s autonomy defined by the capacity to act as a human-machine team. Machine autonomy is characterized by delegated decisions and actions that occur without human intervention. This illustration allows machine capabilities like Auto-GCAS, electromagnetic spectrum sensor displays, fly-by-wire, and tele-operated flight controls to exist in the context of autonomy that increases human-machine team action.

For Airmen, autonomy is defined by a gradient of human-machine relationships, where increased machine participation gives the human-machine entity a greater capacity to make decisions and take action. Figure 1 illustrates this definition. The left side of the figure could represent aircrew in a simple, mechanically controlled aircraft. Here, the machine is deterministic, and the human pilot represents the autonomous capability. Together, the human-machine team has a generic capacity for action. Moving toward the right, additional machine capability (automated, autonomous, or otherwise) adds to the combined entity’s capacity to act. Labels such as automated, semi-autonomous, or autonomous should matter less to an Airman than the level of capacity for action, as the sketch below illustrates. Under this definition, Airmen should embrace interdependency and the evolution of autonomous systems over time. Instead of focusing on a variety of competing labels, they should focus on the more important matter of developing trust in these evolving systems.
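A minimal Python sketch of this gradient follows. The class, task names, and capacity weights are all invented for illustration; this is not a validated measure, only a way to see the definition’s arithmetic: the team’s capacity for action is the human’s contribution plus whatever decisions and actions are delegated to the machine.

```python
from dataclasses import dataclass, field

@dataclass
class HumanMachineTeam:
    """Illustrative model of Figure 1's gradient: capacity for action
    grows as decisions and actions are delegated to the machine."""
    human_capacity: float = 1.0
    delegated_tasks: dict = field(default_factory=dict)

    def delegate(self, task: str, added_capacity: float) -> None:
        """Hand a decision/action to the machine, within task boundaries."""
        self.delegated_tasks[task] = added_capacity

    @property
    def capacity_for_action(self) -> float:
        return self.human_capacity + sum(self.delegated_tasks.values())

# Left side of Figure 1: a simple, mechanically controlled aircraft,
# where the pilot supplies essentially all of the team's autonomy.
team = HumanMachineTeam()
print(team.capacity_for_action)  # 1.0

# Moving right: each delegated capability (automated, autonomous, or
# otherwise) raises the combined entity's capacity to act. Weights are
# notional.
team.delegate("fly-by-wire stability", 0.3)
team.delegate("sensor fusion display", 0.4)
team.delegate("Auto-GCAS recovery", 0.5)
print(team.capacity_for_action)  # 2.2
```

Under this view, whether a delegated capability is labeled “automated” or “autonomous,” it shows up the same way: as added capacity for the team.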

Major Nicholas J. Helms is a graduate of the USAF Academy with a Bachelor of Science in Human Factors Engineering. He is a distinguished graduate of the USAF Test Pilot School with over 2,000 hours piloting multiple aircraft, including the F-16, MQ-9, T-38C, and C-12J. He has flown missions in support of Operations Noble Eagle, Iraqi Freedom, and Enduring Freedom.

Disclaimer: The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.
