How the Air Force Should Test Autonomous Vehicles, Part 2

Editor’s Note: This is part 2 of the final installment in a series addressing the future of autonomous aerial systems training and acquisition. The author presents a way forward for the development and testing of autonomous systems. Other parts of the series can be found here.

By Nicholas J. Helms

Implementation

Systems Training envisions operators, program managers, testers, and researchers working closely together to teach autonomous air vehicles operator techniques and tactics. Systems Engineering methods would verify flight sciences and safety algorithms; thereafter, Systems Training would develop autonomous behavior. While a Systems Training approach is well suited to testing complex autonomous air vehicles, the approach itself does not need to be complex. Systems Training could deliver benefits in the short term with simple behavioral rules. Safety rules would characterize the boundaries that restrain autonomous behavior. Coordination rules would describe how autonomous air vehicles behave with respect to manned teammates and other autonomous vehicles. Timing rules would improve overall system effectiveness. Emergent machine behavior should be given the space to play out, revealing either added utility or futility. Comprehensive feedback between the human instructor and autonomous machine will be important as the human reinforces and discourages machine actions to elicit desired behaviors. Comprehensive feedback will also foster trust between the instructor and machine.

Complex behaviors can arise from simple rules. Examples from nature include flocking birds, swarming locusts, and schooling fish. A man-made example is the Internet, where complex behavior is guided by the fundamental options to add, subtract, or manipulate information. Simple rules are consequences of human mental schemata and associated heuristics. Whether in Mother Nature or on the Internet, complexity sometimes arises out of simplicity. As John Holland said, “behavior of the whole is much more complex than the behavior of the parts.” Similarly, autonomous air vehicles can mimic the simple rules of an Air Force wingman and provide value to multiple missions. The proposed simple rules for autonomous air vehicle flight test, explained in detail below, are:

  1. Do not hurt people or break the wrong things.
  2. Train coordination rules.
  3. Train timing rules.

These rules mirror the development of human pilots from wingmen and co-pilots to flight leads and mission commanders, and they can serve as a method for developing autonomous air vehicles. As a result, these rules will contribute to emergent patterns that can aid combat effectiveness, but emergent is not synonymous with random. Therefore, emergent machine behavior should be given the opportunity to progress to a logical conclusion within the safe boundaries defined by rule number one.
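Before examining each rule, the premise that complex behavior can arise from simple rules is easy to demonstrate in code. The sketch below, in the spirit of classic flocking models, gives each simulated agent only three local rules (cohesion, alignment, and separation); the two-dimensional world, neighbor radius, and gains are arbitrary illustrative values, not parameters of any Air Force system.

```python
import random

# Minimal 2-D flocking sketch: three simple per-agent rules (cohesion,
# alignment, separation) yield group behavior that no single rule
# describes. All gains below are illustrative toy values.

NUM_AGENTS, STEPS, NEIGHBOR_RADIUS = 20, 100, 10.0

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 50), random.uniform(0, 50)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(agents):
    for a in agents:
        neighbors = [b for b in agents if b is not a
                     and (b.x - a.x) ** 2 + (b.y - a.y) ** 2 < NEIGHBOR_RADIUS ** 2]
        if not neighbors:
            continue
        n = len(neighbors)
        cx = sum(b.x for b in neighbors) / n    # flock center (cohesion)
        cy = sum(b.y for b in neighbors) / n
        avx = sum(b.vx for b in neighbors) / n  # mean velocity (alignment)
        avy = sum(b.vy for b in neighbors) / n
        sx = sum(a.x - b.x for b in neighbors)  # push-away (separation)
        sy = sum(a.y - b.y for b in neighbors)
        # Combine the three rules with small illustrative gains.
        a.vx += 0.05 * (cx - a.x) + 0.10 * (avx - a.vx) + 0.03 * sx
        a.vy += 0.05 * (cy - a.y) + 0.10 * (avy - a.vy) + 0.03 * sy

agents = [Agent() for _ in range(NUM_AGENTS)]
for _ in range(STEPS):
    step(agents)
    for a in agents:
        a.x += a.vx
        a.y += a.vy

print("final centroid:",
      sum(a.x for a in agents) / NUM_AGENTS,
      sum(a.y for a in agents) / NUM_AGENTS)
```

No agent is told to form a flock; the grouping behavior emerges from the interaction of the three rules, which is the same dynamic the proposed flight-test rules are intended to harness.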

The first rule, to test safely, is best served with the same deterministic verification methods that fielded the Automatic Ground Collision Avoidance System in USAF F-16s and engine-out landing recovery for the Northrop Grumman RQ-4 Global Hawk. Safety is the basis for the first rule of flight test. While safety can never be guaranteed, its place as rule number one reminds testers to reduce risk to people and property as much as possible. A recent AFRL report on the challenges of autonomous systems test emphasized safety, stating that “safe operation of an autonomous system must be ensured even though the machine’s behavior/performance may not be exhaustively verified according to current development or certification standards.” Characterizing “safe operation” will therefore be important, since it will serve as the baseline for autonomous behavior in a long-term Systems Training approach. For human students, safe operation is defined by the qualification that allows the student to operate the aircraft administratively—to take off, fly, and land. While Airmen call this a qualification, society calls it a license. The Institute for Defense Analyses published a paper recommending this licensure approach as a method for autonomous systems test. Safety licensure should be the very first step every time Airmen train a new autonomous system, and Airman instructors should remain vigilant and guarded whenever autonomous behavior trips safety boundaries while pushing the envelope of a given system.
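To picture what a safety license might mean in software terms, the following minimal sketch wraps learned behavior in a deterministic gate, loosely analogous in spirit to envelope-protection systems such as Auto-GCAS. The limits, field names, and recovery logic are hypothetical placeholders, not values from any fielded system.

```python
# A deterministic safety gate: the learned autonomy proposes a maneuver,
# and a separately verified boundary check can veto it and substitute a
# known-safe recovery. All limits are hypothetical placeholders.

MIN_ALTITUDE_FT = 1_000.0   # hypothetical hard floor
MAX_BANK_DEG = 60.0         # hypothetical bank limit

def trips_safety_boundary(state):
    """Deterministic check: True if the proposed state violates the license."""
    return (state["altitude_ft"] < MIN_ALTITUDE_FT
            or abs(state["bank_deg"]) > MAX_BANK_DEG)

def gate(proposed_state, recovery_state):
    # The autonomy may explore freely; the gate guarantees the boundary.
    if trips_safety_boundary(proposed_state):
        return recovery_state   # override with a verified safe maneuver
    return proposed_state       # otherwise let the learned behavior fly

# Usage: a proposed descent below the floor is vetoed.
safe = gate({"altitude_ft": 800.0, "bank_deg": 20.0},
            {"altitude_ft": 2_000.0, "bank_deg": 0.0})
print(safe)
```

The point of the split is that the gate, being simple and deterministic, can be verified with conventional Systems Engineering methods even while the behavior inside it remains under training.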

Once an autonomous air vehicle has been licensed for safety, mission effectiveness becomes the next priority, and it can be described by coordination and timing rules. Airmen conceptualize mission effectiveness with qualifications for both role and specialty. Wingman, Flight Lead, and Mission Commander are examples of role effectiveness. Airdrop, low-level, search-and-rescue, and suppression of enemy air defense (SEAD) are examples of specialty effectiveness. A recently upgraded novice in either role or specialty will often adopt simple rules to prioritize their attention properly. For example, the unclassified F-16 tactics manual reminds wingmen to always know where the flight lead is, be at their correct formation position, and use remaining attention to manage weapons and sensors. Donald Sull and Kathleen Eisenhardt would classify these F-16 rules as coordination rules. In their book, Simple Rules: How to Thrive in a Complex World, they distinguish coordination rules as describing what to do, as opposed to how or when to do it. Many flight leads remind their wingmen that mission effectiveness can still be achieved as long as simple coordination rules are upheld, because the lead takes responsibility for the more complicated rules governing the entire flight. After deterministic testing of safety rules, testing autonomous air vehicle coordination rules would follow and would be best served with a Systems Training approach.

Lastly, timing rules come into play. At the beginning of training, timing rules should be upheld by the Airman instructor because they are developed through experience, require cognitive sophistication, and are built on abstract thought with a temporal understanding of existing coordination rules. Sull and Eisenhardt define timing rules as “address[ing] when to act.” For example, a new F-16 instructor-student team is still effective if the wingman upholds coordination rules and the instructor upholds timing rules such as when to commit, when to preserve range against the enemy, and when to attack the enemy from opposite azimuths. According to Sull and Eisenhardt, timing rules are learned after coordination rules because experts need to experience “enough events over a sufficiently long period to recognize sequences of action or particular rhythms that make sense to use. Because experts have more cognitive capacity, they can have greater temporal awareness and more implicit timing rules than novices.” Therefore, with safety and coordination rules in place, timing rules are the logical next step towards developing mission-effective autonomy that aids air combat.

The following description by John H. Holland reinforces the simple-rules method in a Systems Training approach for developing air combat autonomy. Any airpower expert will recognize the themes of student aptitude, platform specialization, force packaging, and counter-air tactics. While Holland, a professor of psychology and electrical engineering, described the complex adaptive system in an evolutionary sense, his words echo themes for air combat autonomy. For clarity, “air combat” has been substituted for “world”, “environment”, and “resources”:

The first rules that establish themselves are “generalists,” rules that are satisfied by many situations and have some slight competitive advantage. They may be “wrong” much of the time, but on average they produce interactions that are better than random. Because their conditions are simple, such rules are easy to discover, and they are tested often because they are satisfied often. The repeated tests provide a “statistical” confirmation of the generalists’ “hypothesis” about [air combat]. Once the “generalists” are established, they open possibilities—niches—for other rules. A more complicated rule that corrects for mistakes of an over-general rule can benefit both itself and the over-general rule. A kind of symbiosis results. Repetitions of this process produce an increasingly diverse set of rules that, in aggregate, handle [air combat] with fewer and fewer mistakes. The exploitation of [air combat] generated by the aggregate behavior of a diverse array of agents is much more than the sum of the individual actions. For this reason, it is a complex task to design a single agent with the same capabilities for exploiting [all of air combat]. It is simpler to approach this capability step-by-step using a distributed system.

Holland, together with Sull, Eisenhardt, and Turing, reinforces a step-by-step process of translating the skills, rules, and knowledge of Airmen's mental schemata into autonomous air vehicles. This Systems Training approach could leverage the entire instructional capacity of Air Force aircrew.

The Air Force Research Laboratory has been working on an autonomous air vehicle envisioned to augment the capability and capacity of current manned-aircraft flight leads. Appropriately named “Autonomous Loyal Wingman,” the program considers the use of a subscale drone working in concert with manned platforms. Consistent with autonomy trust data from MIT, Airmen younger than 35 are receptive to autonomous air vehicles. The following three examples reflect simple coordination rules prioritized towards three air combat specialty functions.

Offensive Counter-Air (OCA) Autonomous Loyal Wingman Coordination Rules:

  1. Hold where you are told.
  2. Stay 20 lateral nautical miles away from any other aircraft.
  3. If unable to maintain separation between two pinching aircraft, then rejoin on the aircraft with the highest average closure rate in the past 30 seconds.

These coordination rules would complicate enemy targeting, improve sensor density, and facilitate friendly targeting of the enemy. Next, human autonomy instructors might teach weapons employment and threat reactions.
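As a thought experiment, these three OCA rules can be written as priority-ordered logic. In the sketch below, only the 20-nautical-mile and 30-second figures come from the rules themselves; the track fields, function names, and data shapes are hypothetical stand-ins for real sensor-fusion outputs.

```python
# The three OCA coordination rules as priority-ordered logic.
# Track fields and function names are hypothetical.

SEPARATION_NM = 20.0

def avg_closure_kts(track, window_s=30.0):
    """Mean closure rate (positive = closing) over the trailing window."""
    samples = [s for s in track["closure_history"] if s["age_s"] <= window_s]
    if not samples:
        return 0.0
    return sum(s["closure_kts"] for s in samples) / len(samples)

def oca_behavior(hold_point, tracks):
    close = [t for t in tracks if t["lateral_range_nm"] < SEPARATION_NM]
    if len(close) >= 2:
        # Rule 3: pinched between aircraft -- rejoin on the track with the
        # highest average closure rate over the past 30 seconds.
        return ("REJOIN", max(close, key=avg_closure_kts))
    if close:
        # Rule 2: maneuver to restore 20 NM of lateral separation.
        return ("SEPARATE", close[0])
    # Rule 1: hold where you are told.
    return ("HOLD", hold_point)
```

Even in this toy form, the rules are checkable one at a time, which is what makes them a tractable starting point for Systems Training.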

Electronic Warfare (EW) Autonomous Loyal Wingman Coordination Rules:

  1. Hold where you are told, on axis.
  2. Relay a visual depiction of the electromagnetic spectrum map.
  3. Jam when flying towards the enemy, cease jamming otherwise.

These coordination rules would interleave with those of the OCA Loyal Wingman. The next stage of development would time maneuvers against antenna gain characteristics to shape the electromagnetic spectrum more favorably for friendly aircraft.
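The third EW rule is, at its heart, a geometry check. A minimal sketch, assuming simple two-dimensional position and velocity vectors as inputs, might look like this:

```python
# EW rule 3 reduced to geometry: jam only while the vehicle's velocity
# vector points toward the enemy emitter. The 2-D vector inputs are
# hypothetical stand-ins for navigation and threat data.

def should_jam(own_pos, own_vel, enemy_pos):
    to_enemy = (enemy_pos[0] - own_pos[0], enemy_pos[1] - own_pos[1])
    # Positive dot product: velocity is within 90 degrees of the bearing
    # to the enemy, i.e., the vehicle is flying toward the threat.
    dot = own_vel[0] * to_enemy[0] + own_vel[1] * to_enemy[1]
    return dot > 0.0

# Usage: heading east toward an enemy to the east -> jam.
print(should_jam((0, 0), (1, 0), (50, 0)))   # True
```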

Communication Datalinks Autonomous Loyal Wingman Coordination Rules:

  1. Fly to the edge of a defined signal-to-noise ratio.
  2. Amplify signals received.
  3. Relay amplified signals omni-directionally.
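The first datalink rule could likewise be sketched as a one-dimensional control loop that seeks the defined signal-to-noise edge. The edge value, deadband, and step size below are illustrative assumptions rather than real link-budget numbers.

```python
# Datalink rule 1 as a simple control loop: creep outward along the relay
# axis until the measured SNR sits at the defined edge, and close back in
# when it falls below. All values are illustrative.

SNR_EDGE_DB = 10.0   # hypothetical defined edge
DEADBAND_DB = 1.0
STEP_NM = 1.0

def range_command(current_range_nm, measured_snr_db):
    if measured_snr_db > SNR_EDGE_DB + DEADBAND_DB:
        return current_range_nm + STEP_NM   # signal to spare: extend coverage
    if measured_snr_db < SNR_EDGE_DB:
        return current_range_nm - STEP_NM   # below the edge: close back in
    return current_range_nm                 # holding at the defined edge
```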

Putting these three specialty loyal wingmen together in a large force exercise would present opportunities to synergize effects, but it would also increase the opportunity for emergent behavior, which brings up the final proposed guideline for flight test of autonomous air vehicles.

Airmen should let emergent behavior play out towards a logical conclusion. According to John Holland, emergence occurs when “building blocks at one level combine into new building blocks at a higher level.” Put another way, the properties of the whole appear different than the properties of the individual agents. With a Systems Training approach to autonomy, even years of reliable autonomous behavior in one context do not constitute predictability in a different context. In keeping with the instructor-student paradigm of autonomous Systems Training, emergence is analogous to the behavior exhibited by human students at the Air Force Weapons Instructor Course. There, contested operational environments stress current-paradigm tactics and push weapons school participants to exploit interdependencies and collaborative techniques towards novel solutions. The debrief environment is meticulous, but creativity is allowed enough room in this training environment to assess the logical outcome of a novel behavior. Likewise, instructors of autonomous air vehicles should always keep a logical conclusion in mind, while allowing autonomous emergence enough room to execute so the outcome of the novel behavior can be assessed. With a skeptical eye on safety, emergent autonomous behavior may serve as a catalyst towards alternative methods of mission effectiveness.

The amount of patience an Airman instructor affords autonomy will depend on the amount of feedback the instructor receives from the autonomous air vehicle. A human instructor in an open-cockpit biplane perceived the same environment the student perceived. The instructor could feel acceleration, see instruments, hear the buzz of resonating wires, smell humidity, and taste an overly rich engine mixture, and use those cues to correct the student's behavior. When open cockpits gave way to pressurized cockpits with bubble canopies, instructors had to compensate by sensitizing students to alternate methods of perception. The introduction of leading-edge flaps and lightened control-stick gradients likewise had consequences for instructor-student perception and behavior. These consequences were relatively easy to adapt to because, despite the evolution of the aircraft, the human instructor and student remained in tune with the most helpful sensory feedback.

Comprehensive feedback between the human and machine is important because it serves as the instructor's conduit for positive and negative reinforcement. Simple rules can serve as a starting point, but instructors will want to adjust machine behavior. If a turn is too shallow, the instructor will need to convey the desire for the autonomous air vehicle to “bank tighter” or “use load factor and trade off airspeed” or “dive and maintain energy”. Conversely, if a maneuver is perfect, the instructor will want to positively reinforce the machine's characterization of the behavior with feedback that increases the probability that future maneuvers will be flown the same way. The instructor's ability to recognize which machine behaviors need correction and which are acceptable depends on comprehensive feedback.
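To illustrate how such feedback might mechanically shift machine behavior, the sketch below treats instructor corrections as rewards in a simple preference-learning loop. The candidate behaviors, preference scores, and learning rate are all hypothetical; a real system would be far richer.

```python
import math
import random

# A toy preference-learning loop: positive instructor feedback raises the
# preference score of the behavior just flown, negative feedback lowers it,
# shifting the probability of selecting that behavior again.

prefs = {"bank tighter": 0.0, "use load factor": 0.0, "dive": 0.0}
LEARNING_RATE = 0.5

def choose_behavior():
    # Softmax selection: higher-preference behaviors are chosen more often.
    weights = [math.exp(v) for v in prefs.values()]
    return random.choices(list(prefs), weights=weights, k=1)[0]

def apply_feedback(behavior, reward):
    # reward > 0 reinforces the behavior; reward < 0 discourages it.
    prefs[behavior] += LEARNING_RATE * reward

maneuver = choose_behavior()
apply_feedback(maneuver, reward=1.0)     # perfect maneuver: reinforce it
# apply_feedback(maneuver, reward=-1.0)  # too shallow: discourage it
```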

As with humans, high-quality feedback is critical for developing trust in autonomous systems. Danette Allen emphasized this point, saying “we talk about whether to trust an autonomous system and it is often in the same context, using the same language, same vernacular as how do we trust humans.” David DeSteno characterized four components of human feedback that provoke distrust, and then showed that those same feedback components read as untrustworthy in a machine. Feedback is a necessary conduit for the human-machine data that characterizes the instructor-student relationship in a Systems Training approach to autonomous systems test. For this reason, DARPA is funding development of techniques “that aim to make artificial intelligence explain itself.” Hutchins, Cummings, Draper, and Hughes studied a similar concept. Of note, they modeled an autonomous machine’s interaction with the environment the same way aircrew behave, and developed machine algorithms that provided confidence measures as feedback to the human. As with human students, it is easier to trust a student who communicates an intention to do the wrong thing than one who silently executes something unpredictable.

A Systems Training approach for testing autonomous air vehicles does not have to be complex in order to fulfill its purpose. The three types of proposed rules (safety, coordination, and timing) provide a simple starting point. All three fit the instructor-student paradigm that current aircrew execute in order to build autonomous human pilots for the Air Force. Safety rules are the necessary foundation, coordination rules are the necessary next step, and timing rules add precision and efficiency to the autonomy. Lastly, emergent behaviors should be allowed and assessed to a logical conclusion. This approach changes the roles of traditional acquisition teammates. As such, Airmen will have to accommodate different ways of executing their specialties, and leaders will have to accommodate the risk of changing habits.

Conclusion

Fundamentally, test and evaluation is about trust. Autonomous air vehicles will serve an important role in future combat only if the Air Force learns how to test those vehicles effectively. The Systems Training concept offers the Air Force the strategic advantage of controlling its own destiny regarding autonomous air vehicle data. The Air Force would own the data describing autonomous air vehicle behavior, while contractors could focus on materials science, the technological state of the art, and manufacturing techniques that would make more diverse autonomous air vehicles possible. Complementing Systems Engineering with Systems Training will allow autonomous air vehicles to develop effectively. During Systems Training, program managers will be less responsible for delivering requirements and more responsible for reconciling effective capabilities between different platform lineages. Implemented properly, Systems Training will bring front-line weapons system instructors into direct contact with developmental machine autonomy. Test pilots will be more responsible for proving that machines can learn than for proving that machines meet a requirement. Given ample machine feedback, instructor aircrew need only do what they have done best for decades—teach a human-machine team the nuances of military aviation. All the articles in this series have emphasized the importance of machine adaptation. However, the final point of emphasis asks, “Can the Air Force adapt?”

Nicholas J. Helms is a graduate of the USAF Academy with a Bachelor of Science in Human Factors Engineering. He is a distinguished graduate of the USAF Test Pilot School with over 2,000 hours piloting multiple aircraft, including the F-16, MQ-9, T-38C, and C-12J. He has flown missions in support of Operations Noble Eagle, Iraqi Freedom, and Enduring Freedom.

Disclaimer: The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.
