How the Air Force Should Test Autonomous Vehicles, Part 1 of 2

By Nicholas J. Helms

Properly educated, the resulting robots are likely to be intellectually formidable.
 —Hans Moravec, Rise of the Robots, 2009

Autonomous vehicles will serve an important role in future combat. The first installment of this series introduced a concept called Systems Training as a method to develop autonomous air vehicles. Systems Training is deliberately labeled to complement Systems Engineering, an industrial-era concept that pervades DoD acquisitions processes. In the second article of this series, we investigated the relationship between deterministic machines and non-deterministic humans. Since its introduction by RAND in 1954, Systems Engineering has served to validate Airmen’s trust in complicated aircraft. However, autonomous aircraft will best fulfill their potential as complex adaptive systems, just like non-deterministic humans. As the third article of this series emphasized, trust in a complex adaptive system depends more on feedback and adaptation over the long term than on perfection in the short term. Regarding trust in human aircrew, a trainee’s sustained learning curve over time carries more weight than the talented demonstration of a single task on any given sortie. As it is for humans, testing—and trusting—autonomous machines demands repeated opportunities for the machine to train through interaction with its environment.

This is the paragraph where many authors would introduce a novel concept that addresses the aforementioned challenges. Perhaps air forces should borrow from software development and assimilate Scrum processes, adopt spiral development, or emphasize agility again and again. This article takes a different, not-so-novel approach. To develop autonomous air vehicle behavior, Airmen should train autonomous air vehicles like wingmen. They should adopt the developmental approach this series describes as Systems Training. This approach would iterate agile machine behavior changes in the short term while accepting modest, step-by-step progress over the long term, similar to how instructor pilots teach human students. Systems Training would build on simple rules in the short term and allow the complex, non-standardized evolution of machines over the long term. Systems Training will require multiple methods of access to machine learning. What this approach lacks in novelty, it makes up for in simplicity, because it leverages an existing mindset for human instructor-student interaction.
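To make the two time scales concrete, consider a minimal sketch of such a training loop. Every name here (LearnerState, fly_sortie, SYLLABUS) and every number is a hypothetical stand-in invented for illustration, not an existing program’s interface: instructor feedback adjusts behavior after each sortie, while a syllabus gates long-term, step-by-step progression.

```python
# Hypothetical sketch: short-term iteration (per-sortie feedback) nested
# inside long-term, syllabus-driven progression. All names and numbers
# are illustrative assumptions, not an existing Air Force or vendor API.
from dataclasses import dataclass, field

@dataclass
class LearnerState:
    """Toy stand-in for the vehicle's tunable behavior parameters."""
    proficiency: dict = field(default_factory=dict)

def fly_sortie(state: LearnerState, objective: str) -> float:
    """Simulate one training event; return a graded score in [0, 1]."""
    # Placeholder grading; in practice the grade would come from
    # instructor assessment and recorded flight data.
    return min(1.0, state.proficiency.get(objective, 0.0) + 0.2)

def apply_feedback(state: LearnerState, objective: str, score: float) -> None:
    """Short-term loop: adjust behavior parameters after each sortie."""
    state.proficiency[objective] = score

SYLLABUS = ["basic handling", "formation flight", "tactical intercept"]
PROFICIENCY_BAR = 0.8  # advance only after sustained demonstrated learning

state = LearnerState()
for objective in SYLLABUS:  # long-term, step-by-step progression
    while state.proficiency.get(objective, 0.0) < PROFICIENCY_BAR:
        score = fly_sortie(state, objective)
        apply_feedback(state, objective, score)
    print(f"'{objective}' passed at {state.proficiency[objective]:.1f}")
```

The point is the structure, not the numbers: the inner loop iterates on the scale of a sortie or a week, while the outer loop spans the years of a syllabus.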

The field of developmental robotics paves the way for Airmen to train autonomy like human students. Over the last 15 years, developmental robotics has emerged to embrace the complex, long-term interactions between the human mind and body, and it treats those interactions as analogous to the interactions among a robot’s components. Developmental robotics is “the interdisciplinary approach to the autonomous design of [machines] that takes direct inspiration from the developmental principles and mechanisms observed in the natural cognitive systems of [students].” Developmental robotics builds on the complex-systems approach to cognitive development of Thelen and Smith and the embodied intelligence of Pfeifer and Bongard. This developmental approach captures the heuristics that are so important to behavior. With its emphasis on novelty, schemata, and unscripted environmental interaction, the approach has a clear lineage to complexity. If that were not clear enough, Thelen and Smith also state succinctly that they have “nothing to gain from traditional reductionism.” If Systems Engineering accommodates complicated machines with reductionism, then Systems Training accommodates autonomous machines with a developmental approach. While reductionism can reveal the rule-based components of behavior, it does not reveal future paths of adaptation. Reductionism is not useless; it is simply overrepresented, for the time being, as a test method for autonomy.

In the field of artificial intelligence, complexity versus reductionism plays out as “machine learners versus knowledge engineers.” Moore’s law enabled the processing of big data, tilting the field toward the machine learners. In The Master Algorithm, Pedro Domingos described the different approaches to machine learning and marginalized the knowledge-engineering approach of mundanely codifying large numbers of facts. However, knowledge engineering should not be discounted altogether; instead, it ought to be recognized as a synergistic contributor to machine learning. The developers of IBM’s Watson shared this synergistic view, confirming that without quality data or quality human counsel, machine learning has limited potential. In the context of text-based data, a balance between knowledge engineering and machine learning generates the best results. Consistent with the times, though, the text-based data paradigm for artificial intelligence is already morphing to accommodate the input of multispectral sensor data in pursuit of machine autonomy. Regardless of the type of data used, autonomy is best served by a combination of machine learning and directly programmed, high-quality knowledge.
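As a hedged illustration of that balance, the sketch below pairs a toy learned classifier with a few hand-coded rules. Every function, rule, and threshold is an assumption invented for this example; nothing here reflects Watson’s or any fielded system’s actual logic.

```python
# Illustration only: a learned model proposes a label, while directly
# programmed (knowledge-engineered) rules veto or refine it. The model,
# rules, and thresholds are toy placeholders.

def learned_classifier(track: dict) -> tuple[str, float]:
    """Stand-in for a trained model; returns (label, confidence)."""
    score = 0.9 if track["speed_kts"] > 400 else 0.3
    return ("hostile" if score > 0.5 else "unknown", score)

# Knowledge engineering: codified, human-authored facts and constraints.
HARD_RULES = [
    # (condition, forced_label)
    (lambda t: t["iff_mode4_valid"], "friendly"),  # valid IFF always wins
    (lambda t: t["in_civil_airway"] and t["speed_kts"] < 350, "unknown"),
]

def classify(track: dict) -> str:
    label, confidence = learned_classifier(track)
    for condition, forced_label in HARD_RULES:
        if condition(track):
            return forced_label  # programmed knowledge overrides learning
    return label if confidence > 0.7 else "unknown"  # low confidence defers

print(classify({"speed_kts": 450, "iff_mode4_valid": False,
                "in_civil_airway": False}))
```

The design point is the synergy: the learner handles the messy statistical middle, while the engineered rules encode the high-quality knowledge no amount of data should be allowed to override.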

Developmental methods complement autonomous air vehicle tests because they consider progress over the long term. Training autonomous air vehicles like human students enables development in the here-and-now, across the system life cycle, and on generational time scales, all of which are important for complex adaptive systems. For Airmen, anything outside a five-year budget cycle can be defined as long-term. As demonstrated with ICBMs, Systems Engineering was capable of solving complicated problems inside of five years, but the approach is now starting to overrun budget cycles and is predicted to perform poorly against complex problems. Ultimately, Systems Engineering fulfills requirements that are presumed stable, but autonomous air vehicle development demands continuous change.

Authoring requirements that comprehensively capture current aircrew mental schemata is impractical, to say nothing of the difficulty of incorporating future operational and tactical evolutions. Instead, requirements for autonomy should characterize behavior, not deterministic physical objectives. Christopher Langton, a pioneer in artificial intelligence, said that because “it’s effectively impossible to cover every conceivable [requirement], top-down systems are forever running into combinations of events they don’t know how to handle.” Complex adaptive systems, including human brains, do not pursue requirements; they pursue novelty. As such, aircrew schemata constantly change to optimize for niche tactics, responsibilities, and the environment. Even apparent consensus requirements like “do not break the airplane” become nuanced in the context of an over-g dive recovery that bends the airplane in order to avoid the ground. Therefore, a requirements approach toward autonomy would trap acquisitions professionals in a time-prohibitive loop of never-ending requirement modification.
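The over-g example can be restated as a behavioral requirement in miniature. The sketch below uses invented numbers and a deliberately toy flight model to encode “do not break the airplane” as a relative cost rather than an absolute rule, so the preferred behavior bends the jet when the alternative is the ground.

```python
# Toy sketch: "do not break the airplane" as a behavioral cost, not a
# hard rule. The structural limit, cost weights, and dive-recovery model
# are illustrative assumptions, not real aircraft data.

G_LIMIT = 9.0

def recovery_cost(pull_g: float, altitude_lost_ft: float,
                  floor_ft: float) -> float:
    """Lower cost = preferred behavior."""
    overstress = max(0.0, pull_g - G_LIMIT)
    cost = overstress * 100.0      # bending the airplane is bad...
    if altitude_lost_ft >= floor_ft:
        cost += 1_000_000.0        # ...hitting the ground is far worse
    return cost

def altitude_lost(pull_g: float, entry_speed_kts: float) -> float:
    """Toy dive-recovery model: a harder pull loses less altitude."""
    return entry_speed_kts ** 2 / (pull_g * 11.0)

# Choose the gentlest pull that still clears a 2,000 ft floor.
candidates = [7.0, 9.0, 10.5]
best = min(candidates,
           key=lambda g: recovery_cost(g, altitude_lost(g, 480),
                                       floor_ft=2_000))
print(f"Selected pull: {best:.1f} g")  # over-g wins when the ground looms
```

A fixed “never exceed 9 g” requirement would reject the only survivable option here; the cost formulation captures the nuance a requirements catalog cannot.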

The topic of behavior and decision-making further complicates a Systems Engineering approach to autonomy because requirement authors are likely to characterize behavior poorly. If Airmen pursued a behavior-based requirements catalog for autonomy, they would likely waste time characterizing, and re-characterizing, the behavior requirements. Malcolm Gladwell highlighted the same conclusion, noting that behavior experts “give different answers at different times, or they have answers that simply are not meaningful.” Daniel Kahneman called this the illusion of validity and said such experts “have no idea that they do not know what they are doing”; they are overconfident and uninformative. Therefore, it would be best to use requirements for simple, agreed-upon autonomy behaviors and to use Systems Training to facilitate rapid short-term reprogramming and guide long-term progress. Alan Turing predicted the incompatibility between short-term and long-term design approaches. He suggested the solution was “to start with a [relatively simple] initial system and then train the system by means of an educational process.” A Systems Training approach is that educational process for machine autonomy, and it helps when the teacher and the student machine share similar perspectives.

Developmental robotics emphasizes embodiment, and Systems Training should follow suit. Embodiment challenges the idea that the brain develops intelligence and simply tells the body what to do; rather, it suggests the body can shape brain development. In robotic terms, embodiment emphasizes sensor and effector contributions to processor intelligence. For researchers who embrace embodiment, the way the human tissue covering opposable thumbs interacts with the visual system is a factor that contributes to, or limits, brain intelligence. For running enthusiasts, embodiment explains why sprint workouts condition motor nerves in the spinal cord to respond more automatically. For pilots, embodiment explains automatic human responses to g-forces and instabilities. The embodiment concept is important because, for autonomous air vehicles, the human teacher and machine learner can communicate more effectively if each perceives the same world.

Perception is driven by sensors. For the human teacher to educate the autonomous air vehicle effectively, the vehicle’s sensors should be functionally redundant, enabling alternative opportunities to process data in unpredictable environments. For example, if image-processing tasks can draw on electro-optical, infrared, and low-light amplified sensors, then autonomy retains options when any single sensor degrades. Similarly, range-processing tasks should receive data from both radars and lasers. Furthermore, cross-modal connectivity should let image-processing patterns inform range processing and vice versa. At Aurora Flight Sciences, autonomous UH-1 helicopters present visual depictions of Light Detection and Ranging (LIDAR), electro-optical, and geo-spatial data to human instructors to assess machine processing in real time. The Chief Test Engineer for the program even called the tests “perception flights.” This example shows yet another way that developmental robotics is congruent with Systems Training to improve the machine education process.
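A minimal sketch of that functional redundancy might look like the following, assuming placeholder sensor names and readings (nothing here reflects Aurora’s actual interfaces): each processing task falls back across redundant modalities, and a track is only declared when image and range solutions both exist.

```python
# Sketch under assumed, illustrative sensors: functionally redundant
# modalities back each other up, and cross-modal agreement builds a track.
from typing import Callable, Optional

# Each entry: (modality name, read function returning a detection or None).
IMAGE_SENSORS: list[tuple[str, Callable[[], Optional[dict]]]] = [
    ("electro-optical", lambda: None),  # e.g. washed out by sun glare
    ("infrared",        lambda: {"bearing_deg": 42.0}),
    ("low-light",       lambda: {"bearing_deg": 43.0}),
]
RANGE_SENSORS = [
    ("radar", lambda: {"range_nm": 8.1}),
    ("laser", lambda: {"range_nm": 8.0}),
]

def first_good(sensors):
    """Fall back across functionally redundant sensors."""
    for name, read in sensors:
        reading = read()
        if reading is not None:
            return name, reading
    return None, None

img_src, image = first_good(IMAGE_SENSORS)
rng_src, rng = first_good(RANGE_SENSORS)

# Cross-modal connectivity: declare a track only when the image and
# range solutions both exist, mirroring how pilots cross-check sensors.
if image and rng:
    print(f"Track: bearing {image['bearing_deg']} deg ({img_src}), "
          f"range {rng['range_nm']} nm ({rng_src})")
```

Here the electro-optical failure costs nothing: the infrared sensor covers the same image-processing task, which is the essence of functional redundancy.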

Human instructors should have access to multiple methods of machine training. In the current Systems Engineering approach, a programming bottleneck occurs at the industry contractor: if autonomy demonstrates a deficiency, the program comes to a halt to parse requirements and contract additional man-hours of programming to fix the problem. This approach is cost-prohibitive for complex air vehicle test. According to Corey Schumacher, an Air Force Research Laboratory autonomy researcher, “rapid, low-cost updates and technology insertion is critical” for success. Rapid, low-cost updates will depend on instructor opportunities to influence machine behavior. Angelo Cangelosi and Matthew Schlesinger describe 20 different devices that fulfill data requirements for seven sensing effects. Human instructors in chase aircraft could receive visual and auditory representations of these sensing effects to better understand what the autonomous air vehicle is experiencing. Instructors with access to closed-loop control and voice-, text-, and graphics-based inputs would maximize iterative fixes to machine behavior.
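One hedged way to picture those multiple access methods is to route every instructor channel into a single correction queue. The channel names and handlers below are assumptions invented for illustration, not a fielded interface.

```python
# Illustrative sketch: voice, text, graphics, and closed-loop control
# inputs all funnel into one behavior-correction pipeline. Channels and
# handlers are hypothetical.

corrections = []  # queue of pending behavior adjustments

def log_correction(channel: str, instruction: str) -> None:
    corrections.append({"channel": channel, "instruction": instruction})

# Each channel normalizes its raw input into the common correction form.
HANDLERS = {
    "voice":       lambda raw: log_correction("voice", raw.lower().strip()),
    "text":        lambda raw: log_correction("text", raw),
    "graphics":    lambda raw: log_correction("graphics", f"waypoint {raw}"),
    "closed_loop": lambda raw: log_correction("closed_loop",
                                              f"stick input {raw}"),
}

# Example: an instructor in a chase aircraft issues mixed-mode feedback
# during one sortie; all of it lands in a single reviewable queue.
HANDLERS["voice"]("Widen the rejoin")
HANDLERS["graphics"]((38.9, -104.8))
HANDLERS["closed_loop"]({"pitch": -0.2})

for correction in corrections:
    print(correction)
```

The design choice worth noting is the common queue: whichever channel the instructor uses, the correction becomes reviewable data that can feed the next short-term behavior iteration.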

Simulation should complement flight test as a method of autonomous vehicle Systems Training. However, not all machine training can be completed in a simulator. Consistent with a complexity approach, Systems Training should confront the realities of space, time, error, uncertainty, and simultaneous stimuli in order to instruct machine autonomy realistically. Combined with the simulator, real-world testing would train autonomous air vehicles the same way Airmen train student pilots. A Systems Training approach is powerful because it already fits the schema Airmen use to develop autonomous human pilots.

Systems Training complements Systems Engineering by instructing autonomous behaviors at a pace that iterates quickly enough to change behavior next week, yet accepts long-term goals that guide emergent behavior seven years later. Systems Engineering would still verify trust for deterministic effects, whereas Systems Training would calibrate trust for behavioral effects. According to Danette Allen, Senior Technologist at the NASA autonomy incubator, “We want these systems to behave in the way we expect a human to, because that’s what we know how to do. We’re not there yet, but we will be.” Dr. Steven Bachowski, a disruptive technologies expert for the National Air and Space Intelligence Center, likewise recommended that the Air Force “train autonomous systems like we train humans.” A Systems Training approach would facilitate trust through familiarization. Lastly, it would better accommodate complexity where reductionist approaches are not predicted to help.

Keep an eye out for part two of this article, which discusses implementation and training rules for autonomous vehicles.

Nicholas J. Helms is a graduate of the USAF Academy with a Bachelor of Science in Human Factors Engineering.  He is a distinguished graduate of the USAF Test Pilot School with over 2,000 hours piloting multiple aircraft, including the F-16, MQ-9, T-38C, and C-12J.  He has flown missions in support of Operations Noble Eagle, Iraqi Freedom, and Enduring Freedom.

Disclaimer: The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.
