January 2021

Summary

AI is already automating several aspects of human and machine functioning, and its utility is expected to grow substantially in the future. These automations, while accurate in a majority of instances, are highly unlikely to achieve sustained 100% precision across the entire space of potential actions. So how can we design, introduce, and operationalize "failsafe protocols"? Read this article to know more.

Machine Learning (ML) and, more broadly, Artificial Intelligence (AI) technologies have caused significant disruption by automating several aspects of human and/or machine functioning. From automotive and healthcare to legal and retail, the disruption is significant and is expected to be sustained in the future. ML/AI technologies use data to learn rules or patterns associated with tasks; these rules (e.g., Bayesian conditioning) are then used to make decisions that can be used to actuate without human supervision (referred to as automation in this article). These automations, while "accurate" (i.e., making the right decision and actuating as expected) on a majority of instances, are highly unlikely to achieve sustained 100% precision across the entire space of actions.
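To make this concrete, here is a minimal sketch in Python of what such a learned rule could look like: Bayesian conditioning turns an observation into a probability, and the automation actuates only when that probability clears a threshold. All names, probabilities, and thresholds below are hypothetical and purely illustrative.

# Minimal sketch: a learned rule (Bayesian conditioning) turning an observed
# alarm into an automated decision. All numbers and names are illustrative.
def posterior_fault_probability(prior_fault, p_alarm_given_fault, p_alarm_given_ok):
    """P(fault | alarm) computed via Bayes' rule."""
    p_alarm = (p_alarm_given_fault * prior_fault
               + p_alarm_given_ok * (1.0 - prior_fault))
    return p_alarm_given_fault * prior_fault / p_alarm

# The automation actuates without human supervision only when the learned
# rule is confident enough about the fault.
if posterior_fault_probability(0.01, 0.95, 0.05) > 0.9:
    print("actuate: shut the valve")
else:
    print("no action taken")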

In part, this lack of perfection can be attributed to a combination of missing ingredients:

  • data relating to rare events
  • context about when seemingly correct decisions could lead to adverse effects downstream
  • feedback mechanisms to identify which corner cases require changes to the rules so that correct decisions are made in the future

Knowing that these widely deployed technologies will not be perfect, now or in the foreseeable future, requires us as a community (i.e., academic researchers, developers, industry leaders, and technology adopters) to think about how we can design, introduce, and operationalize "failsafe protocols." In particular, human experts can serve as "AI-chaperones," allowing AI technologies to bloom (akin to children enjoying freedom under adult supervision), while also telling the AI models where their judgment was wrong and what course of retraining and reengineering will avoid incorrect actuations in the future (akin to children being told to correct their behavior when needed). Just as chaperoned children evolve into adults who behave well in socially complex situations, AI systems with human AI-chaperones could evolve into reliable, complex decision-making engines that earn the long-term trust of their adopters.

The most riveting evidence of the need for a human-in-the-loop to ensure sustained and reliable automation lies in the airline sector. Capt. Sullenberger, flying a modern commercial airplane, landed it safely (with no loss of human life) in the Hudson River off New York City after losing both engines to a bird strike shortly after take-off. Within 35 seconds of losing both engines, Capt. Sullenberger had decided to land the plane in the river. More than half of the simulations conducted by the United States National Transportation Safety Board while investigating this incident showed that the traditional turn-around approach to land on a conventional runway would not have been feasible. An aircraft's ability to land safely is typically tested on designated runways, and that data is then used to develop landing automation systems. Such an automation system would offer no option for a "pilot-free" plane to land on a river, which in hindsight turns out to be the safer option in some adverse situations if a skilled human takes control of the system. Similarly convincing analogies can be made for every complex automation system, where skilled humans can serve as "AI-chaperones" to ensure failsafe performance.

Developing AI technologies with "failsafe protocols," allowing humans to avert rare but catastrophic mishaps, will be a significant avenue for innovation in the future. The following challenges will have to be overcome in developing failsafe protocols for AI technologies.

First, identifying when to engage human experts: adopters (the users needing the technology) must enumerate for developers as many possible failure states as they can and agree on when to "hand off" operations to human judgment, as sketched below. There is also a responsibility on the adopter's end to gather data on as many routine and corner cases of operation as possible and to continue measuring such data even after deployment.
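As a rough illustration, a hand-off rule agreed between developers and adopters might look like the following sketch: the system acts autonomously only when its confidence is high and the input does not match an enumerated failure state. The state names, threshold, and function are hypothetical, not part of any real system.

# Hypothetical hand-off rule agreed between adopters and developers.
KNOWN_FAILURE_STATES = {"dual_engine_loss", "sensor_blackout"}  # enumerated by adopters
CONFIDENCE_FLOOR = 0.95                                         # agreed hand-off threshold

def decide(observed_state, model_confidence):
    """Return who acts: the automation, or the human AI-chaperone."""
    if observed_state in KNOWN_FAILURE_STATES or model_confidence < CONFIDENCE_FLOOR:
        return "hand off to human"
    return "actuate autonomously"

print(decide("nominal_cruise", 0.99))    # actuate autonomously
print(decide("dual_engine_loss", 0.99))  # hand off to human
print(decide("nominal_cruise", 0.80))    # hand off to human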
Second, arriving at a consensus on when models have to be retrained to handle new failure states learned during operation after deployment. To achieve this, adopters and developers will have to collaborate on defining performance metrics that warrant an "update" to the underlying rule sets used to actuate; a rough illustration of such a trigger follows.
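A minimal sketch of such an agreed-upon retraining trigger, assuming a rolling precision estimate over post-deployment outcomes, could look like this; the metric, window size, and threshold are illustrative only.

# Hypothetical retraining trigger based on post-deployment outcomes.
from collections import deque

WINDOW = 500              # most recent decisions considered
RETRAIN_THRESHOLD = 0.97  # agreed minimum rolling precision

recent_outcomes = deque(maxlen=WINDOW)  # True = correct actuation, False = error

def record_outcome(correct):
    """Log a post-deployment outcome; return True when retraining is warranted."""
    recent_outcomes.append(correct)
    if len(recent_outcomes) < WINDOW:
        return False  # not enough evidence yet
    precision = sum(recent_outcomes) / len(recent_outcomes)
    return precision < RETRAIN_THRESHOLD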
Finally, introducing high degrees of interpretability in model outputs allows humans to quickly identify why the model arrived at a particular decision and what alternative actions a human can take to avoid a catastrophic outcome. Achieving high degrees of interpretability requires developers to have a deep understanding of AI algorithms and to provide "simple English readouts" of the features and how they contribute to decision-making. Self-driving cars are an example where human engagement has proven essential in seeking regulatory approval for automation technologies: despite self-driving cars being able to "drive themselves," humans are not only legally required to be behind the wheel but are also expected to engage with the car's controls at varying intervals or whenever the automation system is unsure how to actuate based on the scanned data.
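As a hedged illustration of a "simple English readout," consider a linear model whose signed feature contributions are ranked and phrased as plain sentences for a human chaperone to review; the feature names and weights below are hypothetical.

# Hypothetical linear model for a braking decision: contribution = weight * observed value.
weights = {"obstacle_distance_m": -0.8, "road_wetness": 0.5, "speed_kmh": 0.3}
observation = {"obstacle_distance_m": 4.0, "road_wetness": 0.9, "speed_kmh": 60.0}

contributions = {name: weights[name] * observation[name] for name in weights}
# Rank features by how strongly they influenced the decision and print a plain-English line.
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    direction = "pushed the model toward braking" if value > 0 else "argued against braking"
    print(f"{name} (contribution {value:+.2f}) {direction}")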
Automating simple tasks will be the low-hanging fruit for convincing adopters that these technologies can "make fast disruptions." However, as the complexity of the tasks requiring automation increases, we assert that significant coordination between developers (in research or industry settings) and adopters (i.e., customers/consumers) is needed to introduce "failsafe protocols," in which human engagement to avert rare but damaging outcomes is possible. Introducing this notion will likely earn users' trust and, most importantly, establish industry standards for what constitutes a dependable ML/AI-based automation system. We conclude this article with the enthusiasm that "safe and dependable" ML/AI-based automation systems are here to stay and will become ubiquitous across all industry domains.

DISCLAIMER
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the respective institutions or funding agencies.