How Responsible Artificial Intelligence Is Shaping the Future

Industry leaders discussed how ethically developed AI is addressing problems in the financial services, healthcare and government sectors.

May 3, 2021


(Clockwise from top left) Steve Lohr, Ashley Casovan, Gavin Munroe, Rajeev Ronanki and Alka Patel discuss the need for responsible artificial intelligence during the School of Business' Bicentennial Signature event.

By Briahnna Brown

Developing responsible artificial intelligence is about more than just good engineering, said Alka Patel, chief of responsible AI with the U.S. Department of Defense’s Joint Artificial Intelligence Center (JAIC).

Responsible AI requires well-trained professionals working with the data, she said, which in turn requires bridging the gap in AI literacy.

The Department of Defense has a strong safety culture, Ms. Patel said, and that is critical to the JAIC’s approach to AI policies and processes. At the beginning of the pandemic, for example, there were major concerns about grocery store supply chains, so the JAIC developed a prototype AI tool to better predict when certain ZIP codes might begin panic buying. The tool also helps inform the U.S. military’s response to crises.
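The article does not describe how the JAIC prototype works under the hood. As a purely illustrative sketch of the kind of problem it targets, a hypothetical monitor could flag ZIP codes whose grocery demand spikes far above their own trailing baseline; all column names, data and thresholds below are invented:

```python
# Illustrative only: the JAIC tool's actual design is not described in the article.
# Flags ZIP codes whose latest daily demand is a strong outlier versus that
# ZIP's own recent history (hypothetical data schema and thresholds).
import pandas as pd

def flag_panic_buying(sales: pd.DataFrame, window: int = 28,
                      z_thresh: float = 3.0) -> list:
    """sales has columns ['date', 'zip', 'units_sold'], one row per ZIP per day."""
    flagged = []
    for zip_code, grp in sales.sort_values("date").groupby("zip"):
        history = grp["units_sold"].shift(1)       # exclude today from its own baseline
        mean = history.rolling(window).mean()
        std = history.rolling(window).std()
        z = (grp["units_sold"] - mean) / std
        if z.iloc[-1] > z_thresh:                  # today's demand is an outlier
            flagged.append(zip_code)
    return flagged
```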

In developing these types of tools, Ms. Patel said, ethics does not get tagged on at the very end of the process.

“We are there at the very beginning when the problem is being framed and thinking through all the different stakeholder perspectives, thinking through the possible harms, thinking through, ‘all right, what are the testing parameters and metrics that we need to be thinking about at the very beginning so that we can have line of sight as we go through the development cycle?’,” Ms. Patel said.

Ms. Patel addressed the ethics of AI during the George Washington University Bicentennial Signature event at the School of Business on Wednesday. The virtual event, moderated by New York Times reporter Steve Lohr, featured a panel of industry leaders who discussed concerns around responsible AI, such as data privacy, digital equality and ensuring that new AI systems are developed with inclusivity in mind from the beginning.

Rajeev Ronanki, senior vice president and chief digital officer for Anthem, discussed how the pandemic influenced the health insurance provider to transition from risk mitigation to risk prevention by using AI to predict health outcomes based on the data it collects. He said this has led to hundreds of thousands of proactive interventions with Anthem members, which helps keep people healthy while cutting down on medical costs for patients and insurance providers.
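Anthem’s models are not detailed here, but the pattern Mr. Ronanki describes, scoring members’ risk and triggering outreach above a threshold, can be sketched generically. Everything below (the features, the synthetic labels, the 0.8 cutoff) is hypothetical:

```python
# Hypothetical sketch of the "predict, then intervene proactively" pattern;
# feature names, training data and thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic member features: [age, chronic_conditions, er_visits_last_year]
X = rng.normal(size=(1000, 3))
# Synthetic adverse-outcome label for demonstration purposes only.
y = (X @ np.array([0.4, 1.2, 0.9]) + rng.normal(size=1000)) > 1.0

model = LogisticRegression().fit(X, y)

def members_to_contact(features: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Return indices of members whose predicted risk warrants proactive outreach."""
    risk = model.predict_proba(features)[:, 1]
    return np.flatnonzero(risk >= threshold)
```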

Machine learning also is helping Anthem automate administrative processes such as insurance claims and authorizations, Mr. Ronanki said. That gives its employees more time to engage directly with consumers and build stronger relationships and trust.

Being able to better automate some of these processes requires good quality data for the AI systems to learn from, Mr. Ronanki said, and industries need responsible assurance services to examine that data before it is too late.

“When machines learn at scale on incomplete or poor data, and then you're perpetuating that level of automation at scale, you know, then there's a huge amount of work to go back and retroactively fix it,” Mr. Ronanki said.

“That's much more problematic,” he said. “So, I think that's where there'll be a new class of providers who will look at datasets that are being used by enterprises to say, is that complete, is it of high quality, and also provide a set of assurance services around that, as well as tools to do that.”
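In essence, the dataset assurance services Mr. Ronanki anticipates would run automated quality gates before a model ever trains, catching the incomplete or poor data he warns about while it is still cheap to fix. A minimal, hypothetical sketch of such checks (column names and thresholds invented):

```python
# Hypothetical pre-training data gates, in the spirit of the dataset
# "assurance services" Mr. Ronanki predicts; limits are illustrative.
import pandas as pd

def audit_dataset(df: pd.DataFrame, required_cols: list,
                  max_missing: float = 0.05) -> list:
    """Return a list of problems found; an empty list means the gates passed."""
    problems = []
    for col in required_cols:
        if col not in df.columns:
            problems.append(f"missing required column: {col}")
        elif df[col].isna().mean() > max_missing:
            problems.append(f"{col}: {df[col].isna().mean():.0%} missing "
                            f"(limit {max_missing:.0%})")
    dupes = df.duplicated().sum()
    if dupes:
        problems.append(f"{dupes} duplicate rows")
    return problems

# Usage sketch: fail fast before training, rather than fixing automation
# retroactively at scale.
# assert not audit_dataset(train_df, ["member_id", "claim_amount"])
```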

The pandemic showed Gavin Munroe, the global chief information officer of wealth and personal banking for HSBC, that the AI models that industries have been using to predict consumer behavior were not as flexible as they needed to be. To evolve new models more quickly and at scale, developers will need better frameworks with ethical guardrails to protect data privacy while also ensuring that practitioners can get the most out of AI.

Some of those guardrails include certifications, which Ashley Casovan, executive director at nonprofit Responsible AI Institute (RAI), formerly AI Global, is hoping to more broadly implement to enhance AI oversight. RAI is launching a responsible AI certification system that builds upon the Organisation for Economic Co-operation and Development’s (OECD) five Principles on Artificial Intelligence.

The independent certification aims to serve as a symbol of trust for participating global organizations in numerous sectors, Ms. Casovan said. Even though AI is already in many of the devices we use every day, she said there is still much work to be done to ensure that it is implemented responsibly, which is why a certification system helps create standards for industries to meet.

“We overestimate what AI can actually do and how intelligent it actually is,” Ms. Casovan said. “I'd love to get to the world…with better health care and better access for patients.

“This is something I'd like to see,” she said. “That's really dependent on good, high-quality data and also us putting these types of standards and guardrails in place.”