Everyone Wants Responsible Artificial Intelligence, Few Have It Yet
As artificial intelligence continues to gain traction, there has been a rising level of discussion about “responsible AI” (and, closely related, ethical AI). While AI is entrusted to carry more decision-making workloads, it’s still based on algorithms that respond to models and data, as I and my co-author Andy Thurai explain in a recent Harvard Business Review article. As a result, AI often misses the big picture, and most of the time it can’t explain the reasoning behind a decision. It certainly isn’t ready to assume human qualities that emphasize empathy, ethics, and morality.

Is this a concern that is shared within the executive suites of companies deploying AI? Yes, according to a recent study of 1,000 executives published by MIT Sloan Management Review and Boston Consulting Group. However, the study finds, while most executives agree that “responsible AI is instrumental to mitigating technology’s risks — including issues of safety, bias, fairness, and privacy — they acknowledged a failure to prioritize it.” In other words, when it comes to AI, it’s damn the torpedoes and full speed ahead. However, more attention needs to be paid to those torpedoes, which may take the form of lawsuits, regulations, and damaging decisions. At the same time, greater adherence to responsible AI may deliver tangible business benefits.

“While AI initiatives are surging, responsible AI is lagging,” the MIT-BCG survey report’s authors, Elizabeth M. Renieris, David Kiron, and Steven Mills, report. “The gap increases the possibility of failure and exposes companies to regulatory, financial, and customer satisfaction risks.”

Just about everyone sees the logic in making AI more responsible — 84% believe that it should be a top management priority. About half of the executives surveyed, 52%, say their companies practice some level of responsible AI. However, only 25% reported that their organization has a fully mature program — the remainder say their implementations are limited in scale and scope.

Confusion and lack of consensus over the meaning of “responsible AI” may be a limiting factor. Only 36% of respondents believe the term is used consistently throughout their organizations, the survey finds. The survey’s authors define responsible AI as “a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.”


Other factors inhibiting responsible AI include a lack of responsible AI expertise, talent, training, or knowledge among staff members (54%); a lack of prioritization and attention by senior leaders (53%); and a lack of funding or resourcing for responsible AI initiatives (43%).

Renieris and her co-authors identified a segment of companies that are ahead of the curve with responsible AI; these tend to apply responsible conduct not just to AI, but across their entire suites of technologies, systems, and processes. “For these leading companies, responsible AI is less about a particular technology than the company itself,” they state.

These leading companies are also seeing pronounced business benefits as a result of this attitude. Benefits realized since implementing responsible AI initiatives include better products and services (cited by 50%), enhanced brand differentiation (48%), and accelerated innovation (43%).

The following are recommendations based on the experiences of companies taking the lead with responsible AI:

  • Elevate responsible AI to the executive suite. Responsible AI should be more than a “check-the-box exercise”; it should be part of the organization’s top management agenda. For example, Renieris and her co-authors point out, 77% of leader firms are investing material resources (training, talent, budget) in responsible AI efforts, compared to 39% of respondents overall. “Instead of product managers or software developers directing responsible AI decisions, there is clear messaging from the top that implementing AI responsibly is a top organizational priority,” they add. Otherwise, employees will lack the necessary incentives, time, and resources to prioritize it.
  • Get everyone at all levels involved. RAI programs also need to include a broad range of participants in these efforts, the co-authors observe. Plus, responsible AI needs to be considered part of corporate social responsibility efforts.
  • Walk the talk. “Adequately invest in every aspect of your responsible AI program, including budget, talent, expertise, and other human and non-human resources,” they advise. “Ensure that responsible AI education, awareness, and training programs are sufficiently funded and supported. Engage and include a wide variety of people and roles in your efforts, including at the highest levels of the organization.”
  • It’s about more than avoiding risks and penalties. “A mature responsible AI program is not driven solely by regulatory compliance or risk reduction. Consider how responsible AI aligns with or helps to express your organizational culture, values, and broader corporate social responsibility efforts.”
  • Start ASAP. Launch responsible AI efforts as soon as possible to address common hurdles, including a lack of relevant expertise or training. “It can take time — our survey shows three years on average — for organizations to begin realizing business benefits from responsible AI,” Renieris and her co-authors state. “Even though it might feel like a long process, we are still early in the evolution of responsible AI implementation, so your organization has an opportunity to be a powerful leader in your specific industry or geography.”