Trustworthy AI’s Journey

In late 2017, the idea for a handbook on the ethics of AI took shape, and by August 2019 The Oxford Handbook of Ethics of AI was complete. Edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, its almost 900 pages are filled with insights from scholars with varied interests, backgrounds, and approaches to AI, and those insights remain relevant today despite the rapid rate of change in the field. The handbook is a rather varied collection of thoughts, concerns, and, in some instances, proposed remedies for AI-centric issues. While its title specifies Ethics, it covers topics including Governance, Fairness, Transparency, Responsibility, Consent, and more. Although the articles on these topics are independent, not necessarily building on one another, The Oxford Handbook is a good frame through which to think about Trustworthy AI.
Around the same time, practitioners began their journey toward AI in earnest. But as practitioners, we tend to focus on the problems of most immediate importance. If my business is making loans and my algorithm lets me do it better, faster, and more consistently, then my main concern may be explainability: can I explain to my customer (or regulator) why a loan was approved or denied? If I can do so credibly, and show no bias, then my algorithm is likely to be trusted. In designing a recommendation system that provides sales representatives with next best actions, the goal is to ensure that salespeople trust the recommendations, and therefore act on them. Here, explainability is an important part of the equation, but so is designing the AI system around the human, so that it earns the salesperson's trust through transparent and unbiased performance. As I wrote in a previous article on Forbes, even if an algorithm is perfect and the data is representative, the solution may not be considered trustworthy if it does not tackle the right problem. To address such concerns, many frameworks were born: Explainable AI, Human-Centered AI, Sustainable AI, Responsible AI, Ethical AI, Robust AI, and more.
Some organizations deploy multiple frameworks depending on the application being designed, but this is ultimately a clumsy approach. Is there a broader artifact we can all agree on?
EqualAI has created such a checklist. Its questions include: Have you framed the problem accurately? What data are you using, what is its origin, is it representative, and is it complete? Have you considered all applicable laws? Likewise, Microsoft has a robust document called the AI Fairness Checklist, in addition to a set of Responsible AI principles it uses in designing its own products. And finally, the Data and Trust Alliance, a New York-based organization, has developed the Algorithmic Bias Safeguards for Workforce: criteria HR teams can use to evaluate vendors on their ability to detect, mitigate, and monitor algorithmic bias in workforce decisions. This is an interesting phase in the Trustworthy AI journey – but one we seem to be moving beyond.
The emergence of domain-specific concerns has caused organizations to shift focus yet again. If the AI in question is likely to face high public scrutiny over representation, bias, and fairness to all population subgroups, then Diversity, Equity and Inclusion principles apply. If, on the other hand, large-scale applications power a manufacturing environment, the dominant concern may be Environmental, Social and Governance principles. If the applications are used directly by consumers, a group that thinks carefully about Human-Centered AI takes center stage. And in heavily regulated environments such as healthcare, Governance, Risk and Compliance takes the lead. This continues Trustworthy AI's journey: from specific applications, to shared artifacts, to more careful consideration of AI's intent and use.
The hype cycle (technology trigger, peak of inflated expectations, trough of disillusionment, slope of enlightenment, plateau of productivity) applies to more than technology. It also applies to the concepts and frameworks that surround tech – in this case, Trustworthy AI. Trustworthy AI has been the focus of different parts of the organization over time: first the teams designing applications, then organization-wide commitments, and now domain-specific methods for governing whether our AI is in fact trustworthy. And the conversation has moved from everyone talking about Trust all the time, with scores of posts on LinkedIn, to fatigue over how exactly to solve for it.
I hope the fatigue I sense is just a stall as we continue to implement Trustworthy AI principles, ensuring that Trust considerations in AI end up at the top – not of the peak of inflated expectations, but of the plateau of productivity.