Reliance on AI for Decision Making

8 Jul 2022

As I was preparing to teach a graduate class on Ethics in IT, I was reminded, again, of the incredible power and vulnerabilities of AI-based decision making. With more frequency than I’d like, I see the phrase “AI Ethics,” and am quickly reminded that AI, in and of itself, has no ethics. There is really no such thing as “AI Ethics.” AI is merely a process (or, more accurately, a series of processes) used to guide decision making. The fact is that truly ethical use of AI applications and systems requires careful consideration of the inherent biases brought to those processes by those who create, market, and deploy them.

It is way too easy to believe the oft-made claim that a system can be without bias. It can’t. Ever. We all have biases, and so do the works we create. As such, it is inaccurate, if not disingenuous, to claim that bias in any system or creation has been eliminated. We have biases based on the things that we’ve learned, both consciously and unconsciously, the ways that we’ve learned them, and the experiences that we’ve had or seen in others. Remember: it is not possible to eliminate bias. To suggest otherwise is both clueless and disrespectful to anyone to whom such a claim is made.

What is possible, however, is knowing where to look for bias, and subsequently identifying and managing it. Most biases can, and must, be recognized so that those who are using these powerful (read: life-changing) tools can use them in the most beneficial way while minimizing the risk of harm when bad information, or even good information, is used without context and bad decisions and outcomes result. Read: costly resolution and likely litigation.

The stories of the hiring practices of large tech companies that effectively eliminated candidates based on race and/or gender are legendary. Such outcomes were not the intent of the creators, who hoped to screen for certain desired characteristics in potential hires. But the results reflected a clear bias in favor of white males nonetheless. And none of these cases was a “one off.”

BC Strategist Kevin Keiller turned me on to a book called “Invisible Women: Data Bias in a World Designed for Men,” by Caroline Criado Perez. Far from being a man-hating diatribe, the book highlights how many decisions, both before and after the widespread deployment of AI, relied upon biased assumptions that didn’t accurately reflect the patterns of people who weren’t white males. Here’s a real and current example. In mid-June, I was having dinner with a friend who is an academic anesthesiologist. We were talking about this very issue, and he questioned how there could be bias in something like medication dosage. I almost fell over. I asked, politely of course, whether accurate dosage information was in fact different for men and women. He said, “of course.” My response was that in this case in particular, dosage determinations must be measured and calculated to account for gender and race, among other determinative factors. Not considering those criteria is not only dangerous for patients, but malpractice-worthy as well if something goes wrong.
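To put a number on that dinner-table point, here is a small sketch in Python built around one widely taught formula, the Cockcroft-Gault estimate of creatinine clearance, which clinicians use to adjust doses of renally cleared drugs. The example is mine, not the book’s or my friend’s, and the numbers are illustrative only, but the formula itself contains an explicit sex-based adjustment:

def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, is_female):
    """Estimated creatinine clearance (mL/min), used for renal dose adjustment."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    if is_female:
        crcl *= 0.85  # the sex-based correction built into the formula
    return crcl

# Same age, weight, and lab value; the estimate (and thus the dose) differs by sex.
print(round(cockcroft_gault(60, 70, 1.0, is_female=False), 1))  # 77.8
print(round(cockcroft_gault(60, 70, 1.0, is_female=True), 1))   # 66.1

The formula itself is not the issue; the point is that the relevant factors have to be in the model at all, and that a system tuned on data collected mostly from one group will produce numbers that look just as precise and are just as wrong.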

From perspectives on commonly used words and phrases to cluelessly and unintentionally biased municipal decisions regarding snow removal, such examples are a frighteningly clear indicator that inherent biases run so deep that we just don’t see them. As mentioned in the book, a professor at Georgetown University raised eyebrows by titling a literature class “White Male Writers.” Would a comparable title have created a firestorm if the class had been called “Black Male Writers” or an equivalent? Doubtful.

Have we not just come to assume that when a literature class is offered without further description, its subject matter is the work of white male writers? That assumption illustrates the insidious nature of bias: it’s so commonplace that we don’t even see it, and thus we can’t make the best decisions based solely on the outcomes generated.

After considering inherent bias, the next step a decision maker should take, before even considering the use of an AI-based system, is to recognize that AI systems, in whatever form they occur, have no common sense. As such, the outcomes generated by AI tools are only as good as the data that is fed into them. Think “garbage in, garbage out.” The problem is compounded by tools that can crunch numbers beautifully but that have a spotty record (and I’m being kind here) on analysis. On the positive side, AI’s great strength is its reliance on mathematical models to validate conclusions. Nowhere is this truer than in the context of medical care.
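To make “garbage in, garbage out” concrete, here is a deliberately oversimplified sketch in Python. The records, names, and scoring rule are all invented for illustration, but they show how a screening tool that learns only from historically biased decisions reproduces that bias, with no discriminatory intent anywhere in the code:

# Hypothetical hiring history in which past decisions skewed toward men.
historical_hires = [
    ("male", True),
    ("male", True),
    ("male", True),
    ("female", False),
    ("female", False),
    ("female", True),
]

def hire_rate(records, gender):
    """Fraction of past candidates of the given gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def score(candidate_gender):
    """Score a new candidate by how often similar past candidates were hired."""
    return hire_rate(historical_hires, candidate_gender)

print(f"score for male candidates:   {score('male'):.2f}")    # 1.00
print(f"score for female candidates: {score('female'):.2f}")  # 0.33

Real systems use far more data and far more sophisticated models, but the dynamic is the same: the tool amplifies whatever is already in the data it is given.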

A recent article in Fortune by Jeremy Kahn makes this vital point: “In the absence of [critical and patient-specific] information, the tendency is for humans to assume the A.I. is looking at whatever feature they, as human clinicians, would have found most important. This cognitive bias can blind doctors to possible errors the machine learning algorithm may make.”

This statement brings me back to the points raised in the incredibly insightful book by Cathy O’Neil called "Weapons of Math Destruction." In this book, the author, who holds a Ph.D. in math from Harvard and has taught at leading universities in the U.S., makes several important points, all of which I believe are critical to the use of AI.

First, AI-generated information must always be placed in context. As I’ve written previously in another publication (see https://www.nojitter.com/ai-automation/ai-its-all-about-context): “Managers and those who rely on AI-based information must understand the context of both the data that’s input as well as the generated outcome. With additional complexity comes additional responsibility for validation of the input and output.”

This point requires careful consideration. In order to provide the “right data” to any application, it’s appropriate for those who are using the data to consider, first, whether the right questions are being asked. Second, how is the data being manipulated? Are different factors weighted differently? What justifies the priorities that those algorithms apply? Are the priorities correct? To whom do those priorities belong? Is the system designed to answer questions or to validate desired outcomes? This last question may be the most critical in determining whether the results the AI system generates are useful or, more importantly, worth the investment.
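As an illustration of why the weighting question matters, consider this sketch in Python. The applicant data, feature names, weights, and threshold are all invented; the point is simply that the same facts can produce opposite decisions depending on which priorities the algorithm encodes, and somebody chose those priorities:

# One hypothetical applicant and two hypothetical sets of priorities.
applicant = {"test_score": 0.60, "years_experience": 0.90, "referral": 0.10}

def weighted_score(features, weights):
    """Combine the features into a single score using the supplied priorities."""
    return sum(weights[name] * value for name, value in features.items())

weights_a = {"test_score": 0.7, "years_experience": 0.2, "referral": 0.1}
weights_b = {"test_score": 0.2, "years_experience": 0.2, "referral": 0.6}

threshold = 0.5
for label, weights in (("weights_a", weights_a), ("weights_b", weights_b)):
    s = weighted_score(applicant, weights)
    print(f"{label}: score = {s:.2f} -> {'advance' if s >= threshold else 'reject'}")
# weights_a: score = 0.61 -> advance
# weights_b: score = 0.36 -> reject

Neither set of weights is obviously wrong on its face, which is exactly why the questions above have to be answered before the output is trusted.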

Rarely do answers to these questions come easily, and some of the answers, particularly those regarding how the sausage gets made, can be difficult to secure, especially when vendors who offer the “AI solution” want to keep their proprietary processes private. Until there is a firm grip on the quality of the data, the quality of the processes, and the overall purpose of the exercise, it’s hard to appreciate the value of the AI-generated outcome, let alone rely on it.

It is my belief that decisions made with AI input are fraught with risk for those who haven’t carefully considered the issues of bias and outcome validity. Until these questions can be thoroughly contemplated and vetted by the entity relying on such processes, legal and practical vulnerabilities are hiding in plain sight.
