Responsible use of AI requires trustworthiness and evaluation

By Pamela J. Gallagher 

With the rapid rise of artificial intelligence (AI), it seems that we’ve been thrown into the deep end of AI technology without the luxury of hindsight or any semblance of a collective vision for the role AI ought to play in society.

We are in the throes of it now, yet decades from now we will look back and see the role we played in its development. Did we blindly accept it? Did we fear it? Did we pioneer it? Did we use it for good, or did we do harm? Some regard AI as the magical answer to every problem, others as the end of the world as we know it. The truth likely lies somewhere in the middle.

In the face of all we cannot yet know about AI in these early stages of development and adoption, I see two areas that deserve our focus as we build the guardrails that will allow us to move forward wisely.

Building trust

One of the primary barriers to the acceptance of AI is a lack of trust in the technology. People wonder: Where does this information come from? How does bias factor in? How can I know when it's being used? At this point, the answers to those questions tend to create more confusion. Building trust will be a major hurdle on the path to widespread adoption.

There are many opportunities for the various types of AI to help solve problems, but only when we clearly understand what a given AI technology can and cannot do, and then use it accordingly. AI is undoubtedly the flashiest tool available, but we build trust when we take time to consider whether AI is the appropriate tool for the job.

AI is not a panacea; it cannot and will not fix everything. It can only fix identified problems with identified solutions.

Continuous evaluation

To use AI responsibly, organizations (and individuals, for that matter) need a system in place to continually evaluate and measure AI's impact and its alignment with their vision, values, and goals. Continuous monitoring is essential.

Evaluation processes should be designed with an eye toward clarifying how AI can supplement human contributions. Sometimes, for example, AI outperforms a newly trained person who doesn't yet understand the system. We've all had the experience of repeating "Representative!" into the phone, finally reaching a human, only to find that they can't actually solve our problem. As lower-level functions are increasingly handled by AI, humans must be trained to operate at the top of their game. Regular evaluation structures will help identify the crucial process points and the gaps where human specialization is most needed.

The development of AI is an ongoing story, and I plan to keep writing about it as I seek to help lead my own organization in the responsible adoption of AI. We are living through another period of rapid change, and because the digital world engages and affects all of us, everyone is involved. No one can truly claim to be a mere outside observer. We all have a chance to shape AI, we will all be shaped by it, and we are all responsible for how it is used.

Resources:

Human vs AI Test: Can We Tell the Difference Anymore?, Tidio

AI has many obstacles in its way, Data Economy

Tracking the latest in AI, ChatGPT, Modern Healthcare

How AI can have a tangible impact on healthcare, Becker’s Hospital Review

OSF develops AI model to predict death, Becker’s Health IT

3 ways a Louisiana system is integrating AI into clinical workflows, Becker’s Health IT

From finding the right candidates to keeping them, how hospitals are using AI to address workforce needs, Fierce Healthcare

How to Combat Healthcare Staffing Shortages Using AI Technology, Augmedix