6030: Week 2: Artificial Intelligence/Machine Learning

Prompt: The major weakness of AI/ML systems is that they can’t explain “why”. That is, an AI/ML system can perhaps prescribe a medication for a cancer treatment, but the system can’t provide an analysis of why it made that decision. Humans typically wouldn’t trust a physician who couldn’t explain the basis for a prescription decision. Why can’t AI/ML explain itself? In your opinion, is that a real problem?

The justification for how or why an AI responds is determined by its programming. If the creator wanted the AI to report how it came to a decision, then the code just needs to be written so that the support for the findings is reported alongside the answer. I disagree with the prompt in that AI/ML systems can, in fact, explain why, but humans would often rather not bother with the reasoning behind the answer.
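
As a minimal sketch of what "reporting the support for a finding" could look like, the snippet below trains an interpretable decision tree with scikit-learn and prints both its decision rules and the features it relied on most. The dataset, model, and depth limit are stand-ins chosen purely for illustration, not a claim about how any real medical AI works; it is worth noting that deep neural networks are much harder to explain this way, which is part of what the prompt is getting at.

```python
# A minimal sketch, assuming Python with scikit-learn installed.
# The bundled breast-cancer dataset is used purely as stand-in data.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(data.data, data.target)

# Print the learned decision rules in human-readable form
print(export_text(clf, feature_names=list(data.feature_names)))

# Print the five features the model leaned on most
ranked = sorted(zip(data.feature_names, clf.feature_importances_),
                key=lambda pair: -pair[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

For an interpretable model family like this one, the "why" is available on request; the question is whether anyone asks for it.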

When I attended the annual AMEE (Association for Medical Education in Europe) conference over the summer in Lyon, France, AI and ML were among the big topics. One session I still contemplate was titled “Artificial Intelligence: What medical educators should be doing now” (AMEE, 2022). This panel discussion brought together experts from the American Medical Association, the Association of American Medical Colleges, and several university leaders whose research focuses on artificial intelligence and machine learning. My biggest takeaway from the session was the discussion about the accuracy of the content filling the AI database.

In medical education, new knowledge is added daily, and it is not uncommon for old treatments to be reversed or proven ineffective. In the early 1900s, radium was a medical treatment thought to reduce tumors: patients would soak in radiation hot springs and breathe in radium-rich gas (Waxman, 2017). While this treatment seems astonishing today, Waxman (2017) explains that this archaic remedy would eventually lead doctors to discover more appropriate radiation doses for treating cancer patients. Now, what if a future AI medical database included this original radium research alongside the current research? Which treatment would the AI recommend if both carried equal weight in its model? How would the AI ascertain which remedy is current and correct?
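
Purely as a thought experiment, the sketch below shows one way a system could bias its recommendations toward current evidence: weight each study by recency so that superseded findings fade. Every name here (Study, recency_weight, the half-life value, the outcome scores) is hypothetical and invented for illustration; no real medical AI is being described.

```python
# A hypothetical sketch: down-weighting older evidence with exponential decay.
from dataclasses import dataclass
from datetime import date

@dataclass
class Study:
    treatment: str
    outcome_score: float  # higher = better reported outcome (illustrative scale)
    published: date

def recency_weight(published: date, today: date,
                   half_life_years: float = 10.0) -> float:
    """A study loses half its weight every `half_life_years` (assumed decay rate)."""
    age_years = (today - published).days / 365.25
    return 0.5 ** (age_years / half_life_years)

def ranked_treatments(studies: list[Study], today: date) -> list[tuple[str, float]]:
    scores: dict[str, float] = {}
    for s in studies:
        w = recency_weight(s.published, today)
        scores[s.treatment] = scores.get(s.treatment, 0.0) + w * s.outcome_score
    # Recent, well-supported treatments accumulate weight; century-old ones fade
    return sorted(scores.items(), key=lambda item: -item[1])

studies = [
    Study("radium exposure", 0.9, date(1915, 1, 1)),     # once believed effective
    Study("dosed radiotherapy", 0.8, date(2020, 1, 1)),  # current practice
]
print(ranked_treatments(studies, date(2022, 10, 1)))
```

Under these made-up numbers the 1915 study contributes almost nothing, so the newer treatment wins even though its raw score is higher in the old study; how to set such weights honestly is exactly the curation problem the panel raised.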

The AI is only as good as the content being entered into the system. The message from the AMEE panel was that doctors should use the AI's database content, but also take caution and know enough about medicine to question the AI-suggested treatments (AMEE, 2022). This got me thinking about AI in general, because the same message applies to any profession. We cannot blindly trust the AI, because the AI was created by humans, and humans can be dishonest and make mistakes. We should use AI systems, but use them with caution, because the AI is only as good as its creator and the content that fills its database.

Resources

AMEE. (2022). AMEE conferences. AMEE Lyon 2022. Retrieved October 1, 2022, from https://amee.org/conferences/amee-2022

Waxman, O. B. (2017, October 17). Real historical medical treatments that are terrible for you. Time. Retrieved October 1, 2022, from https://time.com/4982099/quackery-medicine-history/