In medicine, artificial intelligence (AI) has a long history of cautionary tales. AI programs designed to predict medical conditions or improve patient care have often fallen short, triggering false alarms and exacerbating health disparities. As a result, physicians have mostly used AI in supporting roles, such as taking notes, offering second opinions, and streamlining administrative tasks. Even so, AI is gaining traction and investment in the medical field.
The Food and Drug Administration (FDA), the agency responsible for approving AI applications for medical use, is at the forefront of this transformation. AI has the potential to discover new drugs, identify unexpected side effects, and relieve overwhelmed healthcare staff of repetitive tasks. Still, the FDA has faced criticism for the rigor of its vetting process and for a lack of transparency in its approvals of AI programs that help detect conditions such as tumors or blood clots.
President Biden’s recent executive order aims to address these issues, calling for regulations across agencies to manage the security and privacy risks of AI in healthcare. The order also seeks increased funding for AI research in medicine and the creation of a safety program to collect reports of AI-related harms and unsafe practices. The administration also plans to take up the topic with world leaders in the coming week.
One challenge in overseeing AI in healthcare is that no single U.S. agency governs the landscape comprehensively. Senator Chuck Schumer has initiated discussions with tech executives to explore how to nurture AI’s growth while identifying potential pitfalls. Companies like Google have already drawn congressional scrutiny for introducing AI programs such as Med-PaLM 2, a chatbot for healthcare workers whose deployment raises concerns about patient privacy and informed consent.
The FDA’s current approach to overseeing AI, particularly large language models, lags behind the field’s rapid advances. The agency has only just begun discussing how to review adaptive AI that keeps learning from thousands of diagnostic scans, while AI tools already approved in Europe scan for a broader range of conditions.
The FDA’s jurisdiction is limited to products approved for sale, so it has no authority over AI systems that large healthcare organizations and insurers build for internal use. This gap in oversight and transparency has made doctors hesitant to fully embrace AI; they want clarity on how a program was developed, how it was tested, and how effective it is.
Getting AI oversight right in medicine is a critical goal that involves multiple agencies. The concern is that unchecked AI running in the background of patient care could lead to unforeseen consequences, much as automated safety systems in aviation have caused harm when they override human judgment.