
More than 40 million people each day turn to ChatGPT for health information – including clinicians, patients, and healthcare personnel.
Artificial intelligence (AI) chatbots relying on large language models (LLMs) aren't regulated as medical devices or validated for healthcare purposes.
These chatbots, which include ChatGPT, Claude, Copilot, Gemini, and Grok, produce expert-sounding responses to questions.
However, AI systems generate responses by predicting sequences of words based on patterns learned from training data. They are designed to sound confident and to always provide an answer that satisfies the user, even when that answer isn't reliable.
Chatbots have suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies, and even invented body parts.
"Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals. Realizing AI's promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI's limitations." – Marcus Schabacker, M.D., Ph.D., president and CEO of ECRI
Top 10 Health Technology Hazards for 2026:
Explore the March 2026 Issue