March 30, 2026
There are more AI health tools than ever—but how well do they work?
Specialized chatbots might make a difference for people with limited healthcare access. Without more testing, we don't know whether they'll help or harm.

TL;DR
- Microsoft, Amazon, and OpenAI have launched AI health tools that allow users to connect medical records and ask health questions.
- Demand for these tools is high because traditional healthcare systems are often difficult to access.
- Researchers and experts are concerned that these AI health tools are released to the public without independent, rigorous testing.
- Potential risks include misdiagnosis, recommending too much or too little care, and users misinterpreting AI advice.
- Companies conduct internal evaluations, such as OpenAI's HealthBench, but these have limitations and lack the impartiality of third-party review.
- Independent studies, such as one of Google's AMIE chatbot, show promise, but Google is not releasing AMIE publicly due to safety and equity concerns.
- Some AI applications, like suggesting exercise plans, are low-risk; others, like triage, diagnosis, and treatment planning, carry significant risks.
- Despite disclaimers, users are likely to use these tools for diagnosis and management, highlighting the need for robust safety measures.
- Experts broadly agree that AI health chatbots could be beneficial, especially for people with limited healthcare access, but their performance and safety are not yet fully established.