Artificial Intelligence Applications for Mental Health Stoke Fears

There is now an alarming and growing push to direct the explosion of artificial intelligence applications to the dangerous task of screening for mental health issues.

Mainstream media talking heads are now endorsing the practice, citing extreme cases such as suicidal adolescents as a starting point. But there is no doubt that the goal is to implement AI in the detection and treatment of mental maladies.

The American Psychiatric Association estimates that more than 10,000 mental health apps are available in app stores.

Virtually none have regulatory approval.

Close behind them is what many mental health care providers and researchers fear: a slew of AI-driven applications that seek to replace human interaction in diagnosis and treatment.

The American public is far from buying into this trend, according to new findings from the Pew Research Center, even as patients already see AI technologies taking over tasks such as screening for skin cancer and monitoring vital signs.

Now AI-enabled chatbots such as Wysa, along with other FDA-approved applications, are touted by supporters as making up for the shortage of mental health and substance abuse therapists. These tools screen patient conversations and texts to make recommendations.

AI is also being used to assess opioid addiction risk and to screen for mental health issues such as depression. Experts increasingly worry that such applications are now in the territory of making actual clinical determinations, and there are calls for the Food and Drug Administration to evaluate the safety concerns.

One example came from Koko, a mental health nonprofit recently exposed as using ChatGPT as a mental health counselor. About 4,000 people did not know the responses they received were generated by AI, prompting many ethicists to raise concerns about the practice.

Still others are using the popular ChatGPT to answer questions normally reserved for a personal therapist. The platform itself cautions that this is not an intended use.

In 2020, the FDA launched its Digital Health Center of Excellence, designed to monitor and evaluate AI’s implementation in health care. Critics charge, however, that the agency has been too slow in adapting to changes in the digital landscape “to provide reasonable assurance of safety and effectiveness.”