More than 20 percent of Americans will have symptoms of depression or anxiety this year, but only one in five of them will get adequate treatment. Increasingly, people are turning to smartphone apps for help. There’s only one problem: none of the thousands of mental health apps available has so far been adequately tested to see whether it provides relief.
[Photo: Dr. Ying Kuen Cheung]
As part of an ongoing National Institute of Mental Health-funded study, Dr. Ying Kuen Cheung, professor of Biostatistics at the Mailman School of Public Health, is providing the expertise to test whether these apps work the way they’re intended. “There is huge potential for smartphones to help people with depression and anxiety,” says Dr. Cheung. “But so far we have no idea what works and what doesn’t. Just because an app uses a technique that’s effective in a doctor’s office doesn’t mean it will translate in the context of a mobile app.”
Available free for Android users, the apps target common causes of depression and anxiety like sleep problems, social isolation, lack of activity, and obsessive thinking. Worry Knot gives users techniques to identify and untangle their worries. Social Force helps people reconnect with friends and build a support system. All of the apps track user progress and offer encouragement along the way.
The application suite, called IntelliCare, uses a state-of-the-art technique from machine learning, a branch of artificial intelligence, to “learn” from the way people interact with the software. Anonymous user data trains the system to provide more personalized interventions, isolating the techniques that best help users meet their goals. The system can also recommend other apps in the suite that could meet their needs, much in the way Amazon uses your shopping and browsing history to recommend the perfect pair of sneakers.
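The article doesn’t describe how the suite’s recommender works, but the Amazon comparison suggests item-to-item co-usage as a minimal illustration. In the sketch below, the usage log is invented, and “Sleep App” is a hypothetical stand-in name; only Worry Knot and Social Force come from the article.

```python
from collections import Counter
from itertools import combinations

# Invented anonymized log: user -> set of apps opened. Worry Knot and
# Social Force are named in the article; "Sleep App" is a hypothetical name.
logs = {
    "user1": {"Worry Knot", "Social Force"},
    "user2": {"Worry Knot", "Sleep App"},
    "user3": {"Worry Knot", "Social Force", "Sleep App"},
    "user4": {"Worry Knot", "Social Force"},
}

# Count how often each pair of apps is used by the same person.
co_use = Counter()
for apps in logs.values():
    for pair in combinations(sorted(apps), 2):
        co_use[pair] += 1

def recommend(current_app, already_installed):
    """Suggest the app most often co-used with one a person already has."""
    scores = Counter()
    for (a, b), n in co_use.items():
        if a == current_app and b not in already_installed:
            scores[b] += n
        elif b == current_app and a not in already_installed:
            scores[a] += n
    return scores.most_common(1)[0][0] if scores else None
```

Here, a Worry Knot user who has no other apps would be pointed to Social Force, the app most often used alongside it in this toy log.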
This recommendation process is also central to how Dr. Cheung is able to make comparisons between the apps. He explains that the technique, called open-ended adaptive randomization, is more flexible than a traditional randomized controlled trial, which has a defined start and finish. “Adaptive design also allows us to evaluate all the apps simultaneously, even as people are continuously downloading, using, and discarding any number of them.”
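The article names the design but not its machinery. One common engine behind adaptive randomization is a Bernoulli bandit updated by Thompson sampling; the sketch below uses that as a labeled assumption, with invented app response rates, not the study’s actual algorithm.

```python
import random

random.seed(0)

# Hypothetical probabilities that a user of each app reports improvement.
# These numbers are invented purely for illustration.
true_rates = {"Worry Knot": 0.7, "Social Force": 0.4}

# Beta(1, 1) priors over each app's unknown response rate.
wins = {a: 1 for a in true_rates}
losses = {a: 1 for a in true_rates}

def assign():
    """Thompson sampling: draw a plausible rate per app, pick the best draw."""
    return max(true_rates, key=lambda a: random.betavariate(wins[a], losses[a]))

# Users arrive continuously -- there is no fixed trial start or finish.
for _ in range(2000):
    app = assign()
    improved = random.random() < true_rates[app]  # simulated outcome
    wins[app] += improved
    losses[app] += not improved
```

Because the allocation probabilities update after every outcome, later arrivals are steered toward whichever app is performing better, while every app keeps some chance of assignment.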
Dr. Cheung just completed a test run using data from 366 users, focusing on 90 days of usage, including every second of user interaction—a huge amount of data. To assess the merits of the apps, he explains, it’s crucial to first understand how they’re used. “Every app is used in a different way. One app might ask you to identify what is making you anxious and analyze it with you over the course of several minutes,” he says. “Another might make a quick suggestion that you put down your phone and take a walk outside.”
Dr. Cheung uses a statistical technique designed for large data sets to tease out five or six meaningful usage patterns. These patterns are key to what comes next: the clinical phase of the study when researchers will follow 200 people with anxiety or depression who have agreed to use the IntelliCare apps for up to eight weeks and undergo an initial clinical assessment as well as a series of telephone and online questionnaires about their mood. The research team will look for the usage patterns Cheung identified to see if they match with clinical improvements.
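The article doesn’t name the statistical technique, so as a labeled assumption, here is the general shape of one standard approach: clustering per-user usage features. The feature values and the basic k-means routine below are illustrative, not the study’s method.

```python
import random

random.seed(1)

# Invented per-user features: (sessions per week, avg session minutes).
# Two made-up behaviors: frequent brief check-ins vs. rare longer sessions.
users = [(20, 1.0), (18, 2.0), (22, 1.5),
         (3, 12.0), (2, 15.0), (4, 10.0)]

def kmeans(points, k, iters=20):
    """Basic k-means: alternate assigning points and recomputing centroids."""
    centers = random.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            groups[nearest].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans(users, 2)
```

Each cluster center then summarizes a candidate usage pattern of the kind the clinical phase can check against mood outcomes.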
Going forward, Dr. Cheung and colleagues anticipate that the IntelliCare system will continue to adapt, encouraging users to engage with the apps in ways that help alleviate their symptoms. According to Cheung, this aspect of implementation science is not unlike how commercial app makers refine their apps to make them more engaging and popular, only with clinical outcomes, not profit, as the ultimate goal. “Our job isn’t done once we prove one or more of the apps works in a clinical sense,” he says. “They can’t just sit on a virtual shelf. To be successful, they have to be something people use.”