I think a lot about what distinguishes high-quality from low-quality mental health apps. We approach this issue in various ways at PsyberGuide. We evaluate credibility (does the app do what it claims it can do?), we evaluate user experience (how easy is it to use the app?), and we evaluate transparency with regard to data security and privacy (how much do the developers tell you about what they do with the data collected by the app?). All of this evaluating, however, takes a huge amount of time. Our team of app reviewers goes through training (and re-training) to ensure they know how to use our evaluation methods appropriately. We discuss our app ratings in consensus meetings to ensure we use the evaluation methods similarly. All reviews pass by my desk, as well as our project manager's desk, before they get posted on our website. I wish there were a simpler process. On our team we've implemented ways to automate and standardize some of these processes, but almost everything we do keeps a human in the loop.

This background helps frame why I was so excited to see the recent innovative work coming from the Division of Digital Psychiatry group at Beth Israel Deaconess Medical Center in Boston. Hannah Wisniewski and her colleagues completed an interesting review of 120 health apps for six conditions, including many of interest to the PsyberGuide community: depression, anxiety, schizophrenia, and addiction.[1] Her team assessed various aspects of the apps: their content, features, attributes, popularity, scientific backing, and their classification under the World Health Organization's health app classification framework. The authors also classified each app into one of three quality categories: (1) apps with "serious concerns regarding safety"; (2) apps that "appeared acceptable" but did not appear useful or report direct scientific support; and (3) apps that "may be useful or offer more features than other similar apps", meaning they raised no safety concerns and had scientific support or useful features.

The team explored what factors might be predictive of app ratings both by users and experts. Although they were not able to produce a model that would combine these various aspects in a way that would have simple clinical utility, they did find that the most consistent predictor across apps was how recently the app was updated. In fact, being updated within the last six months was the strongest predictor of quality. This empirically based criterion, being updated within the last six months, aligns exactly with one of the items on our credibility measure. There are several reasons why a more recent update speaks to better app quality. Apps that are updated more frequently are likely able to keep pace with current clinical thinking and guidelines. Developers who are dedicated to continuously addressing issues and updating their apps are more likely to be producing better products. And updates can address issues identified by users, allowing more people to contribute to the development process.

This paper highlights two important pieces of data indicative of app quality that you can get from the app store alone: how users are rating the app and when it was last updated. This information might help separate the apps that are worth trying from those you should steer clear of. Of course, you can check PsyberGuide for more detailed information, but for apps we haven't reviewed yet this could be a simple heuristic.
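For readers who like to see a rule of thumb written out, here is a minimal sketch of what that heuristic might look like in code. The six-month window mirrors the criterion from the paper; the 4.0-star rating cutoff, the function name, and its inputs are illustrative assumptions of mine, not anything the study or PsyberGuide prescribes.

```python
from datetime import date, timedelta
from typing import Optional

# Two app-store signals discussed above: average user rating and
# recency of the last update. The 6-month window follows the
# predictor reported by Wisniewski et al.; the rating threshold
# is an assumed cutoff for illustration only.
UPDATE_WINDOW = timedelta(days=183)  # roughly six months
MIN_RATING = 4.0                     # assumed cutoff for "well rated"

def worth_a_closer_look(avg_user_rating: float, last_updated: date,
                        today: Optional[date] = None) -> bool:
    """Return True if an app passes both quick app-store checks."""
    today = today or date.today()
    recently_updated = (today - last_updated) <= UPDATE_WINDOW
    well_rated = avg_user_rating >= MIN_RATING
    return recently_updated and well_rated

# Example: an app rated 4.3 stars, last updated two months ago
print(worth_a_closer_look(4.3, date.today() - timedelta(days=60)))  # True
```

A screen like this is only a first filter, of course; it says nothing about an app's content or privacy practices, which is exactly why fuller reviews still matter.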

1. Wisniewski, H., Liu, G., Henson, P., Vaidyam, A., Hajratalli, N. K., Onnela, J. P., & Torous, J. (2019). Understanding the quality, effectiveness and attributes of top-rated smartphone health apps. Evidence-Based Mental Health. doi: 10.1136/ebmental-2018-300069