The Mental Health App Guide Designed With You In Mind


One Mind PsyberGuide is a non-profit project that aims to help people use technology to live a mentally healthier life.

At One Mind PsyberGuide, we want to lead the field of digital mental health forward. Through rigorous evaluation of technology and promotion of scientific best practices, we seek to guide the science, practice, development, and use of digital mental health tools in various settings. We hope to improve access to mental health resources for those who lack access to traditional support, and to help people explore how technology can be used to improve wellbeing.

One Mind PsyberGuide is funded by One Mind, a leading non-profit organization in brain health research. One Mind PsyberGuide was established in 2013 in response to a growing need for guidelines to help people navigate the mental health app marketplace. In 2017, One Mind welcomed Dr. Stephen Schueller as One Mind PsyberGuide Executive Director and established a partnership with Northwestern University. One Mind PsyberGuide now operates out of the University of California, Irvine and Northwestern University, where our team consists of experts in mental health, technology, and technology-delivered care. One Mind PsyberGuide is not an industry website; its goal is to provide accurate and reliable information free of preference, bias, or endorsement.

How One Mind PsyberGuide Works

The One Mind PsyberGuide team reviews apps based on the app’s Credibility, User Experience, and Transparency of Privacy Practices.

Credibility

We look at the research supporting the technology and the credibility of the development process.

User Experience

We explore how fun, functional, easy-to-use, engaging, and interesting the technology is.

Transparency

We review privacy policies to see whether they address key pieces of information about what happens to the data users enter.

App Selection Process

We discover new apps in a number of different ways, including:

  • Research papers and published reviews of apps
  • Searches on Apple and Google Play app stores
  • Trending apps on social media and popular news
  • App developers
  • Our partner organizations and networks

We want to review apps that people are actually using, so when we identify new apps we prioritize those with the most user reviews in the Apple and Google Play app stores. This gives us an idea of the apps’ popularity. If you know of an app you would like us to review, you can contact us at info@psyberguide.org.

App Scoring

Credibility

The Credibility Score combines information about research, development, purpose, and popularity. This measure aims to give users an idea of how credible a digital tool is, i.e., how likely it is to work. Apps are scored based on:

  • Direct research evidence specifically for the tool itself
  • Indirect research evidence or evidence-based principles
  • The rigor of the development process (including ongoing maintenance)
  • Popularity and clarity of purpose

Consumer Ratings
Possible Points: 2
Note: Ratings may come from the app store or other sources; the total number counted must come from a single source.
  • 2 points: Ratings exist from more than 1,500 users with an average rating of 3.5 or higher
  • 1 point: Ratings exist from 31-1,500 users with an average rating of 3.5 or higher
  • 0 points: 30 or fewer user ratings OR an average rating below 3.5

Proposed Goal
Possible Points: 2
  • 2 points: The product describes at least one mental health goal that is specific, measurable, and achievable (e.g., reduce stress, reduce symptoms of PTSD)
  • 1 point: The product describes non-specific or hard-to-measure mental health goals (e.g., improve your life, improve your wellbeing)
  • 0 points: No clear goals

Evidence-Based Content
Possible Points: 1
  • 1 point: The app uses evidence-based practices to achieve its goals
  • 0 points: The app does not use evidence-based practices to achieve its goals (or describes no goals)

Research Base
Possible Points: 3
  • 3 points: Strong research support for the product (at least two between-group design experiments that show efficacy or effectiveness)
  • 2 points: Some research support for the product (at least one experiment that shows efficacy or effectiveness)
  • 1 point: Other research (e.g., single-case designs, quasi-experimental methods demonstrating efficacy, or preliminary analyses)
  • 0 points: No research

Software Updates
Possible Points: 2
  • 2 points: The application has been revised within the last 6 months
  • 1 point: The application has been revised within the last 12 months
  • 0 points: The application has not been revised, or was revised more than 12 months ago

Clinical Input in Development
Possible Points: 1
  • 1 point: A clinical leader with mental health expertise was involved in development
  • 0 points: No clinical leader with mental health expertise was involved in development

Research on Development Process
Possible Points: 1
  • 1 point: Pilot, feasibility, and acceptability data, OR evidence of stakeholder engagement in development
  • 0 points: No pilot, feasibility, or acceptability data, AND no evidence of stakeholder engagement

Efficacy of Other Products
Possible Points: 1
  • 1 point: The developer/development team has developed other technology-delivered mental health interventions that demonstrate efficacy
  • 0 points: The team has not developed other technology-delivered mental health interventions demonstrating efficacy

Research Independence & Review
Possible Points: 2
  • 2 points: At least one research paper funded by a government agency (e.g., NIH) or a non-profit organization, OR two articles published in peer-reviewed journals
  • 1 point: All research funded primarily by for-profit organizations or combined funding sources, OR one article published in a peer-reviewed journal
  • 0 points: No information about the source of research funding, AND no published, peer-reviewed papers
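
To make the arithmetic concrete, here is a minimal sketch of how the subscores above could be tallied for a hypothetical app. The per-item maximums come from the rubric; the item names, the example scores, and the idea of reporting a simple raw total out of 15 are illustrative assumptions, not an official One Mind PsyberGuide calculation.

```python
# Illustrative only: a hypothetical tally against the rubric above.
CREDIBILITY_MAX_POINTS = {
    "consumer_ratings": 2,
    "proposed_goal": 2,
    "evidence_based_content": 1,
    "research_base": 3,
    "software_updates": 2,
    "clinical_input": 1,
    "research_on_development": 1,
    "efficacy_of_other_products": 1,
    "research_independence_review": 2,
}  # 15 possible points in total


def credibility_total(subscores: dict) -> int:
    """Sum rubric subscores, checking each stays within its allowed range."""
    total = 0
    for item, points in subscores.items():
        maximum = CREDIBILITY_MAX_POINTS[item]
        if not 0 <= points <= maximum:
            raise ValueError(f"{item}: {points} is outside 0-{maximum}")
        total += points
    return total


# Hypothetical app: well rated, one supporting trial, recently updated.
example = {
    "consumer_ratings": 2,
    "proposed_goal": 2,
    "evidence_based_content": 1,
    "research_base": 2,
    "software_updates": 2,
    "clinical_input": 1,
    "research_on_development": 1,
    "efficacy_of_other_products": 0,
    "research_independence_review": 1,
}
print(credibility_total(example), "of", sum(CREDIBILITY_MAX_POINTS.values()))  # 12 of 15
```
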
User Experience

The User Experience rating is an app quality score. “User Experience”, sometimes referred to as just UX, is the overall experience of using an app or program, in terms of how easy and engaging it is to use. The Mobile App Rating Scale (MARS) is used to assess the quality of the user experience of apps. MARS was developed by a team of researchers at Queensland University of Technology (QUT), with expertise in the development of digital health tools.

There are three main MARS factors:

  1. The MARS mean is the mean of four objective subscales (a brief worked example appears below):
    • Engagement: how fun, interesting and customizable the app is, and how well it engages the people it’s intended for
    • Functionality: how well the app features work, how easy it is to navigate through the app. Is it self-explanatory, intuitive, and easy to learn?
    • Aesthetics: the overall visual design – how appealing are the graphics, colors and layout?
    • Information: is the content of the app accurate, well-written and credible?
  2. Subjective Quality
  3. Perceived Impact

The Subjective Quality and Perceived Impact scores are based on the raters’ own impression of the eTool, including its usability and perceived effectiveness.
The MARS can be used as an adjunct to qualitative eTool descriptions, to give eTool users an overview of their quality rating. The scale can also help with the ranking of eTools based on their quality. The MARS scale is being used worldwide by eTool evaluation and development projects.
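
As an illustration of the MARS mean described above, here is a minimal sketch with hypothetical subscale ratings (MARS items are rated on a 5-point scale); the values are made up and do not come from any real review.

```python
# Illustrative only: hypothetical 1-5 subscale ratings for a fictional app.
subscales = {
    "engagement": 3.8,
    "functionality": 4.2,
    "aesthetics": 4.0,
    "information": 3.6,
}

# The MARS mean is the average of the four objective subscales.
mars_mean = sum(subscales.values()) / len(subscales)
print(round(mars_mean, 2))  # 3.9
```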

Transparency

Transparency scores relate to the information an app provides about its data storage and collection policies and how readily available this information is to users. It’s important to note that for this metric, we evaluate whether an app’s privacy policy addresses certain key pieces of information regarding data storage, encryption, deletion, etc. We do not audit the app’s practices to ensure that it actually does what its policy says. We believe that developers should be as transparent as possible with privacy information so that users can be fully informed of how their data is used and stored.

  • Acceptable: A product that has been scored as acceptable has an acceptable level of data transparency; the privacy policy of the product provides sufficient and easily accessible information on the policies related to data collection, storage, and exchange. The information provided conforms to standards for the collection, storage, and exchange of health information.
  • Questionable: A product that has been scored as questionable has a privacy policy that is unclear or lacking specific details of policies surrounding data collection, storage, and exchange, or is questionable in its adherence to standards on the collection, storage, and exchange of health information.
  • Unacceptable: A product that has been scored as unacceptable either a) does not have a privacy policy, b) has a privacy policy that excludes important information about data privacy, collection, storage, or exchange, or c) has a privacy policy that outlines practices for data privacy, collection, storage, or exchange that do not conform to standards for health information.
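
For readers who prefer the logic spelled out step by step, here is a minimal sketch of how the three ratings above relate to a reviewer's judgments of a privacy policy. The inputs are human judgments, not automated checks, and the function is an illustration of the rubric rather than an official scoring tool.

```python
# Illustrative only: maps reviewer judgments of a privacy policy to the
# transparency ratings described above. One Mind PsyberGuide does not audit
# whether an app actually follows its stated policy.
from typing import Optional


def transparency_rating(has_policy: bool,
                        covers_key_information: bool,
                        clear_and_specific: bool,
                        conforms_to_standards: Optional[bool]) -> str:
    """conforms_to_standards: True = conforms, False = does not, None = unclear/questionable."""
    if not has_policy or not covers_key_information or conforms_to_standards is False:
        return "Unacceptable"
    if not clear_and_specific or conforms_to_standards is None:
        return "Questionable"
    return "Acceptable"


# Example: a policy exists and covers data collection, storage, and exchange,
# but its adherence to health-information standards is unclear.
print(transparency_rating(True, True, True, None))  # Questionable
```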