When you think about social work, what is the first thing that comes to mind? It’s probably child protective services or being a therapist. The reality is that social workers do a wide range of work, from helping people get discharged from the hospital to assisting in writing the laws and policies that govern that process. Technology and its relationship to these layers of work can be difficult to manage. One thing is for certain: technology is coming at social work practice. Whether in direct practice or community organizing, we need to be aware of balancing the risks and benefits.

What Is a Lack of Technology Costing You?

As social workers, we sometimes run away from technology because it may be cost-prohibitive, and of course, we worry about the potential risks. Lately, though, the profession has been faced with questions about NOT using technology. By not adopting technology, what is it costing your practice? By not using tools to treat more effectively, work more efficiently, organize, and advocate, what is it costing those you serve?

As technology becomes more ubiquitous, it comes with both risks and benefits. The challenge is balancing the two for mental health and community practice. Issues of privacy, security, and technological ethics make this complex, but if mental health professionals and others are not learning and sharing their thoughts, many of these decisions will be made for us and those we serve. How can you get more active in understanding these issues? There are a lot of apps and services claiming to assist social work practice, and there are some important considerations before you dive in.

As I examine technology that may be used for social work practice, one of the first places I look is the “Privacy” and/or “Terms of Service” section of that platform’s website. The first thing to check is: does the app or platform actually have a privacy policy? Studies have shown that this isn’t always the case. When policies do exist, they can be dense, but they are important to read. For example, prior to recommending an app to a client, you want to understand the steps taken to keep Protected Health Information (PHI)… well… protected. How is the company keeping your identity safe? What happens once your data is collected? How is it stored, and for how long? What rights do you have to revoke or delete data? All of these privacy practices should be outlined, along with a basic summary of how the company will keep your information secure.

Expanding on privacy, the terms of service should describe how the data will be used once collected. Will the company use it to improve the app’s services? Will it sell the data to a third party? Who else will the data be shared with?

One place to look for guidance on this issue is PsyberGuide’s Transparency score. This score represents how clearly an app’s privacy policy details the data collection and storage procedures of a mobile health app. Apps are scored as having an Acceptable, Questionable, or Unacceptable level of transparency.

There are lots of resources (like PsyberGuide) out there to help you navigate the responsible choice and use of mental health apps with your clients. We shouldn’t let the risks of adopting technologies stop us from embracing them. We need to be asking the right questions to minimize risk and maximize the potential benefit.

If you are a social worker interested in exploring these conversations, there is a vibrant community on social media using the hashtag #SWtech (for a history of the hashtag, check out this post via some of its founding members). Engage in conversation with other social workers to learn how to be a technology-savvy practitioner. We should be tackling these questions head-on. As a social worker, what questions do you have about how technology is impacting your practice?