Handing over your mental health to artificial intelligence (AI) tools has become a new reality. A survey conducted by NordVPN showed that 74% of Brits use technology to support their mental health. While users once favored habit trackers, yoga, or mindfulness apps, they now increasingly turn to AI therapists and AI companions.
Cybersecurity experts say that as health and cybersecurity professionals increasingly question the effectiveness and security of new AI-based mental health and relationship tools, the algorithms behind these sensitive apps should become more transparent.
“Recently, the market has been flooded with AI-based tools to support users’ mental health. Some AI therapists are approved by specialists, but most are AI girlfriends and boyfriends that are nothing more than marketing bots. They are not made to solve mental health issues – instead, they create dependency and lure out users’ personal information,” says Adrianus Warmenhoven, a cybersecurity advisor at NordVPN.
According to the World Health Organization, 1 in 8 people globally need support for their mental health. The coronavirus pandemic dramatically increased therapists’ workloads, and they already find it challenging to cope with current demand. As a result, AI tools offer a quick and easy way to access a “human touch.”
Mental health professionals note that clinically-validated AI technology can help reduce symptoms of anxiety, depression, or certain levels of burnout. Nevertheless, they are far more cautious about using AI to treat people with severe mental health conditions.
This is because generative AI relies on existing data sets and can easily miss subtle details about a patient’s personality or condition. Body language, tone of voice, facial expressions, and other nonverbal cues are also crucial in identifying whether a patient is in crisis.
Additionally, experts warn that in the vast majority of cases, AI chatbots do not connect vulnerable users to mental health professionals even when those users disclose suicidal thoughts. Worse, there are examples of bots advising people to harm themselves or others.
“The problem is that, unlike licensed professionals, developers of mental health applications have not been held accountable for manipulative behavior or for collecting sensitive private information. In this case, it’s not only about email addresses or phone numbers. This time, we’re talking about users’ emotions, feelings, sexual health, and even suicidal thoughts,” says Warmenhoven.
How to protect your privacy while using chatbots
For those who use AI therapists or AI companions to cope with loneliness or mental health issues, Adrianus Warmenhoven, cybersecurity advisor at NordVPN, advises the following:
Use clinically-approved applications. Although chatbots are designed to imitate human interaction, most AI chatbots have nothing to do with mental health support. So, if you need human connection or psychological advice, get in touch with a professional.
Be cautious about the personal information you share with AI tools. Some AI applications collect data on behavior, sleeping patterns, physical activity, heart rate and rhythm variations, and even voice recordings from telephone calls used to assess the user’s mood and cognitive state. These applications put your private information at serious risk of a leak, breach, or hack. In addition, romantic AI chatbots contain a huge number of trackers that collect and share your personal information with third parties, often for advertising purposes.
Note that not all sensitive information qualifies as personal, and a personal chat with an AI girlfriend or boyfriend is not always private. The definition of “personal information” can be vague: while your email address or phone number is clearly personal information, the text of a chat may not be, and it could be shared with third parties unless it is end-to-end encrypted.
ENDS
FAQs
1. Are AI therapists safe and effective for mental health support?
AI therapists can offer basic support for certain mental health issues, such as anxiety, depression, or stress. However, most experts caution against using AI for severe mental health problems. While some clinically-validated AI tools can help reduce symptoms, many AI chatbots, especially AI “relationships,” lack the necessary professional oversight and may not prioritize user safety. It’s essential to choose clinically-approved tools and consult with a professional if you’re dealing with complex or serious mental health concerns.
2. What personal information do AI mental health and relationship apps collect?
AI mental health and relationship apps often collect a wide range of personal data, including behavior patterns, physical activity, sleeping habits, and even voice recordings to assess mood. Some applications also track heart rate and cognitive state. Many AI chatbots, especially those designed for romantic purposes, collect and share personal information with third parties for advertising purposes. This can include text conversations and emotional data that may not be well-protected or considered private unless encryption is used.
3. How can I protect my privacy while using AI mental health apps?
To protect your privacy, use clinically-approved and reputable AI mental health applications that prioritize user data security. Be cautious about the personal information you share, and avoid apps with vague or weak privacy policies. Ensure the app uses end-to-end encryption to protect chat data. Regularly review app permissions and consider limiting the amount of sensitive information you provide. When in doubt, consult a mental health professional for advice rather than relying solely on AI tools.
