Artificial intelligence (AI) technologies are part of the new wave of Industry 4.0, in which organisations implement automated systems, build the internet of things, and use big data, smart systems and cyber-physical systems to expand their capabilities and improve productivity. The Australian Government’s Productivity Commission indicated AI may provide significant benefit, with estimates that annual GDP could be two-thirds higher, or even up to three and two-thirds higher, than it is currently.

The risks must also be studied. The US, the EU, the UK and Canada are introducing AI regulation, with support from some eminent AI developers. Australian AI experts, reacting to the Australian Government’s potential ban on ‘high-risk’ uses of AI, emphasised that safeguarding the use of AI should be balanced with realising its prospects, including in digital mental health.

AI refers to the capacity of machines or computer programs to think, learn, and complete tasks that typically require human intelligence. Algorithms and computer systems are used independently or in combination with humans. In mental health, AI is applied to health records, screening and diagnosis, and to understanding patients’ emotions in order to align interventions. Ubiquitous computing and AI also help with medical monitoring, communication and memory aids.

“Many users and practitioners are uncertain about which digital mental health information and resources are of good quality, usability and effectiveness.”


Human-artificial intelligence

Human-artificial intelligence (HAI) refers to humans and AI collaborating on a common task or goal towards more efficient, safer, sustainable and enjoyable work and lives. The aim of HAI is for each to draw on the other’s strengths. The Center for Humane Technology goes further by prioritising human values, such as empathy, compassion, and responsibility. Australia adopted a voluntary ethics framework for “responsible” AI in 2019. “Fairness-aware” AI has been used in digital mental health to promote diversity and inclusion, and “explainable” AI to build transparency and trust between users and practitioners.
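
To illustrate what “explainable” AI can look like in a screening context, here is a minimal sketch (not from the original article): an interpretable logistic regression model whose per-feature contributions can be shown to a practitioner. The feature names, data and labels are invented for illustration.

```python
# Minimal sketch of "explainable" AI for mental health screening.
# Hypothetical features and synthetic data; a linear model is used
# because its per-feature contributions are directly inspectable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical questionnaire scores (0-3 per item).
features = ["low_mood", "sleep_problems", "fatigue", "social_withdrawal"]
X = rng.integers(0, 4, size=(200, len(features))).astype(float)
# Synthetic labels: higher total score -> more likely flagged for follow-up.
y = (X.sum(axis=1) + rng.normal(0, 1.5, size=200) > 6).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one prediction: contribution of each feature to the log-odds.
x = X[0]
contributions = model.coef_[0] * x
print(f"P(flag for follow-up) = {model.predict_proba([x])[0, 1]:.2f}")
for name, value, contrib in zip(features, x, contributions):
    print(f"  {name}={value:.0f} contributes {contrib:+.2f} to the log-odds")
```

For more complex models, explainability methods such as SHAP or LIME extend this idea of attributing a prediction to its inputs.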

Digital Mental Health

Many people are increasingly turning to technology for their mental health concerns. To keep pace, the National Safety and Quality Digital Mental Health Standards were introduced in 2020, aiming to improve the safety and quality of digital mental health service provision. However, digital mental health still faces challenges, including a lack of integrated, scalable, effective solutions. Many users and practitioners are uncertain about which digital mental health information and resources are of good quality, usability and effectiveness. A scoping review evaluated digital mental health platforms and interventions to identify the types of available evidence. The mixed early evidence suggested feasibility, partial usability, engagement, and acceptability, and included one study that effectively treated anxiety and depression in adults.

Innovation requires a better understanding of the complex relationship between mental ill-health, its comorbidities and the biopsychosocial factors that influence suicidality. For example, tailoring digital solutions for men requires identifying the contexts in which they can safely share their experiences, and linking them to localised interventions. Because outreach, adoption and sustained engagement remain difficult, divergent thinking is vital.

Previous integrative reviews found promising developments in digital mental health. However, there are teething problems with its implementation, such as difficulties in overcoming the human factors in human-computer interaction (HCI). Research on the impacts of YouTube on loneliness and mental health showed how the integration of human factors is crucial for the development and implementation of effective digital mental health tools.

Human-AI and digital mental health

Qualitative studies can play an important role in increasing the accessibility, engagement and effectiveness of digital mental health through:

  1. identifying user needs,
  2. understanding barriers to use,
  3. evaluating user experience, and
  4. assessing the impact of platforms and interventions.

AI may help advance digital mental health through better and faster qualitative data analysis, making sense of multidimensional online feedback. For example, a Viable blog post discusses the use of GPT-4 in natural language processing for sentiment analysis.
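
As a hedged sketch of how an LLM might be used for this kind of sentiment analysis (the prompt, model name and feedback examples below are illustrative assumptions, not Viable’s actual pipeline), using the OpenAI Python SDK:

```python
# Illustrative sketch: classifying the sentiment of user feedback on a
# digital mental health app with an LLM. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

feedback = [
    "The mood tracker helped me notice my sleep patterns.",
    "I couldn't find the crisis support page when I needed it.",
]

for comment in feedback:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the sentiment of this digital mental health "
                    "app feedback as positive, negative, or mixed, and name "
                    "the main theme in a few words."
                ),
            },
            {"role": "user", "content": comment},
        ],
    )
    print(comment, "->", response.choices[0].message.content)
```

Outputs like these would still need human review before informing service design, consistent with the human-AI collaboration described above.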

Author

Dr Luke Balcombe is a digital mental health (DMH) expert and a researcher at the Australian Institute for Suicide Research and Prevention (AISRAP). He researches and consults in the applied psychology and informatics disciplines. Luke leads DMH current state and future trends projects, analysing themes and providing insights on the challenges and opportunities of designing and using technology-enabled solutions in mental health care. He presents human-centred insights and discusses findings on the impact of YouTube on loneliness and mental health, the use and regulation of AI, and safety and quality standards. His work includes systematic reviews, qualitative research, expert comments, blogs and articles.

UN Sustainable Development Goals 3: Good Health and Well-being