Artificial Intelligence and Emotional Support

I am about to embark on a 10-month automation fellowship. I will bring together my background in Philosophy and my technical ability as a developer to explore questions concerning artificial intelligence and how we develop emotional connections with technology.

This blog post is a summary of the areas I would like to explore further during my research.

Automation could play a pivotal role in addressing the substantial challenges facing the mental health sector. Therapy chatbots are an accessible form of emotional support in terms of cost and time, and they can reduce the stigma of seeking help.

My research question is, broadly, “What is required for a conversation to provide emotional support?” This is the central question underpinning how chatbot technologies can be successfully designed and implemented for this purpose. Rather than focusing on how we can build AI to mimic human interaction, I would like to analyse what elements are needed to provide emotional support and work up from there. Studying our emotional reactions when interacting with technology forces us to re-evaluate the fundamental building blocks of effective communication.

The second part of my research will focus on how communicating with technology is perceived differently from communicating with another human.

Imagine someone calls the Samaritans (a helpline providing emotional support) and talks for 30 minutes about how their depression is making it difficult for them to cope from day to day. The Samaritan on the phone does not give advice or disclose any personal information; they just listen and ask reflective questions, which helps the caller explore their own feelings. The caller ends the call feeling that this was a positive conversation, and they feel emotionally supported.

Now imagine that, in an alternate universe, the caller rings a similar helpline and has exactly the same conversation, but this time they are communicating with a very sophisticated AI. It, too, listens reflectively and asks the caller the same questions, helping them explore their emotions.
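For readers who wonder what “reflective listening” might even look like in code, here is a deliberately crude, hypothetical sketch in Python, in the spirit of early pattern-matching chatbots such as ELIZA. It is nothing like the sophisticated AI imagined above, and not how any real helpline system works; the names and templates are my own invention, purely for illustration of the basic mechanic of mirroring a statement back as a question.

```python
import random
import re

# A deliberately crude, hypothetical sketch of rule-based "reflective
# listening", in the spirit of early chatbots such as ELIZA. It is not
# how any real helpline or therapy chatbot works.

# Swap first-person words so a caller's statement can be mirrored back.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "i'm": "you're", "i've": "you've", "i'd": "you'd",
}

REFLECTIVE_TEMPLATES = [
    "It sounds like {reflection}. How long have you felt this way?",
    "You said {reflection}. Can you tell me more about that?",
    "When {reflection}, what is that like for you?",
]


def reflect(statement: str) -> str:
    """Mirror the caller's statement back with pronouns swapped."""
    words = re.findall(r"[\w']+", statement.lower())
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)


def respond(statement: str) -> str:
    """Turn a statement into a non-directive, reflective question."""
    template = random.choice(REFLECTIVE_TEMPLATES)
    return template.format(reflection=reflect(statement))


if __name__ == "__main__":
    print(respond("my depression is making it difficult for me to cope"))
    # e.g. "You said your depression is making it difficult for you to
    #       cope. Can you tell me more about that?"
```

Even this crude kind of mirroring famously led some of ELIZA’s users to open up to it, which is part of what makes the question below interesting.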

Would the caller feel more emotionally supported talking to a human (in the first instance) than talking to the AI (in the second instance), even though the conversation was exactly the same?

It is possible that the caller might feel more supported talking to a human, believing that the person understands and empathises with them. Then again, the caller may feel more comfortable talking to an AI, perceiving a lack of judgment, and may therefore be willing to disclose more about their emotional state. This is illustrated by Ellie, a virtual therapist designed to help veterans talk about their PTSD: studies have shown that veterans are more willing to talk openly about the depth of their emotions to Ellie than to a human therapist. The same could apply to other groups who find it hard to confide in others about their emotional state.

Both human-to-human and human-to-technology interaction are likely to have their own advantages and disadvantages when it comes to providing emotional support.

The importance of emotional intelligence

A fascinating paper by C. Burr and G. Keeling explores which directions ‘we should pursue and avoid when designing artificial agents that learn and think (like humans) about morality’ (Burr & Keeling, 2018).

It made me question whether a capacity for moral reasoning is important when providing emotional support. Although it is generally understood that the role of a therapist is to be non-judgemental, is it also important that they understand the injustice in a client’s story?

How important is it, when providing emotional support, that we understand moral concepts such as ‘fairness, duty, blame, and responsibility’ (in the context of societal norms)? And how does this relate to empathy?

Some people may argue that there can be no alternative to human-to-human care, stressing the importance of empathy in therapeutic relationships. I think there is an important point here: humans and computers are not the same, and a constant drive to compare the two will result in disappointment. However, each may have some advantages over the other. Throughout the process of developing AI we need to keep ethical implications at the forefront of research, and there are many ethical questions to explore here, from what counts as appropriate emotional support to who owns our data.

I don’t think anyone is advocating a future in which all social care and emotional support is undertaken by robotic workers; the question is rather how AI can augment service provision. Analysing the situations where human-computer interaction is effective could increase the availability of human-to-human interaction for those who would gain most from it.

I’m looking forward to exploring these questions further (and many more I’m sure!). If anyone is doing similar research or would like to talk about AI/therapy/chatbots/anything at all, get in touch and I’d be happy to chat!

– Ellie Foreman (Automation Fellow)