Voice assistants can’t “solve” loneliness

According to researchers at the Universitat Oberta de Catalunya, “AI voice assistants offer a promising solution to [the challenge of loneliness]”.

Scepticism is warranted here.

Firstly, this is a textbook example of the fallacy of techno-solutionism: it oversimplifies a very complex issue and assumes as a premise that technology alone can solve a systemic problem. The whole piece is dismally techno-solutionist.

Two other statements featured in the article caught my attention:

[chatbots] contribute significantly to improving [older people’s] mental health by mitigating feelings of loneliness.

and

Some of the participants in the studies described voice assistants as “a friend or a companion”.

This might sound like a genuine benefit, but it can also be explained, and dismissed, by the notion of adaptive preferences: the unconscious adjustment of one’s desires to the possibilities that are actually available, usually under external constraints. In other words, people come to value what is attainable over what they might otherwise want.

As Sen and Nussbaum argue, well-being consists in being able to achieve the functionings one has reason to value (a true capability). An adaptive preference (experiencing a chatbot as a friend) is suboptimal here because it reflects an adjustment to the deprivation of a key good: human connection. The older adult suffers from loneliness because they are alone, but would prefer to be with others if they had the opportunity. If a person values being with others and feels lonely because they are alone, a chatbot would, at best, simply mask the unhappiness and the loneliness.

From a philosophical perspective, this project should make us question whether simulating positive experiences, rather than engaging with reality, is truly meaningful and fulfilling. The philosopher Robert Nozick argued it isn’t, and many agree. His experience machine thought experiment imagines a device capable of providing its users with perfectly pleasurable, simulated experiences. Would you plug in and choose a life of simulated happiness over imperfect but authentic real-life experiences? (If not, ponder why.)

The upshot is that loneliness arises from a multitude of psychological, physical, and social factors (e.g., losing access to social interaction because of health issues, children being too busy to visit their older parents, and so on). None of these factors is solved, or even addressed, by a chatbot. Can a chatbot genuinely improve a lonely person’s well-being, or can it at best offer a simulation of the human connection that is lacking? I think the latter. Granted, chatbots could support older individuals and help them cope with their loneliness. Yet they can’t be more than that, and as a coping mechanism based on a simulation they might end up undermining both the capacity to genuinely connect with others and the capacity to be genuinely self-sufficient.

Via: Irati Hurtado, PhD