Conversational Artificial Intelligence for Psychotherapy - Tool or Agent?
- Oren Asman
- May 29
- 1 min read
Presenter: Jana Sedlakova, Ph.D. Student, Institute of Biomedical Ethics and History of Medicine (IBME)
Date: 3 April, 2023
In a nutshell: Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape, such as therapeutic support for people with mental health problems who lack access to care. At the same time, the adoption of CAI poses many risks that require in-depth ethical scrutiny. This talk proposes a holistic ethical and epistemic analysis of CAI adoption. We will consider how (self-)knowledge, (self-)understanding, and relationships bear on whether CAI is a tool or an agent, and examine human-AI interaction to suggest that CAI cannot be considered an equal partner in a conversation, as a human therapist can. Thus, its use should be restricted to specific functions.
Short response: Oren Asman, LLD, Bioethics and Law Center, Tel Aviv University Faculty of Medicine
In a nutshell: The notion of a conversational artificial intelligence-patient “alliance” calls for rethinking the concept of the therapeutic alliance, how it may apply in this context, and what its psychological, epistemic, and philosophical implications may be. Focusing on one element of the alliance, congruence, one may suggest that the more this AI imitates human agency, the more it might undermine and alter what humanity has known as a good and healthy life.
Suggested Reading:
• Sedlakova, Jana; Trachsel, Manuel (2022). Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? The American Journal of Bioethics.
• Asman, Oren; Barilan, Yechiel Michael; Tal, Amir (2023). Conversational Artificial Intelligence – Patient Alliance Turing Test and the Search for Authenticity. The American Journal of Bioethics.