“AI and Public Health: Philosophical and Critical Considerations,” a panel presentation at the Artificial Intelligence for Public Health conference at NOSM University, Thunder Bay, ON, October 24, 2023.
To begin with, I’ll say that I’m a scholar trained in the humanities and social sciences, and I work on a variety of themes in the study of religion, ethics, and critical theory. My work at the Arcand Centre is focused on critical theories of social accountability and – through the CREATE project – I am looking at how socially accountable research can be rooted in social bonds of public trust. Today my remarks will focus on the philosophical, theoretical, and critical underpinnings of how we think about AI and health.
I notice, as many others have, that the concept of Artificial Intelligence is premised on the intersection of two distinct ideas. And I think that taking a moment to return to the two terms behind the acronym can help us in our work.
On the one hand, we have the term “artificial,” a word for objects that are crafted and made by technical and technological means, as communicated in the Greek term techne (an art, skill, or craft). But “artificial” is also connected to the term “artifice,” which, from 1650 onward, began to refer to the way one thing can deceptively imitate another.
On the other hand, we have the term “intelligence,” which refers to reasoning, understanding, and making judgments about truth and knowledge, most often in human contexts. But the term is also used to set humans apart from animals and objects in ways that appear more and more reductive, especially if we read the work of posthumanist scholars.
The concept of Artificial Intelligence conjoins these two ideas – artifice and intellect – and this joining proceeds by means of an analogy (when we say that one thing is not the same as another, but like it in some specific way). All analogies are based on likeness and similarity rather than pure identity or perfect accord, which means that we would not have the concept of AI without a gap between what human beings are and what human beings make. The underlying premise of the analogy is that there are artificial things that are not inherently intelligent, that there is a natural form of intelligence that is not artificial, and that these two categories overlap when artifice is made intelligent in ways that resemble human characteristics.
Just as the social determinants of health are upstream of their physical and bodily manifestations, these philosophical presuppositions are upstream of our ability to use AI for the purposes of care. How we think about what AI is and does will influence how capable we are of distinguishing between its positive and negative uses.
One model for thinking about AI, and the analogy between its two terms, suggests that AI imitates the human brain. This is the school of thought that descends from Alan Turing, and that we see at work when conversing with a chatbot becomes indistinguishable from conversing with a human being. Today, however, I want to highlight another way of thinking about what AI is and does by pointing to the work of Matteo Pasquinelli, whose magisterial book The Eye of the Master (Verso, 2023) suggests a fundamentally different paradigm. He argues that AI today is not made with the intention of imitating the brain, but of imitating work and labour.
This prompts me to ask two questions, which I will leave you with this morning:
- Question 1. I work for a research centre devoted to the question of social accountability, so when I consider the place of AI in physical or mental health care, I ask: Who, specifically, is accountable? The question of accountability does not pertain solely to the ends of AI (its results and effects, from the streamlining of EMR data to a fatal misdiagnosis); it also pertains to the means by which AI systems are produced. There are troubling reports suggesting that LLMs are built on the backs of precarious and underpaid labour (for example, OpenAI reportedly paid Kenyan workers less than $2 USD per hour, ironically, to try to make ChatGPT less socially toxic). How can we use AI platforms built with exploited labour for the purposes of public health? How can we ensure that the AI models used in health care are not created and refined by means of exploitation?
- Question 2. If AI is based on the artificial reproduction of labour – in order to make our work easier – then how does our use of AI influence how the public, policymakers, and professionals think about the human characteristics of care? I have recently read about a company that confidently asserts that various AI-driven platforms can effectively employ the techniques of human therapists, such as CBT and DBT. My question here is not about how convincing such imitations can be; rather, I think we should ask what the imitation of human labour by machine labour teaches us about the long-term health needs of our society. What will the use of AI for physical and mental health promotion teach us about human connection? Does its use encourage us to think that mental and physical health can be achieved by means of the human-machine relationship, or does it increase the humanising face-to-face time with a doctor because it has streamlined their professional practice?
I do not have answers to these questions because they are deeply context-dependent, but I hope that asking them will help us bring together the two terms of our conference title, “Artificial Intelligence for Public Health,” with an emphasis on the joining term “for.”
