‘Artificial intelligence’ is the term used to denote physical, computer-based systems designed to mimic human knowing and reasoning. Sometimes the mimicry is disarmingly successful. When we use our computer or phone to obtain information on a matter of interest, we may actually be talking to an AI chatbot rather than a real person, and not realise that we are doing so. Similarly, the ChatGPT system is able to write articles like this one. Would you believe me if I said that this was in fact written by ChatGPT, not me?
AI is a product of human skill, ingenuity and resourcefulness applied to complex computer programming. The systems do not write themselves. While we may admire such skill, ingenuity and resourcefulness, and wonder (perhaps with some misgiving) what AI means for our future life, AI may also lead us to ponder more deeply what it is we mean when we talk about someone, or something, being conscious. Is the chatbot conscious? Is ChatGPT conscious?
From a human perspective, to be conscious means to have mental experiences which are, broadly speaking, of three kinds: knowing, feeling and striving (known otherwise as cognitive, affective and conative). By virtue of my being conscious, I see or hear or discern something to be the case, I feel favour or disfavour or indifference towards it, and I strive strongly or weakly or not at all to do something about it. We recognise this three-part nature of mental experience whenever we speak of knowing by reference to the brain, feeling by reference to the heart, and striving by reference to the gut.
These mental experiences are uniquely personal and subjective. We cannot enter into each other’s mental space. We cannot directly observe each other’s experiences but, at best, only exchange descriptions of them. This means that while I can be certain of the nature of my own experiences, I cannot be absolutely certain that they resemble yours. I imagine that they do, but that is an inference, not a conclusion drawn from objective evidence.
In all other situations we are in the same position with regard to the events at hand: we all have the same access to the objective evidence. For example, if we look up at the night sky to see the stars, we are all equally separate from the stars we are viewing. If we have devices such as telescopes to assist us, those devices are (if we know how to use them) equally available to us all. If we cannot agree on what we see, we will usually settle for a consensus, at least for the time being. In any event, consensus or not, we are equally placed with respect to the question at issue.
The uniqueness of mental experience makes psychology an area of human investigation that must be approached indirectly. Psychologists, in practice, observe human behaviour (which includes speech), not mental experiences as such. They observe what individuals do in defined circumstances and listen to what they have to say about how they behave. From their observations of behaviour, they infer the nature of the mental experiences that give rise to it.
Some psychologists pursue the conjectured relationship between knowing and the brain in order to provide a supplementary, neurological approach to the problem. That seems reasonable, although we may note that we do not go to a cardiologist if we have trouble in our love life, or to a gastroenterologist if we lack the fortitude for a particular course of action. In either case, psychologists have no choice but to resort to behaviour or neurology, which means that their original interest in conscious mental experience has been set to one side. This is fine provided we recognise the situation for what it is: one forced upon them by the nature of the events they study.
To return to AI: if we can infer from other persons’ behaviour that those persons are conscious, can we similarly infer that AI systems are conscious? No, we cannot. In so far as they appear to know and to think, AI systems follow predetermined (although extremely complex) electronically controlled paths of decision, executed at extraordinary speed, that can, in principle at least, be predicted by their designers. More significantly, AI systems do not feel and they do not strive. They do not see the night sky, or read a story by Alice Munro, or replay last night’s hockey game, with the kind of responsiveness, emotional and purposeful, with which we do. Most of all, they do not love God or strive to follow God’s teaching. To do that, we must be conscious.