
Right now the most advanced artificial intelligence is unaware of its own existence and has no capacity to feel anything. It is based purely on a series of matrix multiplications with weights that are tuned to perform a task that we design. These tasks, however, are becoming increasingly complex and general, and eventually the complexity of the algorithm might allow it to become sentient. While we can treat existing computer programs without concern for their well-being (since they have no such thing), it would be immoral and perhaps even dangerous to ignore the well-being of a sentient program.

Humanity has a long history of mistreating other sentient beings, including each other. Over time we have expanded our moral circle in some cases and tried to make amends for past misdeeds, but it would be better to broaden this circle before we do any damage rather than after. This could reduce future suffering on a huge scale, since the number of sentient beings that will exist in the future far exceeds the number that have existed so far. It is possible that the majority of future sentient beings will be artificial.

Aside from the fact that we still mistreat billions of sentient animals (factory farming) and ignore the well-being of many others (wild animals), a major problem is that we currently struggle to recognise sentience even in biological life. How to recognise sentience in artificial life is therefore a huge outstanding question.

This topic came out third in my decision matrix for deciding what to work on next based on impact and how well it matches my skills and interests. I am spending one day on each of my top 5 topics to learn a bit more before choosing which to focus on:

  1. AI for translating unseen languages
  2. Extraterrestrial technosignatures
  3. Recognising AI sentience
  4. Biosignatures
  5. AI misuse: pathogenic DNA

Here are my notes resulting from a day spent scratching the surface of the artificial sentience field.

What is sentience?

Sentience is the ability to have positive and negative experiences. A sentient being can feel pleasure and it can suffer.

Sentience vs. consciousness

Sentience and consciousness are sometimes used interchangeably; while the definition of sentience is fairly consistent, consciousness seems to have many definitions. For now, I will distinguish sentience from consciousness by defining consciousness as an awareness of what is happening externally or internally, without caring what the outcome is – a passive spectator – and sentience as having feelings about what the outcome should be and a desire to influence it – an active participant. By these definitions, all sentient beings are conscious, but consciousness does not require sentience.

Biological basis of sentience

Sentience in biological life emerges from a centralised nervous system. In animals, signals carrying information about the external or internal world (e.g. tissue damage, light, sound, temperature, or thoughts) are integrated and prioritised by a processing centre (i.e. the brain), which then generates a neurophysiological state that the animal experiences as a feeling (e.g. fear). This state generation can adapt to changing circumstances and it is remembered, at least temporarily. Exactly how this all works at the molecular and neurological levels, and how it evolved, is still unknown.

Recognising sentience

How to measure sentience remains an open question, since we don't yet understand the mechanisms behind it. For now, we have to infer sentience using a set of criteria.

Animal Ethics define three general areas for considering whether or not an animal is sentient: behavioural (does it behave as if it is experiencing pain and/or pleasure?), evolutionary (would sentience confer a survival advantage, and/or are there related animals that are clearly sentient?), and physiological (does it possess physical structures – neurons and receptors – that process inputs from the environment?). By these criteria they consider most animals to be sentient; animals that lack a nervous system (sponges) or whose nervous systems are not centralised (echinoderms and cnidarians) are not considered sentient.
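These three areas read naturally as a checklist. Here is a minimal sketch of how such an assessment might be encoded in Python; the class, the field names, and the combination rule are my own illustration, not Animal Ethics' actual methodology:

```python
from dataclasses import dataclass

@dataclass
class SentienceEvidence:
    behavioural: bool    # behaves as if experiencing pain and/or pleasure
    evolutionary: bool   # sentience would confer a survival advantage,
                         # or close relatives are clearly sentient
    physiological: bool  # has centralised structures (neurons, receptors)
                         # that process inputs from the environment

def plausibly_sentient(evidence: SentienceEvidence) -> bool:
    # A crude rule of my own: treat the physiological criterion as
    # necessary and require support from at least one other area
    return evidence.physiological and (
        evidence.behavioural or evidence.evolutionary)

# A sponge has no nervous system, so it fails the physiological test
sponge = SentienceEvidence(
    behavioural=False, evolutionary=False, physiological=False)
print(plausibly_sentient(sponge))  # False
```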

The Sentience Institute has an expanded list of features for assessing artificial sentience, gathered from the literature. These include the ability to detect harmful stimuli, reinforcement learning, goal-directed behaviour, and an internal self-model.
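Some of these features are easy to implement without anything resembling feeling, which is part of why no single feature settles the question. As a toy illustration of my own (not the Sentience Institute's), here is a minimal reinforcement-learning loop, a one-state bandit, that learns to avoid a "harmful" stimulus:

```python
import random

# Two actions: 0 = stay near the stimulus, 1 = move away.
# "Harm" is nothing more than a negative reward signal.
q_values = [0.0, 0.0]  # estimated value of each action
ALPHA = 0.1            # learning rate
EPSILON = 0.1          # exploration probability

def reward(action: int) -> float:
    return -1.0 if action == 0 else 0.0  # staying is "harmful"

for _ in range(1000):
    # Epsilon-greedy: mostly pick the best-looking action,
    # occasionally explore at random
    if random.random() < EPSILON:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: q_values[a])
    # Nudge the value estimate towards the observed reward
    q_values[action] += ALPHA * (reward(action) - q_values[action])

print(q_values)  # the agent now "avoids harm", yet nothing is felt
```

The agent detects the harmful stimulus and adapts its behaviour accordingly, ticking two of the boxes above, but it would be hard to argue that it suffers.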

Sentience does not require the ability to solve difficult problems (i.e. ‘intelligence’) and there is no clear reason why the structures responsible for transmitting signals and producing state changes have to be biological.

Could A.I. be sentient?

While we now recognise many animals as sentient, this has not always been the case, and our ideas about sentience are still restricted by our own human-specific experiences and biases. This makes it hard for us to accept the idea of artificial sentience, but most experts agree that there is no reason to believe it is impossible.

The most complex A.I. systems are based on algorithms called neural networks. These are inspired by biological nervous systems and are, at their core, a lot of matrix multiplication. They receive inputs and perform various transformations on them, integrating them to produce an output. The transformations adapt during a training process that teaches the network how to generate the correct output, using either a large number of examples (supervised learning) or, in the case of reinforcement learning, a reward/punishment function.
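To make "a lot of matrix multiplication" concrete, here is a tiny feedforward network sketched in Python with NumPy. The layer sizes are arbitrary, and a real network would have millions or billions of tuned weights rather than random ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# The weights are the tunable parameters; training adjusts these values
W1 = rng.normal(size=(4, 8))  # input layer (4 features) -> 8 hidden units
W2 = rng.normal(size=(8, 2))  # 8 hidden units -> 2 outputs

def relu(x):
    # A simple nonlinearity applied between the matrix multiplications
    return np.maximum(0.0, x)

def forward(x):
    # The entire forward pass: two matrix multiplications
    # with a nonlinearity in between
    return relu(x @ W1) @ W2

x = rng.normal(size=4)  # a stand-in for sensor input
print(forward(x))       # raw output (untrained here, so meaningless)
```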

The output of an A.I. can be used to generate a response. One example is a self-driving car, whose steering adjusts to keep the car in lane using the output of an A.I. that receives visual inputs from cameras. Another example is a chatbot that generates human language. Both are trained specifically for these tasks: the A.I. of a self-driving car cannot strike up a conversation with you, and a chatbot cannot drive a car. We do not have general artificial intelligence (yet).

Current A.I., therefore, fulfils many criteria for sentience (e.g. transmission, integration and prioritisation of signals), but no emotional states are involved – these systems are not yet capable of experiencing pleasure or suffering.

This does not mean that we won't develop new algorithms, or adapt existing ones, that can generate and update in silico states analogous to the feelings of biological sentience. We don't know how to do this yet (or whether we should), but a few years ago we also did not know how to develop algorithms that can drive a car.

It is also possible that we will create an A.I. that can give itself sentience by rewriting its own code, or that sentience will emerge in a sufficiently complex algorithm that is given the ability to adapt and learn over time. It will be important that we can recognise artificial sentience if and when it arises, so that we can avoid undetected (and possibly extreme and large-scale) suffering.

Summary

This is a very neglected field of research and the impacts of failing to recognise future artificial sentience could be enormous.

One of the biggest challenges is that we do not understand our own sentience, and we are still struggling to recognise sentience in non-human animals. I would have to spend a lot longer reading up about it before I could pick a research question in this field. The Sentience Institute has a lot of material for learning more and has ongoing work on the topic.
