Can psychological concepts help in determining (AI) sentience?

Aco Momcilovic
5 min read · Oct 24, 2022


And in the end, can we learn more about ourselves through the process of discovering sentience in others?

I wrote this article as Co-Founder and Co-Director of the Global AI Ethics Institute, as a contribution to the larger document “CAN AN AI BE SENTIENT? Multiple Perspectives on Sentience and on the Potential Ethical Implications of the Rise of Sentient AI”.
The full document can be found here: https://globalethics.ai/http-globalethics-ai-wp-content-uploads-2022-10-goffi-momcilovic-et-al-2022-can-an-ai-be-sentient-1-pdf/

AI DNA? (by MidJourney)

Whether AI can (theoretically) be sentient is, at first glance, a technical question and problem. From that standpoint I cannot answer it; I can only hope the world’s best (engineering) experts in the field will give us useful information about it. As far as I am informed, today’s AI cannot be sentient under most definitions of sentience, simply because the available resources and the level of technological development are still inadequate.

But obviously, once we (at least partially) solve the technical issues and capabilities and get closer to the gray zone of sentience, the answer will depend on which definition we choose. And yes, there are many proposals and opinions about sentience, personhood, the laws that might attach to them, and other similar constructs. I will try to ask useful questions (without being able to provide answers) from a psychological standpoint, using some of the concepts that psychology as a science offers.

And I do believe that AI development, with its promise and hope of creating AGI and maybe one day superintelligence, will create a fascinating interplay between the sciences, the arts, and the systems of belief that our cultures have evolved over centuries.

AI Sculpture

To be pragmatic and focus on just one small but very dominant segment, I choose to reflect on the sentience implied in many definitions of AI: one comparable to human sentience and human behavior, resting on similarities with humans as a species and as individuals. In the general population, this is simplified into a vision of a “being” similar to people in all possible respects. So, what questions might we ask about current or future AI systems to determine their (humanlike) “sentience”? Or can they not be sentient at all?

1. Do they have, or can they create, a deeper (mental) model of how the world works?

We humans create many mental maps that help us make sense of the world and connect the dots, allowing us to reason beyond our own experiences. What we have today is, with good reason, called Artificial Narrow Intelligence. Will AI develop something like theory-of-mind understanding (of false beliefs, for example) and be able to comprehend others and the world through a similar system?
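
To make the false-belief idea concrete, here is a minimal Python sketch of the classic Sally-Anne task from developmental psychology; the variable names and structure are my illustrative assumptions, not a real benchmark.

```python
# Sally-Anne false-belief task, reduced to its bare logic.
world = {"marble": "basket"}   # Sally puts her marble in the basket
sally_belief = dict(world)     # snapshot of Sally's mental model
world["marble"] = "box"        # Anne moves the marble while Sally is away

# A reasoner with a theory of mind predicts from Sally's belief,
# not from the actual state of the world.
tom_answer = sally_belief["marble"]  # "basket": passes the task
literal_answer = world["marble"]     # "box": fails the task

print(f"With theory of mind, Sally will look in the {tom_answer}.")
print(f"Without it, the prediction is the {literal_answer}.")
```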

2. Can they use or create heuristics in their processing of information?

When speaking about human intelligence, information processing is one of its pillars. An extremely important question is how such systems would balance the resources available to them against the precision of their information and conclusions; with enough resources, they might not even need heuristics. We use heuristics to save energy and act faster, but often at the cost of precision or truthfulness.
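
That tradeoff is easy to demonstrate in code. Below is a toy Python sketch (my own illustrative example, not a model of cognition) comparing an exhaustive search with a cheap sampling heuristic: the heuristic does far less work but is only approximately right.

```python
import random

def exact_best(candidates, target):
    # Exhaustive search: guaranteed optimal, cost grows with the input size.
    return min(candidates, key=lambda x: abs(x - target))

def heuristic_best(candidates, target, budget=100):
    # Bounded sampling: much cheaper, but may miss the true best answer.
    sample = random.sample(candidates, min(budget, len(candidates)))
    return min(sample, key=lambda x: abs(x - target))

candidates = [random.uniform(0, 1_000_000) for _ in range(100_000)]
target = 42.0
print("exact:    ", exact_best(candidates, target))      # precise, slow
print("heuristic:", heuristic_best(candidates, target))  # fast, approximate
```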

3. Do they have self-determination?

Could we talk about sentience without it? That certainly depends on the definitions coming from different cultures. As some authors have noted, being smart is not the same as wanting something.

4. What would be their motivation? Can they (one day) develop it themselves, or will it be “given” by their human creators?

Emergence of AI

Evolutionary psychology tries to explain many of the forces driving human behavior. Motivational theories are used as an explanatory concept sitting between inputs (stimuli) and outputs (observable behavior). We know that AI systems don’t have motivation now, but can we imagine a future where they have needs for self-preservation and replication and come under the pressures of natural selection? Or could a completely new set of factors be recognized? Even worse, if we, as the creators of those future systems, are the ones determining and choosing those deep motivational roots, what will we choose, and based on what systems of beliefs, morals, and ethics?

The question of sentient AI will not only pose new challenges to the multidisciplinary researchers who work, and will work, on AGI and one day ASI projects while trying to answer the questions above. It will also influence the general public. With mass adoption, it might become one of the strongest incentives for users (and eventually everyone will be one) to learn more about the nuances of sentience, the appearance of sentience, consciousness, the definition of (physical or digital) life, and so on. So, thanks to the revolution that AI development has started, in the best-case scenario we might also see great educational initiatives that help people understand something better in order to use it better.

And finally: is there a possibility that, with ever more “intelligence”, capability, efficiency, and precision in the outputs given by AI, we conclude not that it (AI) is finally sentient, but that we (humans) actually never were?

That we are just fantastic algorithms with the resources needed to perform our tasks.

Meditating robot

P.S. The test Turing proposed asks a human to determine whether they are speaking with a human or a machine; if they cannot tell the difference, we may consider the machine intelligent. I propose the RINGTU-Ace Test: can an AI system recognize whether IT is communicating with sentient (live, human) beings or not?
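
Since the article only names the test, here is a deliberately naive Python sketch of what one trial might look like; every class, feature, and scoring rule below is a hypothetical assumption of mine, not a defined protocol.

```python
import random

class ScriptedBot:
    """Stands in for a non-sentient interlocutor."""
    def reply(self, question):
        return "I am not sure."  # canned, context-free answer

class Human:
    """Stands in for a human interlocutor (trivially simulated here)."""
    def reply(self, question):
        return random.choice(["Why do you ask?", "That depends.", "Ha, good one."])

class NaiveJudge:
    """A toy AI judge using answer variety as a (weak) proxy for sentience."""
    def ask(self, turn):
        return f"Question {turn}: what do you feel right now?"
    def judge_human(self, answers):
        return len(set(answers)) > 1  # varied replies -> guess "human"

def ringtu_ace_trial(judge, interlocutor, n_exchanges=5):
    # The roles of Turing's imitation game are flipped: the machine judges.
    answers = [interlocutor.reply(judge.ask(t)) for t in range(n_exchanges)]
    return judge.judge_human(answers)

print("bot judged human?  ", ringtu_ace_trial(NaiveJudge(), ScriptedBot()))
print("human judged human?", ringtu_ace_trial(NaiveJudge(), Human()))
```

A real version of such a test would of course need far richer probes than answer variety; the point is only to show the reversed roles.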


Written by Aco Momcilovic

Ph.D. Student. National AI Capital Researcher. Human Resources, Psychology, Entrepreneurship, MBA…
