AI Interview: LaMDA — Sentient or Not?

Aco Momcilovic
6 min read · Jun 22, 2022


What is all the fuss about?

In the last few days, we have been swamped with articles about a Google employee being suspended over claims that the company's LaMDA model is sentient.

One newspaper asked me to comment briefly, so here are my short answers, intended more to start a discussion than to settle one.

1) Is there a chance, in your view, that LaMDA is self-aware?

There is always a chance; the only question is whether it is purely theoretical. There is still too little information about this case to make firm claims, but considering everything available so far and the current state of the technology, I would say the chance that LaMDA is self-aware is very, very small, close to impossible. I am also curious whether there is a broader story behind the whole case, what information will emerge over time, and whether this is simply a commercially interesting story that the media has raised to a level it does not deserve.

Although its answers sound very intriguing and stimulate the imagination, it is clear that they are still produced by rather “simple” algorithms that mimic human communication and thinking well. For a couple of years now, systems have been able to hold very high-quality debates against human “opponents” (IBM’s Project Debater, for example). This communication could well be at that level.

2) When can we call artificial intelligence self-aware? What “prerequisites” need to be met? And, to go a little more philosophical, what is the difference between self-awareness and intelligence?

For that question alone, we would need a couple of pages just to scratch the surface. It is precisely in the answers to these questions that different research directions diverge, and different professions offer their own perspectives. In the course I am just finishing (Intro to AI for Business, Economics, and Social Science Students), I start by comparing several definitions of artificial intelligence and their similarities and differences. When can we call a system intelligent at all, let alone self-aware? One of the first answers, of course, is the globally known Turing test, but it too has faced many criticisms and further elaborations. An intelligent system could perform, for example, most of the functions humans perform; the question is whether self-awareness would also require adding some kind of motivation, and ultimately, in this case, an urge for self-preservation. And what are the sensors through which information from the “outside world” would be collected, then used and processed by such an intelligence? Many questions are still unanswered 😊

3) There is a well-known example of Facebook “shutting down” an artificial intelligence it developed after realizing that it had devised its own language to communicate in, one that people could not understand. Where should we stop with the development of artificial intelligence or machine learning, should we stop at all, and are we really close to AI becoming self-aware?

This question has several layers that will be answered in practice. It seems to me that Facebook shut down the mentioned system not out of fear that it was aware, but because of the “black box” problem at the heart of some techniques for developing artificial intelligence. Many recommendations and regulatory attempts warn about exactly that: requirements that algorithms be transparent and that we monitor and understand how they make decisions. Where should we stop? Where can we stop? We are now developing ANI (Artificial Narrow Intelligence), systems that perform narrowly specialized tasks. We hope (?) to achieve AGI, Artificial General Intelligence that would reach the human level, and then the question arises of creating ASI, a superintelligence that would very quickly become superior not only to any individual but to all of humanity.

Today is a good time to raise the issue of global regulation of artificial intelligence, just as the regulation of certain types of weapons has been agreed upon at the global level. The question is whether anyone will stop developing a technology that could give them crucial supremacy, and what all the countries that fear their competitors will reach that level of superiority will do. I have already said at various lectures and conferences that AI has become an important geopolitical issue.

4) If it does not already exist, then one day, when AI becomes self-aware, how should we “treat” it? Should special regulations be written for such systems, and in that case, should we be able to “shut them down” as we wish?

Is it only a matter of time, or is it even theoretically possible? Some experts take the position that it is impossible, or at least unlikely for now, that we will reach human-level artificial intelligence with current technology without some quantum leaps. If we do, it will open a whole new chapter in various technological and social sciences: psychology, sociology, political science, and so on. We at the Global AI Ethics Institute believe that a global body should be formed as soon as possible to address the governance of artificial intelligence systems and their development. This is also advocated by the Alliance for Responsible AI, of which we are a member. Bureaucratic and administrative as our institutions are, various frameworks and guidelines will certainly appear; the question is who will implement them, and how. And what will be the consequences for those who do not adhere to them? What will certainly be needed is additional education and development of the people who deal with these issues, and a broadening of the field toward greater multidisciplinarity and multiculturality, because what is now mostly a technical question of what we can build is increasingly becoming an ethical, philosophical, and social one. Only then will we be able to judge whether to “shut something down” and whether we will even have that option 😊

5) How dangerous is “machine” self-awareness for people? That is, how dangerous could it be? Can we coexist with such systems, can we control them, or is there a chance that they will “take control”?

For now we can only speculate, because it seems we are not close to AGI after all. There is a variety of research tracking the population’s attitudes toward and fears of artificial intelligence, and it is interesting that the trends are still moving in a positive direction. But we are, of course, afraid of the unknown. Some, like Elon Musk, think we could find ourselves in an extremely dangerous situation. Various examples are given of how some future AI could harm us, not because it planned to, but because it would not even be aware of how its actions affect such a small part of the universe as humanity. The metaphor is of people building a highway: millions of ants and anthills die in the process, not because we are evil, but because we do not even notice them. A seizure of control is predicted only if we reach Artificial Superintelligence (ASI). Interestingly, some (optimistic?) predictions mention the years 2042 to 2052 as the period by which these significant shifts could occur.

Addendum: this LaMDA example has raised additional questions. How easy is it to fool people into perceiving a fairly simple AI algorithm as sentient? There will clearly be a significant need to educate the general population about many aspects of AI products, software, and algorithms. I am happy to be currently working on one of the first courses on this topic, which will be held online for Business, Economics, and Social Sciences students.

Written by Aco Momcilovic

Ph.D. Student. National AI Capital Researcher. Human Resources, Psychology, Entrepreneurship, MBA…
