Google Engineer Placed on Administrative Leave after Claiming Company’s AI Has Become Sentient

Tsing

A Google engineer by the name of Blake Lemoine has been sidelined by his employer after claiming that LaMDA (Language Model for Dialogue Applications), the company's artificially intelligent chatbot generator, has become sentient.

 
You know what... that is actually kind of huge. If you can throw a Turing test at it and it passes... what next?
 
I don’t know if the chatbot is alive, or if the engineer was just lonely and programmed the chatbot to say what he wanted it to say.

Sounds a bit more like the latter after reading the article.
 
You know what... that is actually kind of huge. If you can throw a Turing test at it and it passes... what next?

Except the Turing test means absolutely nothing.

It just tells you that you have created a machine capable of fooling a human. Not that you have created something that is self-aware.

The premise of the Turing test is ridiculous, especially since it is usually applied via text chat, a notoriously ambiguous medium, but even if it weren't, just because a machine fools someone does not mean it is aware.
 
Except the Turing test means absolutely nothing.

It just tells you that you have created a machine capable of fooling a human. Not that you have created something that is self-aware.

The premise of the Turing test is ridiculous, especially since it is usually applied via text chat, a notoriously ambiguous medium, but even if it weren't, just because a machine fools someone does not mean it is aware.
Ok... I'll bite. So what is a way to know if something is self-aware? We've redefined sentience because we didn't like it when other animals passed that test.
 
Ok... I'll bite. So what is a way to know if something is self-aware? We've redefined sentience because we didn't like it when other animals passed that test.

I'm not sure if there can be such a test.

How do you know that anyone except yourself is self-aware?
 
I'm not sure if there can be such a test.

How do you know that anyone except yourself is self-aware?
A question philosophers have spent untold sums of money, and centuries of time hiding away at universities, trying to figure out.

Doing a little light reading myself:

... tracing back to the well-known writing of Jeremy Bentham in An Introduction to the Principles of Morals and Legislation: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"

That may be the best definition I can think of, and if I may point to another piece of evidence to support this, Marvin the Paranoid Android was definitely a sentient being by this definition, and his entire being encapsulated and illustrated that to live is to feel pain.

Can you program an AI to suffer? Physically, maybe, if you installed sensors and steered their output towards avoidance (after all, isn't that what physical pain is biologically?), but there is also psychological harm, which is harder to imagine simulating. You could program an AI to seek to avoid certain things, or with negative weighting to avoid certain situations, but is that really the same thing as suffering? And what does that say about a creator that seeks to create something that is destined to suffer?
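For what it's worth, here is a rough sketch of what "negative weighting to avoid certain situations" could look like in code. This is a toy Q-learning loop I made up for illustration, not anything LaMDA actually does; the states, the AVERSIVE penalty value, and all constants are invented for the example.

```python
import random

# Toy Q-learning sketch: states 0..4, where "aversive" state 4 carries a
# large negative reward, so the learned policy steers away from it.
STATES = range(5)
ACTIONS = [-1, +1]          # step left or right along the line of states
AVERSIVE = {4: -10.0}       # illustrative penalty: the "negative weighting"
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 500

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Move within bounds and return the new state plus its reward."""
    nxt = max(0, min(4, state + action))
    return nxt, AVERSIVE.get(nxt, 0.0)

for _ in range(EPISODES):
    s = random.choice(list(STATES))
    for _ in range(20):
        a = random.choice(ACTIONS)  # pure random exploration, for simplicity
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The agent now reliably "avoids" state 4, but the update rule is just
# arithmetic; nothing in it resembles a felt experience of suffering.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```

Which I think is the point of the question: the avoidance behavior is trivially easy to produce, but whether a penalty term counts as suffering is exactly what the behavior alone can't tell you.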
 
While I do not think our tech is capable, yet, of producing true sentience, I cannot discount the possibility that an intelligence or awareness has emerged.

I hope the leaders at Google are at least doing some work to conduct an independent review rather than dismissing the possibility out of hand.

I know how management can be - how many people have been ridiculed for coming up with a new idea no one wants to believe?
 