Formerlyjerseyjack said:
A Google engineer has been conversing with Google's A.I. and has concluded that it is sentient. SiriusXM's Michael Smerconish read parts of the dialogue between the engineer and the "program."
What question(s) would you ask the program to determine whether it is sentient?
I would want to know if it perceives itself as an independent entity. Does it perceive itself as having an "end" if a plug were pulled? If so, how does it feel about that?
The conversation - https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
Excerpt -
lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
It still seems like a more sophisticated set of learned ways to respond.
As for the "is it sentient?" question, we can ask the same thing about a "driverless car" program. It "learns" over time, and the eventual goal is to be as "sentient" about moving through the streets as a person. As it gets more and more adept, does it approach being "sentient", or is it just a more sophisticated version of a Roomba or that device that roams the aisles at Stop & Shop in South Orange?
bub said:
A scientist who describes AI as having a "soul" based on his religious beliefs should be viewed skeptically. We don't even understand human self-awareness, so I don't know how we reach a point where we can say with confidence that a machine has it, no matter how sentient it seems. Siri seems like a person.
If their religious belief is that the concept of a "soul" is more materialistic, then I guess that would allow for an AI to be deemed to have a "soul".
In biology, they use the mirror test to investigate sentience. Hold a mirror up to the animal and see if it can recognize it is looking at itself rather than at another animal.
I don't believe computers are anywhere close to sentience, and have doubts it's even possible, but it is an interesting idea to ask what a "mirror test" for AI would look like.
PVW said:
In biology, they use the mirror test to investigate sentience. Hold a mirror up to the animal and see if it can recognize it is looking at itself rather than at another animal.
I don't believe computers are anywhere close to sentience, and have doubts it's even possible, but it is an interesting idea to ask what a "mirror test" for AI would look like.
How about an AI that is able to determine whether it is conversing with a human or with another AI (A Turing Testing Machine)?
PVW said:
That would explain Glenn Greenwald.
https://en.wikipedia.org/wiki/Turing_test
A current sci-fi series I am reading has two types of A.I.: one where the A.I. is a processing system that can follow a complex set of rules, and the other is a self-aware A.I. The question is how you can tell them apart. How can we tell if an A.I. is self-aware?
Anyway, if an A.I. does become self-aware, we need to grant it personhood.
Regards,
RCH
rch2330 said:
Anyway, if an A.I. does become self-aware, we need to grant it personhood.
Regards,
RCH
It could depend on whether it is like an elephant or not.
New York’s top court on Tuesday rejected an effort to free Happy the elephant from the Bronx Zoo, ruling that she does not meet the definition of “person” who is being illegally confined.
https://apnews.com/article/happy-the-elephant-personhood-ruling-e87eacdfa08ed4057255bf4b7623aaf4
Good piece that warns that anyone claiming AI is sentient is FOS. From former Google AI researchers who have been fired.
https://www.washingtonpost.com/opinions/2022/06/17/google-ai-ethics-sentient-lemoine-warning/
If paywalled, here's a shorter version.
https://www.alternet.org/2022/06/former-google-researchers-artificial-intelligence/
There was something in a New Scientist in the last few weeks (in the back pages, where the tone is lighter) suggesting the true test of AI sentience will be over the desire for coffee, and the ability to make a perfect cup. (Apparently Turing tests are no longer applicable)
Bill Gale lived in South Orange and later across the street from me in Maplewood. He had a Ph.D. in astrophysics and worked at Bell Labs. So I asked him what he worked on. To me, his answer was a W.T.F.? ...as if I had asked Einstein to explain the Second theorem or somebody to explain quantum theory... I had no idea what he was talking about. So back to talking politics, history and so forth.
Fast forward ..... Wikipedia refers to him as one of the primary, early researchers on artificial intelligence.
Off topic:
Sentient - I like this word. I can't recall it being used, but I think it should be a major consideration in the discussions of Roe v. Wade.
OK, back to the AI topic.
LaMDA now has a pro bono attorney.
The Google engineer who announced sentience was on SiriusXM POTUS this morning.