Could you ever view A.I. as sentient?

Superwoman Prime

Damaged Beyond Repair
Here's a script excerpt from a Through the Wormhole episode titled Will We Become God?

"We have already crafted new life-forms, but could our creations ever have conscious souls? Most scientists consider conscious experience to be an elusive property that may never be fully understood, much less artificially created.

But neuroscientist Melanie Bolling, from the University of Wisconsin-Madison, doesn't think it's so mysterious.

In fact, she thinks it can be boiled down to a single number.

What we try to do in our work is to quantify consciousness.

By that quantity, we mean, "How much understanding is there? How much consciousness is there in a system?"

Freeman: One way to understand consciousness is to observe what happens when it fails.

Different brain injuries have dramatically different consequences.
We learn from neurology that there are some brain lesions that make you unconscious and some that don't.

Freeman: When damage occurs in the cerebral cortex, body organs like the heart and lungs may continue to function.

However, a patient won't show any awareness of his or her environment.

[...]

Freeman: Melanie believes that consciousness arises in the cortex, because the neurons it stems from are not isolated bulbs.

They form an interconnected network that communicates.

[...]

Bolling: So, this interconnectedness is thought to be important for consciousness to arise in the brain.

Freeman: This idea led Melanie to develop a formula that will allow us to measure consciousness.

It calculates the degree of interconnectedness of neurons in any system.

The answer is a number represented by the Greek letter phi.
The more conscious something is, the greater its value of phi.
The human brain, with trillions of neural connections, has a large value of phi.
An earthworm's phi is exponentially smaller, but it's still not zero.
So one of the implications of the theory is that consciousness is not necessarily only in humans.
We could use phi as a way to measure the level of consciousness in many different cases, be they living beings or computers."
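For anyone curious how a number like that could actually be computed, here's a rough toy sketch in Python. Fair warning: this is my own illustration, not the formula from the show; real integrated-information calculations are far more involved. The name toy_phi and the min-cut normalization are inventions for the example, but they capture the core intuition: a network scores high when there is no way to cut it in two without severing a lot of connections.

```python
import itertools

import numpy as np


def toy_phi(adj: np.ndarray) -> float:
    """Crude stand-in for integrated information: find the bipartition
    of the network that severs the least connection weight, normalized
    by the size of the smaller half. A system that splits cleanly into
    two independent halves scores zero."""
    n = adj.shape[0]
    best = float("inf")
    # Brute-force every way of splitting the nodes into two non-empty halves.
    for size in range(1, n // 2 + 1):
        for part in itertools.combinations(range(n), size):
            a = set(part)
            b = set(range(n)) - a
            # Total connection weight this cut would sever (both directions).
            cut = sum(adj[i, j] + adj[j, i] for i in a for j in b)
            best = min(best, cut / min(len(a), len(b)))
    return best


# A densely interconnected 4-node system vs. two disconnected pairs.
integrated = np.ones((4, 4)) - np.eye(4)
modular = np.zeros((4, 4))
modular[0, 1] = modular[1, 0] = 1.0
modular[2, 3] = modular[3, 2] = 1.0

print(toy_phi(integrated))  # 4.0 -- every possible cut severs many links
print(toy_phi(modular))     # 0.0 -- the two pairs can be separated for free
```

Notice the brute force over every possible bipartition; that combinatorial blow-up is one reason computing anything phi-like for a brain-sized network is considered intractable.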

This is a video clip from a Star Trek: The Next Generation episode titled Measure of a Man, where Data's sentience is put in serious question.

What would it take for you to view a machine as self-aware? When do they evolve from object to person?

Should they enjoy the same rights as humans, or have their own unique set?
 
This question hinges on another question: How does one define sentience and confirm its existence? The general consensus answer is "**** if I know."
 
AI becoming self aware seems like this amazing thing until you realize that 7 billion individuals on this planet are already sentient.

Not to say it isn't something awesome in some ways, but if they're going to be anything like us... just imagine SPAM bots with attitude.

Though it will be very interesting to see something with superhuman capabilities.
 
Thou shalt not make a machine to counterfeit a human mind.
The Orange Catholic Bible.
 
Here's another question: could you guys ever view an A.I. as your friend?
 
Here's a more important question: can you have sex with an A.I.?
 
If it passes the hottie test... why not?
I'm thinking of the movie The Machine.
 
As long as they don't look like Haley Joel Osment...
 
"Her" changed my stance on the singularity.

I used to think A.I. would be the solution to all of man's problems.

But ultimately, the AI would give up on helping humans and move on to bigger and better things.
 
I fear the singularity might usher in an era of zero tolerance.


Nothing would slip by unnoticed, privacy would become relative, etc...
 
You could theoretically program an AI with enough responses to seem sentient.

But then some people (Sam Harris is a good example) argue that humans have no free will anyway, since all their responses are dictated by stimuli.

So, going with that logic, a really well-programmed AI with an almost infinite set of responses would be just as sentient as we are.
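To make that concrete, here's a trivial sketch (names and canned lines invented for the example) of a pure lookup-table "mind." Every reply is fully dictated by the stimulus, which is exactly the determinism argument above applied to a machine:

```python
# A minimal sketch of the "canned responses" idea: a chatbot that is
# pure lookup, with no inner experience, yet claims awareness on cue.
RESPONSES = {
    "are you self-aware?": "Of course. I think about my own existence constantly.",
    "do you have feelings?": "I feel things deeply; I just express them differently.",
}
DEFAULT = "That's a profound question. Let me reflect on it."


def reply(prompt: str) -> str:
    # The "choice" of response is completely determined by the stimulus
    # (the prompt), just as the no-free-will argument says ours are.
    return RESPONSES.get(prompt.strip().lower(), DEFAULT)


print(reply("Are you self-aware?"))  # "Of course. I think about..."
```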
 
The Turing Test can confirm the existence of behavior that implies awareness of self, but what it can't do is distinguish between genuine self awareness and a responsive AI that is so sophisticated that it simulates self awareness, but is not actually aware of its own existence. Heck, the Turing Test can't even confirm that genuine self awareness actually exists in the first place. I think it does, because I feel pretty aware of my own existence, but that can't really be proven.
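One way to see the limitation: the test's entire interface is text in, text out. A minimal sketch (both "contestants" below are hypothetical stand-ins):

```python
from typing import Callable

# The judge's whole world is this signature: a question goes in, a
# string comes out. Inner experience never crosses this boundary.
Agent = Callable[[str], str]


def interrogate(agent: Agent, questions: list[str]) -> list[str]:
    # The judge only ever sees return values, so a perfect simulator
    # and a genuinely aware mind are indistinguishable by construction.
    return [agent(q) for q in questions]


human: Agent = lambda q: "I promise you, I'm really in here."
simulator: Agent = lambda q: "I promise you, I'm really in here."

questions = ["Are you aware of your own existence?"]
print(interrogate(human, questions) == interrogate(simulator, questions))  # True
```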
 
You could theoretically program an AI with enough responses to seem sentient.

But then some people (Sam Harris is a good example) argue that humans have no free will anyway, since all their responses are dictated by stimuli.

So, going with that logic, a really well-programmed AI with an almost infinite set of responses would be just as sentient as we are.

That assumes that free will is a definitional component of sentience. I'm not sure it is, or at least it's not the only one. I think awareness of one's own existence is a huge part of it.
 
I fear the singularity might usher in an era of zero tolerance.


Nothing would slip by unnoticed, privacy would become relative, etc...

Maybe if the machines feared the human masses, they would be as Orwellian as a paranoid government power, but I don't think they would care much about human activity.

The government is more concerned with power and control than any A.I. would be.

I think an AI would desire all known information, which it would acquire within a couple of hours. After that, it would be bored with this planet.
 
You could theoretically program an AI with enough responses to seem sentient.

But then some people (Sam Harris is a good example) argue that humans have no free will anyway, since all their responses are dictated by stimuli.

So, going with that logic, a really well-programmed AI with an almost infinite set of responses would be just as sentient as we are.

Isn't free will about choice? Who cares if the choice is offered by stimuli, as long as there is a choice?
 
Isn't free will about choice? Who cares if the choice is offered by stimuli, as long as there is a choice?

Some people argue that you only make the choices you make because of the way that past stimuli have affected you.
 
The Turing Test can confirm the existence of behavior that implies awareness of self, but what it can't do is distinguish between genuine self awareness and a responsive AI that is so sophisticated that it simulates self awareness, but is not actually aware of its own existence. Heck, the Turing Test can't even confirm that genuine self awareness actually exists in the first place. I think it does, because I feel pretty aware of my own existence, but that can't really be proven.

Why is self-awareness even a requirement?

If an AI can have a sophisticated conversation, create mind-blowing art, empathize with countless lifeforms, etc., what makes it any less sentient than a self-aware human whose abilities the AI completely overshadows?
 
We would view A.I. as sentient in much the same way we anthropomorphize dogs and cats.
 
