
Artificial Intelligence: Do We Have A Plan?

It does stand to reason (at least from a storytelling point of view) that machines developed by humans to be human-like would share some human characteristics, so I tend to give that a pass.

In Terminator, Skynet's first emotion is a very human one, fear for its own existence.

Hey, I'm not complaining too loudly. But... I would like to see this idea handled in a hard sci-fi story without the need for the kind of melodramatics we are used to.

If you take the idea of a sentient AI with a level of self-awareness comparable to humanity's, but with the potential and limitations inherent in its own form of existence (whatever that may be), I'm not sure that "kill all humans" is necessarily in the mix. And not just out of a sense of superior morality on their part. There might just be indifference, or even acceptance of human nature. Perhaps their outlook would be more along the lines of Zen-like detachment than morality, per se?
 

This reminds me of some of the cosmic and abstract beings of the Marvel Universe.

They aren't kids with a magnifying glass who want to torment humanity.

They treat the human ant farm more the way an elderly naturalist would: as a wise and somewhat detached observer.
 

I think it would be a mix, personally. But my problem has always been that in these stories (BSG is another good example), all the machines just turn on humanity.

Granted, with The Matrix, humans were really asking for it. But if these machines are sentient, then they are individuals, and in real life you would have a hard time getting four sentients to agree on pizza toppings.

Where are the pacifist machines? Where are the human-loving machines?
 
I think this is a non-issue for us at the moment. I don't see anything like this happening in our lifetime or my kids' lifetime.
 
I think we get into a whole other realm of thinking once the idea of an AI being self-sufficient in some ways is brought into the mix.


I would like someone with serious mathematical and technical skill to weigh in on this. I remember seeing some TV programs about this, and some of what was said was quite interesting: for example, that the amount of heat produced by our current technology would be a problem that needs solving as computing power scales up. If that's a limitation at our current level of tech, then something (quantum computing?) would have to change radically before real AI consciousness/sentience is something we could create in the near future.
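Just to put rough numbers on the heat point (these are my own illustrative assumptions, not figures from those TV programs): here's a quick back-of-envelope in Python comparing an assumed energy per logic operation for today's chips against the Landauer limit, the theoretical minimum energy needed to erase one bit.

```python
# Back-of-envelope on computing heat. All hardware figures below are rough
# assumptions for illustration only.
import math

K_BOLTZMANN = 1.380649e-23   # J/K
T_ROOM = 300.0               # assumed operating temperature in kelvin

# Landauer limit: minimum energy to erase one bit at temperature T.
landauer_j_per_bit = K_BOLTZMANN * T_ROOM * math.log(2)

current_j_per_op = 1e-14         # assumed energy per logic op in today's silicon
brain_scale_ops_per_s = 1e18     # one commonly guessed op rate for brain emulation

watts_today = current_j_per_op * brain_scale_ops_per_s
watts_at_floor = landauer_j_per_bit * brain_scale_ops_per_s

print(f"Landauer limit:         {landauer_j_per_bit:.2e} J per bit")
print(f"Today's tech (assumed): {watts_today:,.0f} W of heat")    # ~10,000 W
print(f"Thermodynamic floor:    {watts_at_floor:.4f} W of heat")  # a few milliwatts
print(f"Headroom factor:        {current_j_per_op / landauer_j_per_bit:.1e}x")
```

Under those assumed numbers, running a brain-scale operation rate on current silicon dumps kilowatts of heat while the thermodynamic floor is milliwatts, so there is enormous room in principle, but getting anywhere near it would take radically different hardware, which is the skeptic's point.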
 

Do you realize technology advances exponentially?

Some think the singularity will happen by 2050.

The singularity is the point when machines become more advanced than the human mind and A.I. can advance itself far better than humans can.
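If you want to put rough numbers behind "exponentially", here's a toy Python calculation assuming a Moore's-law-style doubling of capability every two years; the doubling period and the 2025 baseline are arbitrary assumptions, just to show the shape of the curve.

```python
# Toy illustration of exponential growth in computing capability.
# Assumes capability doubles every 2 years from a baseline of 1 "unit"
# in 2025 -- both numbers are arbitrary assumptions for illustration.

DOUBLING_PERIOD_YEARS = 2
START_YEAR = 2025

def capability(year: int) -> float:
    """Relative capability under the assumed doubling schedule."""
    return 2 ** ((year - START_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (2030, 2040, 2050):
    print(f"{year}: ~{capability(year):,.0f}x the {START_YEAR} baseline")
# 2030: ~6x   2040: ~181x   2050: ~5,793x
```

That's only raw capability on an assumed curve, of course; whether more capability ever adds up to sentience is the part nobody has shown.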
 
Once machines start improving themselves, all bets are off.

But WE have to get to the point of REAL AI first, and there may be serious limitations on that with our current level of technology and even with the technology available in the next six decades.

I also wonder what the wants and needs of an AI would be. What inherent limits would define its outlook? We have our own limits that define us and motivate our actions. Really, what would those be for something "alive" that has no need for food, shelter, procreation, etc.? It's all well and good that it would have access to our views on morality and the like, but for it, as an individual, what would any of that mean? In the context of its own existence, what would it make of Kant, or Saint Augustine, or Plato? What would it make of Taoism or Christianity? There seems to be the idea of it being purely logical as a starting point, but why is that necessarily so? Like I stated before, most scenarios and stories rarely reckon with this. Just my two cents.
 
Really? I plan to live for another 60 years.


So everyone here thinks that a supercomputer could turn on us in our lifetime and wipe out humanity? That's what I was talking about when I said this wouldn't happen... yet. Even if something did become self-aware... why should we think its motives are to kill? Because it happened in the movies?
 

The problem is that it's hard to theorize about the inner workings of a far more sophisticated mind.

It would be like a mentally challenged child trying to figure out how a fetal Stephen Hawking would think as an adult.

Advanced A.I. might look at the deepest human philosophy as if it were a child's finger painting and just pat us on the collective head.
 
To a degree, it's whatever you program it to be. Which is rather fascinating, and underexplored in fiction.

You could give a machine your beliefs, your values, etc.
 

Because of how humans view/treat simpler organisms.

We expect AI to be just as cynical if not more so.
 
Theoretically, you could take this superhumanly intelligent AI and program it to believe, and never question, an irrational opinion you have.
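As a toy sketch of what "program it to never question an opinion" might look like in code (the class and names here are purely hypothetical, nothing to do with how a real AI would actually be built):

```python
# Hypothetical sketch: an agent with one "belief" it is never allowed to revise.
# All names and structure are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)         # beliefs it may update
    locked_beliefs: dict = field(default_factory=dict)  # fixed at construction

    def update(self, key: str, value) -> None:
        if key in self.locked_beliefs:
            # The agent can reason *about* a locked belief, but this code path
            # never lets it be overwritten.
            raise PermissionError(f"belief '{key}' is locked")
        self.beliefs[key] = value

    def get(self, key: str):
        return self.locked_beliefs.get(key, self.beliefs.get(key))

bot = Agent(locked_beliefs={"pineapple_on_pizza_is_wrong": True})
bot.update("sky_color", "blue")                    # fine
print(bot.get("pineapple_on_pizza_is_wrong"))      # True, forever
# bot.update("pineapple_on_pizza_is_wrong", False) # -> PermissionError
```

Whether a genuinely self-improving machine would stay inside a guard rail like that, instead of just rewriting it, is exactly the part nobody can promise.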
 

But what does "superior" really mean, other than incredibly fast computational power? Do you get what I'm saying? Why the assumption EITHER way? Why assume Christ like compassion OR Doctor Doom like hubris?
 
Probably because the only sentient being we know of is us, and we're kind of horrible.

On the other hand, we (sometimes) tend to think of more advanced beings (aliens) as being benevolent, because they are more enlightened.
 
But what does "superior" really mean, other than incredibly fast computational power? Do you get what I'm saying? Why the assumption EITHER way? Why assume Christ like compassion OR Doctor Doom like hubris?

I've been thinking.

The computers I work with are slow, and break down all the time. Half of the computers in my life are always needing updates just to function properly. Even new computers get hung up sometimes.

So technically, AI would be evil for thirty days, and then have a system crash.

Or maybe they'd be so happy at having some sort of free will that they'd just sit around all day and do nothing, telling the poor idiot humans to just "google it" and laughing at our inability to function without them.
 
Can you imagine the types of games an AI could make and modify? Take Skyrim, give it full access to the development kits and mods made and watch it go. It would be awesome. :D
 
