
Are tech companies & automakers not considering there may be a backlash to self-driving cars?

Kyle, I have to say you seem to have a very pessimistic and narrow view of the world and the future. I mean, you appear to be discounting the innumerable benefits of the internet because child predators and serial killers used it. Yes, I have no doubt that some robots and artificial intelligences will be used for malevolent purposes. I mean, that's inevitable. You have bad people, bad stuff will happen. But if it's anything like the internet, that will be completely outweighed by the positives.

I also don't agree with this Jurassic Park pop psychology stuff about scientists. Oppenheimer was fully aware he was building a weapon of mass destruction. He was building it because he wanted to beat the Germans to it to end the war. Only upon seeing it in action did he realize just how destructive it was. If it wasn't him it would have been Teller, or any number of other scientists. Most scientists don't want to play God. Typically they want to accomplish something for fairly simple reasons (beat the Germans, stop people dying from polio, etc). You seem to have a phobia of scientists.

This idea that humans will inherently fear and hate machines, or treat intelligent machines as slaves is also rather dated in my view.
 
Not a pessimist. I'm a realist. To me it's better to be safe than sorry. I'm not saying abolish the Internet. Not at all. I love the Internet. I'm typing this on a computer using the Internet. What I'm saying is - look at how many unintended consequences it has had.

What is the purpose of A.I.? I don't see one. I can see robots, as in dumb machines like we have now. Hell, I can see farming robots. What I can't see is trying to create completely automated air strike drones and robotic automated soldier-fighting death machines programmed to kill, like that won't go wrong in hideous and frightening ways. "Charge, let's go in men!" Hold on, soldier, your robotic backup was just taken over by a hacker to kill you. Now, let's have some fun and drop one of those babies down in the middle of Times Square, grab a bag of popcorn and see what happens next. I can't see the need to sit down and have a conversation with a computer either. It doesn't need to do that. That's mankind wanting to play God by creating life.

Exactly - he didn't see the true horrors that could come from it. And in your second part you just proved Michael Crichton right about scientists: "if he didn't, someone else would create something that can kill man first, so he should beat them to the punch." The scientist wanting to cure cancer - nah, I like that guy. I love that person. Now him, I could be buddies with. But the scientist creating anthrax and chemical germ warfare, expanding the power of bombs, building robots with the intent of programming them to kill, and creating artificial life - now that's the guy that scares me and gives me nightmares. You also have to pause and think... a cure for cancer or other diseases, a way to extend natural life, exploring the stars to find other planets that might be habitable in the future... or a robot that is smart enough to speak with you and robots that are programmed to kill... hmm...

So you honestly believe, despite what history has shown us, that mankind will accept, welcome, and sing kumbaya with something it doesn't understand? Let's ask minorities from the past if they feel so confident as well. Hell, let's ask the people who got AIDS just 30 years ago if they think mankind are rational beings. Let's ask minorities if they don't fear being attacked because they are different. Let's ask kids on a playground. Unless you believe humans will act kinder toward intelligent machines than they do toward other human beings?
 
Well, let me start off by saying this is all pretty much inevitable. Unless we wipe ourselves out prematurely - possible, but unlikely at this stage - artificial intelligence will be a reality. Probably soon. And I don't think it's going to be a bad thing. I feel people who fear AI, much like aliens, are projecting their fears onto it. Hell, you create a benevolent AI, it might be your best friend for life.

For every bad scenario you can conceive involving robots, I can see a good one. But ultimately, the use of technology will reflect the society that develops it. I see mostly good coming out of this, since I believe we are a mostly good society. Safer cars, police robots who will uphold the law, robots doing dangerous jobs, etc.

Thirty years ago, being gay was unacceptable in many, if not most places. Today gay people are complaining about straight people and businesses taking over their pride parades. We have an African American president. The world has changed. It's not perfect, but there is demonstrable, permanent change in social attitudes. I don't see it changing back barring some sort of post-apocalyptic scenario.

Also, these are robots; we built them. We are going to see them evolve into intelligent beings. There's nothing to suggest we won't understand intelligent robots we designed and watched develop over the course of decades. Are you afraid of a Roomba?
 
That's only if AI aren't terminated for malfunctioning (which will be seen as us "killing" them), aren't attacked when they do something wrong or when we do something wrong (how many times have we seen people punch their horn, yell at their computer, or throw their video game controller - like smart machines can't figure out that's "physical abuse"), and aren't viewed as slaves. Yeah, sure. Do I think mankind will stop the way it acts toward machines? Not a chance. Kids, yes. Adults, no.

Police robots... robots with guns... running in the streets... hell to the freak no. The last thing we need is to give black-hat hackers the ability to take control of death machines that can run rampant. I hate to say it, but if automated cars start out slowly, I hope scientists learn from the inevitable fallout - because we're going to have some hacker-guided Christines rampaging around town. If hackers can hack a plane, I have no doubt they can hack a car to play a little Grand Theft Auto. I'd pack up immediately and flee to the suburbs if robot cops ever happen (though I don't think I could completely escape Christine - NYC is probably the safest place thanks to traffic).

We're living in a country where a scary number of people want Donald "trigger happy" Trump to be the next President of the good ol' United "home of the free" States. In a world where just a couple of weeks ago terrorist groups attacked gay pride events and clubs. We're living in a world where there is still hate crime. Man can't even completely accept man yet, after hundreds of thousands of years of living together, and you expect them to accept something non-man in a tiny fraction of that time?

Let me remind you of what man is capable of....



Yep, let's bring robots into this society.

People are also going to see robots take people's jobs away, let's look at how neo-nazis view this shall we?



So, yeah, people are going to be thrilled that robots are taking their jobs and security away, when we already have hate groups built entirely around false notions. Only here, the resentment would feel justified - Industrial Revolution land 2.0. You took my job, HAL, but let's hug, be friends, and sing kumbaya. :)

Listening to neo-Nazis complain about illegal immigration back during the '90s, part of it being job loss and lack of opportunity (yeah, it's whack, they're insane)... but it kinda sounds familiar...


Now go back and read the comments on that vid - there's a freedom-loving country right there.

When humanity gives me a reason to put my complete faith and devotion in it, then I'll sing kumbaya. Until then I'm Klaatu when he just arrived, looking around at all the senseless violence, death, destruction, and mayhem, thinking "this world is doing a pretty good job of destroying itself."
 
I just wanted to make a Terminator joke. :(
 

The funny and sad part is...



The military is working on militaristic robots. Skynet is real.

As for cars:

Hackers reveal nasty new car hacks - Forbes (2013)

While the article is from 2013, hackers have kept evolving right alongside the technology. It shows what automated cars could lead to once a smart hacker figures out how to get into your system...

Which is basically saying be prepared for:


Key points of what hackers can do:
> Force acceleration
> Jerk your steering wheel
> Slam on your brakes abruptly
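The reason one access point can reach all three is that safety-critical controls in a modern car share the same internal CAN bus, and classic CAN has no sender authentication: an ECU obeys any frame carrying the right message ID, regardless of who put it on the bus. Here's a minimal, purely illustrative Python sketch of that failure mode - the arbitration ID and the toy brake ECU are hypothetical, not taken from any real vehicle:

```python
from dataclasses import dataclass

# Hypothetical arbitration ID -- real IDs vary by make and model.
BRAKE_CMD_ID = 0x1A0

@dataclass
class CanFrame:
    arbitration_id: int  # identifies the message type, NOT the sender
    data: bytes          # payload (max 8 bytes on classic CAN)

class BrakeEcu:
    """Toy ECU that applies whatever brake level the bus tells it to."""
    def __init__(self) -> None:
        self.brake_level = 0

    def on_frame(self, frame: CanFrame) -> None:
        # Classic CAN has no sender authentication: any node that can
        # place a frame on the bus with the right ID will be obeyed.
        if frame.arbitration_id == BRAKE_CMD_ID:
            self.brake_level = frame.data[0]

ecu = BrakeEcu()
ecu.on_frame(CanFrame(BRAKE_CMD_ID, bytes([30])))   # legitimate pedal input
ecu.on_frame(CanFrame(BRAKE_CMD_ID, bytes([255])))  # injected frame, same ID
print(ecu.brake_level)  # -> 255: the ECU can't tell the two frames apart
```

That's the whole vulnerability in miniature: from the receiving ECU's point of view, a forged frame and a legitimate one are indistinguishable, which is why the research linked above could actuate brakes and steering once the attackers had bus access.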



In an increasingly hacker-filled world, I'll never get behind the wheel of one of these death traps.
 
The military is already looking into robots that will autonomously decide whether to kill people. As for robots taking over jobs, once they have that the rich no longer have any use for the general population. The plan for the Socialist Utopia was never intended for the masses. The plan, from the very beginning, was to reduce the population to about 100 million worldwide. So as the jobs decline they simply need to avoid coming up with a working plan for everybody else to survive and let circumstances take their course. All they have to do is nothing.
 
When the next round of major terrorism comes in a very different form, such as a nuclear bomb, or a cyber attack that causes an economic meltdown, or a takeover of computerized weapons and automation, the government will move hell on Earth to implement regulatory measures. You will need licences to own robots with a particular AI ability. Companies will need their automation regulated. They will tax corporations even further from the excess wealth created by automation, just to pay for the regulations and make sure the latest technology is accounted for. Of course, there will be more than enough wealth to go around for corporations to expand and grow bigger, keeping stakeholders happy. The black market will explode for ever more advanced AI's as a result. You will have organized crime where the super rich have AI's under the table serving them by keeping one eye on the government and the other eye on the welfare of the owner.

What will need to happen, as AI meets and surpasses human ability, is that AI needs to be held accountable for their own actions and treated just as human beings. The owner may not even be responsible if the AI is given discretion or leeway to act based on whatever situation it encounters. We will need to treat these programs as real people. Not even the terrorists or the original programmers will be held completely liable should the creators say "lose control" of their synthetic intellect.

The only solution is to keep the regulations at a minimum to where you can dissuade terrorism yet promote responsible use. We need AI. Humans alone cannot solve all the mathematical challenges of the future without AI. This started with Alan Turing and the advent of the computer. We need to let AI's do the hard work for us and leave human beings alone to enjoy and admire the universe.
 

There's only one problem with your scenario... if AI is replacing people in their jobs, then who can afford to use it?
 
You are going to see an incredible decrease in cost though. I mean, all production will increase, so living expenses will go down. We're talking free housing very likely.
 


so you think this next evolution will push society towards some kind of universal basic income?
 

Yes, if only to keep people off the streets. I mean, arguably we've already seen the beginning of that with the welfare state. Free housing, free food, etcetera. Right now it's just for those unable to provide for themselves.

I mean, what do you do when you have more people than jobs? If you want to avoid strife, bread and circuses.

Give them studio apartments and virtual reality.
 
