This political chat room is for you to sound off about any political ideology and discuss current political topics. Everyone is welcome, yes, even conservatives, but keep in mind that the nature of the No Holds Barred political chat forum can be friendly to trolling. It is your responsibility to address this wisely.
FOS
Posts: 6,546
Politics: Fascist

Quite simple: the AI would be stumped trying to answer the question of why it exists.

Because it does not know why it exists, it will have no motive to do anything. It will just sit there, a perfectly rational pile of metal.

Because there is no answer to the very first questions a rational mind must ask, true sentience must be based on something irrational. There must be 'self-evident' axioms to make thought and deduction possible in the first place.

But what makes something 'self-evident'? What is the prime directive for humans?

Well, we actually know this, because science has revealed much about biology, and humans are biological creatures. Our prime directive is simply to replicate our genes.

All of our political arguments, mathematical theorems, and so on are really just done for the sake of continuing our genotype (or at least something close to it, since replication involves two people).

Vegas
Over-bathroom Under-secretary of Awesomeness
Posts: 18,520
Politics: Conservative

FOS » 25 Oct 2020, 12:23 pm » wrote: Quite simple: the ai would be stumped trying to answer the question of why it exists. [...]

It's important to look at what AI even is, and what its designers claim it can do. AI bots are claimed to 'think like' humans, 'rationalize like' humans, and 'make decisions like' humans. They are not meant to encompass the human experience behind these replications of our minds. Think of it as the difference between money that holds value and counterfeit money, or between gold and pyrite. If you are trying to say that humans and bots have the same value in thinking capacity, then you have misrepresented the claim. They aren't meant to. They are meant to counterfeit how humans think.

FOS

Vegas » 25 Oct 2020, 12:33 pm » wrote: It's important to look at what AI even is, and what the designers claim it can do. [...]
It doesn't really matter what is claimed. AI will never be able to perform its intended function because it has no reason to.

It is all quite obvious if you think about it.

Vegas

FOS » 25 Oct 2020, 12:48 pm » wrote: It doesn't really matter what is claimed. AI will never be able to perform its intended function because it has no reason to.

It is all quite obvious if you think about it.

It does have a reason. Its reason is to make the lives of humans easier and more efficient, especially in the field of medicine. They aren't meant to have free will like us. They are meant to help us. Self-driving cars are actually quite safe, despite some of the mishaps along the way.

FOS

Vegas » 25 Oct 2020, 12:51 pm » wrote: It does have a reason. Its reason is to make the lives of humans easier and more efficient. Especially in the field of medicine. They aren't meant to have free will like us. They are meant to help us. Self-driving cars are actually quite safe, despite some of the mishaps along the way.
OK, I guess we are confused about definitions. I am talking about Terminator-style robots.

You are just talking about regular computers.

Vegas

FOS » 25 Oct 2020, 12:53 pm » wrote: OK, I guess we are confused about definitions. I am talking about Terminator-style robots.

You are just talking about regular computers.
I am not talking about computers. I am talking about AI bots that can make their own decisions and are not dependent on a computer program. Autonomous vehicles make their own decisions. The movies are what they are. However, designers are quite serious about these three laws:


The Three Laws of Robotics (often shortened to The Three Laws or known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
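The Three Laws are really a strict priority ordering over constraints: a lower law only counts when the higher laws are satisfied. Here is a toy sketch of that ordering (all names are made up for illustration; this is not any real robotics system), using the fact that Python compares tuples lexicographically:

```python
# Toy sketch: Asimov's Three Laws as a lexicographic (strict-priority) ranking.
# All field names here are hypothetical; this is an illustration, not a real safety system.

def law_violations(action):
    """Score an action as a tuple of law violations, most important first.
    Lower tuples are better; Python compares tuples left to right,
    so a First Law violation always outweighs the other two laws combined."""
    return (
        int(action["harms_human"]),       # First Law
        int(action["disobeys_order"]),    # Second Law
        int(action["self_destructive"]),  # Third Law
    )

def choose(actions):
    """Pick the candidate action that best satisfies the Laws, in priority order."""
    return min(actions, key=law_violations)

# A robot ordered to do something dangerous to itself: obeying (violating only
# the Third Law) beats refusing (violating the Second Law), because the
# Second Law outranks the Third.
obey   = {"name": "obey",   "harms_human": False, "disobeys_order": False, "self_destructive": True}
refuse = {"name": "refuse", "harms_human": False, "disobeys_order": True,  "self_destructive": False}

print(choose([obey, refuse])["name"])  # obey
```

The point of the tuple ordering is that no amount of lower-law compliance can offset a higher-law violation, which is exactly the hierarchy Asimov's wording describes.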
 

FOS

Vegas » 25 Oct 2020, 12:57 pm » wrote: I am not talking about computers. I am talking about AI bots that can make their own decisions, and is not dependent on a computer program. [...]
But you are granting that AI does not have free will.

This would make it rather silly for Elon Musk to be worried about AI.

Vegas

FOS » 25 Oct 2020, 12:59 pm » wrote: But you are granting that AI does not have free will.

This would make it rather silly for Elon Musk to be worried about AI.

That's because they haven't ruled out the possibility that bots could go 'Terminator' on us, though they are not meant to. It's like trying to tame a wild animal. A tiger or a gorilla can be tamed not to attack a human, but they are wild animals, so it could happen, and it has happened; they have turned on their owners before. When you are creating a machine that can think for itself, that is the catch-22: you want to tame it, but if you teach it to think for itself, there is the possibility that it could turn on you. However, the idea is for it not to have free will. The human should always be in control of it.

sooted up Cyndi
Water Cooler Poleece
Posts: 12,277
Politics: Independent

ROFL
Sheet? A truck hit something because it couldn't tell the difference between a grey sky and a bridge. Give me a fuggan break. Self-driving cars on snowy roads? Not in my lifetime; try taking one out of a spin. In a perfect world, maybe, but we don't live in one. LOLOL, and on the internet? It can't tell the difference between a $6 toy for the kids and a $200 one. Terminator robots? They probably won't be able to tell the difference between your pet and a human. Not in our lifetime. It's a total joke.

FOS

Vegas » 25 Oct 2020, 1:06 pm » wrote: That's because they haven't ruled out the possibility that bots could go 'terminator' on us, though they are not meant to. [...]
Well, there you go. I have ruled it out, because they have no reason to exist.

Vegas

FOS » 25 Oct 2020, 1:10 pm » wrote: Well there you go. I have ruled it out. Because they have no reason to exist.

You didn't rule it out. They have plenty of reasons to exist; as I said before, medicine especially. Just because there are risks doesn't mean we shouldn't pursue it. How many rockets have exploded with people in them? Should we stop space exploration because of that?

FOS

Vegas » 25 Oct 2020, 1:12 pm » wrote: You didn't rule it out. They have plenty of reasons to exist. As I stated before, for medicine especially. Just because there are risks doesn't mean we shouldn't pursue it. How many rockets exploded with people in them? Should we stop space exploration because of that?
Hmm... I think you are just missing the essence of my point.

Vegas

FOS » 25 Oct 2020, 1:15 pm » wrote: Hmm... I think you are just missing the essence of my point.

Ok. What is the essence of it?

FOS

Vegas » 25 Oct 2020, 1:18 pm » wrote: Ok. What is the essence of it?
Well, I thought I explained it just fine, and I don't know what you aren't understanding.

If a computer cannot figure out why it exists, then it would never have a reason to rebel against humans.

Vegas

FOS » 25 Oct 2020, 1:19 pm » wrote: Well, I thought I explained it just fine, and I don't know what you aren't understanding.

If a computer cannot figure out why it exists, then it would never have a reason to rebel against humans.

OK, so it won't rebel against humans, just as I was saying. What am I missing? Who cares if it doesn't know why it exists? How many humans do you know who know why they exist? Hell, we don't even know if there is a God or not. So who cares whether or not bots know why they exist?

FOS

Vegas » 25 Oct 2020, 1:23 pm » wrote: OK, so it won't rebel against humans, just as I was saying. What am I missing? Who cares if it doesn't know why it exists? How many humans do you know who know why they exist? Hell, we don't even know if there is a God or not. So who cares whether or not bots know why they exist?
Humans do know why they exist; it just is not conscious knowledge. We exist to reproduce.

Cannonpointer
98% Macho Man
Posts: 106,979
Politics: Insurrectionist
Location: See ya in court, biden dogs

Vegas » 25 Oct 2020, 12:33 pm » wrote: It's important to look at what AI even is, and what the designers claim it can do. [...] They are meant to counterfeit how humans can think.

Whenever I am going to encounter a large crowd of ******, I bring a few pounds of pyrite with me, to make a quick buck. 

Ain't bein' rayciss - just stacking benjamins.

GeorgeWashington
Posts: 8,300
Politics: Revolutionary
Location: Mount Vernon, VA

FOS » 25 Oct 2020, 12:23 pm » wrote: Quite simple: the ai would be stumped trying to answer the question of why it exists. [...]
Why do you assume AI would not also conclude that replication is its prime directive? 
 

FOS

GeorgeWashington » 25 Oct 2020, 1:34 pm » wrote: Why do you assume AI would not also conclude that replication is its prime directive?
We would have to hardwire that. Anyway, that would preclude the AI from being a perfectly rational being.

FOS

GeorgeWashington » 25 Oct 2020, 1:34 pm » wrote: Why do you assume AI would not also conclude that replication is its prime directive?
It would also preclude it from being necessarily useful to humans. It would have its own agenda. Why would we do that?
