FOS » 25 Oct 2020, 12:23 pm » wrote: ↑ Quite simple: the AI would be stumped trying to answer the question of why it exists.
Because it does not know why it exists, it will have no motive to do anything. It will just sit there being a perfectly rational pile of metal.
Because there is no answer to the very initial questions of a rational mind, true sentience must be based on something irrational. There must be 'self-evident' axioms to begin the possibility of thought and deduction.
But what makes something 'self-evident'? What is the prime directive for humans?
Well, we actually know this, because science has revealed much about biology, and humans are biological creatures. Our prime directive is simply to replicate our genes.
All of our political arguments, mathematical theorems, etc. are really just done for the sake of continuing our genotype (or at least something close to it, since replication involves two people).
It doesn't really matter what is claimed. AI will never be able to perform its intended function because it has no reason to.

Vegas » 25 Oct 2020, 12:33 pm » wrote: ↑ It's important to look at what AI even is, and what the designers claim it can do. AI bots are claimed to 'think like' humans, 'rationalize like' humans, and 'make decisions like' humans. They are not meant to encompass the human experience behind these replications of our mind. Think of it as the difference between money that holds value vs. counterfeit money, or gold vs. pyrite. If you are trying to equate humans and bots as having the same value in thinking capacity, then you have misrepresented the claim. They aren't meant to. They are meant to counterfeit how humans can think.
FOS » 25 Oct 2020, 12:48 pm » wrote: ↑ It doesn't really matter what is claimed. AI will never be able to perform its intended function because it has no reason to.
It is all quite obvious if you think about it.
Ok, I guess we are confused on definitions. I am talking about Terminator-style robots.

Vegas » 25 Oct 2020, 12:51 pm » wrote: ↑ It does have a reason. Its reason is to make the lives of humans easier and more efficient, especially in the field of medicine. They aren't meant to have free will like us. They are meant to help us. Autonomous cars are actually quite safe, despite some of the mishaps along the way.
I am not talking about computers. I am talking about AI bots that can make their own decisions and are not dependent on a computer program. Autonomous vehicles make their own decisions. The movies are what they are. However, designers are quite serious about these three laws:

FOS » 25 Oct 2020, 12:53 pm » wrote: ↑ Ok, I guess we are confused on definitions. I am talking about Terminator-style robots.
You are just talking about regular computers.
But you are granting that AI does not have free will.

Vegas » 25 Oct 2020, 12:57 pm » wrote: ↑ I am not talking about computers. I am talking about AI bots that can make their own decisions and are not dependent on a computer program. Autonomous vehicles make their own decisions. The movies are what they are. However, designers are quite serious about these three laws:
The Three Laws of Robotics (often shortened to The Three Laws or known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
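The three laws quoted above amount to a strict priority ordering, where each law yields to the ones before it. A minimal sketch of that precedence (the `Action` fields and the `permitted` function are hypothetical, invented purely for illustration, not from any real robotics API):

```python
from dataclasses import dataclass

# Hypothetical model of an action a robot might take; the field names
# are assumptions made for this sketch.
@dataclass
class Action:
    harms_human: bool = False       # would injure a human, or allow harm through inaction
    ordered_by_human: bool = False  # a human has commanded this action
    endangers_self: bool = False    # risks the robot's own existence

def permitted(a: Action) -> bool:
    """Evaluate an action against the Three Laws, in strict priority order."""
    if a.harms_human:
        return False   # First Law overrides everything else
    if a.ordered_by_human:
        return True    # Second Law: obey, since the First Law is satisfied
    if a.endangers_self:
        return False   # Third Law: self-preservation has the lowest priority
    return True

# First Law beats Second: an order to harm a human is refused.
assert not permitted(Action(harms_human=True, ordered_by_human=True))
# Second Law beats Third: an order that endangers the robot is still obeyed.
assert permitted(Action(ordered_by_human=True, endangers_self=True))
```

The ordering of the `if` checks is the whole point: a conflict between laws is resolved by whichever law is tested first.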
FOS » 25 Oct 2020, 12:59 pm » wrote: ↑ But you are granting that AI does not have free will.
This would make it rather silly for Elon Musk to be worried about AI.
Well, there you go. I have ruled it out, because they have no reason to exist.

Vegas » 25 Oct 2020, 1:06 pm » wrote: ↑ That's because they haven't ruled out the possibility that bots could go 'Terminator' on us, though they are not meant to. It's like trying to tame a wild animal. A tiger or gorilla can be tamed not to attack a human, but they are wild animals, so it could happen, and it has happened. They have turned on their owners before. When you are creating a machine that can think for itself, that is the catch-22. You want to tame it, but if you teach it to think for itself, then there is the possibility that it could turn on you. However, the idea is for it to not have free will. The human should always be in control of it.
FOS » 25 Oct 2020, 1:10 pm » wrote: ↑ Well, there you go. I have ruled it out, because they have no reason to exist.
Hmm... I think you are just missing the essence of my point.

Vegas » 25 Oct 2020, 1:12 pm » wrote: ↑ You didn't rule it out. They have plenty of reasons to exist. As I stated before, for medicine especially. Just because there are risks doesn't mean we shouldn't pursue it. How many rockets exploded with people in them? Should we stop space exploration because of that?
Well, I thought I explained it just fine, and I don't know what you aren't understanding.
FOS » 25 Oct 2020, 1:19 pm » wrote: ↑ Well, I thought I explained it just fine, and I don't know what you aren't understanding.
If a computer cannot figure out why it exists, then it would never have a reason to rebel against humans.
Humans do know why they exist; it just is not conscious knowledge. We exist to reproduce.

Vegas » 25 Oct 2020, 1:23 pm » wrote: ↑ Ok, so it won't rebel against humans, just as I was saying. What am I missing? Who cares if it doesn't know why it exists? How many humans do you know who know why they exist? Hell, we don't even know if there is a God or not. So who cares whether or not bots know why they exist?
Vegas » 25 Oct 2020, 12:33 pm » wrote: ↑ It's important to look at what AI even is, and what the designers claim it can do. AI bots are claimed to 'think like' humans, 'rationalize like' humans, and 'make decisions like' humans. They are not meant to encompass the human experience behind these replications of our mind. Think of it as the difference between money that holds value vs. counterfeit money, or gold vs. pyrite. If you are trying to equate humans and bots as having the same value in thinking capacity, then you have misrepresented the claim. They aren't meant to. They are meant to counterfeit how humans can think.
Why do you assume AI would not also conclude that replication is its prime directive?

FOS » 25 Oct 2020, 12:23 pm » wrote: ↑ Quite simple: the AI would be stumped trying to answer the question of why it exists.
Because it does not know why it exists, it will have no motive to do anything. It will just sit there being a perfectly rational pile of metal.
Because there is no answer to the very initial questions of a rational mind, true sentience must be based on something irrational. There must be 'self-evident' axioms to begin the possibility of thought and deduction.
But what makes something 'self-evident'? What is the prime directive for humans?
Well, we actually know this, because science has revealed much about biology, and humans are biological creatures. Our prime directive is simply to replicate our genes.
All of our political arguments, mathematical theorems, etc. are really just done for the sake of continuing our genotype (or at least something close to it, since replication involves two people).
We would have to hardwire that. Anyway, that would preclude the AI being some perfectly rational being.

GeorgeWashington » 25 Oct 2020, 1:34 pm » wrote: ↑ Why do you assume AI would not also conclude that replication is its prime directive?
It would also preclude it being necessarily useful to humans. It would have its own agenda. Why would we do that?

GeorgeWashington » 25 Oct 2020, 1:34 pm » wrote: ↑ Why do you assume AI would not also conclude that replication is its prime directive?