Neo » 29 Mar 2023, 12:46 pm » wrote: ↑
No, it cannot think for us. It's artificial. It's using existing human-created data to generate a response. There are serious concerns, such as filters obviously already in place driving politically correct responses. Ask ChatGPT about Donald Trump and it refuses to answer. Ask about Biden and it returns some glowing butt kissing.
Vegas » 29 Mar 2023, 12:50 pm » wrote: ↑
Neo » 29 Mar 2023, 1:09 pm » wrote: ↑
I am in the field. It does not display intelligence. It's quite artificial. It's getting better, but AI will never think; it will always reference and compare data sets. The only opinions it develops are those programmed in.
Vegas » 29 Mar 2023, 1:12 pm » wrote: ↑
Xavier_Onassis » 29 Mar 2023, 1:12 pm » wrote: ↑
I don't know about this evil AI.
Vegas » 29 Mar 2023, 1:16 pm » wrote: ↑
Well, if no one is going to give me a red nuclear button so I can destroy Russia, China, and whatever other **** country I want to destroy, then please give me AI, so that I can use it to move Joe's arthritic finger to press the stupid button.
Vegas » 29 Mar 2023, 12:29 pm » wrote: ↑
Neo » 29 Mar 2023, 1:16 pm » wrote: ↑
Here is the definition of 'thinking': the ability to acquire and apply knowledge and skills.
The process of using one's mind to consider or reason about something.
It is more than being able to reason, of course; it is the ability for one generation to WRITE DOWN what it has learned and to pass it on to following generations.
Vegas » 29 Mar 2023, 1:16 pm » wrote: ↑
So, you disagree with this statement?
Neo » 29 Mar 2023, 1:16 pm » wrote: ↑
Which requires reasoning.
Xavier_Onassis » 29 Mar 2023, 1:29 pm » wrote: ↑
Because they can't reason how. The mother bear passes on how to catch salmon, but the grandmother bear and great-great-grandmother bear may have learned how to survive a forest fire or a flood and could NOT pass it on to the three generations that followed her.
Xavier_Onassis » 29 Mar 2023, 1:35 pm » wrote: ↑
That is just scratching the surface... of the ethical debate... that isn't being debated in the rush for profit.
Vegas » 29 Mar 2023, 12:29 pm » wrote: ↑
What direction will this go? Will it get to the point where, if ChatGPT or any other AI can eliminate a job, then it will? Students in college and high school are loving it all. It writes papers for them. Soon, society may substitute real social connections with AI bots. This will slowly decay and dull our minds to the point where any critical thinking will be a thing of the past.
The decline has begun.
Obviously we need to have limits in place. I agree with the concerns. Deepfakes generated by AI would be indistinguishable from real footage, as the usual giveaways would be removed. We are certainly in an age where anything you do not personally witness first-hand should be met with some skepticism. Tests will have to be monitored in real time. Controls over things like infrastructure should always have a human at the switch.
Majik » 29 Mar 2023, 1:29 pm » wrote: ↑
And you downplay the risk?
And could your opinion be tied to your livelihood?
The letter, issued by the non-profit Future of Life Institute, has been signed by more than 1,100 individuals, including Apple co-founder Steve Wozniak, Stability AI founder and CEO Emad Mostaque, and engineers from Meta and Google, among others.

They argue that AI systems with human-competitive intelligence can pose “profound risks to society and humanity,” and change the “history of life on Earth,” citing extensive research on the issue and acknowledgments by “top AI labs.”

Experts go on to state that there is currently limited planning and management regarding advanced AI systems, despite companies in recent months being “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

“Contemporary AI systems are now becoming human-competitive at general tasks and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter states.