Google Computer Scientist Quits So He Can Warn World Of ‘Scary’ And ‘Dangerous’ AI

Warns that “bad actors” will attempt to use AI for “bad things”

A Google computer scientist dubbed the ‘Godfather’ of AI has quit the company, stating that he did so in order to warn the world of the dangers the technology presents as Big Tech engages in an AI arms race.

Geoffrey Hinton responded Monday to a New York Times article that suggested he had quit Google in order to criticise the company, clarifying that he actually left so he could speak freely about the dangers of AI.

From the Times interview:

He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their AI systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

Hinton added that he fears in the immediate term the internet is going to be quickly inundated with fake videos, photos, and news, and soon no one will “be able to know what is true anymore.”

He added that in the long term, he fears AI will eventually eclipse human intelligence.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said, adding “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Hinton also told the BBC that one of the leading dangers of AI is that “bad actors” will attempt to use it for “bad things”.

Current Google CEO Sundar Pichai has admitted that the company’s ChatGPT competitor, Bard, has developed “emergent properties,” meaning that it is learning things that it hasn’t been programmed to know, and that the AI’s behavior is something he does not “fully understand.”

Last month Elon Musk claimed that Google has long planned to create an AI god, and said that he has repeatedly warned the company’s founders against it.

Musk stated that Google’s ultimate goal is to “create digital super intelligence” or what he describes as a “digital god.”

The Twitter owner said that Google co-founder Larry Page told him privately, years ago, that the company’s larger agenda is to work toward Artificial General Intelligence.

Musk co-founded OpenAI, the company behind ChatGPT, and he is clearly genuinely concerned about the rapid advances in AI and how they could negatively impact humanity.

Musk told Tucker Carlson that it is “absolutely” conceivable that AI could take control and make decisions for people, which ultimately might lead to “civilizational destruction”.

“The danger, really, AI is perhaps more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production in the sense that it has the potential, however small you want to regard that probability, but it is not trivial,” Musk warned.
