Geoffrey Hinton, a leading figure in AI research, warns that artificial intelligence could one day escape human control, putting the odds at 10% to 20%. He urges the tech industry to prioritize safety over profits, and he criticizes Google in particular for its shift toward military applications of AI. Hinton’s concerns reflect a broader worry about the speed of AI development and its implications. He believes companies should allocate far more resources to safety research, yet when asked, the major labs would not say how much of their budgets that work actually receives.
Geoffrey Hinton, often dubbed the “Godfather of AI,” has issued a striking warning about the potential for artificial intelligence to escape human control. Awakened last year with news of an unexpected honor, the Nobel Prize in physics, he reflected on his groundbreaking contributions to the field. He quipped, “I dreamt about winning one for figuring out how the brain works. But I didn’t figure out how the brain works, but I won one anyway.”
Now 77, Hinton built his legacy on neural networks; his 1986 proposal for predicting the next word in a sequence set the stage for today’s language models. Despite his excitement about AI’s potential to revolutionize sectors like education and medicine, he voiced growing concern over the pace of its development. “The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” he said. “Unless you can be sure it’s not gonna want to kill you when it’s grown up, you should worry.”
Hinton estimates a 10% to 20% probability that AI could someday seize control from humans. “People haven’t got it yet, people haven’t understood what’s coming,” he lamented, emphasizing that the danger remains widely underestimated. The alarm isn’t his alone; it is echoed by other prominent figures in the tech industry, including Google CEO Sundar Pichai and OpenAI’s Sam Altman. Yet Hinton isn’t shy about criticizing these same companies, especially the big tech firms, for prioritizing profit over safety.
“If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation,” Hinton remarked, expressing disbelief at their push against rules he considers already inadequate. He is especially disappointed in Google, where he once worked, for reversing course and supporting military applications of AI.
Hinton also calls for a significant shift in how AI developers allocate resources, arguing that companies should devote a substantial share of their computing power, roughly a third, to safety research, far more than they commit today. When CBS News asked AI labs for specifics on their safety research budgets, however, none provided concrete figures. All say safety is a high priority, yet all remain wary of the regulations legislators have proposed.
Hinton’s dire warnings highlight a growing concern within the tech community about the need for strong safety measures as the technology advances. A Nobel laureate who played a key role in the development of neural networks, he stresses that the risk of AI overtaking human control is significant, yet much of the industry fails to grasp its severity. His criticism extends to big tech firms’ push for lighter regulation, and he urges a far deeper commitment to safety research before it’s too late.
Original Source: www.cbsnews.com