U.S. scientist John Hopfield, recently awarded the Nobel Prize in Physics for his pioneering research in the field of Artificial Intelligence (AI), has issued a stark warning about the technology's rapid advance. Hopfield, professor emeritus at Princeton University, expressed concern about how little is still understood about the inner workings of modern AI systems and warned of the danger they could pose if not properly controlled.
In a video recorded from Britain for a meeting at the University of New Jersey, the 91-year-old researcher compared advances in AI to other powerful technologies he has witnessed in his lifetime, such as biological engineering and nuclear physics. Both, he noted, have proven capable of beneficial and devastating effects alike. For Hopfield, the key is understanding:
“As a physicist, I’m very concerned about something that is not controlled, something that I don’t understand enough to know what limits might be imposed on this technology.”
Today’s AI systems, with their impressive processing power and capacity for deep learning, are described by Hopfield as “absolute wonders.” The problem, however, is that despite these advances, scientists still do not fully understand how such systems work internally. That gap in understanding calls into question humanity’s ability to set limits on AI and to ensure that its uses remain ethical and safe.
The development of AI, particularly in neural networks and deep learning, rests on foundations laid in the 1980s by the pioneering work of Hopfield and his colleague Geoffrey Hinton, also a Nobel laureate, and has since grown explosively. The systems built on that groundwork have had an enormous impact across many fields, from medicine to industry, but have also raised concerns about their future implications.
Hopfield and Hinton, known as the “fathers” of AI, have been critical of the rapid expansion of the technology they helped create. Their concern centers on the fact that AI is evolving faster than scientists can fully understand it, raising the risk that its development will spiral out of control.
Like Hinton, Hopfield advocates for more research to better understand the limits and risks of AI before its capabilities outpace human oversight. As companies and governments race to master the technology, the need for limits and regulation becomes ever more urgent.
Hopfield’s warning adds to growing global concern about the impact of AI on society. While the technology promises to revolutionize many aspects of our lives, it also raises fundamental questions about control, ethics and responsibility in its development.