Two researchers used fundamental knowledge of the physical properties of materials to create key innovations that make artificial intelligence work


CTBP congratulates John J. Hopfield and Geoffrey E. Hinton on receiving the Nobel Prize in Physics. Their research and innovations helped make possible “machines that learn”: artificial neural networks with the ability to store and reconstruct information and to recognize complex patterns in data. The prize was awarded “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”

This year’s laureates used tools from physics to construct methods that helped lay the foundation for today’s powerful machine learning. John Hopfield created a structure that can store and reconstruct information. Geoffrey Hinton invented a method that can independently discover properties in data and which has become important for the large artificial neural networks now in use.
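Hopfield’s structure, now known as a Hopfield network, works as an associative memory: patterns are stored in a matrix of connection weights, and a corrupted input settles back into the nearest stored pattern. The sketch below is an illustrative toy in Python, not the laureates’ original code; the function names and the six-unit pattern are invented for the example.

```python
# Toy Hopfield-style associative memory (illustrative sketch only).
# Patterns of +1/-1 units are stored with a Hebbian rule, and recall
# works by repeatedly setting each unit to the sign of its weighted input.

def train(patterns):
    """Build a symmetric weight matrix from +/-1 patterns (Hebbian rule)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=10):
    """Update units until the network settles into a stored pattern."""
    state = list(state)
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [1, 1, 1, -1, 1, -1]    # one unit flipped
print(recall(w, noisy))          # settles back to the stored pattern
```

Even with one unit corrupted, the update rule pulls the state back to the memorized pattern, which is the “store and reconstruct” behavior described above.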

NSF supported their pioneering work in the 1980s, including Hopfield’s seminal 1982 paper “Neural networks and physical systems with emergent collective computational abilities,” which helped lay the foundation for today’s AI revolution. Working independently, Hopfield and Hinton each drew on fundamental concepts and methods from physics to develop new computer technologies that mimic an organic brain’s ability to process information through memory and learning.

They used physics to find patterns in information

© Johan Jarnestad/The Royal Swedish Academy of Sciences

Many people are familiar with how computers can translate languages, interpret images, and even engage in meaningful conversations. What’s less commonly known is that these technologies have long been essential in research, especially for organizing and analyzing massive datasets. Over the last 15 to 20 years, machine learning has surged, driven by a framework known as the artificial neural network. Today, when we refer to artificial intelligence, we often mean this kind of technology.

While computers don’t actually think, they can now replicate functions like memory and learning. This year’s physics laureates have played a key role in making this possible. Drawing on core physics concepts and methods, they have developed technologies that leverage network structures to process information effectively.

Unlike traditional software, which operates like a recipe with specific instructions, machine learning allows a computer to learn from examples. This approach enables it to address complex, ambiguous problems that cannot be solved with straightforward instructions. For instance, machine learning can interpret images to recognize the objects within them.
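The difference can be made concrete with a toy example, not drawn from the article itself: a single artificial neuron (a perceptron) is never given the rule for the logical AND function, only labeled examples, and it adjusts its weights until its answers match them.

```python
# Toy "learning from examples": a single artificial neuron learns the
# AND function from labeled examples rather than explicit instructions.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Nudge weights toward each example's correct answer."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # zero when the guess is right
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The "examples" a recipe-style program would instead hard-code as rules:
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in examples])  # [0, 0, 0, 1]
```

The same training loop would learn a different function if given different examples, which is exactly what makes the approach suited to ambiguous problems, such as image recognition, where no explicit recipe exists.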


Read full article at Nobel Prize News