The Nobel Prizes in Physics and Chemistry both went to AI research this year, much to everyone's surprise, including the laureates themselves.
"How can I be sure this isn't a prank call?" was Geoffrey Hinton's initial thought when he received a call from the Nobel Prize committee at two in the morning. At the time, the 77-year-old AI godfather was in a hotel in California with weak internet and phone signals, planning to get an MRI scan to check his health that day. It wasn't until he realized the call was from Sweden, the caller had a thick Swedish accent, and there were several people on the line that he confirmed the fact that he had won the Nobel Prize in Physics. John Hopfield, the other laureate at the age of 91, was also somewhat shocked upon receiving the news.
This year's Nobel Prizes have a notably high "AI" content.
Going back to October 8th: on that day, the 2024 Nobel Prize in Physics was officially awarded to Geoffrey Hinton and fellow scholar John J. Hopfield, in recognition of their foundational discoveries and inventions in the field of machine learning and artificial neural networks.
Yes: not the condensed-matter or quantum-physics directions that forecasters had favored, but AI, machine learning and, more specifically, neural networks. Some say the Nobel Prize has taken over the Turing Award's job; others quip that physics no longer exists.
So, what is the connection between their contributions and physics?
The Nobel committee explained that the two laureates used tools from physics to develop methods that laid the foundation for today's powerful machine learning. Hopfield's "Hopfield network" stores and reconstructs patterns using an energy function equivalent to that of spin systems in physics, while Hinton's "Boltzmann machine" drew on tools from statistical physics. Hinton later built on this work to help launch the explosive growth of modern machine learning: the deep learning revolution we are all familiar with.
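To make the physics connection concrete, here is a minimal toy sketch of a classic Hopfield network in Python. It illustrates the standard textbook model rather than the laureates' original code, and the pattern, weights, and update schedule are arbitrary assumptions: patterns are stored with a Hebbian rule, and recall proceeds by asynchronous updates that never increase an energy of the same form as a spin system's, E = -1/2 Σ w_ij s_i s_j.

```python
import numpy as np

# Minimal, illustrative Hopfield-network toy (an assumption-laden sketch,
# not the laureates' original code). Units are +/-1 "spins"; patterns are
# stored with a Hebbian rule, and recall runs asynchronous updates that
# never increase the spin-glass-style energy E = -1/2 * sum_ij w_ij s_i s_j.

def train(patterns: np.ndarray) -> np.ndarray:
    """Hebbian storage: w = average outer product of the patterns, zero diagonal."""
    w = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(w, 0.0)
    return w

def energy(w: np.ndarray, s: np.ndarray) -> float:
    """Spin-system-style energy of state s under weights w."""
    return -0.5 * float(s @ w @ s)

def recall(w: np.ndarray, s: np.ndarray, steps: int = 200, seed: int = 0) -> np.ndarray:
    """Asynchronously update one randomly chosen unit at a time toward lower energy."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if w[i] @ s >= 0 else -1.0
    return s

# Store one 8-unit pattern, corrupt two bits, and let the network settle.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
w = train(pattern[None, :])
noisy = pattern.copy()
noisy[[0, 3]] *= -1                                  # flip two "spins"
restored = recall(w, noisy)
print(energy(w, noisy), "->", energy(w, restored))   # energy decreases as the memory is recovered
print(bool(np.array_equal(restored, pattern)))       # True: the stored pattern is recalled
```

Running the sketch shows a corrupted state settling back into the stored pattern as the energy falls, which is the associative-memory behavior the committee highlighted.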
The very next day, on the afternoon of October 9th Beijing time, the Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Chemistry to three scientists. Half went to Professor David Baker of the University of Washington for his contributions to computational protein design; the other half was awarded jointly to Demis Hassabis and John M. Jumper of the British artificial intelligence company Google DeepMind for their contributions to protein structure prediction.
The Nobel Prize website stated that this year's three Chemistry laureates cracked the code of the amazing structures of proteins, life's exquisite chemical tools. Baker accomplished the almost impossible feat of creating entirely new proteins, while his co-laureates Hassabis and Jumper developed the AI model AlphaFold2 to solve a problem that had stood for 50 years: predicting the complex structures of proteins, a breakthrough with enormous technical potential.

From the Physics prize to the Chemistry prize, AI became the unexpected "star" technology of this year's Nobel season. For many physicists and chemists, however, this year's awards were not only "boring" but also somewhat dispiriting, since theoretical physics and theoretical chemistry went unrecognized by the academic holy grail, the Nobel Prize.
As a result, many have commented that the Nobel Prize has become "watered down," or that pure physics and chemistry are simply not as useful as AI. The problem with that view is that interdisciplinary research has become a recognized trend in academia, and AI is indeed driving cross-disciplinary empowerment in physics, chemistry, biology, medicine, finance, and other fields. In September this year, Andrew Yao, the 2000 Turing Award laureate, academician of the Chinese Academy of Sciences, and professor at Tsinghua University, said that AI shows two clear trends: one is the move from weak intelligence toward general intelligence; the other is cross-empowerment between disciplines, which makes work that is already plainly interdisciplinary more active and more important.
While the academic community has cast a vote of confidence in AI, the industry has clearly cooled off this year, with talk of a "large-model bubble" spreading widely. For example, in its June report "Gen AI: Too Much Spend, Too Little Benefit?", Goldman Sachs noted that over the next few years technology giants, other companies, and the public sector will invest about one trillion dollars in generative AI, yet so far these investments do not appear to have delivered the expected returns.
AI Commercialization: When and How Will the "Killer App" Emerge?
If there is any consensus between the academic and industrial communities, it should be that large models are not a "panacea." Only by placing large models in specific scenarios can the value of the technology be realized.
In terms of implementation, the development trajectories of today's AI search players fall roughly into two camps. On the one hand, many companies follow a "from model to application" path, gradually adding new features to AI products to build an ecosystem. This is like "looking for nails with a hammer": design the product first, then hunt for application scenarios.
The other path is "from scenario to technology": strengthening AI search capabilities in clear, high-demand scenarios so that the technology connects tightly to real needs. This amounts to first finding the "nail" of demand in a mature scenario and then driving it home with the "hammer" of AI technology and product capability, meeting user needs more directly and with more visible results.
The rapid development of deep learning and large models makes AI seem omnipotent, yet commercializing this wave has been full of pain points. Companies like OpenAI have made major technological breakthroughs, but their profit models remain unclear. OpenAI, for example, has won enormous attention and substantial revenue with the GPT series, yet because of high compute and data costs it still runs at a huge loss. This shows that, however advanced they are technically, the high cost and energy consumption of large AI models limit their large-scale commercial application.
As things stand, search engines are the product form most suited to, and first in line for, transformation by AI. With AI support, search has entered its next stage: user-centered, better at understanding user semantics, and able to offer personalized recommendations as well as cross-modal and cross-language retrieval and interaction, with user value expected to exceed that of traditional search.

However, current AI search products are still in their infancy, and their business models need further exploration. Industry analysts point out that, overall, today's AI search products remain some distance from the standard of a "killer application": "Despite significant technological progress, judged by the diversity of product forms, the breadth of usage, and the depth of application scenarios, AI search has not yet produced a single application capable of upending the market."
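As a concrete, hypothetical illustration of the "understanding user semantics" described above, the sketch below ranks documents by embedding similarity rather than keyword overlap, using the open-source sentence-transformers library. The model name, toy corpus, and query are illustrative assumptions, not a description of any particular AI search product.

```python
from sentence_transformers import SentenceTransformer, util

# Toy corpus; in a real AI search product these would be indexed documents.
docs = [
    "How to renew a passport in the United States",
    "Best hiking trails near Seattle",
    "Symptoms and treatment of the common cold",
]

# The query shares no content words with the passport document,
# so naive keyword matching would struggle with it.
query = "my travel document is about to expire, what should I do?"

# Embedding-based retrieval compares meanings rather than exact words.
model = SentenceTransformer("all-MiniLM-L6-v2")    # small open-source encoder (illustrative choice)
doc_vecs = model.encode(docs, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vec, doc_vecs)[0]      # cosine similarity of the query to each document

for doc, score in sorted(zip(docs, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {doc}")                   # the passport document should rank first
```

The same similarity machinery extends to cross-language and cross-modal retrieval by swapping in multilingual or image-text encoders, which is the direction the paragraph above points to.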
The Ceiling of AI: Is There a Discrepancy Between Technological Breakthroughs and Market Expectations?
Beyond commercialization, the industry has a deeper worry: whether AI will slide into another trough, that is, whether there is a ceiling on its capabilities.
This concern is not baseless. Historically, AI has been through several boom-and-bust cycles: since the Dartmouth Conference in 1956, artificial intelligence as a technology and an industry has seen three rises and three falls. Few technologies have stretched across such a long cycle, neither sinking into silence nor achieving a full-scale explosion.
So why does each wave of AI enthusiasm end in a bubble? To answer this question, we first need to understand why AI is so prone to producing bubbles.
The logic behind this is clear. First, there is the misalignment between technological breakthroughs and market expectations. Whenever AI achieves a breakthrough in the laboratory or in some specific field, the market over-amplifies this partial success and expects AI to quickly upend human life and work across every field. The technology's actual maturity usually falls far short of such expectations. When key issues such as computing power, data, and algorithm performance remain fundamentally unresolved, the market's excessive expectations inevitably turn into disappointment, capital withdraws, and the bubble bursts.
Second, new technologies always take time to change the world. Mark P. Mills, a senior fellow at the Manhattan Institute, argues that, like revolutionary technologies such as the automobile, radio, and the internet, a new technology goes through a long dormancy before it transforms the world, and cannot skip the three stages of invention, commercial feasibility, and large-scale market rollout.
"Disruptive innovation" usually lasts about 20 years in each stage. For example, many years after the invention of the automobile (1886), the Model T design appeared (1908), and by the end of the 1920s, the penetration rate of automobiles in the United States rose to 20%.
Sometimes innovation moves faster: it took less than ten years to go from the idea of "packet switching" to the creation of the internet, and while the World Wide Web took 20 years to become commercially viable, it needed only 10 to reach significant market penetration.
Conclusion: Do Not Deify AI, Nor Demonize It

In summary, the industry keeps longing for an unlimited, omnipotent, and omniscient "God-like AI," but that is only a beautiful fantasy. Beyond the practical constraints of computing power and energy, human society will also impose boundaries and limits of its own on AI.
Looking back at the birth and growth of AI, from early symbolic logic and reasoning to today's deep-learning-driven intelligent applications, each technological leap has brought unprecedented efficiency gains and new possibilities to fields such as healthcare, finance, and education. Yet in the shadow of each advance, ethical and legal challenges have also emerged.
Issues such as who bears responsibility when a self-driving car has an accident, employment discrimination caused by algorithmic bias, and copyright infringement by AI-generated content reflect, like a mirror, the complex reality in which technology, ethics, and law are intertwined. They are a warning bell, reminding us that in pursuing AI's vast "sea of stars" we must not ignore its potential impact on fundamental values such as social fairness and privacy protection.
Therefore, we should neither deify AI nor demonize it. Perhaps only by bringing AI back into a rational frame can we treat it properly.