AI as a New Infrastructure for Knowledge Transmission

Because most of the time we are dealing with normal science, in Thomas Kuhn’s sense, there are in fact not that many genuine points of innovation. Most work is, at bottom, engineered repetition, tuning, and pattern matching. The rise in humanity’s average level of knowledge does not come from people across history suddenly having more time to study; it comes from knowledge becoming easier and easier to diffuse. When Newton and Leibniz invented calculus in the seventeenth century, it stood at the absolute frontier of human intellectual activity. Today, a high school student can grasp its basic framework in a few months. The scientific community has in effect formed a division-of-labor mechanism: it prevents later generations from repeatedly starting from zero, and it gradually turns “reinventing the wheel” into something inefficient and faintly suspicious.

In computer science, the overwhelming majority of papers are also doing normal science. Since the Transformer appeared in 2017, thousands of papers have essentially done the same thing: swap in a new dataset, add a module, change the loss function. These are engineering evolutions inside the Transformer paradigm, not ruptures at the level of fundamental concepts. The nodes that truly deserve to be called innovations may number only a handful: the attention mechanism itself, the discovery of scaling laws, RLHF as a worked-out alignment strategy, and the theoretical framework of diffusion models. Much of the rest is closer to exploitation than exploration.

The main mechanism here is the continuous iteration of technologies for encoding, compressing, and transmitting knowledge. Writing, the printing press, academic journals, the internet, and now AI: each has been an upgrade of the infrastructure of transmission. In The Gifts of Athena (2002), Joel Mokyr distinguishes between two kinds of knowledge: propositional knowledge (“what is,” or Ω knowledge) and prescriptive knowledge (“how to,” or λ knowledge). His core argument is that the Industrial Revolution happened not because human beings suddenly became smarter, but because the mapping between these two kinds of knowledge became more efficient: people could more quickly convert “knowing the principle of X” into “knowing how to use X to do things.”

If normal science is the repeated exercise of established patterns, if the bottleneck of human progress lies more in transmission than in discovery, and if one function of the community is to avoid meaningless repetition through division of labor, then AI’s impact on academia is not merely that “papers can be written faster.” The sharper judgment is this: AI may be making large portions of the human labor invested in normal science unnecessary. If 90% of academic output consists of incremental filling-in within existing paradigms, and AI can already handle a substantial share of that filling-in, then what exactly is the rationale for maintaining a global academic labor force of several million people to perform this task?

In the past, what was the transmission mechanism for knowledge such as “how to write a paper”? It was apprenticeship. You worked under an advisor, and the advisor taught you hands-on: how to design a baseline so the experiment looks fair, how to write related work without offending reviewers, how to calibrate the tone of a rebuttal, and how the first sentence of an abstract should hook an editor. These things are not fully written down in any textbook. They are procedural knowledge, embedded in a specific structure of social relations and passed on orally through apprenticeship. Whether you can learn them therefore depends, in essence, on who your advisor is. What a student in a top lab absorbs almost by osmosis, a student at an ordinary institution may still not know after finishing a PhD. The channels through which knowledge is transmitted are bound to the hierarchy of academic power. The concept of tacit knowledge, developed by Michael Polanyi in Personal Knowledge (1958), describes precisely this layer; his later famous formulation, “we can know more than we can tell,” also points to the fact that such knowledge cannot be fully made explicit, cannot be adequately carried by textbooks, and can only circulate within communities of practice.

What has AI done? It has lowered the threshold for acquiring this layer of tacit knowledge from “you first have to spend years as an apprentice inside a guild” to “you have to know how to ask questions.” Handing a draft to an AI and getting back something that functions like peer brainstorming was almost unimaginable two years ago. Back then, a doctoral student who wanted high-quality peer feedback either waited months for formal peer review or found a peer working in the same area who was willing to spend time reading the paper carefully. Now, you can obtain a reasonably capable intellectual sparring partner at any time. And notice: this is not merely an efficiency gain. It is a transfer of power. In the past, perhaps only a few thousand doctoral students in China’s top laboratories could access feedback of this quality. Now, anyone with access to Claude or GPT, and with enough skill to ask good questions, can obtain something similar.

In the short term, academia will not fundamentally change because of this, since academic power is not built entirely on the monopoly of knowledge. It is built more deeply on credential certification and network access. Being able to write a paper does not mean you can get it published, because the bottleneck of publication is often not paper quality alone, but also whether reviewers know your advisor and whether your institution possesses enough reputational capital. In the long run, however, if AI continues to lower the threshold of knowledge production, then intermediaries who maintain their status purely through information asymmetry, such as those who do little research themselves but allocate resources through academic administrative power, will face an increasingly severe legitimacy crisis. This change, however, will be slow: the inertia of the guild yields far more slowly than technology iterates.