AI After the Hype
Some Thoughts on Continuity in Information Technology and Long-Run Growth
Is AI a bubble? Popular opinion increasingly points in that direction. It’s evident to most casual observers that the chatbots have plateaued, and photorealistic generative video and imagery provoke a level of discomfort, if not legal concern, that hardly justifies the money being poured into them. Even with recent refinements like structured outputs and expanded capabilities, what once felt like dramatic leaps now looks more like incremental tuning at the margins. For many, the consumer-facing spectacle that carried the AI hype has begun losing steam. For some, this has manifested as outright backlash.
So AI is obviously a bubble, right? Not necessarily. The mistake is assuming that consumer-facing breakthroughs are where the economic value will accrue. Even amid mounting annual losses in the tens of billions, there is an underlying process and class of services that can meaningfully justify today's investment and total valuations, better reflected in working tools like Claude Code or AI Studio than in consumer spectacle. Even if the consumer AI boom disappoints investors, AI's economic value lies in its continuity with prior information technologies, which generated growth through cumulative, often unglamorous, back-end technical improvements.
The more important question is not whether AI is a bubble, but whether consumer applications are an appropriate proxy for economic value at all. AI is best understood not as a break from prior information technologies, but as the latest, and most automated, stage in a long process of knowledge production and diffusion. The lasting economic value will come from accelerating the slow churn of incremental improvement, underpinning long-run productivity growth.
Continuity in Recursive Technological Improvement
First and foremost, LLMs do not represent a radical departure from the history of information technology. We hear about artificial intelligence as if it marks a clean break in humanity's technological trajectory: machines that can not only think but improve themselves. Too often we get caught up in science fiction mythology and overlook a simpler reality. These systems are sophisticated predictive engines, trained at unprecedented scale on the vast library of existing human writing. When we see the machine as a line of transmission, one that extracts and repackages existing knowledge, it starts to look much more like prior information technologies, albeit with far broader scope and scale.
The key concept here is recursive improvement: systems in which the tools used to produce, transmit, or coordinate knowledge also become the medium through which those tools are refined. If we take seriously the idea that the diffusion of technical knowledge drives innovation, AI's supposed recursion looks less like a break and more like a continuation. Each generation of communication networks and information systems has accelerated the replication and refinement of its own production, enabling its own improvement and eventual obsolescence; AI simply automates the process further.
Mechanick Exercises Volume II, Or the Doctrine of Handyworks Applied to the Art of Printing, written and printed by Joseph Moxon in 1683, was the first comprehensive English-language guide to the use of the printing press. It was an early attempt to codify and spread what had until that point been a closely guarded craft, turning the technical specifics of printing into a quasi-public good. Printing improvements built on this knowledge are thus a product of the very press that produced the pamphlet. Every manual, tutorial, and technical forum since, published in the same medium it explains, has been a form of recursive improvement. By lowering the cost of transmitting technical knowledge, printing transformed the skill from a localized asset into an easily reproducible and widely disseminated output.
The Journal of the Telegraph, published by Western Union (then the Western Union Telegraph Company), offers another example, illustrating how the telegraph industry institutionalized the feedback loop of technical improvement. Beginning circulation in 1867, the journal was a semi-monthly periodical dedicated to “the progress of the telegraph and of electric science”. It gathered reports of line faults, relay tests and standardization, insulation improvements, and construction methods, collected from across the first transcontinental telegraph network, completed only a few years earlier. The telegraph itself enabled field staff and engineers across the country to report findings back to Western Union headquarters in New York and to receive and apply improvements in near-real time.
The telegraph was not only a means of communication but a self-refining knowledge distribution system. Treating the network as both user and producer of its own operational knowledge, Western Union used the publication to record innovations, standardize operations, and propagate best practices across thousands of miles of cable and wire, a feedback loop that reduced downtime and repair times and turned geographically dispersed infrastructure into a coordinated system.
Nearly a century later, the same recursive dynamic reappears with the introduction of computing. The early high-level language compilers of the 1950s, the A-0 system in 1952 (which would influence systems like COBOL) and FORTRAN, developed at IBM in 1957, enabled machines to translate human-readable languages into executable code, allowing computers to assist in their own programming. Compilers reduced the marginal cost of software development by translating reusable, high-level abstractions into machine-specific instructions, reducing dependence on hardware-specific expertise. Crucially, the documentation for these systems (printed reports, punched-card listings, and computer-generated manuals) was a product of the very technologies it described.
Further, in some sense the compiler marked the moment when information technology could begin to directly automate its own improvement. Software could now write software. This extends the self-referential lineage of the telegraph and the printing press and foreshadows the decline in human mediation we now see in AI. This recursive logic deepened with the rise of networked computing and the internet, where protocols, languages, and documentation (from Requests for Comments to Stack Overflow, GitHub, and open-source repositories) circulate digitally through the same networks they define. In each case, the medium used to transmit knowledge is the subject of that knowledge, creating a feedback loop that accelerates improvement.
Each successive wave of information technology has embodied the same process of inward iteration and innovation. This pattern aligns with the economic literature on technological progress, exemplified by Joel Mokyr's work on innovation and growth. Mokyr emphasizes the role of “micro-inventions” that cluster around breakthrough technologies, identifying the cumulative, incremental refinements arising within cultures of open exchange and scientific inquiry as the primary source of sustained growth. Progress is not achieved through a single (quasi-)random technological leap forward, but through the smaller complementary inventions that gather around it, fostered by innovation-promoting institutions and culture.
This perspective complements the recursive-improvement account of information technologies, which are systems that both enable and evolve through continual refinement and feedback. Information technologies matter precisely because they increase the density and speed of these micro-inventions, not because they generate standalone breakthroughs. By this measure AI represents less of a leap than a continuation, with its economic effects felt primarily through marginal improvements and derivative productivity enhancements rather than through the product itself.
What makes AI distinct is that this process can now proceed, in part, without direct human intervention. A far larger body of work can serve as input, and far less instruction is required. But the distance between a compositor tuning their press and a model fine-tuning its own weights may be smaller than we think. What we're experiencing is a change in magnitude, not necessarily in the type of innovation, and tracing older feedback systems can help us make sense of how AI fits into the continuity of innovation and progress, and where it stands as an engine of economic growth.
Information Technology and Economic Growth
The fixation on consumer-facing AI products as the primary measure of success reflects a broader tendency to evaluate information technologies by their most visible applications. This has rarely been a reliable guide to economic impact. Consumer products are where new technologies become culturally legible and easily narrativized, but they are not where productivity gains typically originate. The spreadsheet did more for corporate productivity than any consumer software of its era, yet attracted little attention compared to early personal computing. Similarly, enterprise resource planning systems and database management software transformed business operations while video games, CGI and Photoshop captured the public imagination.
Information technologies generate value upstream of consumption, by altering the cost structure of production and coordination inside firms and institutions. These gains are diffuse, incremental, and difficult to attribute: not immediately captured in GDP or quarterly earnings, but visible in steadily rising productivity over time. While front-end and consumer-facing applications dominate public attention, information technology lays its most durable foundations in the back-end systems that enable long-run growth.
The applications we encounter most in our daily lives are best understood as demonstrations of peripheral capability rather than the economic core of AI. They reveal what the technology can do, not necessarily where it will matter most. The sense of disappointment surrounding consumer AI reflects less a technological plateau than a mismatch between expectations shaped by spectacle and the mundane reality of how information technologies transform economies. Hence several businesses have abandoned efforts to bolt extraneous AI features onto their consumer-facing offerings, finding that front-end applications struggled to sustain engagement or justify their costs once the novelty faded.
For every billion-dollar company you've heard of, there are dozens you haven't: firms built on back-end information processing, compliance plumbing, data infrastructure, and logistics systems in industries most people never think about. This is not flashy work, but it is extraordinarily lucrative, and it is precisely in these operational domains that AI demonstrates its clearest contributions to productivity. The real returns will not come from the disruptive public face of the technology; they will come from making the world's existing technical infrastructure faster, cheaper, and more capable of iterative improvement. These gains compound not through scale alone, but through repeated small reductions in friction across time and processes.
The pattern is familiar from previous technological waves. The printing press, the telegraph, and early computing each produced their greatest economic effects not through front-end innovation but through recursive improvements in how organizations coordinated and transmitted knowledge. The fax machine, a decidedly unglamorous business technology, revolutionized communication and contracting. The internet's initial economic impact came through email and supply-chain management, not consumer applications. The dot-com bubble itself highlights the spectacular failures of explosive consumer hype, but when the bubble burst it left behind the more mundane infrastructure and organizational learning that underpin trillions in present-day tech value.
The Industrial Revolution itself illustrates this pattern. What we now regard as a dramatic turning point in economic history did not manifest as explosive growth in the short run. It might have appeared that way, with the rapid emergence of novel technologies like the steam engine and power loom at the technological frontier, but at the turn of the nineteenth century, annual growth rates in output per capita were around one percent. A meaningful increase over historical baselines, but hardly a discontinuity that contemporaries would have immediately recognized in their daily lives. The significance is not in any single breakthrough but in the persistence of modest gains, which compounded over time and eventually reshaped living standards in ways only visible in retrospect.
Hockey-stick growth rarely begins with a visible inflection. In its early stages, progress appears linear or even stagnant, driven by marginal improvements whose effects are difficult to distinguish from background variation. The apparent “takeoff” emerges only in retrospect, as these small gains compound and shift the level of output itself. The lesson is not that transformation arrives suddenly, but that sustained, incremental improvement can look inconsequential until it no longer does.
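To make that compounding point concrete, here is a minimal back-of-the-envelope sketch in Python, using the roughly one percent annual growth figure cited above; the horizons and multiples are purely illustrative, not historical data.

```python
# Illustrative only: steady ~1% annual growth in output per capita looks
# nearly flat over a decade or two, but compounds into a level shift over
# longer horizons.
growth_rate = 0.01  # roughly the early-industrial pace cited above

for years in (10, 25, 50, 100, 150):
    level = (1 + growth_rate) ** years
    print(f"after {years:3d} years: output per capita x{level:.2f}")

# after  10 years: output per capita x1.10
# after  25 years: output per capita x1.28
# after  50 years: output per capita x1.64
# after 100 years: output per capita x2.70
# after 150 years: output per capita x4.45
```

At a ten-year horizon the gain is barely distinguishable from noise; over a century it approaches a tripling of living standards, which is the "takeoff" visible only in retrospect.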
AI sits squarely in this lineage. Its greatest returns will emerge in the back-end systems that coordinate economic activity, where productivity revolutions always begin. While consumer AI is likely overhyped, the underlying economic value remains substantial. Consistent 3-4% economic growth sits at the low end of the more optimistic assessments of AI-enhanced gains, yet sustaining even that pace, particularly within advanced high-income economies, would be a world-historic achievement. The mistake is understating the effort it takes simply to maintain the current pace of real productivity growth.
As skepticism grows around chatbots and image generators, the case for AI's impact on organizational productivity becomes clearer. Its comparative advantage lies not in replacing human capabilities but in accelerating the mundane processes of institutional learning and coordination.
The most economically significant applications of AI are those that compress the feedback loops through which organizations learn about their own operations. Faster feedback means faster iteration. Errors are caught sooner, improvements propagate more quickly, and operational knowledge that once took months to surface can inform decisions in days or hours. This is where AI most clearly extends the lineage of prior information technologies, as a tool for codifying, standardizing, and acting on institutional knowledge that would otherwise remain fragmented across departments, documents, and individual experience. In practice, this appears in applications like OCR systems that process years of archived documents in hours, coding assistants that instantly apply style guidelines across entire codebases, and data analysis tools that surface operational patterns humans would take weeks to identify.
In large organizations much of this knowledge already exists in untapped, recorded form: logs, reports, tickets, compliance documents, the detritus of routine organizational record-keeping that was too costly to do anything with and too slow to act upon. For decades, firms accumulated operational data with little ability to extract value from it. AI changes this by making unstructured data processing and pattern recognition cheap at scale. What once required dedicated analysts and weeks of work (identifying bottlenecks in supply chains, patterns in customer complaints, correlations between equipment failures) can now happen continuously and automatically.
In practice, AI's most sustainable applications cluster around operational infrastructure (fraud detection, compliance monitoring, supply-chain coordination, quality assurance, and predictive maintenance) rather than consumer products. These domains benefit most from advanced information technology because they reward incremental gains that compound across scale and time. A minor improvement in fraud detection accuracy, applied to a payment network processing billions daily, can justify substantial AI investment on its own. A reduction in supply-chain forecasting errors compounds across thousands of shipments and inventory decisions. Small percentage improvements, multiplied across large, complex systems and repeated over time, add up to outsized returns, as the rough sketch below illustrates.
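A sketch of that arithmetic, with entirely hypothetical figures chosen for the example rather than drawn from any real payment network:

```python
# Hypothetical illustration of small improvements at scale.
# All figures are invented for the example, not real network data.
daily_volume = 1_000_000_000   # assume $1B in payments processed per day
loss_rate_before = 0.006       # assume 0.6% of volume lost to fraud
loss_rate_after = 0.005        # assume 0.5% after a modest model improvement

annual_savings = (loss_rate_before - loss_rate_after) * daily_volume * 365
print(f"annual savings: ${annual_savings:,.0f}")  # annual savings: $365,000,000
```

A tenth of a percentage point, invisible in any single transaction, is worth hundreds of millions of dollars a year at that scale: the kind of quiet arithmetic that funds back-end AI investment.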
Conclusion
Whether AI proves to be a bubble depends on the benchmark. If we measure by consumer products, cultural transformation, or visible machine creativity, disappointment is inevitable. If we measure by the historical role of information technologies in driving long-run economic growth, the outlook is considerably more favorable.
Each major wave of information technology has delivered its largest returns not through spectacle, but through improvements in coordination, measurement, and replication: the recursive machinery that accelerates institutional learning. This context helps clarify Paul Krugman's often-misunderstood 1998 comparison of the internet to the fax machine. His error was not underestimating the internet's economic importance, but misjudging the relationship between its cultural visibility and its economic impact. Krugman was right that the internet would function like earlier information technologies, reducing communication costs and enabling coordination; he underestimated how quickly those gains would compound into visible economic change relative to their immediate cultural impact.
AI's value will emerge from accelerating the same cumulative processes that historically underpin productivity growth: compressing feedback loops, codifying scattered knowledge, and reducing coordination costs across complex systems. In that sense, AI may be overhyped as a cultural phenomenon, but it remains a credible engine of long-run productivity improvement. The hype will pass, but it's clear we have reached an inflection point. Whether we call it a bubble depends less on the technology itself than on our ability to look beyond the spectacle and recognize the long-run economic value in boring, recursive processes of incremental improvement.


