The Abundances and Scarcities of AGI

February 10, 2026

What is the fantasy of artificial intelligence? Two proposals, from the leaders of Anthropic and OpenAI, respectively, summarize the target: “a country of geniuses in a datacenter” producing “intelligence too cheap to meter.” While many will be too cynical to take these remarks seriously, it is worth taking a moment to consider the specific nature of the fantasy they are describing. The asset of cognition itself, made scarce by bounded minds and limited bandwidth, converted to an abundance of information processing to complement or substitute for all of the thoughts we wish were better or do not wish to have at all.

These mantras echo previous eras of transformative technological change in our not-too-distant past. Early internet utopians John Perry Barlow and Stewart Brand expressed similar sentiments in their vision for the “democratization of information,” predicting a world in which information is “cheap to distribute, copy, and recombine — too cheap to meter,” and heralding one where “anyone, anywhere may express his or her beliefs, no matter how singular.” The target has shifted from the production of information to the processing of information, but much of the language remains parallel.

The story of the evolution of the internet is complex and full of contingencies. Nevertheless, from our current vantage point in “Late Stage Web 2.0,” it seems clear that the most utopian visions were doomed, that economic forces would lead to centralization and network effects, and that the success of digital advertising, especially for social media companies, would lead to an engagement model that elevated outrage, polarization, and misinformation.

What would a more holistic view of the likely effects of the internet have looked like in its early days? Here was the economist and cognitive scientist Herbert Simon’s take:

In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

Simon’s conceit — that we should think about (attend to) what is made scarce in modeling the impact of technological change — is simple but powerful. It is, however, a deeply unnatural way of thinking. It is unnatural for the same reason that opportunity cost is often neglected and we display “additive bias” when solving problems, finding it easier to add components rather than take them away. Often, what you see is all there is.

This cognitive bias is made more salient by the particular cultural moment we’re living in. Narratives across the political spectrum have encouraged us to think additively: the “abundance” movement on the left, driven by Ezra Klein and Derek Thompson, and the “effective accelerationism” movement on the right, driven by venture capitalist Marc Andreessen, while very much at odds in the details, share much in their psychological appeals. Find the right objective function; grow the pie.

Applying the Simon framing to the societal impacts of the internet yields a number of useful dualities beyond his original framing of information and attention. The abundance of legibility begets a scarcity of privacy: the machine of digital advertising has led us to exchange the right to be forgotten for the right to be served cheap goods. The abundance of connection begets a loss of regard: a world in which it is frictionless to find a friend is one where it’s harder to find a neighbor. An abundance of choice begets a scarcity of confidence: the internet did not invent this crucial paradox — but how many shows, videos, posts, and shops does one really need? An abundance of voices begets a scarcity of listening: not everyone can shout in the town square.

Intelligence and Agency

What does AI make abundant and, consequently, what does it make scarce? Our AI protagonists have put forth the first suggestion: intelligence. This is perhaps appropriate for the model of the first wave of artificial intelligence, a paradigm in which AI systems are static oracles, taking in requests and producing either intelligible or intelligent text in response. This paradigm has been enough to prop up the economy of the United States, but realists in the field recognize that these speaking machines are, by design, inert, and that the world is changed principally by action, not intelligence.

Thus, we have entered phase two of the AI boom/bubble, governed by a new moniker: agency. For decades, frontend (a webpage) and backend (a database) software engineering have been among the scarcest of resources, a field of modern artisans paid six figures straight out of college. In reality, most of this work is nowhere near as complex as the theoretical mathematics students had to master to secure those positions; at the same time, simultaneously managing the syntax, features, and version control of production codebases is just not something most people are capable of. This paradigm is over. If you have an idea for a digital product that you can express fully (or haphazardly) in natural language, AI coding assistants can build it for you. If you hate using Word, AI will build you a new text editor. If you don’t like Chrome, it will soon be able to make you your own browser. In the world of AI agents, motivation and will are the only blockers to endless agency and empowerment — well, plus the physical world.

Let’s apply the Simon framing to these AI abundances. What is made scarce by abundant intelligence and abundant agency? If we were thinking purely of antonyms, we might say something like stupidity and helplessness. Who wants to keep those around? But that is to miss the point of the framing entirely; the goal is to understand what these abundances consume or make scarce as a second-order effect. An obvious first candidate is a scarcity of expertise. In a world where you can get world-class medical advice, quantum physics education, and literary criticism from an AI system, what is the incentive to hone these skills? The decline of experts is something many commentators have already acknowledged (mostly as a result of the fracturing of our information ecosystem), but AI will speed this process up. And what does agency consume? An abundance of agency begets a scarcity of collaboration. When your individual agency goes up, the opportunity cost of collaboration increases. In both ideation and production, powerful AI systems mean that we rely less on the work of others. AI will be our engineer, our scientist, our critic, and our partner. Individuals and small teams will outpace organizations. Our work will become less intelligible to us, and as a result, orders of magnitude less intelligible to others, meaning we will need to overcome large barriers to collaborate productively.

Language

Moving beyond the dominant narratives in Silicon Valley, there are multiple other notable, but slightly more agnostic, abundances generated by AI. The most obvious, but perhaps the most important, is an abundance of language. You don’t say? For thousands of years, humans have distinguished ourselves from animals and machines by our ability to generate and recombine language at will. The spoken word is our invention, the thing that has allowed us to learn about the world and ourselves, to develop cultures that evolve independently, and to scale our species indefinitely. To mime and to meme is to be human. Overnight, though, Turing’s test has been steamrolled, and in a few short years, we’ve already been displaced as the leading producers of language. The industry’s obsession with intelligence, measured objectively by performance on the International Math Olympiad, has a tendency to obscure the profundity of an abundance of AI-generated language that is practically indistinguishable from our own.

The abundance of language begets a scarcity of attestation. The outputs of AI models and the various “wrappers” around them are free-floating. They will cover every surface of the internet; we will find them, to varying degrees, inane, rote, curious, cogent, witty, and profound. We will call it all “slop” when we recognize it. We will compensate by weakening the longstanding connection between language and mind. The late, great philosopher Daniel Dennett warned that our complete inability to avoid treating such language as evidence of personhood will lead, in turn, to the rise of “counterfeit people,” with potentially dire consequences for our ability to reason and act.

Responses

What’s the best thing about your AI assistant? That it never needs to sleep! A fourth abundance of AI is the abundance of responses. If you are struggling with a math problem, ChatGPT is there for you. If you are trying to figure out what to make for dinner, a recipe is six words away. If it’s 3:45am and you’re convinced you’ve just made the next scientific breakthrough, your AI is on standby. While companies like Anthropic have played around with the idea that your chatbot should be able to end conversations, as a general rule, it is nearly impossible for the user to have the last word in an interaction with AI. While much will likely change about the nature of AI “assistants,” it seems unlikely that we will ever scale systems that are standoffish and untimely. If we do, we’re probably living in the world of Her; I’d be concerned.

The abundance of responses begets a scarcity of struggle. This can, quite easily, sound elitist and revisionist. “Have you heard of Jevons paradox?” the retort might go. Did the printing press, steam engine, and personal computer eliminate productive struggle, or did they simply enable new directions for it? Won’t people simply continue to ask better questions once their easier ones are answered by their AI assistant? Deep down, though, I think most of us realize that the way we learn new concepts, solve hard problems, overcome personal challenges, and make art is often by sitting with an absence of feedback. Feedback is crucial, but it should be on a schedule.

Validation

The abundance of responses — of any kind — is itself a notable facet of AI, but the truth is, these responses are usually not neutral. Odds are that if you have an idea for a new startup, your chatbot will tell you it’s a sure winner. Nathan Fielder would have a field day. The world of abundant AI chatbots is also, most likely, a world of abundant validation. The field has coalesced around the term sycophancy, an interesting choice given its connotation of flattery explicitly for one’s own advantage — something to come back to. In the current paradigm, we mostly view trivial sycophancy as harmless and even laughable, and then treat the few exceptional cases that lead to high-profile delusional spirals or even suicide as five-alarm fires. But the reality is that this type of scaled validation is a more nebulous but likely profound type of AI abundance.

An abundance of validation begets a scarcity of calibration. Who you are is the weighted sum of how you see yourself and how everyone else sees you. Most people know not to judge the quality of their work by the perspective of their doting grandmother. But AI is marketed to the general public as superintelligent; when it says that you’re on to something, you might trust it. So far, only a small number of public figures have shown marked shifts in their calibration as a result of AI — the effect might as well have been schizophrenia. The marginal effect of AI on any individual’s calibration is hard to measure, but, in the long run, we might expect to see fewer “matches” between people, meaning fewer friendships, marriages, and teams.

Bullshit

The last abundance of AI is an abundance of bullshit. This is certainly related to the abundances of language, response, and validation, but it is also distinct. LLMs allow us to express things we don’t know through media we don’t understand in a manner that is nevertheless nearly impossible to detect, and then take credit for it as our own. In so many areas, we rely on style as a signal for substance (or lack thereof) — the poorly worded email a sign of a phishing scam, the grammatically incorrect submission a sign of poor research quality. The abundance of bullshit means that the “filterers” in a selection process lose many of the useful proxies that allowed them to approximate traits and skills that were valuable or dangerous but hard to measure. An abundance of bullshit begets a scarcity of signal. Confusion is up.

Conclusion

To summarize, the fantasy for artificial intelligence is the abundance of intelligence and agency, swapping the language of a previous utopian era, focused on the production of information, for one focused on its processing. My contention is not that these fantasies are false, but that they are incomplete. While AI true believers portray these as unequivocal goods, banishing stupidity and helplessness from our lives, even these abundances will likely engender scarcities worth considering: a scarcity of expertise and a scarcity of collaboration, to name a few. Furthermore, the metastasis of AI in society will also generate a multitude of other abundances (and corresponding scarcities): an abundance of language (and a scarcity of attestation), an abundance of responses (and a scarcity of struggle), an abundance of validation (and a scarcity of calibration), and an abundance of bullshit (and a scarcity of signal).

This essay takes a somber tone on the potential second-order effects of AI on individuals and society. It should not be taken as a categorical dismissal of the utopian perspective. But my view is that, by default, our markets and our culture will continue to emphasize those first-order benefits to intelligence and agency. It is possible that we will enter a new political and cultural moment where people realize their latent distaste for this technology. But for the time being, the challenge for all of us is to figure out which processes and practices are worth protecting before they go extinct.