Australian-American economist Justin Wolfers recently appeared on the Prof G Markets podcast to discuss everything from tariffs and Trump to the US economy and the possibility of stagflation. It’s certainly worth a listen; Wolfers is excellent on tariffs, even if he’s a bit weak on inflation (tariffs don’t cause inflation).
But where he really went off the rails was when discussing artificial intelligence, describing the following passage as “the most important conversation in economics that we’re not having”. It’s long, so I’m only going to quote the core parts of the claim:
“Price equals marginal cost. Marginal cost is zero. So, basically, all the work’s getting done. And the AI companies aren’t getting rich, which means it’s either you or your boss. And that’s going to depend a lot on policy, which of those two it is.
Now, let’s tweak things. Let’s say that this is a winner-take-all market: that the best model is better than the second-best model by enough that OpenAI, for instance, becomes a monopoly player, much the same way Google has in search. Well, if that’s the case, now OpenAI can charge a very, very high price for the AI. If Ed, you currently earn $500 a week, it will charge Scott $499 a week for the AI robot. And Scott will buy that because it saves him a dollar. That means the employer doesn’t get rich, the worker doesn’t get rich, but the stockholders of OpenAI come to own the entire universe.
And so that then says it’s not just that it’s an ownership problem. And now we also have a competition problem, right? If we don’t have competition in the space for LLMs, then one LLM will come to own close to all of GDP. And I’m mildly overstating the case, but there’s a lot of truth there, right? So that then says we need to maintain competition between LLMs, which is very hard because at the same time, you need to maintain an incentive for innovation and markets don’t naturally deliver a large enough incentive for innovation.”
That’s a tidy story, but it’s also unrealistic and quickly falls apart when applied to the real world. Marginal cost isn’t zero, and AI (large language models) is shaping up to be complementary to human workers, not a direct substitute. Even the example of Google is misleading: it might have a monopoly in search, but that monopoly is fragile, and Google still serves consumers well because the market for search is genuinely contestable.
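Wolfers’s $499 figure is the textbook limit of a monopolist pricing against a buyer’s sole outside option. A minimal sketch (hypothetical numbers and function names, purely illustrative) of why any credible rival, or simply the option of keeping the human worker, caps that price:

```python
def max_ai_price(wage, rival_price=None):
    """Highest price an AI vendor can charge an employer.

    With no alternative, the vendor prices just under the wage
    (Wolfers's $499 against a $500 wage). With any credible rival,
    or an open-weights model, the cap falls to the cheaper outside
    option instead of the wage.
    """
    cap = wage if rival_price is None else min(wage, rival_price)
    return cap - 1  # undercut the outside option by a dollar

# Wolfers's scenario: no competition, so the vendor extracts almost the whole wage.
print(max_ai_price(wage=500))                  # 499
# A contestable market: a rival priced at $50 caps the rent at $49.
print(max_ai_price(wage=500, rival_price=50))  # 49
```

The price is pinned to the cheapest alternative, not the worker’s wage; so long as entry keeps some alternative on the table, the near-total extraction Wolfers describes can’t happen.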
Wolfers continues:
“Let me make it more complex. Imagine you even succeed at getting OpenAI, Google, Microsoft, and Anthropic all to continue competing. So then they’re selling AI services at a very low price equal to marginal cost. But in order to train their models, they all need Nvidia chips. That now effectively makes Nvidia the monopolist. Now Nvidia can quadruple the price of its chips and it will effectively extract all of GDP. And so now, not only do you not get rich and Scott doesn’t get rich and OpenAI doesn’t get rich, the only people who get rich in this world are Nvidia’s shareholders. And so now it’s not just a competition problem. It’s a market structure problem.
I hope that I’ve avoided what Scott talked about earlier of wonks and nerds overcomplexifying this. I’m trying to make a very simple point, which is that what’s on the table here is enormous. It’s potentially transformative, but who it delivers for, very small details can have very, very big effects.”
When looking at AI—or any technological change—you need to take what’s called a general equilibrium approach. Changes in one market don’t occur in isolation, but cause ripple effects throughout the economy as firms, workers, and policymakers respond and adapt to it, blunting the extreme outcomes hypothesised by Wolfers.
In the case of AI, competition is already fierce across multiple margins and that’s not likely to ease; there is no moat. Take the following quote from The Information newsletter earlier this week:
“It’s no secret that Nvidia’s monster AI chip business has drawn in lots of potential rivals, from startups like Groq to big firms like Google. So far, the sheer quality of Nvidia’s chips has maintained its edge. But as we’ve reported, here and here, Google is starting to make inroads. Perhaps it’s no coincidence, then, that Nvidia is increasingly giving its customers money, as it did with Lambda recently and CoreWeave earlier this year. Those were relatively small deals, and a way of seeding the landscape with newer cloud firms. But this $100 billion deal with an existing user of its chips gives a whole new dimension to the round-trip concept.”
The realities that Wolfers worries about are unlikely to materialise once the constraints in his model are relaxed. Dominance is fragile, and provided entry and exit are free (e.g. governments don’t ossify incumbents by raising rivals’ costs with regulation), innovation will undermine the ability of any single upstream supplier to ‘tax’ the ecosystem.
As the Nvidia example quoted above demonstrates, bottlenecks will shift over time—from GPUs, to energy, to data centres, to who knows where—so rents will move too, preventing a single firm from ever getting close to owning “all of GDP”.
AI is undoubtedly the most important conversation in economics today, and the distribution of gains between workers, employers, and big tech firms is worth considering. But plenty of economists are already having the conversation, and I personally don’t see why the pattern of rents being contested across the stack, and between employers and employees, won’t continue. Provided, of course, that policymakers keep markets open and allow capacity at the true bottlenecks (e.g. land and permitting for energy and data centres) to scale as necessary.
Really, the risk isn’t that economists aren’t discussing AI; it’s that they reach the wrong conclusions and recommend well-intentioned regulation that stymies innovation, entrenches incumbents, and leaves most people worse off.