I’ve been looking at my AI usage and something doesn’t add up.

Cost per token is down. Businesses paying $10 per million tokens a year ago are now paying $2.50. Some models have become 1,000x cheaper over the past couple of years.

So I should be saving money. Except I’m not. My total spend is up. And apparently that’s pretty common.

Enterprise AI spending went from $11.5 billion in 2024 to $37 billion in 2025. A 3.2x increase whilst unit costs were collapsing.
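To see how much usage growth those two figures imply, here's a back-of-envelope sketch. The spend and price ratios come from the numbers above; the implied token-volume growth is my own inference, not a reported statistic, and assumes the blended price moved roughly like the headline $10 → $2.50 example.

```python
# Illustrative arithmetic using the figures above. The implied volume
# growth is a back-of-envelope inference, not a reported statistic.

spend_2024 = 11.5    # enterprise AI spend, $ billions
spend_2025 = 37.0
price_2024 = 10.00   # $ per million tokens
price_2025 = 2.50

spend_ratio = spend_2025 / spend_2024     # ~3.2x more spent
price_ratio = price_2025 / price_2024     # unit cost down to 0.25x
volume_ratio = spend_ratio / price_ratio  # implied usage growth

print(f"spend up {spend_ratio:.1f}x, price at {price_ratio:.2f}x")
print(f"implied token volume up {volume_ratio:.1f}x")
```

In other words, if unit costs really fell 4x while spend rose 3.2x, usage would have grown roughly 13x. The savings didn't disappear; they got reinvested many times over.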

At first I assumed I’d done something daft. Left an agent running in a loop. A workflow gone rogue. (It happens. Don’t ask.)

But actually, I’m just doing more. A lot more. And so is everyone else.

Turns out there’s a pattern here.

I’ll be honest. I only learned about this recently from the Sketchplanations podcast. But it clicked immediately.

In 1865, an economist called William Stanley Jevons noticed something odd about coal. Steam engines were getting more efficient, using less coal per unit of work. The obvious assumption was that total coal consumption would drop.

It didn’t. Consumption soared after the more efficient engines arrived.

The efficiency made coal viable for applications that weren’t practical before. More industries adopted it. More machines got built. Total usage exploded.

When DeepSeek launched their cheaper model, Satya Nadella spotted this immediately. “Jevons paradox strikes again. As AI gets more efficient and accessible, we will see its use skyrocket.”

Aaron Levie was more direct: “Jevons paradox is coming to knowledge work.”

What’s actually happening.

When tokens were expensive, I made trade-offs. Stripped context to save costs. Limited how many iterations an agent could take. Saved the longer reasoning prompts for when it really mattered.

Now those constraints are loosening. And that’s genuinely useful.

I’m running agents that self-correct properly instead of failing on the first attempt. Adding context I used to cut because I couldn’t justify the expense. Letting models actually think through problems instead of rushing them to an answer.

That’s not waste. It’s work that simply wasn’t economically viable six months ago.

But here’s the thing. Every time I unlock one use case, I spot three more. Levie reckons “the vast majority of AI tokens in the future will be used on things we don’t even do today”. Projects that wouldn’t have started, analysis that wouldn’t have happened, research that wouldn’t have been worth the cost.

Average monthly organisational AI spend hit $85,000 in 2025. Up 36% from the year before. The proportion of organisations spending over $100k monthly doubled.

The human side is where it gets complicated.

This isn’t just a spreadsheet problem. There’s a real tension I keep noticing.

On one side, people excited about what’s now possible. Tasks that were too expensive to automate suddenly aren’t. Analysis that took days can happen in minutes. The potential is genuinely exciting.

On the other side, people watching costs climb and wondering when it stops. Finance teams who approved pilots based on “efficiency savings” now looking at bills that tell a different story.

And somewhere in the middle, the people actually doing the work. Being told AI will make their jobs easier whilst also being asked to justify every new use case. Watching tasks they used to own get automated, not always sure what that means for them.

The efficiency gains are real. But efficiency in which direction? Doing the same work cheaper, or doing vastly more work for the same cost? Those are very different outcomes for the humans involved.

And here’s the question I keep coming back to. If you’re achieving more with AI, are you going home any earlier? Is your work-life balance actually better? Or are you just filling the time saved with more work?

Because if efficiency just means more output rather than more time back, who’s really benefiting?

I don’t think there’s a neat answer here. But I do think it’s worth being honest about the tension rather than pretending it doesn’t exist.

The planning bit.

If anyone’s budgeting for AI on the assumption that “costs are dropping, so spending will too,” it might be worth pressure-testing that.

When something gets 1,000x cheaper, you don’t get 1,000x savings. You get 1,000x more use cases that suddenly become viable.
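Economists describe this with price elasticity of demand: a Jevons-style rebound happens when a price drop triggers a more-than-proportional jump in usage. A toy sketch of that condition (the elasticity value and constant here are made up purely for illustration, not estimated from any AI data):

```python
def total_spend(price, k=100.0, elasticity=1.5):
    """Toy model: demand = k * price**(-elasticity).

    With elasticity > 1, demand grows faster than price falls,
    so total spend (price * demand) *rises* as price drops.
    """
    demand = k * price ** (-elasticity)
    return price * demand  # = k * price**(1 - elasticity)

# Price drops 4x ($10 -> $2.50): total spend goes up, not down.
before, after = total_spend(10.0), total_spend(2.5)
print(f"spend at $10: {before:.1f}, spend at $2.50: {after:.1f}")
```

With an elasticity of 1.5, a 4x price cut doubles total spend. The point isn't the specific numbers, just that "cheaper" only means "cheaper overall" when demand is relatively fixed, and demand for AI tokens is anything but.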

The question isn’t really “how do I spend less on AI?” That ship might have sailed.

The better questions might be:

  • What’s affordable now that wasn’t before?
  • Which of those things actually matter?
  • Who decides what “worth it” means, and are the people affected by that decision involved in making it?

Teams will find new use cases whether anyone plans for it or not. Better to steer that towards the valuable stuff than be surprised every quarter. And maybe have an honest conversation about what this expansion actually means for the people doing the work.

Jevons was writing about coal and steam engines 160 years ago. But the pattern he spotted, efficiency driving expansion not reduction, feels uncomfortably relevant right now.

Anyone else watching this play out? Costs down, spend up, and a growing queue of “could also use it for…” conversations?

Curious how others are navigating it. Especially the human side.