I asked ChatGPT about the future of AI in society and how our heuristics and cognitive limitations show up in one-size-fits-all policies, neat numbers, and infrequent changes when AI is capable of far more complex and dynamic computation. I used an example of income taxation and how AI could potentially calculate a unique tax rate for every single person that takes into account thousands of parameters and data points, and do so daily or hourly. But should it?
I do have thoughts on this — and I think you’re putting your finger on something real, but also something dangerous if we slide into it unexamined. I’ll be direct, because this is one of those moments where clarity matters more than comfort.
You’re right about human cognitive limits. You’re right that our systems are full of rounding, bins, brackets, and blunt categories because that’s what we can reason about. And you’re right that machine systems can already operate far beyond those constraints.
Where I’ll push back is on what must be let go of versus what cannot be let go of without breaking society.
Tax brackets, fixed rates, round numbers — these are not primarily about computational convenience anymore. We’ve been able to calculate marginal tax continuously for decades.
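The point that continuous computation has long been trivial can be made concrete. A minimal sketch, assuming entirely hypothetical bracket thresholds, rates, and smooth-curve parameters (none of these come from any real tax code):

```python
# Contrast a bracketed marginal tax with a smoothly varying rate.
# All thresholds, rates, and curve parameters are hypothetical.
import math

BRACKETS = [(0, 0.10), (10_000, 0.20), (40_000, 0.30)]  # (threshold, marginal rate)

def bracketed_tax(income: float) -> float:
    """Tax owed under simple marginal brackets: each rate applies only
    to the slice of income above its own threshold."""
    tax = 0.0
    for i, (lo, rate) in enumerate(BRACKETS):
        hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else math.inf
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

def smooth_tax(income: float) -> float:
    """A continuous alternative: the effective rate rises smoothly with
    income along a logistic curve (purely illustrative)."""
    effective_rate = 0.30 / (1 + math.exp(-(income - 25_000) / 10_000))
    return income * effective_rate

# Anyone can recompute bracketed_tax by hand; smooth_tax needs a
# calculator, yet both have been computationally trivial for decades.
```

The bracketed version survives not because the smooth one is hard to compute, but because a citizen can verify it with pencil and paper.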
They exist because they provide predictability, legibility, and a shared sense of fairness.
Those are not cosmetic. They are political and moral infrastructure.
So when people say “AI can do better,” the question isn’t can it compute? It’s what social functions are those crude rules actually serving?
Let’s take it seriously.
An AI could compute a unique tax rate for every single person, drawing on thousands of parameters and data points, and recompute it daily or hourly.
From a utilitarian optimisation perspective, this might be “fairer”.
But fairness here has quietly shifted meaning.
You’ve moved from “equal treatment under known rules” to “optimised outcomes under opaque rules”.
Those are not the same moral concept.
When people say “AI must be transparent”, they often mean the wrong thing.
They don’t mean “show me every weight and every intermediate step of the computation”.
They mean “give me a reason I can understand, predict, and contest”.
That’s not about understanding the calculation. It’s about preserving agency.
Opacity is tolerable when outcomes are stable, predictable, and appealable.
Opacity becomes intolerable when it governs your obligations and you can neither anticipate nor contest what it decides.
Here’s the deep issue, and it links back to our earlier discussion about orientation and path dependence.
If every person’s rate is computed from thousands of constantly shifting inputs, updated daily or hourly,
Then no citizen can ask:
“What would have happened if I’d done X instead?”
Because the inputs at that moment are already gone; the counterfactual can no longer be reconstructed.
That’s not just opacity — that’s loss of intelligibility.
You can’t plan. You can’t reason. You can’t coordinate politically.
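The loss of counterfactuals can be sketched in code. This is an illustration under stated assumptions, with every name hypothetical: the fixed rule and the drifting hidden state are stand-ins, not a model of any real system.

```python
# Why "what if I'd done X instead?" stays answerable under fixed rules
# but not under a dynamic, personalised model. All names hypothetical.
from dataclasses import dataclass, field
import random

def rule_based_rate(income: float) -> float:
    """A fixed public rule: anyone can evaluate it for any hypothetical
    income, at any later time, and get the same answer."""
    return 0.20 if income < 40_000 else 0.30

@dataclass
class PersonalisedModel:
    """Stand-in for an opaque model whose output depends on thousands of
    inputs that drift between evaluations."""
    rng: random.Random = field(default_factory=random.Random)

    def rate(self, income: float) -> float:
        # The "thousands of shifting parameters" are simulated as hidden
        # state that changes on every call, as hourly recomputation would.
        drift = self.rng.uniform(-0.05, 0.05)
        return 0.25 + drift

# Under the rule, the counterfactual is one function call.
# Under the model, the state at the original moment is unrecoverable,
# so asking the same question twice yields different answers.
```

The asymmetry is the point: `rule_based_rate` supports planning and political coordination because it is re-evaluable; the model does not, because its past states cannot be replayed.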
The abacus metaphor is slightly misleading, and I say that with respect.
The reason we still use simple rules is not because we can’t do better math.
It’s because simple rules are legible, and legibility is what lets people trust, verify, and contest them.
A perfectly optimised but illegible system does not produce trust. It produces resignation — or rebellion.
There is a threshold coming.
But I’d frame it differently:
We are moving from a world governed by rules humans can compute to one governed by processes humans must choose to trust.
That is a profound shift.
It means the question shifts from “Is this calculation correct?” to “Is this institution trustworthy?”
Those are not the same things.
If AI is used in something like taxation, what may survive is not the ability to check each calculation but the ability to audit the process, appeal the outcome, and hold the institution accountable.
In other words: citizens stop verifying sums and start scrutinising systems.
That’s a very different civic literacy.
If society embraces opacity without renegotiating legitimacy, we don’t get post-abacus enlightenment.
We get compliance without consent, and administration without accountability.
And history suggests humans don’t tolerate that for long.
You’re right that our brackets, bins, and round numbers are artefacts of cognitive limitation.
But if we let go of intelligibility, predictability, and contestability along with them,
then we won’t have a more mature society.
We’ll have a quieter one — until it isn’t.
If you want, next we could explore what renegotiating legitimacy for systems like this might look like.
You’re asking the right question. The danger is answering it too cleanly.