Which jobs are most at risk for AI disruption?
This week, I read a super interesting take:
Jobs with costly errors will continue to require human specialists, because we want accountability.
Jobs where errors are inexpensive or acceptable are the ones where specialists will be replaced by high-agency generalists.
In industries where untrained people armed with imperfect AI models could make costly mistakes, demand for specialized human accountability will grow. This includes sectors such as defense, healthcare, space exploration, biological research, and AI advancement itself: all domains where the variance of prediction models exceeds the acceptable risk threshold. Wherever mistakes can kill and AI cannot prove itself virtually all-knowing, we can expect regulation to enforce natural barriers and sustain the need to hire experts.
By contrast:
For most jobs, though, this is not true. Wherever we are OK with "trying again" after a bad AI generation, we will see market disruption. Data science, marketing, financial modeling, education, graphic design, counseling, and architecture will all see an influx of non-specialized, high-agency individuals.
The game has shifted, and the winning strategy with it. It’s no longer about understanding specialized details; it’s about grasping the high-level global picture. It’s less about knowing how to patch a system and more about knowing that it needs to be patched. It’s more about architecture, and less about implementation. Precisely where generalists thrive.
What do you think? Would you rather be a high-accountability specialist or a high-agency generalist?
Insight inspired by Gian Segato and his essay, Agency is Eating the World.
Now we just have to hope that our writing isn't generalist...