You're operating heavy machinery: burnout risk in AI-augmented engineering teams

The AI productivity story usually goes like this: engineers can now be 10x more productive. If they're not, we're falling behind.
Here's what's actually true: the nature of engineering work has been upended in the space of a few months. The cognitive profile of the job has changed radically, and the operations playbook hasn't caught up. That gap is accelerating burnout risk.
The work changed faster than the leadership playbook did
A year ago, a senior engineer spent a significant portion of their day writing code. That work had a rhythm: periods of deep focus, clear outputs, a sense of tangible, steady progress. It was demanding, but the demands were legible.
It was also sometimes boring. Executing a decision line by line in code is repetitive work, and not all of it was interesting. In the current AI story, that's great news, but the boring parts were also doing something useful: they were breaking up the intensity. It might not have been recovery time exactly, but it was very often lower-demand work.
AI-augmented engineering has largely eliminated those boring parts. On the whole, that's great. Nobody wants hours of repetitive typing. But it does mean the workday is more likely to remain intense from start to finish. More context switching, more decisions per hour, greater QA load as output volume rises, and a continuous stream of judgment calls as engineers direct the AI rather than write the code themselves. The recovery that used to happen inside the work has to come from somewhere else now. Most engineers have never had to think about this before.
What "operating heavy machinery" actually means now
Engineering teams have always worked with high leverage. A bad architectural decision can cost weeks. But the stakes have shifted. When AI accelerates output, problems propagate faster too. The margin for error is narrower than it used to be.
This is what we mean by operating heavy machinery. Not that the work is dangerous in an obvious sense, but that it carries the potential for greater downstream impact, and so requires a sustained standard of judgment — one that can't be maintained on an empty tank.
Burnout doesn't announce itself. It erodes judgment quietly, and by the time it's visible, the damage is already done. In an AI-augmented team, that erosion is harder to spot and faster to compound.
Why burnout looks different when everyone is augmented
Standard burnout signals like declining output, missed deadlines, or visible disengagement are partly masked when AI is filling the gaps. An engineer running on empty can still ship.
What they're likely to lose first is the quality of judgment: the calls about what to build, what to skip, what the model got subtly wrong, when to push back on a proposed direction. Those are exactly the calls that matter most in an AI-augmented team. They're also the hardest for a leader to see from the outside, until something obvious goes wrong.
This is where early signals are critical. The Fuel-Gauge-Terrain framework is TANK's evidence-based model for burnout prevention: Fuel is the balance between stress and recovery; Gauge is your ability to read your own signals accurately; Terrain is the system environment that makes balance easier or harder.
In AI-augmented teams, fuel is a problem most engineers don't see coming. They've managed high-intensity work before, but always with low-intensity work built in as a natural counterweight. That counterweight is gone. Recovery now has to be deliberate, and many engineers are encountering that requirement for the first time.
Gauge issues tend to compound the problem. The same drive that makes someone good at this work makes early depletion signals easy to dismiss. And if the team is spending more time with machines and less with each other, the social signals that help people notice when a colleague is struggling fade too. The early warning system degrades precisely when it's needed most.
The terrain problem: what leaders can actually shift
Individual resilience matters, but it's not the main lever.
Terrain is where leaders have the most scope to act. In AI-augmented teams, the terrain questions are more specific than the usual list of meeting culture and role clarity. Here are four worth putting on the table.
Priorities — including what you've decided not to do. AI acceleration means the backlog grows faster too. Without an active deprioritisation practice, the pressure to do everything intensifies even as output rises. A regular, explicit review of what the team has decided not to work on is load management, not process overhead.
Switching off outside work hours. When agents can work overnight and outputs are available to review at any time, the temptation to stay connected is constant. Leaders who actively support disconnection, not just permit it, are making a terrain choice that directly affects their team's recovery capacity.
Development, not just delivery. There's a meaningful difference between directing an AI to produce output and developing as an engineer. The former can become a treadmill; the latter requires space for work that stretches and builds capability. A regular check on whether people are growing or just babysitting output is worth building into the rhythm of the team.
Team connection by design. Senior engineers once spent significant time coaching and pairing with junior colleagues. That contact has shifted toward machines, away from people. Junior engineers lose informal mentorship. Senior engineers lose a source of meaning and connection that came with the coaching role. Both affect fuel by reducing the recovery time that comes with human connection. The informal signals that a colleague may be struggling get harder to read. Team connection may not happen as automatically in this environment; if not, it needs to be designed in.
What we're building — and where to hear about it first
TANK for Teams is the team layer we've been working toward: a burnout risk assessment that gives leads an aggregated view of their team's energy, recovery, and work environment, and a facilitated team retro that ends with one commitment the team itself can act on. It delivers actionable team data while protecting individual privacy.
We're building tools that sit directly in the developer's environment, providing just-in-time prompting around session goal-setting, progress tracking, and recovery cues. They meet engineers where they actually work.
Navin is presenting this work, including a live demo, at slashNEW (27–28 May) and on the Leadership track at AI Engineer (3–4 June). Both talks are built around how engineers can shift burnout-producing conditions toward ones that produce genuine flourishing, and what leaders can do to shape the terrain that makes that easier.
If you can't make either event, you can still register interest in the beta here. We'll be in touch when we're ready.
