Consider two accountants. One has a job description that says “follow up on delinquent accounts.” The other has a role designed around an outcome: “reduce days sales outstanding.”
The difference isn’t semantic – it’s structural. The second accountant’s manager recognized a necessary business outcome (we need to collect money faster), understood current performance (our DSO is X days), and defined a target (we need to get to Y days). That thinking changes everything: how the role is framed, who gets hired, how goals are set, how performance is evaluated, and what work might be automated.
In a $10M-revenue company, reducing DSO by 30 days frees approximately $833,000 in working capital – roughly a month of revenue that would otherwise sit in receivables. That's not a task. That's an outcome worth achieving. And it creates a job that has "meaning and purpose" built in.
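For readers who want to check the math, here is a minimal sketch of the calculation. It assumes the common 360-day financial-year convention, which reproduces the ~$833,000 figure (a 365-day year gives ~$822,000); the function name is illustrative, not from any particular tool.

```python
def cash_freed_by_dso_reduction(annual_revenue, dso_reduction_days, days_in_year=360):
    """Working capital freed = average daily revenue x days of DSO reduction."""
    return annual_revenue / days_in_year * dso_reduction_days

# $10M company, DSO reduced by 30 days, 360-day year convention
print(f"${cash_freed_by_dso_reduction(10_000_000, 30):,.0f}")  # → $833,333
```

The key point survives either day-count convention: a 30-day DSO reduction is worth close to a month of revenue in freed-up cash.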
This distinction sits at the heart of what’s missing from the current conversation about “supermanagers.”
The Josh Bersin Company has been exploring what they call the “supermanager” – a new leadership model built for the AI era. The premise: as AI reshapes how work gets done, management must evolve. Traditional managers focused on supervision, status updates, and performance reviews are becoming obsolete. What organizations need now are leaders who build trust through transparency, model responsible AI adoption, create psychological safety for experimentation, and empower continuous learning.
It’s a compelling framework that captures something important: AI transformation isn’t just a technology challenge – it’s a leadership challenge.
But there’s a prerequisite the supermanager model assumes without addressing. Before managers can “redesign work” or “unlock the potential of superworker teams,” they need to answer a more fundamental question: What should each role on my team actually accomplish? In other words, why is that box even on the org chart?
According to Gallup’s latest survey, only 46% of employees clearly know what’s expected of them at work. And according to Gartner, only one in three AI initiatives delivers productivity gains. The supermanager vision won’t become reality until we complete the blueprint with role clarity as its foundation.
Before we can answer questions about the workforce, we need to answer questions about the work, because work is changing.
– Harsh Kundulli, Vice President Analyst, Gartner
The Paradigm a Supermanager Needs
The supermanager model describes what leaders should do – but what do they need to do it well?
Consider the core supermanager responsibilities: redesigning work, setting clear expectations, creating psychological safety for experimentation, and helping teams navigate change. Each of these requires something the model assumes but doesn’t address – a clear understanding of what each role should accomplish.
The data suggests this foundation is largely missing.
Only 46% of employees clearly know what is expected of them at work, according to Gallup’s 2024 analytics – down from 56% in 2020. Only 48% of employees set performance goals collaboratively with their managers, per APQC research. And Josh Bersin recently described a company with 100,000 employees and more than 60,000 job titles, noting that “almost every job is invented for this person.”
This isn’t a manager behavior problem. It’s a thinking problem. Managers have never been taught – or expected – to connect roles to business outcomes. You can’t redesign work without first examining the business itself – what’s not happening that should be? What gap needs to be filled? You can’t create psychological safety for experimentation if no one knows what success looks like. You can’t build trust through transparency if there’s nothing concrete to be transparent about.
Role Clarity Facilitates Supermanager Behaviors
Here’s what changes when a manager knows that a role exists to achieve a specific outcome – like reducing days sales outstanding rather than following up on delinquent accounts.
Trust transforms from an abstraction into something concrete: “I trust you to figure out how to reduce DSO. I don’t need to micromanage your methods because we both know what success looks like.”
Psychological safety gains parameters: “Experiment with different approaches to get there. Some won’t work. That’s fine – we’re learning what actually moves the needle on this specific outcome.”
Support finds a focus: “What’s getting in the way of reducing DSO? What do you need from me to help you get there?”
Without the outcome defined, those same behaviors are vague. “I trust you” – to do what? “Feel safe to experiment” – toward what end? “How can I help?” – with what exactly?
The outcome gives the manager something to orient around. It changes the nature of the relationship from supervising tasks to enabling achievement.
This also explains the “accountability vacuum” that has emerged as organizations overcorrected from autocratic to overly supportive leadership styles. Managers learned to be supportive without having anything concrete to support. Role clarity fills that vacuum – not with a return to command-and-control, but with a shared understanding of what the role should accomplish.
The consequences of this missing foundation show up in AI transformation outcomes.
Only one in three AI initiatives boosts productivity, according to Gartner. Only one in five delivers measurable ROI. Meanwhile, 52% of employees experiencing transformation fatigue blame AI, and nearly half report receiving insufficient training during transformation initiatives.
These aren’t technology failures. They’re clarity failures.
As Bersin himself noted, “If there’s anything that’s going to hold you back in AI transformation, it’s not going to be the tech. It will be HR’s struggle to redesign jobs, roles, workflows, training and employee skills.”
But you can’t redesign jobs without first understanding what those jobs should accomplish. You can’t identify which workflows to automate without knowing what outcomes those workflows should produce.
The supermanager equation as currently written looks something like: Trust + Transparency + AI Fluency = Transformation Success.
But there’s a missing variable that makes all the others work. Role clarity isn’t a replacement for supermanager capabilities – it’s the foundation that makes those capabilities effective.
PropulsionAI Completes the Supermanager Blueprint
The supermanager framework captures something essential about AI-era leadership. Trust, transparency, psychological safety, and continuous learning are exactly what organizations need from their managers right now.
But vision without infrastructure is just aspiration.
PropulsionAI gives supermanagers the foundation their role requires. Traditional job descriptions list tasks. Strategic role design – what PropulsionAI enables – defines outcomes. It guides managers through the thinking that the supermanager model assumes: connecting roles to strategy, clarifying what success looks like, and establishing measurable results.
When managers complete the PropulsionAI process, they can articulate exactly what each role should accomplish and how it connects to organizational success. That clarity cascades into every supermanager responsibility: goals become meaningful, feedback becomes specific, trust becomes concrete, and AI transformation has a clear target.
The supermanager vision is right. Leaders need to evolve. But evolution requires the right environment.
Before your managers can become supermanagers, they need super-clear roles to manage.
Ready to give your managers the foundation they need? Try PropulsionAI – 100% free.
Want More?
- “The Rise of the Supermanager” – Josh Bersin Company (link)
- “Want a superworker company? Create supermanagers” – Julia Bersin on HR Executive (link)
- “‘Can a machine do this?’ HR’s focus must shift from the workforce to the work, Gartner says” – Jill Barth on HR Executive (link)