Your bad prompt may be a management problem
Have you always had good managers?
I have not.
I have known brilliant managers. People capable of clarifying a vision, building trust, protecting a team and helping others grow.
And I have known the opposite.
Managers capable of turning a motivated team into a waiting room for burnout. Managers who do not listen. Managers who give vague instructions, then blame people for vague results. Managers who confuse control with leadership. Managers who talk about trust with an Excel spreadsheet in one hand and an implicit threat in the other.
For a long time, this problem stayed within the human world.
Then AI arrived.
Suddenly, everyone started talking about prompts, productivity, automation, copilots, intelligent agents, time savings and the transformation of work.
Fine.
But one uncomfortable detail is often forgotten: an AI user is already a manager.
They assign a mission. They frame the context. They clarify the objective. They check the output. They correct. They teach the other party how to work better with them.
Exactly as a manager should do with a human team.
AI does not correct vagueness; it amplifies it
The promise around AI is often magical: produce faster, write faster, analyze faster, decide faster.
But moving faster in confusion is still confusion.
A bad brief given to a colleague rarely produces an excellent result. A bad brief given to AI does not suddenly generate clear thinking. It often produces a clean, structured, convincing answer that misses the point.
That is where the issue becomes serious.
AI does not only require a technical skill. It requires a managerial skill.
A good AI user knows how to explain what they want. They provide context. They define constraints. They express a level of expectation. They challenge the result. They reformulate. They learn through iteration.
A bad AI user does exactly what a bad manager does: they give a vague instruction, expect a miracle, become impatient, criticize the result, then blame the other side.
The machine becomes a very convenient scapegoat.
A prompt is a brief that can no longer hide
For years, many managers survived thanks to comfortable ambiguity.
They could give imprecise instructions, then adjust their judgment afterward. They could blame a team for not understanding an intention that had never been clearly expressed. They could confuse authority with clarity.
With AI, this theatre becomes more visible.
A written prompt leaves a trace. It reveals the quality of thought. It shows whether the objective is clear. It exposes gaps in reasoning. It makes visible the user’s level of precision, method and maturity.
That is uncomfortable.
And useful.
Because a bad prompt is often just a bad brief wearing a digital tie.
AI reveals our true level of leadership
Microsoft reported that 75% of knowledge workers were already using generative AI at work in 2024, while many leaders believed their organization lacked a clear plan to turn individual use into measurable business impact (Microsoft).
That point is central.
AI adoption is not only about access to tools. It depends on work organization, role clarity, decision quality, psychological safety, training and the ability to learn collectively.
McKinsey’s 2025 AI survey shows that moving from AI experimentation to scaled impact remains difficult for many organizations, and that high performers connect AI with structured management practices across strategy, talent, operating model, technology, data, adoption and scaling (McKinsey).
In other words, the best results do not come from those who stack more tools. They come from those who know how to organize usage.
And organizing usage is management.
Bad management simply changes playgrounds
Bad human management often relies on classic flaws.
Vague instructions. Moving targets. No feedback. Chronic impatience. No explanation of why. Obsession with control. A tendency to blame others when the result is not as expected.
With AI, these flaws do not disappear.
They scale.
A vague manager can now generate more vagueness. A rushed executive can produce poorly framed decisions faster. A misaligned team can automate already absurd processes. A confused organization can add yet another layer of technological complexity.
AI does not automatically make an organization more intelligent.
It can industrialize its incoherence.
That is why the topic goes far beyond prompt technique. The issue is not only: “Do you know how to talk to AI?”
It becomes: “Can you formulate a clear, responsible and verifiable intention?”
Managing AI means learning to manage without intimidation
There is one major difference between AI and a human colleague.
AI is not afraid of you.
It does not try to please you as a professional survival strategy. It does not hide discomfort in meetings. It does not nod while internally thinking that your request is impossible to understand. It does not compensate for your imprecision through years of implicit knowledge of your habits.
It responds to what it receives.
That simplicity is powerful.
Facing AI, you cannot rely on the silent dedication of a competent colleague who repairs your brief behind the scenes. You must learn to formulate.
You must specify the expected role. You must indicate the target audience. You must give the required depth. You must explain constraints. You must say what matters. You must verify.
These are the exact gestures of good management.
Artificial intelligence requires relational intelligence
AI is often discussed as an individual tool.
But AI in business is a collective phenomenon.
It changes how teams produce, communicate, document, decide and learn. It also changes power dynamics. Those who use AI well can accelerate. Those who do not may feel threatened. Those who understand use cases can move ahead. Those who suffer them may withdraw.
Harvard Business School notes that generative AI may help leaders become more efficient in some communication tasks, but managerial communication is not just about producing text: listening, trust and employee reception remain decisive (Harvard Business School).
That is the trap: believing that a better-written message is enough to create leadership.
An AI-generated email can be elegant. It may even be flawless. But if the intention is confused, if the culture is toxic, if trust is absent, if decisions do not follow, it remains packaging.
AI can polish the surface. It does not replace managerial courage.
Good managers will turn AI into a learning lever
Good managers have a huge opportunity.
They can use AI to prepare meetings better. To clarify decisions. To test several formulations. To simulate objections. To transform a confused idea into a testable hypothesis. To document learnings. To help employees progress.
MIT Sloan Management Review emphasizes the need to build safe, effective AI practices that are integrated into corporate culture rather than treating AI as a simple productivity gadget (MIT Sloan Management Review).
This is where management becomes central again.
A good manager does not simply ask their team to use AI. They create the conditions for the team to understand where AI helps, where it does not, where it creates risk, where it improves work quality and where it must remain under human control.
They turn AI into a collective learning tool.
Bad managers will build factories of nonsense
Bad managers will do something else.
They will demand faster. More often. More deliverables. More reporting. More slides. More tables. More content. More surveillance.
They will confuse speed with progress.
They will say: “With AI, you should do this in ten minutes.”
They will forget that thinking still takes time. Understanding still takes time. Deciding still takes time. Building a relationship of trust still takes time.
AI can reduce some friction. It does not remove human complexity.
In my book, particularly in chapter 14, I explain that innovational intelligence® draws on individual lived experience, team lived experience, culture, psychological safety, communication, decision-making, methods and their application to artificial intelligence.
That is exactly what AI is putting under pressure today.
The tool enters an organization already shaped by habits, unspoken tensions, fears, power games, talents and blind spots.
The real test: knowing how to delegate to a machine
Delegating is not abandoning.
Delegating means entrusting a mission with a frame, expectations, success criteria and a feedback loop.
Many managers already do not know how to delegate to humans. They are therefore likely to delegate very poorly to AI.
They will ask the tool to “make a strategy,” “prepare a plan,” “write a text,” “analyze a market,” without specifying the context, assumptions, constraints, acceptable sources, expected format or quality criteria.
Then they will say: “AI is not good.”
The problem often lies less with the machine than with the human interface asking it to work. That is an inference drawn from observed usage patterns, not a universal measurement.
Harvard Business Review reports that the best AI users are more likely to delegate complex tasks with clear objectives, suggesting that delegation quality plays a role in the value extracted from AI (Harvard Business Review).
That is the essential point: AI rewards people who can think clearly.
Will AI improve management?
It can improve management, provided organizations agree to use it as a revealer.
A revealer of clarity. A revealer of method. A revealer of culture. A revealer of courage. A revealer of responsibility.
But it can also amplify existing flaws.
A good manager will likely become more precise, faster, more educational and more structured. A bad manager will likely become more intrusive, more demanding, more confused and harder to follow.
AI does not automatically create good managers.
It exposes those who are not.
Before prompt training, teach briefing
Companies are rushing into prompt training.
That is useful.
But insufficient.
Before learning how to prompt, people must learn how to brief.
A good brief answers a few simple questions:
What is the objective?
For whom?
In what context?
With what constraints?
With which sources?
At what level of quality?
In what format?
With which validation criteria?
What decision must be made at the end?
This discipline applies equally to AI and humans.
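To make the discipline concrete, the checklist above can be sketched as a tiny brief-to-prompt assembler that refuses to run on an incomplete brief. This is a minimal illustration, not a standard: every field name, the `build_prompt` function and the example content are hypothetical assumptions introduced here for demonstration.

```python
# A brief-as-prompt sketch: the nine questions from the checklist become
# required fields, and a vague brief is rejected before it reaches any model.
BRIEF_FIELDS = [
    "objective", "audience", "context", "constraints", "sources",
    "quality_bar", "format", "validation", "decision",
]

def build_prompt(brief: dict) -> str:
    """Assemble a structured prompt from a brief, failing loudly on gaps."""
    missing = [f for f in BRIEF_FIELDS if not brief.get(f)]
    if missing:
        # The vagueness is surfaced to the manager, not passed to the machine.
        raise ValueError(f"Brief is incomplete, missing: {', '.join(missing)}")
    return "\n".join(f"{field.upper()}: {brief[field]}" for field in BRIEF_FIELDS)

# Hypothetical example brief, for illustration only.
example = {
    "objective": "Summarize Q3 churn drivers",
    "audience": "Executive committee",
    "context": "SaaS product, EU market",
    "constraints": "One page, no jargon",
    "sources": "Internal churn report only",
    "quality_bar": "Every claim backed by a figure",
    "format": "Three bullet points plus one recommendation",
    "validation": "Reviewed by the data lead before sending",
    "decision": "Keep or cut the retention budget",
}
print(build_prompt(example))
```

The design choice is the point: the code does nothing clever, it simply makes an incomplete brief impossible to send, which is exactly the discipline a good manager imposes on themselves before delegating to a human or a machine.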
Training people in AI without training them in management is like giving faster cars to rushed drivers without teaching them how to stay on the road.
Conclusion: AI does not tolerate vague managers
AI will not only transform jobs.
It will transform the way we reveal our managerial capabilities.
It will show who knows how to clarify, delegate, correct, learn, listen and decide.
It will show who confuses authority with precision.
It will show who talks a lot without framing much.
It will show who knows how to create the conditions for quality work.
With AI, bad managers do not disappear. They become more numerous, because everyone becomes the manager of a machine.
They simply change playgrounds.
So, in your view, will AI improve management or amplify its flaws?
I cover this topic in my keynotes, workshops and advisory work, because a bad prompt is often just a bad brief wearing a digital tie.
References
(Microsoft) = https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part
(McKinsey) = https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
(Harvard Business School) = https://www.library.hbs.edu/working-knowledge/ai-can-help-leaders-communicate-but-cant-make-employees-listen
(MIT Sloan Management Review) = https://mitsloan.mit.edu/ideas-made-to-matter/leadership-and-ai-insights-2025-latest-mit-sloan-management-review
(Harvard Business Review) = https://hbr.org/2026/03/what-the-best-ai-users-do-differently-and-how-to-level-up-all-of-your-employees