
By Chuck Gallagher | Business Ethics Keynote Speaker | AI Speaker and Author
Boston Consulting Group and MIT Sloan Management Review report that 35% of organizations are already using agentic AI — systems that don’t just recommend but autonomously act — with another 44% planning to adopt soon. Yet 47% of organizations say they don’t have a strategy for what they’re doing with AI at all. Chuck Gallagher, AI ethics speaker and author, argues that agentic AI creates the most consequential accountability gap in modern business: when AI systems make decisions, execute workflows, and take action without human approval, the ethical framework governing who is responsible when something goes wrong is dangerously underdeveloped.
Imagine walking into your office Monday morning and discovering that over the weekend, an AI system approved a batch of customer refunds, restructured a marketing campaign, and flagged three employees for performance review — all without a single human being involved. That scenario isn’t hypothetical. BCG reports that effective AI agents are already accelerating business processes by 30% to 50% across finance, procurement, and customer operations. They work around the clock, handle data spikes without additional headcount, and make decisions in real time. The question nobody in the C-suite wants to sit with is this: when one of those decisions is wrong, who answers for it?
What Are AI Agents and Why Are They Different From the AI You’re Already Using?
As an AI ethics speaker and author who works with organizations on governance and accountability, I need to make a distinction that most business leaders haven’t fully processed yet. The AI you’ve been using — ChatGPT drafting an email, Claude summarizing a report, Copilot generating a slide deck — is assistive. You give it a task, it produces an output, you decide what to do with it. Agentic AI is fundamentally different. It doesn’t wait for your approval. It reasons, plans, uses tools, and executes multi-step workflows autonomously. BCG describes it as “both software and colleague” — a system that acts, not just advises.
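To make that distinction concrete, here is a minimal sketch in Python of the two behavior patterns. Everything in it, from the function names to the refund scenario to the $500 decision rule, is a hypothetical illustration, not any vendor's actual API and not BCG's framework.

```python
# Hypothetical illustration only; no real vendor API. The point is the
# control flow: assistive AI stops at a draft, while agentic AI executes
# decisions before any human sees them.

def assistive_ai(task: str) -> str:
    """Assistive: produce an output and stop. A human decides what to do with it."""
    return f"DRAFT for human review: {task}"

def agentic_ai(pending_refunds: list[dict]) -> list[str]:
    """Agentic: work through a queue and act, with no approval gate in the loop."""
    actions = []
    for refund in pending_refunds:
        if refund["amount"] < 500:          # the agent's own decision rule
            actions.append(f"APPROVED refund #{refund['id']}")
        else:
            actions.append(f"DENIED refund #{refund['id']}")
    return actions                          # already executed by the time anyone looks

print(assistive_ai("summarize Q3 customer complaints"))
print(agentic_ai([{"id": 1, "amount": 120}, {"id": 2, "amount": 900}]))
```

The ethical weight sits in that missing approval gate: by the time a human reads the output of the second function, the refunds have already been approved or denied.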
The BCG and MIT Sloan Management Review study, which surveyed 2,102 executives across 21 industries and 116 countries, found that 76% of executives already view agentic AI as more like a co-worker than a tool. That perception shift is significant. When you treat a system as a colleague, you grant it a level of trust and autonomy that a tool doesn’t receive. The problem is that a colleague has professional obligations, ethical judgment, and legal accountability. An AI agent has none of those. It has whatever guardrails the organization built into it — and BCG’s own research suggests that most organizations haven’t built nearly enough.
Why Is the Governance Gap Around AI Agents So Dangerous?
Here’s the number that should alarm every CEO: 47% of organizations surveyed by BCG and MIT say they don’t have a strategy for what they’re doing with AI. Not a strategy for agentic AI specifically — a strategy for AI at all. Meanwhile, 35% are already deploying agentic systems and 44% plan to join them. That means organizations are deploying autonomous decision-making systems into live workflows without a governance framework to manage them. BCG puts it directly: managing agentic AI purely as a tool or purely as a worker creates critical tensions, including supervision versus autonomy and process retrofitting versus process reimagining.
BCG recommends treating AI agents the way you would treat new employees — giving them access only to what they need, classifying their actions by risk tier, requiring approvals for high-impact decisions, and capping their daily spending authority. That sounds reasonable until you realize how few organizations have actually implemented it. The same study found that 58% of leaders at agentic AI-adopting organizations are calling for governance structure changes within the next three years. Three years. AI agents are making decisions today, and the governance to manage them is three years away. That gap is where ethical failures are born.
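What would that "new employee" treatment look like in practice? Here is one minimal sketch, in Python, of guardrails expressed as code: a high-risk tier that forces human escalation, and a daily spending cap. The class name, the action labels, and the dollar threshold are all assumptions for illustration; they are not a published standard or BCG's implementation.

```python
from dataclasses import dataclass, field

# Illustrative guardrail sketch. Names, tiers, and the $5,000 cap are
# hypothetical; each organization would set its own.

@dataclass
class AgentPolicy:
    daily_spend_cap: float = 5_000.0                      # capped spending authority
    high_risk_actions: set = field(
        default_factory=lambda: {"deny_claim", "flag_for_termination"})
    spent_today: float = 0.0

    def authorize(self, action: str, cost: float = 0.0) -> str:
        if action in self.high_risk_actions:              # risk-tier classification
            return f"ESCALATE: '{action}' requires a named human approver"
        if self.spent_today + cost > self.daily_spend_cap:
            return f"BLOCK: '{action}' would exceed the daily spending cap"
        self.spent_today += cost                          # low-risk, within authority
        return f"ALLOW: '{action}' executed"

policy = AgentPolicy()
print(policy.authorize("approve_refund", cost=120.0))     # ALLOW
print(policy.authorize("flag_for_termination"))           # ESCALATE to a human
```

Notice how little code the control itself takes. The hard part is not the mechanism; it is deciding, before deployment, which actions belong in the high-risk tier and whose name goes on the approval.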
When an AI Agent Makes a Harmful Decision, Whose Name Is on It?
As a business ethics keynote speaker who has spent thirty years studying how accountability failures lead to ethical disasters, I keep pushing the same question in every boardroom: if your AI agent denies a customer’s claim, flags an employee for termination, or approves a transaction that violates compliance standards, who is personally accountable? The AI developer? The vendor? The department head who deployed it? The CEO who approved the budget? The answer in most organizations right now is: nobody, specifically. And “nobody specifically” is the most dangerous answer in any accountability framework.
BCG’s own guidance says companies should “bake in their values as hard rules” and establish tiered autonomy levels with responsible AI controls. IDC forecasts that the number of active AI agents globally will rise from 28 million in 2025 to over 2.2 billion by 2030. That’s not a slow rollout. That’s an exponential expansion of autonomous decision-making systems across every industry, every function, and every market. The organizations that build governance before deployment will be positioned to scale safely. The ones that deploy first and retrofit governance later will learn the same lesson that every industry learns: the cost of fixing something after it breaks is always higher than building it right from the start. BCG reports that 90% of CEOs expect measurable ROI from AI investments in 2026. The question I’d add is: are they measuring the risk with the same rigor they’re measuring the return?
Frequently Asked Questions
What are AI agents and how do they differ from generative AI?
AI agents are autonomous systems that reason, plan, use tools, and execute multi-step workflows without continuous human oversight. Unlike generative AI, which produces outputs that a human reviews and acts on, agentic AI takes action independently — approving transactions, managing workflows, making decisions, and escalating exceptions. Boston Consulting Group describes agentic AI as “both software and colleague,” a characterization supported by their finding that 76% of executives view these systems as co-workers rather than tools. BCG reports that effective AI agents can accelerate business processes by 30% to 50%.
How many companies are using agentic AI in 2026?
According to a BCG and MIT Sloan Management Review study of 2,102 executives across 21 industries and 116 countries, 35% of organizations are already using agentic AI and another 44% plan to adopt it soon. IDC forecasts the number of active AI agents globally will grow from approximately 28 million in 2025 to over 2.2 billion by 2030. BCG identifies 2026 as the pivotal year when organizations shift from isolated agentic pilots to enterprise-wide deployment, with more than 40% of large enterprises reporting they are already scaling implementation.
What governance should organizations have for AI agents?
BCG recommends treating AI agents like new employees: granting access only to what they need (role-based permissions), classifying actions by risk tier, requiring human approval for high-impact decisions, capping daily spending authority, and embedding organizational values as hard rules. Chuck Gallagher, AI ethics speaker and author, argues that governance must also include clear personal accountability — a named individual responsible for the consequences of each agent’s decisions — because without that, “nobody specifically” becomes the default answer when something goes wrong, which is the most dangerous position in any accountability framework.
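For readers who want to see that accountability principle rather than just hear it, here is one hypothetical sketch of how a "named individual" rule could be enforced in software: an agent action is blocked unless a specific accountable person is registered for that agent. The registry, field names, and example owner are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: no agent action proceeds without a named human owner.

@dataclass(frozen=True)
class AccountabilityRecord:
    agent_id: str
    action: str
    accountable_person: str          # a named human, never "nobody specifically"
    timestamp: str

def record_action(agent_id: str, action: str, owners: dict) -> AccountabilityRecord:
    owner = owners.get(agent_id)
    if owner is None:                # fail closed: no named owner, no action
        raise PermissionError(f"Agent '{agent_id}' has no accountable owner; action blocked")
    return AccountabilityRecord(
        agent_id, action, owner, datetime.now(timezone.utc).isoformat())

owners = {"claims-agent-01": "J. Rivera, VP Claims"}      # illustrative registry
print(record_action("claims-agent-01", "deny_claim #4821", owners))
```

The design choice worth noting is that the check fails closed: an agent with no registered owner cannot act at all, which inverts the default that lets "nobody specifically" be the answer.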
What are the biggest risks of deploying AI agents without governance?
The primary risk is autonomous decision-making without clear accountability. BCG and MIT found that 47% of organizations surveyed have no AI strategy at all, yet 35% are already using agentic systems. Without governance, AI agents may approve transactions that violate compliance standards, make hiring or termination recommendations based on biased data, or take actions that expose the organization to regulatory liability. BCG notes that managing agentic AI purely as a tool or purely as a worker creates critical tensions around supervision, autonomy, and process design that most organizations have not resolved.
Will AI agents replace human jobs?
BCG’s March 2026 research concludes that AI will reshape more jobs than it replaces. Most roles will remain but will change substantially as AI agents take over routine, structured tasks. However, 29% of agentic AI leaders expect to offer fewer entry-level roles, and 45% are willing to reduce the number of middle managers. New roles are emerging — AI product owners, model risk managers, responsible AI officers, and systems integrators — but supply of qualified candidates remains limited relative to demand, creating an implementation bottleneck that may slow the pace of workforce disruption.
I’d like to hear from you — is your organization deploying AI agents that make autonomous decisions, and if so, do you have a named individual accountable for the consequences of those decisions? Or is governance still something that’s planned for “next year”? Share your experience in the comments at ChuckGallagher.com, and revisit the five questions in the FAQ above.
Related Articles:
Goldman Sachs Says AI Agents Will Act for You. But Whose Interests Will They Serve?
