Goldman Sachs Says AI Agents Will Act for You. But Whose Interests Will They Serve?

By Chuck Gallagher | Business Ethics Keynote Speaker | AI Speaker and Author

Goldman Sachs CIO Marco Argenti predicts that AI models are evolving from chat assistants into autonomous agents that browse the internet, access files, execute multi-step tasks, and act on your behalf — essentially becoming personal operating systems. Chuck Gallagher, AI ethics speaker and author, argues that when AI agents shift from answering questions to taking actions with real-world consequences, the ethical questions about trust, transparency, and accountability become urgent — and the governance to manage them doesn’t exist yet.

Goldman Sachs’ Chief Information Officer made a prediction in January 2026 that should have gotten more attention than it did. Marco Argenti — former vice president of technology at Amazon Web Services — said that AI models are no longer chat windows that answer your questions. They’re becoming operating systems that independently access tools, browse the internet, retrieve files, and execute tasks on your behalf. His exact framing: “We used to look at models as a chat that would provide questions and answers. Now we look at models as essentially entities or agents that can perform tasks on your behalf.” That’s not a technology prediction. That’s a fundamental shift in the relationship between humans and AI. And the ethical implications of that shift are enormous.

What Does It Mean When AI Becomes Your Personal Agent?

As an AI ethics speaker and author, I want to make sure business leaders understand what Argenti is describing, because the language sounds benign and the reality is not. A personal AI agent doesn’t just draft an email when you ask it to. It reads your inbox, decides which messages need responses, writes those responses, and sends them — without waiting for you to review each one. It doesn’t just summarize a contract. It reads the contract, identifies the clauses that conflict with your interests, negotiates revisions with the counterparty’s agent, and presents you with a final version. Argenti predicts that these agents will soon reason across “everything that you’ve read, everything that you’ve written” — your entire professional and personal history becoming the context window for an autonomous system that acts in your name.

Goldman Sachs itself is already building this future. The firm has spent roughly six months collaborating with Anthropic to develop autonomous AI agents for internal processes including client onboarding, compliance checks, and accounting. Argenti calls these “digital co-workers” — not replacements for staff, but productivity multipliers that execute complex, multi-step operations independently. Wall Street analysts expect the largest cloud companies to pour more than half a trillion dollars into AI capital expenditures in 2026 alone. This isn’t experimental. It’s industrial-scale deployment of systems that act without continuous human oversight.

When an AI Agent Acts in Your Name, Who Controls What It Does?

Here’s where the ethics conversation hasn’t caught up to the technology. When you hire a human assistant, there’s a framework of professional responsibility, employment law, and ethical obligation governing their actions. When you authorize an AI agent to act on your behalf, that framework doesn’t exist. The agent operates according to the instructions it was given and the guardrails its developer built in. But whose values does it reflect? The developer’s? The platform’s? Yours? If your AI agent negotiates a deal that disadvantages the other party using information you wouldn’t have shared, who is responsible — you, the agent, or the company that built it?

Argenti predicts that AI will become a “game of scale” driven by mega alliances — massive strategic partnerships between tech companies that create a “winner-takes-most” dynamic. That concentration of power matters ethically. If a handful of companies control the AI agents that billions of people use for financial decisions, healthcare choices, legal interactions, and business negotiations, the potential for systemic bias, conflicts of interest, and opaque decision-making scales with every user who hands over their autonomy. Goldman Sachs Research projects that data center power consumption will jump 175% by 2030, with companies “obsessing over allocating every megawatt of power to activities with the highest return.” The infrastructure is being built at breathtaking speed. The ethical governance is not.

What Should Business Leaders Be Asking Before Deploying AI Agents?

As a business ethics keynote speaker who has spent thirty years watching organizations adopt powerful technologies without adequate governance, I’ll offer the same advice I give in every boardroom: the technology is moving faster than your ability to manage its consequences, and the cost of getting it wrong is always higher than the cost of building governance first. Before you deploy an AI agent that acts on behalf of your organization or your customers, you need clear answers to three questions. First, what is this agent authorized to do — and what is it explicitly prohibited from doing? Second, when the agent takes an action that causes harm, who in the organization is personally accountable? Third, can the people affected by the agent’s decisions understand how those decisions were made and challenge them if they’re wrong?
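For technical leaders, those three questions can be made concrete before a single agent ships. The sketch below is a minimal, hypothetical illustration — not any vendor’s actual API, and all names (`AgentPolicy`, `authorize`, the example actions and owner) are assumptions for the sake of the example. It shows a default-deny authorization check (question one), a named accountable owner attached to every decision (question two), and an audit log recording a rationale that affected parties could later review and challenge (question three).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a governance layer for an AI agent.
# All names and actions are illustrative, not a real product's API.

@dataclass
class AgentPolicy:
    allowed_actions: set                 # Q1: what the agent may do
    prohibited_actions: set              # Q1: what it explicitly may not do
    accountable_owner: str               # Q2: the named human responsible
    audit_log: list = field(default_factory=list)  # Q3: reviewable record

    def authorize(self, action: str, rationale: str) -> bool:
        """Check an action before execution and record the decision."""
        if action in self.prohibited_actions:
            decision, approved = "denied (explicitly prohibited)", False
        elif action in self.allowed_actions:
            decision, approved = "approved", True
        else:
            # Default-deny: anything not explicitly granted is escalated.
            decision, approved = "escalated to owner for review", False
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "rationale": rationale,       # Q3: the "why", in challengeable terms
            "decision": decision,
            "owner": self.accountable_owner,
        })
        return approved

policy = AgentPolicy(
    allowed_actions={"draft_email", "summarize_contract"},
    prohibited_actions={"send_payment", "share_client_data"},
    accountable_owner="jane.doe@example.com",  # hypothetical owner
)
print(policy.authorize("draft_email", "routine client follow-up"))   # True
print(policy.authorize("send_payment", "invoice settlement"))        # False
print(policy.authorize("negotiate_contract", "no explicit grant"))   # False
```

The design choice that matters most here is the default-deny branch: an agent whose unlisted actions are silently permitted fails the first question by definition, while one that escalates the unknown keeps a human in the loop exactly where governance hasn’t caught up yet.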

Goldman Sachs has spent six months collaborating with Anthropic to build its agents carefully, with human oversight and defined workflows. That’s the right approach for a firm that understands regulatory scrutiny and reputational risk. The question is whether the thousands of companies racing to deploy agents in 2026 will exercise the same discipline — or whether the pressure to keep pace with competitors will produce the same pattern I’ve seen in every ethics failure I’ve ever studied: speed first, governance later, consequences always.

Frequently Asked Questions

What are personal AI agents and how are they different from chatbots?

Personal AI agents are autonomous systems that don’t just respond to questions but independently perform tasks on a user’s behalf — browsing the internet, accessing files, executing multi-step workflows, sending communications, and making decisions. Goldman Sachs CIO Marco Argenti describes this as AI models evolving from chat interfaces into operating systems that “independently access tools in order to perform tasks.” Unlike a chatbot that produces an output for human review, a personal agent takes action without waiting for approval at each step.

What is Goldman Sachs doing with AI agents?

Goldman Sachs has spent approximately six months collaborating with Anthropic to develop autonomous AI agents powered by Claude Opus 4.6 for internal processes including client onboarding, compliance checks, and accounting. CIO Marco Argenti calls these “digital co-workers” — productivity multipliers that execute complex, multi-step operations independently. Wall Street analysts expect the largest hyperscale cloud companies to invest more than $500 billion in AI capital expenditures in 2026.

What are the ethical concerns with personal AI agents?

When AI agents shift from answering questions to taking autonomous actions with real-world consequences, several ethical concerns emerge: whose values the agent reflects (the developer’s, the platform’s, or the user’s), who is accountable when the agent causes harm, whether people affected by the agent’s decisions can understand and challenge them, and the concentration of power when a few companies control the agents billions of people rely on. Chuck Gallagher, AI ethics speaker and author, argues that the governance framework for managing autonomous agents is dangerously underdeveloped relative to the speed of deployment.

What are AI mega alliances and why do they matter?

Goldman Sachs CIO Marco Argenti predicts that AI will become a “game of scale” in 2026, driven by massive strategic partnerships between technology companies that create a “winner-takes-most” dynamic. These mega alliances — such as the $500 billion Stargate joint venture involving OpenAI, SoftBank, and Oracle — concentrate AI infrastructure, computing power, and agent platforms among a small number of players. The ethical concern is that this concentration could create systemic risks including opaque decision-making, conflicts of interest, and barriers to competition and accountability.

What is the gigawatt ceiling Goldman Sachs predicts for AI?

The gigawatt ceiling refers to the physical constraint on AI growth created by limited electrical power infrastructure. Goldman Sachs Research projects that data center power consumption will increase 175% by 2030 from 2023 levels. The multi-year lead time to bring new power facilities online, combined with the rapid expansion of AI models and agents, means access to electrical power will become a competitive bottleneck. Companies will allocate every available megawatt to activities with the highest return, and the “right set of relationships” with utility providers will become a strategic asset.

Goldman Sachs is betting billions that AI agents will transform how business gets done. I’m not arguing with the prediction — I’m arguing that the governance conversation needs to move at the same speed as the capital. When AI shifts from answering your questions to acting in your name, the ethical stakes change fundamentally. Is your organization preparing for that shift, or assuming someone else will figure out the rules? Share your perspective in the comments at ChuckGallagher.com, and start with the three questions above.

Related Articles: 

Ethics Training: Building a Culture of Integrity Beyond Compliance

From Content to Conversion: How AI-Generated Articles Become Trust, Leads, and Revenue