Beyond the AI Hype: Rethinking Data, Power, and Ethics in the Corporate Age

By Chuck Gallagher | Business Ethics and AI Keynote Speaker and Author

“We’re not just training algorithms—we’re training attitudes.”

That’s the underlying message in the recent London School of Economics article, “Rethinking Data Power: Beyond AI Hype and Corporate Ethics.” And it hits at a core ethical tension that’s too often ignored: Who holds the power in the AI era—and what are they doing with it?

As someone who speaks professionally on both business ethics and AI, I often find that we frame ethical questions around AI as if they exist in a vacuum. But they don’t. Ethics in AI isn’t just about model bias, or who owns the code—it’s about who owns the data, and more importantly, who benefits from it.

From Hype to Hegemony: The Power Behind the Platforms

The LSE authors point out that our current AI discourse is saturated with corporate branding, techno-optimism, and a belief that “more data equals more truth.” That’s a seductive narrative—but it’s also a dangerous one.

Why?

Because it conceals the structural power dynamics that underlie data ownership. We often celebrate AI for its innovation, but fail to ask:

  • Who decides which data gets used?

  • Who has the right to be forgotten—or remembered—by an algorithm?

  • Who profits from “data-driven” insights, and who gets profiled, sorted, or surveilled?

This isn’t just an academic issue. It’s a global ethical challenge with real-world consequences for justice, privacy, and equity.

My Perspective: Ethical AI Must Be Rooted in Justice, Not Just Regulation

The article makes a crucial point: ethical AI should not just focus on mitigating harm—it must address the infrastructures of control. That’s the part often missing in corporate “ethics boards” or compliance programs.

Here’s what I believe:

It’s not enough to audit your AI models. You must audit your values.

If we treat data as a commodity, but not the people behind it as humans with dignity and rights, we’re not innovating—we’re extracting. That’s not ethical AI. That’s digital colonization.

The “Power Pivot” We Need in AI Conversations

The authors suggest moving beyond the “ethics of individual algorithms” to an examination of data power itself. I couldn’t agree more.

Too often, we reward companies for building ethical AI tools—while ignoring the unethical ecosystems they operate in. AI ethics must shift from policing outputs to questioning inputs, motives, and structures.

That means asking hard questions like:

  • Is your company hoarding data under the guise of “innovation”?

  • Are data practices reinforcing surveillance capitalism?

  • Do you provide true agency and consent to the people whose data you collect?

If your answer to any of those is vague… you’re not doing ethics. You’re doing PR.

Corporate Ethics vs. Structural Ethics: Why the Distinction Matters

The article critiques the rise of “ethics-washing”—where companies slap ethical slogans on AI practices without addressing deeper power asymmetries. As a former executive myself, I’ve seen this up close: values written on walls that don’t match behavior in boardrooms.

Ethics must be more than performative. It must be structural.

And structural ethics requires redistribution of power, not just reassignment of blame.

Final Thought: Ethics Can’t Be Outsourced

As a business ethics keynote speaker, I often say this:

You can’t outsource integrity to a checklist—or to your legal team.

Likewise, you can’t outsource AI ethics to compliance software or corporate communications. If we want AI to serve society, it must be grounded in fairness, accountability, and transparency—not just efficiency and optimization.

That means centering the voices of those most impacted by data systems—not just those who design them.

As always, I welcome your comments and am happy to respond. Feel free to share your thoughts below.
