
The Real AI Resistance: Fear, Trust, and the Ethics of Opting Out

By Chuck Gallagher | Business Ethics Keynote Speaker, AI Speaker, and Author

A small business owner in Ohio still writes every invoice by hand. A college student disables AI tools on her laptop. A seasoned executive refuses to use ChatGPT, even when it could cut hours off his workweek. Why?

In an era where artificial intelligence can diagnose disease, write code, and compose symphonies, you’d think the question would be “How fast can I adopt this?” But for millions, the question is just the opposite: “Why should I trust this at all?”

A new study from Brigham Young University offers a compelling insight: the reluctance to adopt AI isn’t rooted in ignorance. It’s grounded in fear, ethical unease, and human psychology. As a business ethics keynote speaker, AI speaker, and author, I believe these concerns aren’t just valid—they’re vital.

Let’s explore the real reasons why people say “no thanks” to AI—and what ethical leaders must do next.

1. The Fear of Losing Human Connection

The BYU study found that many participants avoided AI because they feared it would replace meaningful human interaction. Whether in education, healthcare, or business, there’s a growing sentiment that AI might streamline the work—but sterilize the relationships.

Ethical Takeaway: If AI becomes efficient but emotionally empty, we lose more than productivity—we lose humanity.

2. Lack of Trust in the Tech

From biased algorithms to opaque data collection, people are skeptical about what AI is really doing behind the scenes. According to the study, mistrust of developers, corporations, and the tools themselves fuels a quiet but growing resistance.

Leadership Insight: Transparency isn’t optional. Companies that explain how their AI works—and why—will earn long-term trust.

3. Discomfort with Delegating Thinking

Some individuals said they avoided AI because it “did too much”—making them feel intellectually lazy or dependent. One participant said using AI felt like “cheating.” Others simply said they liked thinking things through on their own.

Cultural Note: We live in a world that celebrates hustle and intellect. To many, outsourcing thought feels like surrendering self.

4. Ethical and Religious Values

Notably, the BYU study highlighted that for some participants, faith-based worldviews played a role in AI avoidance. They expressed concern that AI might interfere with moral agency or mimic human consciousness in ways they found spiritually troubling.

Important Reminder: Ethical innovation must leave room for pluralism. Not everyone sees AI as neutral—and that’s okay.

5. Privacy and Data Sovereignty

People aren’t just afraid of Big Brother—they’re afraid of Big Data. The idea that AI “learns” from everything we type, say, or share is unsettling to many. Even with anonymized data, the sense of being watched or harvested feels like a violation.

Actionable Ethics: Give people control. Let them opt in—not just opt out—of how their data is used.

So… What Should We Do About It?

If you work in AI, tech, HR, or leadership, you must understand this: resistance to AI is not ignorance—it’s a request for ethics.

We can’t shame people into using AI. We must build systems worthy of their trust.

Ethical AI isn’t just about compliance. It’s about conscience. And the people saying “no” may be giving us the most valuable insight of all.

As always, we welcome your comments and are happy to respond. Feel free to share your thoughts below.
