
From Repetition to Revolution: How Building Your Own AI Assistant Changes the Ethics of Work

By Chuck Gallagher – Business Ethics Keynote Speaker | AI Speaker and Author

It started with a question at the end of a keynote in Toronto.

A VP of operations raised her hand and said, “Chuck, I’m using AI every day—but I still feel like I’m failing. I prompt. I paste. I explain. Again and again. Why does it feel like the tech is getting smarter but I’m still doing all the work?”

That moment—honest, vulnerable, and painfully common—brought to light a silent frustration many leaders face. They’ve bought into the promise of AI, but not the process. They know it can think with them, but they’re stuck making it remember them. And at the heart of that inefficiency is an ethical question we’re not asking enough:

Are we teaching people to use AI, or to train it with intention?

Ethical Insight – Automate Repetition, Not Responsibility

What struck me about Alexandra Samuel’s Harvard Business Review article was its call to design your own AI assistant—a digital teammate, not just a tool.

She made the case that custom AI assistants—whether you call them Custom GPTs (OpenAI), Gems (Google Gemini), or Projects (Anthropic Claude)—aren’t just power-user gimmicks. They’re critical building blocks for professionals who want to escape the Groundhog Day of retyping their values, instructions, tone, or frameworks in every prompt.

Now, here’s the ethical dilemma: Are we wasting our team’s cognitive energy on repeat prompts, or are we freeing their creativity for higher-order judgment?

That’s not a UX issue. That’s a leadership choice.

The ethics of delegation apply here. If a leader makes employees explain their goals to the same system 100 times a week, is that productivity? Or is it an ethical oversight that undervalues their time and potential?

Leaders must shift from “how do I use AI?” to “how do I build AI that understands and embodies our principles, so others don’t have to explain them again?”

Real-World Application – Use the Tool to Teach the Tool

I’ve worked with law firms, accounting practices, and manufacturing teams—people who desperately want AI to “just get it.” But “getting it” means training it. That’s where the HBR article shines.

Samuel outlines how to build your own assistant that knows:

  • Your voice
  • Your policies
  • Your preferred formats
  • Your ethical standards

For example, one healthcare compliance director I worked with created a custom GPT that embedded HIPAA summaries, organizational values, DEI priorities, and writing tone—all in one assistant. Instead of rewriting the prompt daily, their AI responded as if it were a trained member of the team.

That wasn’t magic. That was method:

  1. Write detailed instructions once (not 100 times).
  2. Upload your evergreen documents (tone guides, policies, frameworks).
  3. Set constraints: what should the AI never do?
  4. Test it like you would an employee—train, correct, review.

This is how you go from asking AI for help to building AI that’s helpful by design.
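The four-step method above can be sketched in code. This is a minimal, illustrative example—not any specific vendor's API—showing the core idea: capture instructions, evergreen documents, and hard constraints once, then reuse them automatically in every request. All names (the tone guide, policies, and functions) are hypothetical placeholders.

```python
# Step 1–3: write instructions, store evergreen documents, and set
# constraints ONCE. These values are illustrative assumptions.
TONE_GUIDE = "Write in plain, warm, professional English."
POLICIES = [
    "Summarize HIPAA obligations conservatively.",
    "Reflect our organizational values and DEI priorities.",
]
NEVER_DO = [
    "Never give legal advice.",
    "Never include patient identifiers.",
]

def build_system_instructions(tone, policies, constraints):
    """Compose the assistant's standing instructions (written once, not 100 times)."""
    lines = ["You are our team's assistant.", f"Tone: {tone}", "Policies:"]
    lines += [f"- {p}" for p in policies]
    lines.append("Hard constraints (what you must never do):")
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

SYSTEM_INSTRUCTIONS = build_system_instructions(TONE_GUIDE, POLICIES, NEVER_DO)

def make_request(user_prompt):
    """Step 4 in practice: every employee request automatically carries
    the trained instructions, so no one retypes the values."""
    return {"system": SYSTEM_INSTRUCTIONS, "user": user_prompt}
```

Whatever platform you use, the pattern is the same: the values live in the assistant's configuration, not in each person's prompt, so testing and correcting the assistant means editing one place rather than retraining a hundred habits.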

Strategic Takeaways – For Leaders Who Want More Than Hype

If you’re leading a team, department, or company—stop thinking about AI as “just another app.” Here’s what the article (and my own experience) say you should do:

  1. Design AI That Remembers

If your assistant doesn’t remember your brand, voice, or values—you haven’t built an assistant. You’ve built a chatbot with amnesia.

  2. Shift Your Training Model

Teach your teams not just how to prompt, but how to configure assistants. Prompting is tactical. Training is strategic.

  3. Store Your Values in AI

Don’t make your employees repeat your code of conduct, DEI priorities, or compliance rules in every prompt. Hardwire them into the AI’s instructions—ethics should be default, not optional.

  4. Conduct AI Readiness Reviews

Ask: Have we trained the AI on what matters most to us? If it replied without our oversight, would it reflect our standards?

  5. Align AI Use with Your Culture

If your culture values clarity, kindness, or accuracy, your AI outputs must model the same. What it produces reflects what you’ve taught it—or failed to teach.

Closing Reflection – From Tools to Trust

That executive in Toronto? She didn’t need a new tool—she needed a new approach. A few weeks later, she messaged me again: “We built a custom assistant. Now my team works on strategy instead of retyping prompts. And they trust the system because they trained it together.”

That’s what this is about.

In business, trust isn’t just what people say—it’s what your systems reinforce. When AI knows your values, people know they matter. When the assistant reflects your culture, your culture becomes scalable.

And when leadership invests in thoughtful design, what you get isn’t just productivity—it’s pride in the process.

Call to Action

As always, I welcome your comments and am happy to respond. Feel free to share your thoughts below.
