By Chuck Gallagher | Business Ethics Keynote Speaker, AI Speaker, and Author
The Innovation Pitch That Turned Heads
Not long ago, a product development team at a Fortune 100 company walked into a pitch meeting. Their new concept? Off the charts in originality. The kicker? It wasn’t dreamed up in a war room of whiteboards and caffeine-fueled brainstorms.
It was co-developed… with AI.
As eyebrows rose and questions followed, one thing became clear: the way we work as teams is fundamentally changing—and generative AI is now sitting at the conference table.
Breaking Down the Research: What the PYMNTS Article Reveals
A recent PYMNTS article covers a fascinating study conducted by researchers from Harvard, Wharton, and Procter & Gamble. They invited 776 professionals from P&G to generate new product ideas.
Half used generative AI, while the other half didn’t.
💡 Key findings:
- Solo workers using AI produced results equivalent to two-person human teams.
- AI helped team members break out of their functional silos, offering ideas outside their expertise.
- The AI-assisted group was three times more likely to produce top-rated product ideas.
That’s not just impressive—it’s disruptive.
Ethics Check: Should We Be Excited or Concerned?
The takeaway is clear: AI enhances human creativity and productivity. But here’s the ethical question:
Are we designing AI to support teamwork—or to replace it?
When one person can perform at the level of a collaborative team using AI, we face a critical fork in the ethical road:
- Will businesses use this tech to empower employees across departments?
- Or will they see it as a way to cut teams and reduce headcount?
That’s where ethics meets efficiency—and leadership must decide what kind of culture they’re building.
The Future of Teamwork: Cross-Functional, AI-Augmented, and Purpose-Driven
Here’s what I believe: AI isn’t here to eliminate collaboration—it’s here to expand the boundaries of what collaboration can mean.
When an R&D specialist generates marketing insights…
When a finance professional offers product ideas…
When a solo thinker becomes a team of possibilities…
That’s not replacing teamwork—that’s redefining it.
But integrity matters:
- Are we transparent about AI’s role?
- Are we preserving human accountability?
- Are we applying AI ethically across the organization?
If we don’t ask these questions now, we risk turning a generative tool into a divisive wedge.
Practical Takeaways for Ethical AI Adoption in Teams:
- Define the role of AI in collaboration
→ Is it a tool? A partner? A substitute?
- Create guardrails
→ Ensure AI-generated input is transparent and vetted.
- Cross-train teams with AI exposure
→ Let everyone play with the tech—not just IT.
- Reinforce human ownership
→ Ideas can come from AI, but the responsibility must stay human.
- Build inclusive innovation cultures
→ Don’t just reward outcomes—celebrate ethical use of tools that get you there.
Final Thought: It’s Not “AI or Teams.” It’s “Ethical AI with Teams.”
The PYMNTS article makes one thing clear—AI can replicate and even amplify the power of team collaboration. But it’s up to leadership to make sure that power is used ethically, transparently, and inclusively.
Because at the end of the day, no one wants to work for a company where “teamwork” has been replaced with tools and trust is an afterthought.
5 Questions to Reflect On or Discuss:
- Should AI be treated as a member of the team—or just a resource?
- How can businesses ethically balance AI-powered efficiency with job protection?
- Are your teams clear about when AI is being used in collaboration?
- How do you protect idea ownership when AI plays a role in ideation?
- What steps is your company taking to ethically integrate AI into day-to-day teamwork?
