
By Chuck Gallagher — Business Ethics Keynote Speaker and Trainer
TL;DR: Chuck Brooks’ Forbes outlook on cybersecurity in 2026 makes a critical point: cyber risk is no longer an IT problem; it is a governance problem. Chuck Gallagher, business ethics keynote speaker, argues that ransomware, AI-enabled phishing, identity attacks, and supply chain breaches are not just technical failures — they are downstream consequences of the choices leaders make about transparency, pressure, and accountability.
A finance manager at a mid-sized manufacturer gets a Microsoft Teams call from someone who looks and sounds exactly like the CFO. The voice is right. The face moves the way it should. The request is urgent and the dollar amount is a seven-figure wire to a supplier in another country. She authorizes it. By the time anyone realizes the CFO never made that call, the money is gone and the lawyers are circling. That kind of attack used to be a thought experiment. In 2026 it is a Tuesday.
Chuck Brooks made the case in his late-December Forbes column that 2026 will be defined by what he called hyper-innovation and hyper-risk — acceleration in artificial intelligence, quantum computing, IoT, and connected infrastructure happening alongside an attack surface that has never been larger. He argued that the question for leaders is no longer whether to invest in cybersecurity but how deeply to embed it in corporate strategy. I agree with him on the diagnosis. I want to push the argument one step further. The companies that will survive the next eighteen months are not the ones with the biggest security budgets. They are the ones whose leaders have stopped treating cybersecurity as a technology purchase and started treating it as an ethics decision.
The breach is the symptom. The culture is the cause.
Brooks points out that despite years of awareness training, the human element remains the dominant variable in most breaches — now amplified by AI-generated phishing and synthetic media. IBM’s most recent Cost of a Data Breach study put the global average cost per breach at roughly $4.4 million, and the World Economic Forum has continued to project annual cybercrime losses well into the trillions. Those are staggering numbers. But the more honest numbers are the ones inside organizations: how many employees clicked a link they already suspected was a phish, kept silent because raising it felt like an admission of weakness, or watched a peer cut a security corner under deadline pressure and said nothing.
As a business ethics keynote speaker, I have argued at ChuckGallagher.com that almost every breach worth studying has a moment somewhere upstream where a person knew something was off and chose not to say it. The technology did not fail first. The culture did. Brooks calls for measurable behavior change and executive engagement instead of generic awareness programs, and he is right — but behavior change does not happen because the policy says so. It happens because the environment makes telling the truth safer than concealing the gap.
Why is identity now the real perimeter?
Brooks identifies identity as the primary security control in 2026 because stolen credentials and AI-driven phishing have become the dominant attack methods. The Verizon Data Breach Investigations Report has shown for several years running that the use of stolen credentials is the single most common attack vector, and ransomware continues to appear in roughly a third of all breaches. What this tells us is that the firewall metaphor has aged badly. There is no wall to defend anymore. There are only people — employees, vendors, contractors, board members — each carrying credentials that, if abused, become the front door.
That shift carries an ethical weight that most boardrooms still underestimate. When identity is the perimeter, every person with access becomes part of the security posture, which means every leadership decision about workload, fatigue, incentive structure, and psychological safety is also a security decision. A salesperson who has been told to hit a quarterly number at any cost is a salesperson who will rationalize bypassing a multi-factor prompt. A junior engineer who has been told that admitting a mistake gets you fired is an engineer who will quietly close the ticket and hope nobody notices. These are not technology problems. They are leadership problems wearing a technology costume.
What should leaders actually do differently in 2026?
Brooks closes his outlook with a line worth repeating: the companies that do well in 2026 will be the ones that see cybersecurity as a strategic pillar of the whole business, not an IT cost center. As an AI ethics speaker and author, I would add a sharper test. Walk into any executive committee meeting and ask three questions. Does our incentive system punish the people who raise security concerns or reward them? Does our AI governance assume that any tool we can deploy, we should deploy? Does our reporting culture make it possible for an employee to say “I made a mistake” before that mistake becomes a 60 Minutes segment? If the honest answer to any of those is no, the budget for new security tools is not the first problem to solve.
The deeper point Brooks gestures at — and the one I want to underline — is that resilience is the new measure of success. Detection, containment, recovery. None of those happen at speed unless people inside the company are willing to surface bad news fast. As a business ethics keynote speaker, I have watched too many organizations discover after the fact that an employee saw the warning sign weeks earlier and stayed quiet because the culture taught them to. In a year defined by deepfakes, agentic AI attackers, and quantum-era encryption questions, the single most valuable investment a leader can make is in the conditions that make the truth tellable.
Frequently Asked Questions
Why are most cybersecurity breaches still caused by human behavior in 2026? Because attackers have shifted from targeting systems to targeting people, and AI has made deception faster and more convincing. The Verizon DBIR has consistently found that the use of stolen credentials is the leading attack vector, and Chuck Brooks notes in his 2026 Forbes outlook that AI-enabled social engineering now amplifies traditional phishing dramatically. The technical controls have improved. The human conditions — pressure, fatigue, fear of speaking up — have not.
What is the difference between cybersecurity compliance and cybersecurity ethics? Compliance is about meeting an external standard on paper. Ethics is about telling the truth when no one is auditing. A company can be fully compliant and still have leaders who quietly defer fixes, conceal known gaps, or punish employees who raise concerns. As a business ethics keynote speaker, I have seen breached organizations whose compliance reports were spotless until the day the lawsuit was filed.
How does AI governance fit into a cybersecurity strategy for 2026? AI governance defines who is accountable when an AI system causes harm, leaks data, or is weaponized against the organization. Chuck Brooks identifies AI as both the most powerful defensive tool and the most powerful offensive tool of 2026, which means leaders need formal policies for model use, vendor AI tools, and employee AI experimentation. Without governance, every employee with an AI tool becomes an unsupervised security decision.
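To make that less abstract, here is a minimal sketch of what a first-pass governance gate could look like in code. It is illustrative only: the risk tiers, the tool registry, and the approver contact are assumptions invented for the example, not a reference to any specific framework Brooks describes or any particular vendor product.

```python
# Illustrative sketch only: a first-pass "AI tool governance gate".
# The tool names, risk tiers, and approval rules below are hypothetical.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"            # e.g., grammar help on public-facing text
    MODERATE = "moderate"  # internal documents, no customer data
    HIGH = "high"          # customer data, source code, financials


@dataclass
class AIToolPolicy:
    tool_name: str
    approved: bool          # has the vendor/model passed review?
    max_tier: RiskTier      # most sensitive data it may touch
    accountable_owner: str  # a named human, not a department


# Hypothetical registry an organization might maintain.
REGISTRY = {
    "approved-chat-assistant": AIToolPolicy(
        "approved-chat-assistant", True, RiskTier.MODERATE, "ciso@example.com"
    ),
}


def may_use(tool_name: str, data_tier: RiskTier) -> tuple[bool, str]:
    """Return (allowed, reason). Anything unregistered is denied by default."""
    policy = REGISTRY.get(tool_name)
    if policy is None or not policy.approved:
        return False, "Tool is not on the approved registry; request a review."
    tiers = [RiskTier.LOW, RiskTier.MODERATE, RiskTier.HIGH]
    if tiers.index(data_tier) > tiers.index(policy.max_tier):
        return False, f"Data tier exceeds approval; owner: {policy.accountable_owner}"
    return True, f"Approved; accountable owner: {policy.accountable_owner}"


if __name__ == "__main__":
    print(may_use("approved-chat-assistant", RiskTier.HIGH))
    print(may_use("random-browser-plugin", RiskTier.LOW))
```

The design choice worth noticing is the default: anything not on the registry is denied, and every approval points back to a named, accountable person rather than a department.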
Why is supply chain security a board-level issue and not just a procurement issue? Because attackers increasingly compromise third-party vendors and software providers as a way to breach dozens of downstream customers at once. The SolarWinds and Kaseya incidents showed that a single compromised vendor can cascade across thousands of organizations in days. Boards are accountable for material risk, and a vendor breach that takes down operations is now categorically a board issue.
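One small, concrete way to move from trusting the contract to verifying the component is to check what you actually run against a public vulnerability database. The sketch below queries the OSV.dev API for a couple of placeholder package versions; it is a starting point, not a program of record, and a real effort would work from your full software inventory and the SBOMs your vendors provide.

```python
# Minimal sketch: check a third-party component against the OSV.dev
# vulnerability database. Package names and versions below are placeholders.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return the OSV IDs of known vulnerabilities for one package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]


if __name__ == "__main__":
    # Hypothetical inventory; in practice this comes from your SBOM tooling.
    inventory = [("requests", "2.19.0"), ("jinja2", "2.10")]
    for pkg, ver in inventory:
        ids = known_vulns(pkg, ver)
        print(f"{pkg} {ver}: {', '.join(ids) if ids else 'no known advisories'}")
```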
How should companies prepare for the quantum computing threat to encryption? By inventorying where sensitive long-lived data lives and beginning the migration to post-quantum cryptographic standards before the threat fully arrives. The U.S. National Institute of Standards and Technology finalized its first post-quantum encryption standards in 2024, and major federal guidance now expects organizations to plan transitions over the coming years. The harvest-now-decrypt-later threat means data stolen today could be decrypted in five to ten years, so the planning horizon is already compressed.
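The inventory step is the part most organizations can start immediately. As one illustration, and not a complete migration plan, the sketch below uses the widely available Python `cryptography` package to walk a folder of exported PEM certificates and flag the classical RSA and elliptic-curve keys a post-quantum transition would eventually have to replace; the folder path is a placeholder.

```python
# Minimal sketch: inventory certificate key algorithms as a first step
# toward post-quantum migration planning. The directory path is a placeholder.
# Requires the third-party "cryptography" package (pip install cryptography).
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

CERT_DIR = Path("./certs")  # hypothetical folder of exported PEM certificates


def describe_key(cert: x509.Certificate) -> str:
    """Label the certificate's public key as classical (quantum-vulnerable) or other."""
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size} (classical, quantum-vulnerable)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"EC-{key.curve.name} (classical, quantum-vulnerable)"
    return type(key).__name__


if __name__ == "__main__":
    for pem_file in sorted(CERT_DIR.glob("*.pem")):
        cert = x509.load_pem_x509_certificate(pem_file.read_bytes())
        print(f"{pem_file.name}: {describe_key(cert)}")
```

Even a crude listing like this turns the quantum question from an abstraction into a prioritized to-do list: which keys protect data that must still be secret in 2032, and which of those get migrated first.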
Join the conversation
If you sit on an executive team or a board, I want to hear from you in the comments below. When was the last time your organization treated a near-miss security event as a culture diagnostic rather than an IT incident? The honest answers to that question tell you more about your true cyber posture than any audit report. Share your perspective, push back if you disagree, and use the questions below to start a conversation inside your own organization.
Five Questions for Further Thought and Consideration
- If a junior employee on your team noticed a cybersecurity gap tomorrow, would they tell their manager within an hour, within a week, or never at all — and what does that answer reveal about your culture?
- Where in your incentive structure does speed or revenue quietly compete with security, and who is accountable when those two pressures collide?
- How would your organization respond if a deepfake of your CEO authorized a fraudulent transaction — and have you ever walked through that scenario as a leadership team?
- Which of your critical vendors have you actually audited for cybersecurity posture, and which have you simply trusted because the contract said you should?
- If your most material long-lived data were stolen today and decrypted in 2032, what would the consequences be — and what are you doing about post-quantum encryption right now?
Related Articles:
Why Politicians Won’t Fix the Laws That Let Them Profit
White-Collar Crime Without Punishment: A View From the Inside
