Cyber Security Report 2026: AI Is Now an Ethics Problem

By Chuck Gallagher | Business Ethics Keynote Speaker | AI Speaker and Author

TL;DR: Check Point’s Cyber Security Report 2026 documents nearly 2,000 cyber attacks per organization per week and a 53 percent jump in ransomware victims, but the deeper story is one of accountability, not technology. Chuck Gallagher, business ethics keynote speaker, argues that when 89 percent of organizations encounter risky AI prompts and 40 percent of Model Context Protocol servers ship with vulnerabilities, the failure is one of governance and human judgment, not firewalls.

A finance manager in a mid-sized company gets a video call from someone who looks and sounds exactly like her CFO. He needs a wire transfer moved before close of business. She moves the money. The CFO never made the call. The voice and face were generated by an AI tool that cost less than a tank of gas.

That kind of story used to be a cautionary tale told at conferences. According to the Check Point Cyber Security Report 2026, it is now a Tuesday afternoon. The report, Check Point’s fourteenth annual analysis of global cyber attack trends, found that organizations averaged 1,968 cyber attacks per week in 2025, a 70 percent increase since 2023. The headline numbers are stunning, but the story underneath them is older than any technology. It is a story about choices, accountability, and the slow erosion of the human judgment that is supposed to protect institutions from themselves.

As a business ethics keynote speaker, I have spent more than two decades watching organizations mistake a leadership problem for a technology problem. The Check Point findings tell me leaders are doing it again. Risky AI prompts inside enterprises increased by 97 percent in 2025. Roughly one in every 41 prompts an employee sends to an AI tool is classified as high risk. These are not numbers about hackers. These are numbers about the people inside the building, sitting at desks, doing their jobs, making thousands of small decisions a day with tools no one trained them to use. When 89 percent of organizations encountered risky AI prompts during a single three-month window, that is not a perimeter problem. That is a culture problem.

Why is the human element the real attack surface?

The framework I have used for years to explain ethical lapses comes down to three elements: need, opportunity, and rationalization. Pull any leg out from under that stool and the unethical act usually does not happen. Check Point’s research describes attackers exploiting all three at machine speed. AI gives the bad actor unprecedented opportunity: faster reconnaissance, faster malware development, more convincing social engineering with fewer detectable indicators. AI also gives the insider new ways to rationalize sloppiness. Why double-check a contract when the model summarized it for me? Why verify the voice on the call when it sounds exactly like my boss? The technology did not create those rationalizations. It just made them easier and cheaper to act on.

The report’s finding on Model Context Protocol (MCP) servers deserves more attention than it has received. A review by Lakera, a Check Point company, examined approximately 10,000 MCP servers and found security vulnerabilities in 40 percent of them. MCP servers are the connective tissue that lets AI agents talk to other software. When 40 percent of that connective tissue is exposed, the question for any board of directors is not whether a breach is possible. The question is who in the organization signed off on deploying these systems without insisting on basic governance, and what the consequence will be when the inevitable happens.

The same pattern shows up in the ransomware data. The Check Point report documents a 53 percent year-over-year increase in extorted victims and a 50 percent rise in new ransomware-as-a-service groups. The criminal ecosystem has fragmented into smaller, faster, more specialized operators that use AI to personalize extortion based on victim profiling. They are running the same playbook white-collar fraud has always followed. They identify a target with a need, exploit an opportunity the target left open, and supply the rationalization the victim uses to pay rather than fight. I have written at ChuckGallagher.com about exactly this dynamic in financial fraud cases for years. The criminals are not new. The speed is.

What should leaders actually do about AI risk in 2026?

Here is where I want to challenge the conventional response, which is usually some version of “buy more tools.” More tools will not save a company whose people have not been taught to pause, verify, and question. As an AI ethics speaker and author, I have argued that any organization deploying generative AI without an ethics framework, without training on prompt risk, and without clear accountability for who can connect what to what is inviting a foreseeable failure. Foreseeable failures are exactly the kind that plaintiffs’ lawyers, regulators, and shareholders punish hardest after the fact.

The Cyber Security Report 2026 also documents that cyber activity in 2025 increasingly mirrored geopolitical conflicts, with state-aligned and criminal actors blending operations in ways that complicate attribution. That convergence raises the stakes for boards that still treat cybersecurity as an IT line item. When a Chinese-nexus operation that the report describes as industrialized rather than opportunistic targets your edge devices, the question of who owns the response sits squarely in the C-suite, not the help desk. Leaders who delegate the entire conversation to a CISO and then fail to ask hard questions about governance are setting themselves up to be the named defendant when the breach goes public.

None of this is about being afraid of AI. I use these tools every day. The point is that every powerful tool in human history has eventually demanded a corresponding ethical framework, and the people who built the framework first were the ones who survived the technology’s growing pains. Check Point’s figure of 1,968 attacks per organization per week is the warning. The choices leaders make in 2026 about training, governance, and personal accountability are the test.

Frequently Asked Questions

What does the Check Point Cyber Security Report 2026 say about AI-driven attacks?

The report, Check Point’s fourteenth annual analysis, finds that AI is now embedded across the attack lifecycle, accelerating reconnaissance, social engineering, and malware development. It documents 1,968 weekly cyber attacks per organization in 2025, a 70 percent rise since 2023, with risky AI prompts inside enterprises increasing by 97 percent year over year.

How much have ransomware operations grown in 2025?

Check Point reports a 53 percent year-over-year increase in extorted victims and a 50 percent rise in new ransomware-as-a-service groups. The ecosystem has fragmented into smaller, decentralized operators using AI to personalize extortion through victim profiling and to shorten attack and negotiation timelines.

Why are Model Context Protocol vulnerabilities a leadership issue and not just an IT issue?

A Lakera review of approximately 10,000 MCP servers found vulnerabilities in 40 percent of them, and MCP servers are the connective tissue between AI agents and enterprise systems. When that much exposure exists in production, business ethics keynote speaker Chuck Gallagher argues, the accountability question moves to the executives who deployed AI without insisting on governance, training, and clear ownership of risk.

What is the most common human failure behind AI-era breaches?

The pattern is not technical sophistication; it is rationalization. Employees increasingly trust AI summaries, AI-generated voices, and AI-suggested actions without verification, and Check Point found that 89 percent of organizations encountered risky AI prompts in a single three-month window. The fix begins with training and policy, not new software.

How are geopolitical events shaping cyber threats in 2026?

Check Point’s research describes cyber operations increasingly synchronized with physical and political events, including Chinese-nexus activity the report calls industrialized and global by design. That convergence makes attribution harder and pushes cybersecurity decisions out of the IT department and into the boardroom.

I want to hear from you in the comments. What is the single biggest gap you see between what your organization tells employees about AI and what employees actually do with it day to day? The honest answers usually point to where the next breach is going to come from. Before you scroll past, take a few minutes to sit with the questions below. They are written for you, not for me.

Five Questions for Further Thought and Consideration

  1. If a major breach happened in your organization tomorrow, who would actually be held accountable, and is that the right person?
  2. When was the last time your leadership team discussed AI usage as an ethics issue rather than a productivity issue?
  3. What rationalizations are employees in your company using to skip verification steps, and where did those rationalizations come from?
  4. If 40 percent of the AI connective tissue in your environment is vulnerable, who in your organization is responsible for knowing that, and what authority do they have to act?
  5. Are your ethics and security policies written to be read once a year, or to actually shape the choices people make in the next ten minutes?

Related Articles: 

AI Skills: Stop Repeating Yourself and Start Building Systems

Ethics Training: Building a Culture of Integrity Beyond Compliance

Goldman Sachs Says AI Agents Will Act for You. But Whose Interests Will They Serve?
