Kaspersky: AI Will Dominate Companies’ Cybersecurity in 2026

The rise of deepfakes, open-source models, and automated AI-powered cyberattacks will dominate companies' cybersecurity environment in 2026, according to a recent report by Kaspersky's experts. AI will become a critical factor for competitive advantage, operational resilience, corporate reputation, and business continuity. The rapid advance of language models and generative AI is boosting both defensive and cybercriminal capabilities.

The fact that 70% of people in Latin America are still unfamiliar with deepfakes creates a serious exposure for organizations in the region. Low awareness makes employees, partners, and customers more susceptible to increasingly convincing fraud and impersonation attacks.

"The most disruptive aspect of 2026 will not only be the technical capabilities of deepfakes or AI, but also their direct impact on business decision-making. Companies will have to operate in an environment where it will no longer be possible to assume that information is authentic by default, forcing them to rethink controls, internal policies, and security models from the design stage. This loss of certainty will transform how transactions are approved, risks are managed, and trust is protected within the organization," says Claudio Martinelli, Managing Director for the Americas at Kaspersky.

Trends for 2026

Deepfakes. One of the main trends in 2026 will be deepfakes moving from the margins into the mainstream. What was once experimental is now showing up across many channels and use cases. Deepfakes are becoming more realistic, and creating them is getting simpler, requiring little or no technical skill. Companies are increasingly treating deepfakes as a real business risk and are training employees to spot them and avoid fraud.

Real-time video and audio deepfakes are also improving fast, making fraud, social engineering, and targeted attacks harder to spot, even for trained employees. While real-time impersonation, such as changing one's face or voice during a video call, still demands advanced technical expertise, its realism is rapidly improving, and the use of virtual cameras makes it a growing threat in targeted cyberattacks.

Open-source AI models are rapidly closing the gap with closed systems, but without the same safeguards or oversight. This means the same technologies can be used just as easily for legitimate innovation as for malicious attacks, further complicating an already complex threat landscape.

AI automation. Cybercriminals are also starting to use AI across the full attack chain, from writing malware to scanning for vulnerabilities, while actively hiding signs of AI involvement to evade investigation.

At the same time, distinguishing what is real from what is fake is becoming harder. Attackers can now produce polished phishing emails, convincingly replicate brand identities, and build fraudulent websites that look legitimate. As large companies normalize AI-generated content in marketing and communications, the signals people once relied on to spot manipulation are disappearing.

Embedding in the Risk Matrix & Governance

In this environment, decisions on where and how AI is used should be treated as a core risk issue. AI adoption needs to be embedded in the corporate risk matrix and governed by clear internal policies. Sensitive data, critical processes, and key decisions should not rely on AI without impact assessments, strong controls, regular audits, and human oversight to ensure regulatory alignment and maintain full control over information and operations.

Kaspersky's practical recommendations for companies

  • Tighten web access controls. Regulate which sites and online tools employees can use to block malicious platforms, fraud sites, and high-risk content at the source.
  • Enforce application control across devices. Apply strict controls on desktops and mobile devices to limit which apps are allowed. This helps prevent legitimate tools from being misused for fraud, impersonation, or unauthorized communication.
  • Invest in continuous employee training. Technology alone is not enough. Human error remains the primary entry point for attacks, making ongoing awareness and training essential.
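The first recommendation, regulating which sites employees can reach, can be sketched as a simple domain allowlist check. This is only an illustrative example of the general idea, not a mechanism described in the Kaspersky report; the domain names and the `is_request_allowed` helper below are hypothetical.

```python
# Minimal sketch of a web access control: permit requests only to
# domains on a corporate allowlist. Domain names are illustrative.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"intranet.example.com", "docs.example.com"}

def is_request_allowed(url: str) -> bool:
    """Return True only if the URL's host is an allowed domain
    or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

print(is_request_allowed("https://docs.example.com/guide"))      # True
print(is_request_allowed("https://phishing.example.net/login"))  # False
```

In practice this logic lives in a secure web gateway or endpoint agent rather than application code, and blocklists of known fraud and malware sites complement the allowlist; the sketch only shows the shape of the decision.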

For leadership teams, the message is clear: resilience in 2026 will depend on combining AI-enabled defenses with strong governance and well-trained people.

For more detail, see the full report, Kaspersky Security Bulletin 2025: Statistics, on how artificial intelligence (AI) will shape the cybersecurity landscape in 2026.