NewsTrendsKE

Kaspersky shares AI cybersecurity predictions for 2026

By Editor
7 January 2026
in Technology

Kaspersky experts outline how the rapid development of AI is reshaping the cybersecurity landscape in 2026, both for individual users and for businesses. Large language models (LLMs) are influencing defensive capabilities while simultaneously expanding opportunities for threat actors.

Deepfakes are becoming a mainstream technology, and awareness of them will continue to grow. Companies are increasingly discussing the risks of synthetic content and training employees to reduce the likelihood of falling victim to it. As the volume of deepfakes grows, so does the range of formats in which they appear. Awareness is rising not only within organisations but also among regular users, who encounter fake content more often and better understand the nature of such threats. As a result, deepfakes are becoming a stable element of the security agenda, requiring a systematic approach to training and internal policies.


Deepfake quality will improve through better audio and a lower barrier to entry. The visual quality of deepfakes is already high, while realistic audio remains the main area for future growth. At the same time, content generation tools are becoming easier to use: even non-experts can now create a mid-quality deepfake in just a few clicks. As a result, the average quality continues to rise, creation becomes accessible to a far broader audience, and these capabilities will inevitably continue to be leveraged by cybercriminals.

Online deepfakes will continue to evolve but remain tools for advanced users. Real-time face and voice swapping technologies are improving, but their setup still requires more advanced technical skills. Wide adoption is unlikely, yet the risks in targeted scenarios will grow: increasing realism and the ability to manipulate video through virtual cameras make such attacks more convincing.

Efforts to develop a reliable system for labelling AI-generated content will continue. There are still no unified criteria for reliably identifying synthetic content, and current labels are easy to bypass or remove, especially when working with open-source models. For this reason, new technical and regulatory initiatives aimed at addressing the problem are likely to emerge.

Open-weight models will approach top closed models in many cybersecurity-related tasks, creating more opportunities for misuse. Closed models still offer stricter control mechanisms and safeguards that limit abuse, but open-weight systems are rapidly catching up in functionality and circulate without comparable restrictions. This blurs the line between proprietary and open-source models, both of which can be used effectively for undesired or malicious purposes.

The line between legitimate and fraudulent AI-generated content will become increasingly blurred. AI can already produce well-crafted scam emails, convincing visual identities, and high-quality phishing pages. At the same time, major brands are adopting synthetic materials in advertising, making AI-generated content look familiar and visually “normal.” As a result, distinguishing real from fake will become even more challenging, both for users and for automated detection systems.

AI will become a cross-cutting tool in cyberattacks, used across most stages of the kill chain. Threat actors already employ LLMs to write code, build infrastructure, and automate operational tasks. Further advances will reinforce this trend: AI will increasingly support multiple stages of an attack, from preparation and communication to assembling malicious components, probing for vulnerabilities, and deploying tools. Attackers will also work to hide signs of AI involvement, making such operations harder to analyse.

“While AI tools are being used in cyberattacks, they are also becoming a more common tool in security analysis and influence how SOC teams work. Agent-based systems will be able to continuously scan infrastructure, identify vulnerabilities, and gather contextual information for investigations, reducing the amount of manual routine work. As a result, specialists will shift from manually searching for data to making decisions based on already-prepared context. In parallel, security tools will transition to natural-language interfaces, enabling prompts instead of complex technical queries,” adds Vladislav Tushkanov, Research Development Group Manager at Kaspersky.

Tags: AI, Cyber Attacks, Cyber Security, Cybercriminal, Kaspersky
©2026 NewsTrendsKE.