Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Is your organization actually AI-ready? #cybersecurity #aisecurity #ainews

According to our CEO @Ev Kontsevoy, this isn't a "nice to have" anymore. "It will be required if you don't want to fail." For the last two years, most companies have treated AI as an experiment. But as Ev explains in this clip, 2026 is the year AI graduates from the lab and enters production. This shift changes the requirements for everything – from how we secure identity to who we hire. To help you navigate this transition, we’re breaking down Ev’s 2026 Cybersecurity Predictions.

Securing the Future of AI Browsing with 1Password and Perplexity

Join Anand, VP of Product and AI at 1Password, and Kyle Polley from Perplexity for a fireside chat about building the future of secure, AI-native browsing. 1Password and Perplexity are partnering to bring privacy, transparency, and trust to the Comet Browser — the world’s first AI browser and personal assistant. Learn why security must be built in from the start, and how end-to-end encryption and zero-knowledge architecture protect users in the age of AI.

Top 5 Enterprise Cloud Security Solutions to Consider in 2026

You’re likely dealing with a cloud footprint that grows faster than your ability to govern it. New workloads appear overnight. Developers spin up serverless services without telling security. SaaS systems store sensitive data outside your visibility. And identities connect everything together, which means one compromised token can trigger a multi-cloud incident. This constant expansion creates a monitoring gap—one that attackers understand better than anyone.

Vibe Coding and GenAI Security: Balancing Speed with Risk

If you think AI-generated code is saving you time and boosting productivity, you’re right. But here’s the problem: it’s also likely introducing security vulnerabilities. Fortunately, there are GenAI security practices that can be woven into your workflow to help protect your apps. The software development landscape is shifting under our feet.

GreyNoise Findings: What This Means for AI Security

Late last week, GreyNoise published one of the clearest signals we have seen that AI systems are no longer just research targets. They are operational targets. Their honeypot infrastructure captured 91,403 attack sessions between October 2025 and January 2026, revealing two distinct campaigns systematically mapping AI deployments at scale. This is a meaningful inflection point.

Using LLMs, CVSS, and SIEM Data for Runtime Risk Prioritization

A recent University of North Carolina Wilmington study tested whether general-purpose large language models could infer CVSS v3.1 base metrics using only CVE description text, across more than 31,000 vulnerabilities. The results show measurable progress, but they also expose a hard limit that matters far more than model selection: Model quality helps, but missing context sets a ceiling on reliability.
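That ceiling is easier to appreciate once you recall that CVSS v3.1 base scores are a deterministic function of the base metrics: if a model mis-infers even one metric from the description text, the score shifts in a fixed, predictable way. As a minimal sketch (following the published FIRST CVSS v3.1 equations; the example vector below is illustrative, not from the study):

```python
# CVSS v3.1 numeric weights for each base metric (FIRST specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact
# Privileges Required weights differ when Scope is Changed.
PR = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},
      "C": {"N": 0.85, "L": 0.68, "H": 0.50}}

def roundup(x: float) -> float:
    """Spec-defined rounding to one decimal, avoiding float drift."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, s, c, i, a) -> float:
    """Compute the CVSS v3.1 base score from base-metric letter codes."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if s == "U":  # Scope Unchanged
        impact = 6.42 * iss
    else:         # Scope Changed
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    if impact <= 0:
        return 0.0
    exploitability = 8.22 * AV[av] * AC[ac] * PR[s][pr] * UI[ui]
    if s == "U":
        return roundup(min(impact + exploitability, 10))
    return roundup(min(1.08 * (impact + exploitability), 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -- the common "9.8 critical" vector.
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))  # -> 9.8
```

Because the mapping from metrics to score is this rigid, any context the CVE description omits (say, whether privileges are required) caps what even a strong model can recover — exactly the reliability ceiling the study highlights.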

Building Trust and Autonomy in the Age of Agentic AI - Saqib Khan, Global Field CIO at Tanium

Speaking with iTnews, Saqib Khan, Global Field CIO at Tanium, explores how real-time, trustworthy endpoint data forms the foundation of Agentic AI. He explains why confidence in data sources is key to enabling autonomous decision-making, reducing incidents, and driving faster, more reliable outcomes across IT and cybersecurity environments.

Sensitive Data Is the Common Thread Across Most OWASP Top 10 Issues. Here's Why

The OWASP Top 10 is usually presented as a list of technical failures. Broken access control. Injection. Insecure design. Misconfiguration. Each category points to something that went wrong in the application. What it doesn’t say explicitly is what was actually at risk when it went wrong. In most real incidents, the answer is not “the application.” It’s the data inside it. Sensitive data is the reason attackers care about OWASP failures in the first place. Credentials.

Cryptographic Key Management Is Becoming a Structural Constraint in Automotive - Download our Whitepaper

Automotive engineering teams are being asked to deliver faster, with less tolerance for failure. Software-defined vehicle programmes, secure OTA rollouts, zonal and service-oriented architectures, and continuous feature delivery are now baseline expectations. In parallel, regulatory pressure is increasing — from WP.29 (R155/R156), ISO/SAE 21434, and the forthcoming EU Cyber Resilience Act — tightening requirements around software integrity, traceability, and lifecycle governance.

From Dugouts to Data Lakes: Applying Moneyball to the AI SOC

In AI-powered security, advantage comes not from automation alone, but from clear insight into how decisions are made. At Arctic Wolf, home to one of the world’s largest commercial security operations centers (SOCs), we process over 10 trillion security events weekly. Rather than chasing automation for its own sake, we build AI that scales human expertise – preserving judgment where it matters most. But what is the optimal combination of humans and machines for security operations?