For humans to interact with the online world, we need a gateway: a keyboard, a screen, a browser, a device. What is called "human detection" online is really the detection of the patterns humans produce when interacting with those devices.
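As a rough illustration of what those interaction patterns look like in practice, here is a minimal browser-side sketch that collects coarse mouse-movement timing, one of the classic human signals. The feature and the threshold are assumptions for illustration, not any vendor's actual detection logic.

```typescript
// Hypothetical sketch: collect coarse mouse-movement features of the kind
// "human detection" systems observe. Feature and threshold are illustrative.
interface MouseSample {
  x: number;
  y: number;
  t: number; // timestamp in milliseconds
}

const samples: MouseSample[] = [];

document.addEventListener("mousemove", (e) => {
  samples.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});

// Humans produce irregular timing between movements; naive scripts tend to
// emit evenly spaced, perfectly regular events.
function looksHuman(recent: MouseSample[]): boolean {
  if (recent.length < 10) return false; // not enough signal yet
  const gaps = recent.slice(1).map((s, i) => s.t - recent[i].t);
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, g) => a + (g - mean) ** 2, 0) / gaps.length;
  return variance > 1; // illustrative threshold, not a real product's value
}
```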
We have launched a new Trust Layer to help enterprises operate more safely and effectively as AI agents and other forms of automation reshape the web as we know it. This new era reflects a broader shift in how organisations need to think about digital traffic.
On February 9, 2026, security researcher Adnan Khan publicly disclosed a vulnerability chain (dubbed "Clinejection") in the Cline repository that turned the popular AI coding tool's own issue triage bot into a supply chain attack vector. Eight days later, an unknown actor exploited the same flaw to publish an unauthorized version of the Cline CLI to npm, installing the OpenClaw AI agent on every developer machine that updated during an eight-hour window.
We are thrilled to announce a strategic partnership with Cline Bot Inc. to bridge the gap between autonomous speed and enterprise trust. By embedding Snyk’s security intelligence directly into Cline’s autonomous loops, we are delivering an end-to-end automated secure coding workflow that empowers developers to innovate with confidence. AI coding tools are evolving rapidly: we have moved from simple code completion to sophisticated chat, and now to full autonomy.
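To make the idea of security intelligence inside an autonomous loop concrete, below is a minimal sketch of what a scan-gated iteration could look like. It assumes the Snyk CLI is installed and authenticated; `runAgentIteration` and the loop structure are hypothetical stand-ins for illustration, not the actual integration.

```typescript
// Hypothetical sketch of a scan-gated autonomous coding loop. The real
// Cline/Snyk integration is not reproduced here.
import { spawnSync } from "node:child_process";

// Stand-in for the agent's edit step; the real hook inside Cline differs.
async function runAgentIteration(_task: string): Promise<void> {}

async function secureLoop(task: string, maxIterations = 5): Promise<boolean> {
  for (let i = 0; i < maxIterations; i++) {
    await runAgentIteration(task);

    // `snyk test` exits non-zero when it finds issues, so the exit code
    // can gate whether the agent's changes are accepted.
    const scan = spawnSync("snyk", ["test"], { encoding: "utf8" });
    if (scan.status === 0) return true; // clean scan: accept the changes

    // Feed the findings back so the next iteration can address them.
    task = `Fix the following security findings:\n${scan.stdout}`;
  }
  return false; // never converged to a clean scan
}
```

The essential design choice is that a failing scan never ends the loop silently: the findings become the next iteration's instructions, which is what turns scanning from a gate into part of the autonomous workflow.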
2025 changed the shape of digital risk. In 2026, the impact accelerates. The fastest-growing threats no longer look like traditional attacks. They arrive through apparently legitimate automated access – AI agents, LLM crawlers, and delegated automation interacting directly with revenue-critical systems. They don’t trigger alarms. They quietly extract value, distort pricing logic, and reshape digital economics at scale.
Cybersecurity tools and procedures were designed to defend against predictable threats, ones that followed recognisable patterns and raised alarms. Familiar controls such as CAPTCHAs, IP blocks, browser checks, browser fingerprinting, and login restrictions gave businesses a protective layer, ensuring that only genuine users were accessing their websites, apps, and APIs responsibly. This layer of cybersecurity used to distinguish human from bot.
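For flavour, here is a stripped-down sketch of that traditional layer, assuming an Express application; the blocked address and the user-agent pattern are illustrative placeholders, not real rules.

```typescript
// Minimal sketch of the traditional protective layer: an IP blocklist plus
// a crude user-agent check. Values are illustrative, not production rules.
import express from "express";

const app = express();
const blockedIps = new Set<string>(["203.0.113.7"]); // example address

app.use((req, res, next) => {
  if (blockedIps.has(req.ip ?? "")) {
    return res.status(403).send("Forbidden"); // known-bad IP
  }
  const ua = req.headers["user-agent"] ?? "";
  if (/curl|python-requests|headless/i.test(ua)) {
    return res.status(403).send("Automated clients not allowed");
  }
  next(); // looks like an ordinary browser; let the request through
});

app.get("/", (_req, res) => {
  res.send("Hello, human");
});

app.listen(3000);
```

Checks like these hold up only as long as automated clients announce themselves, which is precisely the assumption that modern agents break.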
Security teams have spent years refining their ability to detect and stop malicious bots. That work remains critical. Automated traffic now accounts for more than half of all web traffic, according to Imperva's 2025 Bad Bot Report. What has changed is the scale and influence of legitimate bots and the blind spots they introduce into modern security programs.
The web is entering a new phase. Artificial intelligence is beginning to act on behalf of people rather than simply assisting them. AI agents are now browsing, comparing, and buying, taking on the decisions that once sat firmly in human hands. This marks the start of the agentic marketplace, an emerging ecosystem where autonomous systems interact, negotiate, and transact across digital platforms.
Picture your online shopping site overwhelmed with fake orders, your customer accounts drained one after another, or your essential APIs flooded by an endless wave of automated attacks. This is the reality businesses face today, driven by a fully automated army of cybercriminals determined to cause harm. Amid this digital bot invasion, businesses of all kinds are under urgent pressure to build defenses that actually stop the threat.
Today, we are announcing a new approach to catching bots: using models to provide behavioral anomaly detection unique to each bot management customer and to stop sophisticated bot attacks.
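Without speaking to the product's internals, the core idea of per-customer behavioral anomaly detection can be sketched as a baseline learned separately for each customer, with new observations scored against it. The feature (requests per session) and the z-score threshold below are assumptions chosen for illustration.

```typescript
// Illustrative sketch: maintain a running per-customer baseline of a traffic
// feature (here, requests per session) and flag sharp deviations from it.
class CustomerBaseline {
  private n = 0;
  private mean = 0;
  private m2 = 0; // sum of squared deviations (Welford's online algorithm)

  update(value: number): void {
    this.n += 1;
    const delta = value - this.mean;
    this.mean += delta / this.n;
    this.m2 += delta * (value - this.mean);
  }

  // z-score of a new observation against this customer's own history
  zScore(value: number): number {
    if (this.n < 2) return 0; // not enough history to judge
    const std = Math.sqrt(this.m2 / (this.n - 1));
    return std === 0 ? 0 : (value - this.mean) / std;
  }
}

const baselines = new Map<string, CustomerBaseline>();

function isAnomalous(customerId: string, requestsPerSession: number): boolean {
  let baseline = baselines.get(customerId);
  if (!baseline) {
    baseline = new CustomerBaseline();
    baselines.set(customerId, baseline);
  }
  const anomalous = Math.abs(baseline.zScore(requestsPerSession)) > 3;
  baseline.update(requestsPerSession); // learn from every observation
  return anomalous; // |z| > 3 is an illustrative threshold
}
```

Because the baseline is learned per customer, the same request rate can be perfectly normal for one site and a strong outlier for another, which is exactly what customer-specific models are meant to capture.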