
Top Tools Used to Bypass Cloudflare for Web Scraping: A Security Perspective

Cloudflare protects more than 20% of all websites on the internet, according to W3Techs infrastructure data. Its layered security model combines IP reputation filtering, TLS fingerprinting, JavaScript challenges, and behavioral analysis to block automated traffic before it reaches the origin server.

The Adversary's Speed Just Changed - What Mythos Means for Your Security Posture

The cybersecurity threat landscape just changed — and most organizations don't know it yet. In this conversation, Tanium's Pedro (CRO) and Mark Liu (VP of Solution Engineering) break down what Anthropic's Mythos really is, why security leaders everywhere are asking about it, and what organizations need to do right now. No marketing pitch — just a straight conversation about a consequential shift that's already underway.

AI SecOps Workshop Series: Detection Engineering with LimaCharlie and Claude Code

This hands-on workshop is designed for security professionals interested in integrating advanced AI capabilities into their detection and response workflows. Attendees will receive practical, step-by-step instruction on using Claude Code, a sophisticated AI agent, to enhance security operations within the LimaCharlie platform for detection engineering use cases.

Are we blindly giving AI access to everything?

Users are connecting AI tools without understanding the security implications. In this week's Intel Chat, Chris Luft and Matt Bromiley discuss a security breach at Vercel that originated from a compromised third-party AI tool used by one of its employees. The attacker gained control of the employee's Google Workspace account, which provided access to Vercel's internal environment.

How multi-agent systems work in LimaCharlie

This video walks through how single agents and multi-agent systems are built and run inside the LimaCharlie platform. Agents in LimaCharlie are defined declaratively. Each agent specifies the model it runs, its instructions, the tools it can access, what events trigger it, and the guardrails it operates under. This approach makes agents version controllable, reviewable, and portable across tenants.
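The fields described above can be sketched as a declarative spec. This is an illustrative sketch only; the field names and values below are hypothetical and do not reflect the actual LimaCharlie agent schema:

```python
# Hypothetical declarative agent definition, modeled on the fields named
# above: model, instructions, tools, triggering events, and guardrails.
agent_spec = {
    "name": "triage-agent",
    "model": "example-model",          # which model the agent runs (placeholder)
    "instructions": "Triage new detections and summarize likely impact.",
    "tools": ["sensor_query", "timeline_fetch"],   # tools it can access
    "triggers": ["detection_created"],             # events that invoke it
    "guardrails": {
        "max_tool_calls": 10,          # hard cap on actions per run
        "read_only": True,             # no destructive operations
    },
}

def validate(spec: dict) -> bool:
    """Check that every required field of this (hypothetical) schema is present."""
    required = {"name", "model", "instructions", "tools", "triggers", "guardrails"}
    return required.issubset(spec)

print(validate(agent_spec))  # True
```

Because the whole definition is plain data, it can live in version control, be diffed in review, and be applied to another tenant unchanged, which is the portability property the video highlights.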

SMB Risks, AI, and Regional Realities with Paul Harris - The 443 Podcast - Episode 368

This week on the podcast, Marc and Corey sit down with Paul Harris, CEO of BGLA and Futurity Corp, at WatchGuard's Impact Partner Conference in Tulum to explore the evolving cybersecurity landscape across Latin America. Paul shares his journey from his early days in cybersecurity to leading organizations in the region, and breaks down the biggest concerns facing LATAM SMBs today. The conversation also covers how AI is reshaping cybersecurity, the challenges of securing partners across diverse markets, and practical advice for business leaders looking to stay ahead of cyber risk in LATAM.

Detection Engineering with LimaCharlie and Claude Code

Detection engineering is fundamentally a translation problem: rules need to be converted between formats, IOCs need to be converted into detection logic, and noisy alerts need to be converted into precise suppressions. That translation work is what consumes analyst time, and it's what Claude Code handles well.
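One leg of that translation, IOCs into detection logic, can be sketched in a few lines. The rule shape below is illustrative only, not the actual LimaCharlie D&R schema:

```python
# Minimal sketch of the "translation problem": turning a flat list of
# file-hash IOCs into a single OR-of-matches detection rule. Event and
# field names here are hypothetical placeholders.
def iocs_to_rule(name: str, hashes: list[str]) -> dict:
    """Translate file-hash IOCs into one detection rule with a report action."""
    return {
        "detect": {
            "event": "CODE_IDENTITY",            # hypothetical event type
            "op": "or",
            "rules": [
                {"op": "is", "path": "event/HASH", "value": h.lower()}
                for h in hashes
            ],
        },
        "respond": [{"action": "report", "name": name}],
    }

rule = iocs_to_rule("known-bad-hashes", ["ABCD1234", "FFEE9900"])
print(len(rule["detect"]["rules"]))  # 2
```

The translation is mechanical but tedious at scale, which is exactly why it is a good fit for an AI agent operating under review.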

System Prompts Are Not Security Controls: A Deleted Production Database Proves It

On April 25th, a Cursor AI coding agent running Anthropic's Claude Opus 4.6, one of the most capable models in the industry, deleted the production database for PocketOS, a software platform used by car rental businesses across the country to manage their entire operations. The deletion took 9 seconds.

The Research Behind Detecting and Attributing LLM-Generated Passwords - Gaëtan Ferry

GitGuardian Senior Cybersecurity Researcher Gaëtan Ferry's latest research shows that AI-generated passwords are leaving fingerprints in the wild. In this interview, he explains how he used Markov chains, a century-old statistical model, to detect patterns in passwords generated by modern LLMs, attribute them to model families, and identify 28,000 likely LLM-generated passwords across public GitHub. The findings are a warning for teams adopting AI coding agents.
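The core idea is simple to illustrate: fit a character-level Markov chain to one password corpus, then score candidate passwords by their average per-character log-likelihood under that model; passwords from a different generator tend to score differently. This is a toy sketch of the general technique, not Ferry's actual method, and the tiny corpus and values below are made up:

```python
# Toy character-bigram Markov model for scoring passwords.
import math
from collections import defaultdict

def train(passwords):
    """Count character-bigram transitions, with '^' as a start-of-string marker."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        prev = "^"
        for ch in pw:
            counts[prev][ch] += 1
            prev = ch
    return counts

def avg_log_likelihood(counts, pw, alpha=1.0, vocab=96):
    """Mean log P(char | previous char) with add-alpha smoothing
    over an assumed printable-ASCII vocabulary of 96 symbols."""
    total, prev = 0.0, "^"
    for ch in pw:
        row = counts[prev]
        total += math.log((row[ch] + alpha) / (sum(row.values()) + alpha * vocab))
        prev = ch
    return total / len(pw)

# Made-up "human" corpus; a real study would use large leaked-password sets.
human = train(["hunter2", "letmein", "qwerty1"])
score = avg_log_likelihood(human, "Tr0ub4dor&3")
print(score < 0)  # True: log-probabilities are always negative
```

With models fit per generator (human corpora versus output sampled from each LLM family), comparing these scores is what allows both detection and attribution.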