
Detection Engineering with LimaCharlie and Claude Code

Detection engineering is fundamentally a translation problem: rules need to be converted between formats, IOCs need to be converted into detection logic, and noisy alerts need to be converted into precise suppressions. That translation work is what consumes analyst time, and it's what Claude Code handles well.
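As a concrete illustration of that translation work, a hashed IOC might be turned into a LimaCharlie detection-and-response rule along these lines. This is a hedged sketch: the hash value is a placeholder, and the rule name is invented for the example.

```yaml
# Sketch: converting a file-hash IOC into a LimaCharlie D&R rule.
# The hash below is a placeholder, not a real indicator.
detect:
  event: CODE_IDENTITY
  op: is
  path: event/HASH
  value: "0000000000000000000000000000000000000000000000000000000000000000"
respond:
  - action: report
    name: ioc-match-known-malicious-hash
```

The same shape applies to the other translations the paragraph mentions: a Sigma rule becomes a `detect` block in this format, and a noisy alert becomes a false-positive rule that suppresses a narrow match instead of the whole detection.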

SMB Risks, AI, and Regional Realities with Paul Harris - The 443 Podcast - Episode 368

This week on the podcast, Marc and Corey sit down with Paul Harris, CEO of BGLA and Futurity Corp at WatchGuard's Impact Partner Conference in Tulum, to explore the evolving cybersecurity landscape across Latin America. Paul shares his journey from early days in cybersecurity to leading organizations in the region, while breaking down the biggest concerns facing LATAM SMBs today. The conversation also covers how AI is reshaping cybersecurity, the challenges of securing partners across diverse markets, and practical advice for business leaders looking to stay ahead of cyber risk in LATAM.

How multi-agent systems work in LimaCharlie

This video walks through how single agents and multi-agent systems are built and run inside the LimaCharlie platform. Agents in LimaCharlie are defined declaratively. Each agent specifies the model it runs, its instructions, the tools it can access, what events trigger it, and the guardrails it operates under. This approach makes agents version controllable, reviewable, and portable across tenants.
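A declarative agent definition along the lines described might look like the following. This is an illustrative sketch only: the field names approximate the concepts listed above (model, instructions, tools, triggers, guardrails), not LimaCharlie's exact agent schema.

```yaml
# Hypothetical agent definition; field names are illustrative.
name: triage-agent
model: claude-sonnet          # which LLM the agent runs on
instructions: |
  Triage new detections, summarize the likely root cause,
  and tag the detection with a severity.
tools:                        # capabilities the agent may invoke
  - get_detection
  - query_timeline
triggers:
  - event: detection          # events that wake the agent
guardrails:
  max_tool_calls: 10          # bound activity per invocation
```

Because the definition is plain text, it can live in version control, go through code review, and be applied to any tenant, which is the portability benefit the video describes.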

Continuous Threat Exposure Management (CTEM): The Complete Guide to Proactive Cybersecurity

The cybersecurity landscape has fundamentally changed. Organizations today manage sprawling digital environments - cloud workloads, remote endpoints, SaaS applications, third-party APIs, and hybrid infrastructure - all of which expand the attack surface at a pace that traditional security programs simply cannot match.

What SOC Analysts Actually Want From AI

See how Torq harnesses AI in your SOC to detect, prioritize, and respond to threats faster. Rick Bosworth is a cybersecurity marketing executive with nearly two decades of experience driving GTM strategy across technology startups. His uniquely technical perspective bridges the gap between complex solutions and practical customer outcomes. Rick has deep expertise spanning EDR, CNAPP, CWPP, AppSec, CTEM, and agentic SecOps.


Privacy in Enterprise AI: Why It's the Foundation, Not a Feature

Last week, OpenAI released Privacy Filter, an open-weight model for detecting and redacting PII in text. It is a thoughtful release: Apache 2.0 licensed, able to run locally, designed for high-throughput workflows, and built to go beyond regex-based detection. This is good news for everyone building enterprise AI. Privacy at the model layer is getting real attention. What we liked most was how clearly OpenAI described the role of the model.
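To see what "beyond regex-based detection" means, it helps to look at the baseline. The sketch below is a deliberately simple regex redactor of the kind a model-based detector aims to surpass; the patterns are illustrative and would miss names, addresses, and anything without a rigid format.

```python
import re

# Minimal regex-based PII redactor: the baseline approach that
# model-based detection improves on. Patterns are intentionally simple.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched span with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Email jane@example.com or call 555-867-5309"))
```

Anything that follows a strict format is caught; a sentence like "my neighbor Jane lives at the blue house on Elm" sails straight through, which is exactly the gap a trained model addresses.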

How Do AI Agents Create Data Exfiltration Risk?

AI agents create data exfiltration risk by combining three capabilities that are dangerous together: access to private data, exposure to untrusted content, and the ability to communicate externally. When all three exist in one agent, an attacker can hide instructions inside an email, document, or webpage the agent processes and trick it into sending sensitive data out. No software vulnerability is required. The attacker doesn't need to break in. They just need to talk to your agent.
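One practical mitigation follows directly from the framing above: since the risk comes from the combination, a deployment check can refuse any agent that holds all three capabilities at once. A minimal sketch, with invented capability names and an invented `AgentSpec` type:

```python
from dataclasses import dataclass, field

# The three capabilities that are dangerous in combination.
RISKY_TRIFECTA = {"private_data_access", "untrusted_content", "external_comms"}

@dataclass
class AgentSpec:
    """Hypothetical agent descriptor for the policy check."""
    name: str
    capabilities: set = field(default_factory=set)

def has_exfiltration_risk(agent: AgentSpec) -> bool:
    """True only when the agent holds the full trifecta."""
    return RISKY_TRIFECTA <= agent.capabilities

mail_bot = AgentSpec("mail-bot", {"private_data_access",
                                  "untrusted_content",
                                  "external_comms"})
summarizer = AgentSpec("summarizer", {"private_data_access",
                                      "untrusted_content"})

print(has_exfiltration_risk(mail_bot))    # all three -> risky
print(has_exfiltration_risk(summarizer))  # two of three -> acceptable
```

The design point is that any two capabilities can coexist; dropping one leg (for example, stripping external communication from agents that read untrusted content) removes the exfiltration path without a software fix.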