
Measuring Agentic AI Posture: A New Metric for CISOs

In cybersecurity, we live by our metrics. We measure Mean Time to Respond (MTTR), Dwell Time, and Patch Cadence. These numbers tell the Board how quickly we react when issues arise. But in the era of Agentic AI, reaction speed is no longer enough. When an AI agent or an MCP server is compromised, data exfiltration happens in milliseconds rather than days. If you are waiting for an incident to measure your success, you have already lost.
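As a minimal sketch of how reactive metrics like MTTR are typically derived (the incident records and field names here are hypothetical, not from any specific SIEM):

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (detected_at, resolved_at) pairs.
incidents = [
    (datetime(2025, 1, 3, 9, 0), datetime(2025, 1, 3, 13, 30)),
    (datetime(2025, 1, 10, 22, 15), datetime(2025, 1, 11, 2, 45)),
]

def mean_time_to_respond(records):
    """Average detected-to-resolved interval across incidents."""
    total = sum(((end - start) for start, end in records), timedelta())
    return total / len(records)

print(mean_time_to_respond(incidents))  # 4:30:00
```

The point of the article is that a metric like this is inherently backward-looking: it only accrues data after a compromise has already occurred, which is too late at machine speed.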

Introducing Moltworker: a self-hosted personal AI agent, minus the minis

The Internet woke up this week to a flood of people buying Mac minis to run Moltbot (formerly Clawdbot), an open-source, self-hosted AI agent designed to act as a personal assistant. Moltbot runs in the background on a user's own hardware, has a sizable and growing list of integrations for chat applications, AI models, and other popular tools, and can be controlled remotely. Moltbot can help you manage your finances, your social media, and your day — all through your favorite messaging app.

Episode 7 - Practical AI for Zeek, MITRE, and Security Docs

In Episode 7 of Corelight DefeNDRs, join me, Richard Bejtlich, as I sit down with Dr. Keith Jones, Corelight's principal security researcher, to discuss the practical applications of AI in enhancing network security. We delve into how large language models (LLMs) can assist in cleaning up documentation and generating Zeek scripts, sharing insights from our extensive experience in incident response and coding. Keith reveals the challenges and successes he has encountered using LLMs to streamline processes, including their role in analyzing MITRE techniques.

Vibe Coding Speeds Up Mobile Apps But Creates New Security Risks

AI-assisted development has crossed a tipping point. Mobile teams are no longer debating whether to use AI to write code. They are deciding how fast they can ship with it. This shift, often called vibe coding, prioritizes intent and speed over manual implementation. Developers describe what they want, and AI fills in the rest. Velocity improves. Releases accelerate. But security assumptions quietly break. For mobile applications, that risk compounds.

The Role of Reliable IT Services in Modern Business Growth

Modern businesses run on digital rails. When those rails are reliable, teams move faster, customers stay happy, and leaders make decisions with confidence. Reliable IT services turn technology from a cost center into a growth engine: not through flashy tools, but through steady, predictable outcomes that compound over time.

Sumo Logic's 2026 Security Operations Insights report: AI, siloed tools, and team alignment

Security threats have always evolved and expanded, but recent data shows that modern applications pose more complexity for security and operations teams than ever before. And AI is only one piece of that puzzle. To stay on top of the changing market and hear directly from security leaders about what's really top of mind, Sumo Logic surveyed over 500 security leaders with the help of UserEvidence. We asked about data pipelines, tool sprawl, confidence in SIEM, and, of course, AI.

Agentic SecOps Workspace (ASW) office hours with LimaCharlie

Join us for a special Defender Fridays Office Hours session where the LimaCharlie team demonstrates the new Agentic SecOps Workspace (ASW) and explores what's possible when AI agents operate security infrastructure directly. At Defender Fridays, we delve into the dynamic world of information security, exploring its defensive side with seasoned professionals from across the industry. Our aim is simple yet ambitious: to foster a collaborative space where ideas flow freely, experiences are shared, and knowledge expands.

Semantic Guardrails for AI/ML - Protegrity AI Developer Edition

In this installment of our AI Developer Edition Set-up series, Dan Johnson, a software engineer at Protegrity, introduces semantic guardrails. Learn how to protect your LLM and chatbot workflows from malicious prompts and insecure AI responses. As AI becomes central to enterprise operations, controlling the context of conversations is a major challenge. Semantic guardrails provide a safety layer that ensures your AI stays on topic and never leaks sensitive PII.
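To make the guardrail concept concrete, here is a minimal sketch of an output-side check (this is an illustrative pattern-matching filter, not Protegrity's actual API; the patterns and names are assumptions):

```python
import re

# Hypothetical PII-like patterns to scan for in model responses.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def guard_response(text):
    """Redact PII-like substrings before the response reaches the user.
    A real semantic guardrail would also classify topic and intent,
    not just match surface patterns."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(guard_response("Contact jane@example.com, SSN 123-45-6789."))
```

Semantic guardrails go further than this sketch by evaluating the meaning of a prompt or response, but the placement is the same: a filtering layer between the LLM and the user.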