Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI on the Radar: Securing AI-Driven Development

Join Vandana and Rob in this insightful webinar exploring the rapidly evolving landscape of AI security. As we shift from simple query-response models to complex autonomous agents that can plan, execute code, and access sensitive APIs, the traditional security "locks" are no longer sufficient. This session dives deep into the OWASP AI Exchange, a community-driven initiative providing practical guidance and technical controls for securing AI systems.

Last call on 398-day certificates

The bell rings. Last call for 398-day certificates is March 15. After that, every CA is required to cap you at 200 days. Some have already stopped issuing the longer lifetimes; the rest follow in two weeks. The irony of good certificate management is that when it works, nobody notices. No alerts, no outages, no 2am pages. The only time it gets attention is when something expires, which means the teams doing it well rarely have the budget or the political capital to fix the process before it breaks.
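Shorter lifetimes make expiry math worth automating. A minimal sketch: given a certificate's `notAfter` timestamp (in the format Python's `ssl.getpeercert()` returns), compute the days remaining before expiry. The dates used are illustrative, not from the article.

```python
# Sketch: days left before a certificate expires, from its notAfter string.
# The date format below is what Python's ssl.getpeercert() returns.
from datetime import datetime, timezone

def days_remaining(not_after, now=None):
    """Whole days until the certificate expires; negative if already expired."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(
        tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expiry - now).days

# Illustrative example: a cert issued March 15 for the new 200-day maximum
print(days_remaining("Oct  1 00:00:00 2026 GMT",
                     now=datetime(2026, 3, 15, tzinfo=timezone.utc)))  # 200
```

Wiring a check like this into monitoring, with an alert threshold well inside the 200-day window, is what turns renewal from a 2am page into a routine ticket.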

Best Security for K8s Clusters: A Runtime-First Approach

Why does traditional Kubernetes security fall short? Static scanners flag thousands of CVEs but can’t tell you which ones are actually loaded into memory and exploitable—only about 15% are loaded at runtime. Traditional tools also create siloed visibility, with CSPM, vulnerability scanners, and EDR each seeing only one slice of your environment. This makes it impossible to spot lateral movement or connect events across cloud, cluster, container, and application layers.
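The "loaded into memory" distinction can be made concrete. A minimal, Linux-only sketch (an illustration, not ARMO's implementation): enumerate the shared libraries actually mapped into a running process by reading `/proc/<pid>/maps`. A runtime-first scanner cross-references entries like these against CVE data, instead of flagging every package that merely exists on disk.

```python
# Sketch (Linux-only): list shared libraries actually mapped into a
# process's memory via /proc/<pid>/maps. Only these, not every package
# installed in the image, are candidates for runtime exploitation.
from pathlib import Path

def loaded_libraries(pid="self"):
    """Return the sorted list of .so paths currently mapped into the process."""
    libs = set()
    for line in Path(f"/proc/{pid}/maps").read_text().splitlines():
        parts = line.split(maxsplit=5)
        # The sixth field, when present, is the backing file's path.
        if len(parts) == 6 and ".so" in parts[5]:
            libs.add(parts[5])
    return sorted(libs)

if __name__ == "__main__":
    for lib in loaded_libraries():
        print(lib)
```

Comparing this runtime view against a static scanner's findings is what shrinks "thousands of CVEs" down to the minority that is actually reachable.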

ARMO Behavioral AI Workload Security

AI is not just another workload category. It is the first category of workloads that decides what to do at runtime. And that changes everything about how security must work in the cloud. For years, cloud security evolved around deterministic systems. You deploy code. That code follows defined logic paths. If something unexpected happens, such as a new process, an unusual outbound connection, or privilege escalation, you investigate and respond.
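The deterministic model described above amounts to checking events against a fixed baseline. A minimal sketch, with entirely illustrative process names and destinations: anything outside the allowlist is an alert, which works only as long as the workload's behavior is predictable in advance.

```python
# Sketch of a deterministic runtime baseline: fixed allowlists of expected
# behavior. The process names and destinations here are illustrative.
EXPECTED_PROCESSES = {"nginx", "gunicorn", "postgres"}
EXPECTED_DESTINATIONS = {"db.internal:5432", "cache.internal:6379"}

def check_event(event):
    """Return None if the event matches the baseline, else an alert string."""
    if event["type"] == "process_start" and event["name"] not in EXPECTED_PROCESSES:
        return f"unexpected process: {event['name']}"
    if event["type"] == "connection" and event["dest"] not in EXPECTED_DESTINATIONS:
        return f"unusual outbound connection: {event['dest']}"
    return None

print(check_event({"type": "process_start", "name": "xmrig"}))
# prints "unexpected process: xmrig"
```

An AI agent that legitimately spawns new tools and calls new endpoints at runtime breaks this model: its baseline is not fixed, which is the shift the article is pointing at.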

AI Risk Management: Process, Frameworks, and 5 Mitigation Methods

AI risk management is the process of identifying, assessing, and mitigating risks associated with artificial intelligence systems to ensure they are developed and used responsibly. It involves using frameworks like the NIST AI Risk Management Framework to address technical, ethical, and social challenges, including data bias, privacy violations, and security vulnerabilities.