
Building continuous compliance with Aikido and Comp AI

Compliance evidence only works if it reflects the current state of the system. At Aikido, we’ve always treated compliance as a byproduct of good security, not a separate exercise teams need to prepare for. That’s why Aikido integrates with multiple compliance platforms. The goal is simple: let teams use the security data generated in Aikido wherever they run their compliance programs, without changing how they work or maintaining parallel processes.

Attackers Can Use LLMs to Generate Phishing Pages in Real Time

Researchers at Palo Alto Networks’ Unit 42 warn of a proof-of-concept (PoC) attack technique in which threat actors could use AI tools to generate malicious JavaScript in real time on seemingly innocuous webpages. “Once loaded in the victim's browser, the initial webpage makes requests for client-side JavaScript to popular and trusted LLM clients (e.g., DeepSeek and Google Gemini, though the PoC could be effective across a number of models),” the researchers write.
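One way defenders can reason about this pattern is to flag client-side requests from ordinary webpages to known LLM API hosts. The sketch below is illustrative only: the host list is a hypothetical sample, not Unit 42's detection logic or an exhaustive inventory of LLM endpoints.

```python
# Illustrative sketch: flag outbound browser requests to LLM API hosts
# from pages that have no declared reason to call them. The host list
# is a hypothetical example, not an exhaustive inventory.
from urllib.parse import urlparse

LLM_API_HOSTS = {
    "api.deepseek.com",
    "generativelanguage.googleapis.com",
}

def is_suspicious_request(page_origin: str, request_url: str,
                          allowed_hosts: frozenset = frozenset()) -> bool:
    """Return True if a page issues a client-side call to a known LLM
    API host that is not on the page's own allowlist."""
    host = urlparse(request_url).hostname or ""
    return host in LLM_API_HOSTS and host not in allowed_hosts
```

In practice the same idea can be enforced declaratively, for example by restricting a page's `connect-src` in its Content-Security-Policy so scripts cannot reach LLM endpoints at all.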

Managing Software Supply Chain Security for the AI Era

Artificial intelligence has fundamentally changed how we build software. Generative AI tools help developers write code faster, automate mundane tasks, and solve complex logic problems in seconds. But this speed comes with a hidden cost. When you accelerate development without adjusting your security posture, you inadvertently accelerate risk. Relying on AI-generated code and open-source packages in cloud environments can expose your organization to serious, often silent, vulnerabilities.

Viberails: Guardrails for AI Operations

The recent attention on OpenClaw brought something we've known for a while at LimaCharlie into sharp focus: Unrestricted AI operations are extremely powerful and incredibly risky. The security challenges presented by AI adoption can rival the productivity gains it delivers. Unrestricted AI agents can read credentials, execute commands, send emails, and make API calls without meaningful oversight.
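The guardrail idea can be sketched as a deny-by-default policy check applied to every action an agent proposes before it runs. This is a minimal illustration of the general pattern, not LimaCharlie's or Viberails' actual implementation; all names here are assumptions.

```python
# Minimal sketch of a guardrail layer for agent actions: deny by
# default, allow only explicitly permitted tools, and block known
# sensitive targets. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    tool: str      # e.g. "shell", "email", "http"
    target: str    # command, recipient, or URL

class Guardrail:
    def __init__(self, allowed_tools: set, blocked_targets: set):
        self.allowed_tools = allowed_tools
        self.blocked_targets = blocked_targets

    def permit(self, action: Action) -> bool:
        """The tool must be allowlisted and the target must not be
        explicitly blocked; everything else is denied."""
        if action.tool not in self.allowed_tools:
            return False
        return action.target not in self.blocked_targets

rail = Guardrail(allowed_tools={"http"}, blocked_targets={"/etc/shadow"})
```

A real guardrail layer would also log every decision and escalate denials to a human reviewer rather than silently dropping them.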

Cloudflare AI Security Suite: Protect AI-powered apps with Firewall for AI

Organizations continue to adopt AI at a rapid pace, but without protections in place, its power becomes a liability. In this session, you'll learn about the risks enterprises face around AI and how Cloudflare provides a layered security approach incorporating AI Security. We'll walk through how you can secure your AI-powered applications with Cloudflare.

6 Top AI Pentesting Platforms in 2026

AI penetration testing has moved beyond experimentation and into operational reality. What started as automation layered on top of traditional scanners has evolved into platforms capable of simulating attacker behavior, validating exploit paths, and continuously reassessing exposure as environments change.

Agentic AI Security and Regulatory Readiness: A Security-First Framework

AI is becoming more autonomous: rather than waiting to be told what to do, agentic systems now initiate actions, make their own decisions, and complete entire tasks on their own. These independent systems can modify data, invoke tools, and interact with people across many environments, often acting faster than humans can monitor. Securing them demands a new approach, one built around governing what these agents do, maintaining continuous visibility into their behavior, and establishing clear accountability for their actions.
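The visibility and accountability requirement is often met with a tamper-evident audit trail of agent actions. The sketch below shows one common pattern, hash-chaining log entries so later tampering is detectable; it is a generic illustration under assumed names, not any particular framework's design.

```python
# Sketch of a tamper-evident audit trail for agent actions: each entry
# records who did what and is hash-chained to the previous entry, so
# editing any past entry breaks verification.
import hashlib
import json

class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, agent_id: str, action: str, detail: str) -> str:
        entry = {"agent": agent_id, "action": action,
                 "detail": detail, "prev": self._prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any altered entry fails the check."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Production systems would add timestamps and write entries to append-only storage; the chaining logic is the same.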

MomentProof Deploys Patented Digital Asset Protection

MomentProof, Inc., a provider of AI-resilient digital asset certification and verification technology, today announced the successful deployment of MomentProof Enterprise for AXA, enabling cryptographically authentic, tamper-proof digital assets for insurance claims processing. MomentProof's patented technology certifies images, video, voice recordings, and associated metadata at the moment of capture, ensuring claims evidence is protected against AI-based manipulation, deepfakes, and other malicious digital alterations.
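Capture-time certification generally follows a hash-and-sign pattern: fingerprint the asset and its metadata at capture, then sign the record so later alterations are detectable. MomentProof's patented scheme is not public, so the sketch below is a generic illustration; an HMAC with a device-provisioned key stands in for the asymmetric signature a real system would use.

```python
# Hedged sketch of capture-time asset certification: hash the asset
# and metadata, then sign the record. HMAC is a stand-in for a real
# device-held asymmetric signing key; all names are illustrative.
import hashlib
import hmac
import json

DEVICE_KEY = b"device-provisioned-secret"  # illustrative only

def certify(asset_bytes: bytes, metadata: dict) -> dict:
    record = {
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(
        DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(asset_bytes: bytes, record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(asset_bytes).hexdigest() != body["sha256"]:
        return False  # asset altered after capture
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Verification fails if either the asset bytes or the certified metadata change after capture, which is the property that makes such records usable as claims evidence.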