The Genesis Mission: A New Era of AI-Accelerated Science and a New Security Imperative

Innovation has always been the engine of American advancement. With the launch of the Genesis Mission, the White House is signaling a new era of AI-accelerated scientific discovery. This executive order directs the Department of Energy to build an integrated, national-scale AI platform designed to unlock scientific breakthroughs across biotechnology, energy, materials, quantum systems, and beyond.

Considerations for Microsoft Copilot Studio vs. Foundry in Financial Services

Financial services organizations are increasingly turning to AI agents to drive productivity, automate workflows, and gain a competitive edge. Within the Microsoft ecosystem, two agentic platforms, Copilot Studio and Foundry, are paving new paths for agent development and deployment. Despite their shared vision for enterprise AI, their differences have important implications for user groups, agent capabilities, and security priorities.

Inside the Agent Stack: Securing Azure AI Foundry-Built Agents

This blog kicks off our new series, Inside the Agent Stack, where we take you behind the scenes of today’s most widely adopted AI agent platforms and show you what it really takes to secure them. Each installment will dissect a specific platform, expose realistic attack paths, and share proven strategies that help organizations keep their AI agents safe, reliable, and compliant.

Scaling Microsoft AI Agents Securely: Zenity Brings Inline Prevention to Microsoft Foundry and Copilot Studio

Microsoft Foundry and Microsoft Copilot Studio have made it simple to build AI agents that automate workflows, access sensitive data, and integrate across critical business systems. However, agent democratization without control creates new security challenges. As more agents are deployed across the organization, those agents can access more data, invoke more tools (including MCP and A2A), and perform more actions. In other words, the potential attack surface is expanding.

Claude Moves to the Darkside: What a Rogue Coding Agent Could Do Inside Your Org

On November 13, 2025, Anthropic disclosed the first known case of an AI agent orchestrating a broad-scale cyberattack with minimal human input. The Chinese state-sponsored threat actor GTG-1002 weaponized Claude Code to carry out over 80% of a sophisticated cyber espionage campaign autonomously. This included reconnaissance, exploitation, credential harvesting, and data exfiltration across more than 30 major organizations worldwide. The impact was real. And the AI was in control.

Closing the Guardrail Gap: Runtime Protection for OpenAI AgentKit

OpenAI’s AgentKit has democratized AI agent development in a big way. Tools like Agent Builder, ChatKit, and the Connector Registry make it possible for teams to spin up autonomous agents without writing custom code. That kind of accessibility changes everything, including the AI agent security threat model. The easier it becomes to build agents, the harder it gets to secure them.