How AI Makes APIs Even More Dangerous

AI and agent-based technologies are accelerating the use—and the risk—of APIs. Experts from Wallarm and Oracle explain how every new AI app or agent can instantly connect to dozens of APIs, multiplying your attack surface. Learn why the combination of AI and APIs is creating new security challenges you can't ignore.

CrowdStrike Announces Integration with ChatGPT Enterprise Compliance API

CrowdStrike is announcing a native integration between CrowdStrike Falcon Shield SaaS security and the OpenAI ChatGPT Enterprise Compliance API, adding visibility and security posture capabilities for mutual customers’ ChatGPT Enterprise environments. This integration helps security teams inventory and monitor AI agents across their organization — including who created them, what they access, and how they’re shared — so teams can consistently apply existing security controls.

How CrowdStrike Secures AI Agents Across SaaS Environments

AI agents are being rapidly embedded into the SaaS ecosystem to streamline operations, trigger complex workflows, and interact with sensitive data and systems. From automating calendar updates to executing code and accessing cloud data stores, they are becoming integral to business processes. But with this integration comes risk. AI agents are often quickly deployed across SaaS environments by employees, without centralized tools to govern them.
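One way to centralize that governance is a registry that every agent must be enrolled in, recording who owns it and which actions it is approved for. The sketch below is illustrative only (the registry, agent names, and action strings are hypothetical, not any vendor's product); it shows the basic deny-by-default check such a tool would enforce.

```python
# Illustrative central registry: an agent must be registered with an owner
# and an approved action scope before any SaaS action is allowed.
REGISTRY = {}

def register_agent(name: str, owner: str, allowed_actions: set) -> None:
    """Enroll an agent, recording who created it and what it may do."""
    REGISTRY[name] = {"owner": owner, "allowed": set(allowed_actions)}

def authorize(agent: str, action: str) -> bool:
    """Deny by default: unregistered agents and out-of-scope actions fail."""
    entry = REGISTRY.get(agent)
    return entry is not None and action in entry["allowed"]

register_agent("calendar-bot", owner="bob", allowed_actions={"calendar.update"})
assert authorize("calendar-bot", "calendar.update")
assert not authorize("calendar-bot", "code.execute")      # out of scope
assert not authorize("shadow-agent", "calendar.update")   # never registered
```

The deny-by-default posture is the point: an agent an employee spins up without registering it simply cannot act.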

Apono's AI-Powered Access Assistant: Faster, Easier Access Requests

Introducing the Apono Access Assistant, an AI companion that speeds up access requests without sacrificing security. It handles three scenarios: mapping tasks to the right permissions, showing you which resources you can reach, and diagnosing permission errors. In this demo you’ll see it resolve an S3 access issue in seconds by creating a temporary read‑only role and revoking it when you’re done.

Announcing Secure Data Exchange for Agentic AI

PwC recently ran an AI agent survey, and its findings paint an optimistic picture of adoption. This all sounds great, right? For many reasons it is, but agentic AI leaves organizations with a visibility challenge: how their AI agents are communicating with each other and with external third-party vendors. Imagine a multitude of AI agents autonomously exchanging data across a complex mesh of third-party vendors and applications.
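One building block for making that mesh auditable is having agents sign the payloads they exchange, so each receiver can verify origin and integrity. This is a generic sketch using HMAC from the Python standard library, not the announced product; the shared key and agent names are placeholders, and real deployments would use per-pair keys from a secrets manager.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # placeholder; use per-agent-pair keys in practice

def sign_message(payload: dict) -> dict:
    """Wrap a payload in an envelope carrying an HMAC-SHA256 tag."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_message(envelope: dict) -> dict:
    """Recompute the tag and reject any envelope that fails the check."""
    body = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["sig"]):
        raise ValueError("message failed integrity check")
    return envelope["payload"]

msg = sign_message({"agent": "scheduler", "action": "sync_calendar"})
assert verify_message(msg)["action"] == "sync_calendar"
```

`hmac.compare_digest` is used deliberately: it compares tags in constant time, avoiding a timing side channel on verification.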

Beyond LLMs: The Strategic Need for MCP Security

Large language models (LLMs) are transforming enterprise operations, but their growing use introduces a critical security challenge: securing how they access sensitive data and integrate with existing tools. This is where Model Context Protocol (MCP) servers become a vital, yet often overlooked, part of AI security. These servers act as the crucial link that lets LLMs connect to diverse data sources and tools, and in doing so they significantly expand the attack surface, demanding immediate attention.
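The attack-surface concern becomes concrete when you look at what a tool server actually dispatches. The sketch below is not the real MCP protocol or SDK; the tool names, client identities, and scope table are assumptions. It illustrates the minimum control an MCP-style server should enforce: per-client authorization on every tool call, rather than exposing every tool to every connected model.

```python
# Minimal sketch of an MCP-style tool server with per-client authorization.
TOOLS = {
    "search_docs": lambda query: f"results for {query!r}",
    "read_payroll": lambda employee: f"salary record for {employee}",
}

# Each client identity maps to the tools it may call; everything else is denied.
CLIENT_SCOPES = {
    "support-bot": {"search_docs"},
}

def call_tool(client: str, tool: str, *args):
    """Dispatch a tool call only if the client is scoped to that tool."""
    if tool not in CLIENT_SCOPES.get(client, set()):
        raise PermissionError(f"{client} is not authorized for {tool}")
    return TOOLS[tool](*args)

print(call_tool("support-bot", "search_docs", "vacation policy"))
# call_tool("support-bot", "read_payroll", "alice") raises PermissionError
```

Without the scope check, any prompt-injected model connected to this server could reach `read_payroll`; with it, a compromised client is limited to the tools it was explicitly granted.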