Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

CrowdStrike Achieves FedRAMP High Authorization

The evolving landscape of state-sponsored threats demands the highest levels of security for federal systems and critical infrastructure. As part of our longstanding commitment to protecting federal agencies and critical infrastructure, the AI-native CrowdStrike Falcon platform has achieved Federal Risk and Authorization Management Program (FedRAMP) High Authorization — the U.S. government’s most stringent cloud security standard.

Announcing a new joint product offering from Tines and Elastic

Today, we’re excited to share that Tines Workflow Automation is now available directly through Elastic. Countless mutual customers already benefit from combining Tines' orchestration and automation capabilities with Elastic Security and Observability, allowing them to strengthen defenses, ensure operational resilience, and maximize the return on their existing investments.

A litmus test for AI agents

What is an “AI agent”? Confusion abounds. There is also some consensus: agents must of course be AI-driven systems. They should have some degree of autonomy, and they should be able to use tools in addition to understanding and reasoning. But why isn't, say, ChatGPT an agent? According to most definitions out there, it actually is. Yet most (including OpenAI themselves) don’t describe it that way.

Data Leaks and AI Agents: Why Your APIs Could Be Exposing Sensitive Information

Most organizations are using AI in some way today, whether they know it or not. Some are merely beginning to experiment with it, using tools like chatbots. Others, however, have integrated agentic AI directly into their business processes and APIs. While both types of organizations are undoubtedly realizing remarkable productivity and efficiency benefits, they may not realize they are exposing themselves to significant security risk.
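As a minimal illustration of the risk described above, consider an internal API whose raw records include fields an AI agent should never see. A simple allowlist filter keeps those fields out of the agent's context before the response is handed over. This is a hypothetical sketch with made-up field names, not any vendor's implementation:

```python
# Hypothetical sketch: redact an API response before an AI agent sees it.
# The record shape, field names, and allowlist are illustrative assumptions.

ALLOWED_FIELDS = {"ticket_id", "status", "summary"}

def redact_for_agent(record: dict) -> dict:
    """Keep only explicitly allowlisted fields; everything else
    (customer emails, internal notes, etc.) never reaches the agent."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "ticket_id": 42,
    "status": "open",
    "summary": "Login page error",
    "customer_email": "alice@example.com",              # sensitive
    "internal_notes": "VIP account, escalate quietly",  # sensitive
}

safe = redact_for_agent(raw)
```

An allowlist (name what may pass) fails safer than a blocklist (name what may not): a newly added sensitive field is dropped by default instead of leaking by default.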

Shadow IT: What Are the Risks and How Can You Mitigate Them?

Using unapproved tools, software, and devices poses a significant risk to your organization. You never know what vulnerabilities so-called “shadow IT” may introduce, leaving your sensitive data and systems exposed to potential threats. In this article, we define the term shadow IT and explore several reasons why employees use unapproved software.

Why the Future of DLP Is Invisible, Invincible, and Inexpensive

Legacy DLP solutions, as well as CASB and app-native DLP solutions, face significant challenges in providing comprehensive coverage across modern SaaS, AI apps, and endpoints. Lack of visibility, clumsy deployments, and expensive implementations are common drawbacks of using these tools — and they leave big gaps in data loss prevention. The same problems have persisted in DLP solutions for decades.

The ROI of Threat Intelligence: Measuring the Value Beyond Detection

Cybersecurity investment is a critical balancing act between cost and protection. Threat intelligence is often seen as a crucial part of this equation, providing insights that help businesses anticipate and prevent cyberattacks. Yet when it comes to evaluating the return on investment (ROI) of threat intelligence, the focus often remains narrowly on its role in threat detection. This limited perspective misses the broader strategic value that high-quality intelligence brings.

Cloudflare for AI: supporting AI adoption at scale with a security-first approach

AI is transforming businesses — from automated agents performing background workflows, to improved search, to easier access to and summarization of knowledge. While we are still early in what is likely to be a substantial shift in how the world operates, two things are clear: the Internet, and how we interact with it, will change, and the boundaries of security and data privacy have never been harder to trace. That makes security a central concern in this shift.

Take control of public AI application security with Cloudflare's Firewall for AI

Imagine building an LLM-powered assistant trained on your developer documentation and some internal guides to quickly help customers, reduce support workload, and improve user experience. Sounds great, right? But what if sensitive data, such as employee details or internal discussions, is included in the data used to train the LLM?
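One way to reduce the training-data exposure described above is to scrub obvious sensitive patterns from documents before they are fed into an LLM pipeline. The sketch below redacts email addresses with a deliberately simplistic regular expression; it is an illustrative assumption of one scrubbing step, not production-grade PII detection or Cloudflare's product:

```python
import re

# Hypothetical sketch: scrub one obvious class of PII (email addresses)
# from a document before it is used to train or index an LLM assistant.
# Real pipelines combine many detectors; this pattern is illustrative only.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def scrub(text: str) -> str:
    """Replace anything that looks like an email address with a marker."""
    return EMAIL_RE.sub("[REDACTED]", text)

doc = "Ask bob.smith@corp.example for the staging credentials."
clean = scrub(doc)
```

Even a crude filter like this makes the failure mode visible: if `[REDACTED]` markers show up in assistant answers, sensitive material was reaching the training set.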