Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Protecting APIs from abuse using sequence learning and variable order Markov chains

Consider the case of a malicious actor attempting to inject, scrape, harvest, or exfiltrate data via an API. Such malicious activities are often characterized by the particular order in which the actor initiates requests to API endpoints. Moreover, the malicious activity is often not readily detectable using volumetric techniques alone, because the actor may intentionally execute API requests slowly, in an attempt to thwart volumetric abuse protection.
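The idea can be illustrated with a minimal variable-order Markov sketch (all names here are hypothetical, not taken from the article): train on sequences of endpoint calls from benign sessions, then score new sessions by their per-request log-probability, backing off to shorter contexts when a long context has never been observed. A session that scores far below the benign baseline is a candidate for abuse, regardless of how slowly its requests arrive.

```python
from collections import defaultdict
import math

class VariableOrderMarkov:
    """Scores API endpoint sequences; a low score suggests abuse."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        # counts[context][next_endpoint] -> occurrences
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        # Record every context of length 0..max_order ending before each request.
        for i, endpoint in enumerate(sequence):
            for order in range(self.max_order + 1):
                if i - order < 0:
                    break
                context = tuple(sequence[i - order:i])
                self.counts[context][endpoint] += 1

    def _prob(self, context, endpoint):
        # Back off to shorter contexts until one has been seen.
        for start in range(len(context) + 1):
            ctx = context[start:]
            total = sum(self.counts[ctx].values())
            if total:
                # Laplace smoothing so unseen endpoints keep nonzero mass.
                vocab = len(self.counts[()]) or 1
                return (self.counts[ctx][endpoint] + 1) / (total + vocab)
        return 1e-6  # completely unseen behavior

    def score(self, sequence):
        """Mean log-probability per request; lower = more anomalous."""
        logp = 0.0
        for i, endpoint in enumerate(sequence):
            context = tuple(sequence[max(0, i - self.max_order):i])
            logp += math.log(self._prob(context, endpoint))
        return logp / max(len(sequence), 1)

model = VariableOrderMarkov(max_order=2)
for _ in range(100):
    model.train(["login", "profile", "orders", "logout"])

benign = model.score(["login", "profile", "orders", "logout"])
scrape = model.score(["orders", "orders", "orders", "orders"])
# The repetitive scraping pattern scores well below the benign session.
```

This ordering-based signal is exactly what volumetric counters miss: the scraping session above could be spread over hours and still score as anomalous.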

WordPress Plugin and Theme Developers Told They Must Use 2FA

Developers of plugins and themes for WordPress.org have been told they are required to enable two-factor authentication (2FA) from October 1st. The move is intended to enhance security, helping prevent hackers from gaining access to accounts through which malicious code could be injected into code used by millions of websites running the self-hosted version of WordPress.

1Password deepens partnership with CrowdStrike to streamline and simplify business cybersecurity

Together, CrowdStrike and 1Password are working to ensure every identity, application, and device is protected from threats. Now, you can get the combined power of 1Password and CrowdStrike for less when you bundle 1Password Extended Access Management and CrowdStrike Falcon Go.

CEL and Kubescape: transforming Kubernetes admission control

Admission control is a crucial part of Kubernetes security, enabling the approval, modification, or rejection of API objects as they are submitted to the API server. It allows administrators to enforce business logic or policy on which objects may be admitted into a cluster. Kubernetes RBAC is a scalable authorization mechanism, but it lacks fine-grained control over the contents of Kubernetes objects. This creates the need for another layer of control: admission policies.
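As a rough illustration of the CEL-based approach (this example is a generic sketch, not taken from the Kubescape article), a ValidatingAdmissionPolicy can express a rule RBAC cannot, such as requiring every Deployment to carry a "team" label:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-team-label
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "'team' in object.metadata.labels"
      message: "Deployments must carry a 'team' label."
```

A ValidatingAdmissionPolicyBinding is still needed to put the policy into effect for a set of namespaces; the CEL expression itself is evaluated in-process by the API server, with no external webhook to operate.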

Why AI Guardrails Need Session-Level Monitoring: Stopping Threats That Slip Through the Cracks

AI guardrails are vital for ensuring the safe and responsible use of AI/large language models (LLMs). However, focusing solely on single prompt-level checks can leave organizations vulnerable to sophisticated threats. Many company policy violations and security risks can be cleverly split across multiple, seemingly innocent queries. To effectively protect against these threats, a more comprehensive approach is needed — session-level monitoring.
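To make the gap concrete, here is a minimal sketch (hypothetical names and thresholds, not the article's implementation) of a session-level accumulator: a per-prompt classifier tags each query with topics, each prompt individually stays under the policy threshold, but the session is escalated once too many distinct sensitive topics appear within a sliding window.

```python
from collections import deque

# Hypothetical topic tags that a per-prompt classifier might emit.
SENSITIVE_TOPICS = {"credentials", "network_layout", "employee_data"}

class SessionMonitor:
    """Flags sessions whose prompts are individually benign but
    collectively probe too many sensitive topics."""

    def __init__(self, window=10, max_topics=2):
        self.max_topics = max_topics
        # Keep only the last `window` prompts' sensitive-topic sets.
        self.recent = deque(maxlen=window)

    def check(self, prompt_topics):
        """prompt_topics: topics tagged on the latest prompt.
        Returns True if the session should be escalated."""
        self.recent.append(set(prompt_topics) & SENSITIVE_TOPICS)
        seen = set().union(*self.recent)
        return len(seen) > self.max_topics

monitor = SessionMonitor(window=5, max_topics=2)
monitor.check(["credentials"])      # passes a single-prompt check
monitor.check(["network_layout"])   # still under the session threshold
monitor.check(["employee_data"])    # third distinct topic -> escalate
```

Each individual prompt here would clear a prompt-level guardrail; only the session-level view reveals that the queries combine into a reconnaissance pattern.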

Gen AI Guardrails: 5 Risks to Your Business and How to Avoid Them

As businesses increasingly adopt Generative AI (Gen AI) to enhance operations, customer engagement, and innovation, the need for robust AI guardrails has never been more critical. While Gen AI offers transformative potential, it also introduces significant risks that can jeopardize your business if not properly managed. Below, we explore five critical risks associated with Gen AI and provide strategies to avoid them.

Continuing to Evolve Next-Gen Asset Attribution Through Service Provider Collaboration

One of the primary reasons that the Bitsight Security Rating is widely respected and closely correlated with real-world security outcomes is the scale and sophistication of our asset attribution capabilities. In a recent post, my colleague Francisco Ferreira shared an update on the momentum building with Bitsight Graph of Internet Assets (GIA), the AI-powered engine we use to map assets to organizations and build our Ratings Trees.