
Open-sourcing OpenPubkey SSH (OPKSSH): integrating single sign-on with SSH

OPKSSH makes it easy to SSH with single sign-on technologies like OpenID Connect, removing the need to manually manage and configure SSH keys. It does this without adding any trusted party other than your identity provider (IdP). We are excited to announce that OPKSSH (OpenPubkey SSH) has been open-sourced under the umbrella of the OpenPubkey project.

CVE-2025-1974: Critical Unauthenticated RCE Vulnerability in Ingress NGINX for Kubernetes

On March 24, 2025, ingress-nginx maintainers released fixes for multiple vulnerabilities that could allow threat actors to take over Kubernetes clusters. Ingress is a Kubernetes feature that defines how workload Pods are exposed to the network, while an Ingress Controller implements those rules by configuring the necessary local or cloud resources. According to Kubernetes, ingress-nginx is deployed in over 40% of Kubernetes clusters.
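To make the Ingress/Ingress Controller distinction concrete, here is a minimal, illustrative Ingress manifest; the host and service names are placeholders, not taken from the article:

```yaml
# Illustrative Ingress resource: the *rules* live here, while the
# ingress-nginx controller is what actually configures NGINX to enforce them.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx        # which controller should implement these rules
  rules:
  - host: app.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend   # placeholder Service exposing the workload Pods
            port:
              number: 80
```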

Streamline your security workflows with Google SecOps and Datadog Observability Pipelines

As security threats grow in complexity and scale, modern SIEM solutions are becoming a key choice for CISOs consolidating security monitoring and incident response. Organizations relying on Google or Google Cloud infrastructure are increasingly adopting Google Security Operations (SecOps) to unify their security stack and workflows.

How to strengthen compliance across the software development life cycle by shifting left

Maintaining compliance and minimizing security risk have become more complex than ever. Regulatory frameworks such as GDPR, HIPAA, and SOC 2 require organizations to implement strict measures to protect customer data, secure their networks and systems, and respond to audits.

GitHub Supply Chain Attack: CVE-2025-30066 and CVE-2025-30154 Expose Secrets Across 218 Repositories

A major supply chain attack on the GitHub Action tj-actions/changed-files, tracked as CVE-2025-30066, has exposed sensitive CI/CD secrets across 218 repositories. The incident has raised significant security concerns and is connected to an earlier attack on another GitHub Action, reviewdog/action-setup@v1, tracked as CVE-2025-30154. While only 4% of the 5,416 affected repositories had secrets leaked, the damage is severe.
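A widely recommended mitigation for this class of attack is to pin third-party actions to a full commit SHA rather than a mutable tag, so that a retagged release cannot silently swap in malicious code. A hedged sketch of what that looks like in a workflow file (the SHA below is a placeholder, not a real release):

```yaml
# .github/workflows/ci.yml (fragment)
steps:
  # Risky: a tag like v45 can be moved to point at attacker-controlled code.
  # - uses: tj-actions/changed-files@v45

  # Safer: pin to a full commit SHA you have audited.
  # (Placeholder SHA shown; look up the real one for the release you trust.)
  - uses: tj-actions/changed-files@0000000000000000000000000000000000000000
```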

What Is Shoulder Surfing? Tips to Protect Your Personal Information

Not all threats to your accounts and privacy happen online; some happen right next to you. The person sitting beside you on the metro, in a coffee shop, or at the airport may not be an innocent stranger. They could be waiting for the right moment to look over your shoulder and steal your passwords or personal information. Shoulder surfing attacks happen when someone watches you enter sensitive information, such as a PIN or password, into your device or account.

Enterprise Fraud Management (EFM): The Essential Guide

Fraud has moved from an IT issue to a boardroom topic across industries. The more complex the fraud, the bigger the financial, brand, and customer risk. E-commerce fraud, for example, is expected to grow from $44.3 billion in 2024 (the most recently reported figure) to $107 billion in 2029, a 141% increase. And that's just one industry. When the stakes are this high, you can't blindly chase threats.

Leveraging map-reduce and LLMs for enhanced cybersecurity network detection

In my security research role at Corelight, I often have to go through large, complex data sets to detect subtle anomalies and threats. It reminds me of a quote often attributed to Abraham Lincoln: "Give me six hours to chop down a tree and I will spend the first four sharpening the axe." For me, that means investing time up front to build tools that allow a large language model (LLM) to do the heavy lifting on key tasks, namely those that teams of analysts would have handled in the past.
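The map-reduce pattern described above can be sketched roughly as follows: split a large log set into chunks, "map" an LLM summarization step over each chunk, then "reduce" the partial summaries into a single report. The `summarize` function below is a runnable stand-in for a real LLM call, and the whole sketch is an assumption about the general technique, not Corelight's actual tooling:

```python
def summarize(text: str) -> str:
    """Stand-in for an LLM call that condenses a chunk of log lines.

    Here it simply keeps lines mentioning 'FAIL' so the sketch is runnable;
    a real implementation would call a language-model API instead.
    """
    return "\n".join(line for line in text.splitlines() if "FAIL" in line)

def chunk(lines: list[str], size: int) -> list[str]:
    """Split the log into fixed-size chunks (the 'input splits')."""
    return ["\n".join(lines[i:i + size]) for i in range(0, len(lines), size)]

def map_reduce_logs(lines: list[str], size: int = 2) -> str:
    # Map: summarize each chunk independently (parallelizable in practice).
    partials = [summarize(c) for c in chunk(lines, size)]
    # Reduce: merge the partial summaries, then summarize the merged result.
    merged = "\n".join(p for p in partials if p)
    return summarize(merged)

logs = [
    "10:00 conn OK host=a",
    "10:01 auth FAIL host=b",
    "10:02 conn OK host=c",
    "10:03 auth FAIL host=b",
]
print(map_reduce_logs(logs))
```

Because each map step is independent, the expensive per-chunk LLM calls can run in parallel, which is what makes the pattern attractive for large data sets.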

What is a Data Poisoning Attack?

Data poisoning is a sophisticated adversarial attack designed to manipulate the information used to train artificial intelligence (AI) models. By injecting deceptive or corrupt data, attackers can degrade model performance, introduce biases, or even create security vulnerabilities. As AI models increasingly power critical applications in cybersecurity, healthcare, finance, and other industries, maintaining the integrity of their training data is essential.