Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Latest News

What Are Deepfakes?

A deepfake is a form of media, such as a photo or video, generated by Artificial Intelligence (AI) to depict real or non-existent people performing actions they never did. AI manipulates a picture, video or voice recording to analyze a person’s characteristics and then blends those characteristics with existing footage using unique algorithms.

Examples of Personally Identifiable Information (PII)

Some examples of Personally Identifiable Information (PII) include your phone number, email address, license plate number, birth date, Social Security number (SSN) and medical records. Many aspects of your identity can be considered PII, so it’s important to understand what they are and how to protect them. Continue reading to learn how you can protect your PII from falling into the wrong hands and how Keeper can help.

Active Directory Hardening: Best Practices and Checklist

As cyber threats continue to be more sophisticated, the need for active directory security becomes paramount. Most Windows-based environments are heavily reliant on the AD configuration hence it’s a common target for intruders. This article outlines essential practices for AD hardening to protect your organization’s assets.

Empowering Developers in AppSec: Scaling and Metrics

This is the second instalment of a two-part blog post. The blogs are based on one of our “AppSec Talk” YouTube videos, featuring Kondukto Security Advisor Ben Strozykowski and Rami McCarthy, a seasoned security engineer with experience at Figma and Cedar Cares. In that video, Ben and Rami delved into the critical role developers play in the security program and the application security lifecycle.

Fortifying Networks Against Inbound Threats and Outbound Data Loss Should be an Organizational Priority

Interactive, hands-on keyboard attack campaigns are employed by today’s most proficient threat actors to penetrate organizational defenses. The network perimeter is typically the initial line of defense against unauthorized access to an organization’s network and the sensitive data it contains. After infiltration, attackers establish command-and-control (C&C) and data exfiltration channels to receive malicious payloads and export stolen data.

Secure your Elastic Cloud account with multifactor authentication (MFA)

In an era where cyber threats are constantly evolving, protecting your identity and data from unauthorized access is more critical than ever. That's why we're excited to bring you the enhanced multifactor authentication (MFA) for Elastic Cloud. This feature significantly strengthens the security of your Elastic Cloud user and deployment data by aligning with industry best practices. You can go to Elastic Cloud and complete your MFA setup today.

The Difference Between Pentesting, DAST and ASM

Penetration testing, dynamic application security testing (DAST), and attack surface management (ASM) are all strategies designed to manage an organization’s digital attack surface. However, while each aids in identifying and closing vulnerabilities, they have significant differences and play complementary roles within a corporate cybersecurity strategy. Let’s take a quick look at the definition of each of these strategies.

Prioritize Security Without Sacrificing Productivity: Balancing Identity Management and Risk Tolerance

In the fast-paced, large-scale world of digital business, establishing and managing an acceptable risk tolerance related to user identities — both human and machine — is a critical element of organizational security. At the forefront of this challenge is the need to strike the right balance between ensuring robust security and maintaining an environment that doesn’t impede innovation. After all, identities are the new perimeter in the cloud.

Harden your LLM security with OWASP

Foundationally, the OWASP Top 10 for Large Language Model (LLMs) applications was designed to educate software developers, security architects, and other hands-on practitioners about how to harden LLM security and implement more secure AI workloads. The framework specifies the potential security risks associated with deploying and managing LLM applications by explicitly naming the most critical vulnerabilities seen in LLMs thus far and how to mitigate them.