
API Security: Authorization, Rate Limiting, and Twelve Ways to Protect APIs

41% of organizations have suffered an API security incident, and a majority of those incidents (63%) were data breaches. This is despite 90% of them having authentication policies in place, according to a survey by 451 Research. No surprise there, as authentication is just one piece of the API security puzzle. In this blog, we’ll cover the 12 methods that technology leaders need to incorporate to secure and protect APIs.
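To make one of those methods concrete: rate limiting is often implemented with a token bucket, which permits short bursts while capping sustained request rates. The sketch below is a minimal, illustrative Python version (the class name and parameters are our own, not from any particular API gateway):

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter: allows a burst of up to
    `capacity` requests, then refills at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 12 back-to-back calls against a bucket of capacity 10:
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(12)]
```

In a real deployment the bucket state would live in a shared store (such as Redis) keyed per client or per API key, but the refill logic is the same.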

Achieving Zero Trust Maturity with Cato SSE 360

Trust is a serious issue facing enterprise architectures today. Legacy architectures are designed on implicit trust, which makes them vulnerable to modern-day attacks. A Zero Trust approach to security can remedy this risk, but transitioning isn’t always easy or inexpensive. CISA, the US government’s Cybersecurity and Infrastructure Security Agency, suggests a five-pillar model to help guide organizations to zero trust maturity.

Don't Choose a Cloud Storage Service Without Asking These 10 Critical Cybersecurity Questions

As the demand for cloud storage continues to rise, individuals and businesses alike are faced with the critical decision of choosing a reliable and secure cloud storage provider. While the convenience and accessibility offered by cloud storage are undeniable, it is essential to prioritize cybersecurity and data protection when entrusting sensitive information to a third-party provider.

Six Key Security Risks of Generative AI

Generative Artificial Intelligence (AI) has revolutionized various fields, from creative arts to content generation. However, as this technology becomes more prevalent, it raises important considerations regarding data privacy and confidentiality. In this blog post, we will delve into the implications of Generative AI on data privacy and explore the role of Data Leak Prevention (DLP) solutions in mitigating potential risks.
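One common DLP pattern for generative AI is a pre-filter that redacts sensitive data before a prompt ever leaves the organization. The following is a minimal sketch of that idea; the pattern names and regexes are simplified illustrations, not a production-grade (or any vendor's) detection set:

```python
import re

# Hypothetical DLP-style pre-filter: redact common sensitive patterns
# from user text before it is sent to a generative AI service.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each detected sensitive value with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Summarize this: contact jane@example.com, SSN 123-45-6789."
safe = redact(prompt)
```

Real DLP products combine such pattern matching with validation (e.g. checksum tests for card numbers), dictionaries, and machine-learned classifiers to cut false positives.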

Hype vs. Reality: Are Generative AI and Large Language Models the Next Cyberthreat?

Generative AI and large language models (LLMs) have the potential to be used as tools for cybersecurity attacks, but they are not necessarily a new cybersecurity threat in themselves. Let’s have a look at the hype vs. the reality. The use of generative AI and LLMs in cybersecurity attacks is not new. Malicious actors have long used technology to create convincing scams and attacks.

How to secure Generative AI applications

I remember when the first iPhone was announced in 2007. This was NOT an iPhone as we think of one today. It had warts. A lot of warts. It couldn’t do MMS for example. But I remember the possibility it brought to mind. No product before had seemed like anything more than a product. The iPhone, or more the potential that the iPhone hinted at, had an actual impact on me. It changed my thinking about what could be.

Large-Scale "Catphishing" that Targets Victims Looking for Love

For all the recent focus on artificial intelligence and its potential for deepfake impersonation, the boiler room is still very much active in the criminal underworld. WIRED describes how people in many parts of the world (Ireland, France, Nigeria, and Mexico) have been recruited to work as freelancers for a company that seeks to profit from lonely people looking for love. This is how a typical operation runs.

Snyk top 10 code vulnerabilities report

Earlier this year, we released a report on the top 10 open source vulnerabilities from data based on user scans — giving you an inside look into the most common (and critical) vulnerabilities Snyk users found in their third-party code and dependencies. Building on this trend, we decided to look into the most common vulnerabilities in first-party code. While OWASP served as a guiding light for open source security intel, gathering data on proprietary code was a bit more complex.
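To illustrate the kind of first-party flaw such reports tend to surface, consider SQL injection, a perennial entry in code vulnerability rankings (we're using it here as a representative example, not quoting the report's own list). The sketch below shows the vulnerable string-concatenation pattern next to the parameterized fix, using Python's built-in sqlite3:

```python
import sqlite3

# Set up a throwaway in-memory database with two users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Attacker-controlled input crafted to escape the string literal.
user_input = "' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query,
# turning a lookup for one user into a match on every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

The parameterized version returns no rows because no user is literally named `' OR '1'='1`, while the concatenated version returns every user in the table.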