Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI

How To Discover PII and Privacy Vulnerabilities in Structured Data Sources

In this video, we walk through the process of discovering personally identifiable information (PII) and identifying potential privacy vulnerabilities within structured data sources. First, you will connect Protecto to your data repository. Then, we will show you how to access the Privacy Risk Data within your data assets catalog, obtain information on active users, access privileges, data owners, and recommendations for dealing with privacy risks.

Breaking the Barrier of Dynamic Testing: Detect and Autoconfigure Entry Points With CI Spark

Finding deeply hidden and unexpected vulnerabilities early in the development process is key. However, time to invest in proactive tests is limited. Prioritizing speed over security is common. Our new AI-assistant CI Spark closes this gap and enables both speed and security. CI Spark makes use of LLMs to automatically identify attack surfaces and to suggest test code. Tests generated by CI Spark work like a unit test that automatically generates thousands of test cases.

The new master mind of cybercrimes: Artificial intelligence

Imagine an AI overlord sitting in a dark basement, plotting world domination through cybercrime. While the idea might seem like a sci-fi flick, it’s actually closer to reality than we think. AI has emerged as a game changer in a constantly evolving cyber landscape. AI algorithms can learn and adapt to security measures quickly, making them the ultimate cyber villains.

Deep learning in security: text-based phishing email detection with BERT model

Phishing emails are fraudulent or malicious emails that are designed to deceive recipients and trick them into revealing sensitive information, such as login credentials, financial details, or personal data. Phishing email contents usually employ various social engineering techniques that are likely to manipulate recipients, leading to significant damage to personal or corporate information security.

Top considerations for addressing risks in the OWASP Top 10 for LLMs

Welcome to our cheat sheet covering the OWASP Top 10 for LLMs. If you haven’t heard of the OWASP Top 10 before, it’s probably most well known for its web application security edition. The OWASP Top 10 is a widely recognized and influential document published by OWASP focused on improving the security of software and web applications. OWASP has created other top 10 lists (Snyk has some too, as well as a hands-on learning path), most notably for web applications.

How to safeguard your AI ecosystem: The imperative of AI/ML security assessments

The widespread use of Artificial intelligence (AI) and machine learning (ML) introduce their own security challenges; an AI/ML security assessment can help. AI and ML provide many benefits to modern organizations; however, with their widespread use come significant security challenges. This article explores the vital role of AI/ML security assessments in unearthing potential vulnerabilities, from lax data protection measures to weak access controls and more.

8 questions about AI and compliance

AI is one of the hottest topics in tech right now. More than half of consumers have already tried generative AI tools like ChatGPT or DALL-E. According to a Gartner poll, 70% of executives say their business is investigating and exploring how they can use generative AI, while 19% are in pilot or production mode. Business use cases for AI range from enhancing the customer experience (38%), revenue growth (26%), and cost optimization (17%).

Keeping cybersecurity regulations top of mind for generative AI use

Can businesses stay compliant with security regulations while using generative AI? It’s an important question to consider as more businesses begin implementing this technology. What security risks are associated with generative AI? It's important to earn how businesses can navigate these risks to comply with cybersecurity regulations.

The Stealthy Threat of AI Prompt Injection Attacks

Just last week the UK’s NCSC issued a warning, stating that it sees alarming potential for so-called prompt injection attacks, driven by the large language models that power AI. The NSCS stated “Amongst the understandable excitement around LLMs, the global tech community still doesn‘t yet fully understand LLM’s capabilities, weaknesses, and (crucially) vulnerabilities.