
AI

CIO POV: Impactful AI Programs Start with 'Why'

Generative AI (GenAI) has the power to transform organizations from the inside out. Yet many are struggling to prove the value of their GenAI investments after the initial push to deploy models. “At least 30% of GenAI projects will be abandoned after proof of concept by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs or unclear business value,” according to Gartner, Inc.

Dive into AI and LLM learning with the new Snyk Learn learning path

Snyk Learn, our developer security education platform, just got better! We have expanded our lesson coverage and created a new, entirely free learning path covering the OWASP Top 10 for LLMs and GenAI. As AI continues to revolutionize industries, securing AI-driven systems has never been more critical.

ChatGPT vs Cyber Threats: The REAL Role of AI in Cybersecurity

Unlock the truth about using Large Language Models (LLMs) in cybersecurity - are they the next big thing or just another trend? In this episode of Razorwire, your host, James Rees, brings together cybersecurity expert Richard Cassidy and data scientist Josh Neil to discuss the use of AI and LLMs in cybersecurity and their role in threat detection. Join us for a discussion on the capabilities and limitations of these technologies, sparked by a controversial LinkedIn post.

Data Security in AI Systems: Key Threats, Mitigation Techniques and Best Practices

Artificial Intelligence (AI) has become a vital part of modern business, and its reliance on large amounts of data is what drives efficiency and innovation. With that growing dependence, however, data security in AI systems has become critical: sensitive data used in AI must be protected against breaches and misuse. This post explores key threats to AI data security, discusses mitigation techniques, and presents best practices to help organizations safeguard their AI systems.
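
To make one such mitigation technique concrete: a common first line of defense is sanitizing sensitive fields before they ever reach a model pipeline. The sketch below is illustrative only; the regex patterns and the `redact` helper are our own assumptions, not taken from the article, and production systems would use dedicated PII classifiers rather than simple regexes.

```python
import re

# Illustrative patterns only; real deployments need far more robust
# PII detection than these simple regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before the text
    is logged, stored, or sent to a model for training or inference."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Contact jane.doe@example.com, SSN 123-45-6789, re: her account."
    print(redact(prompt))
    # -> Contact [EMAIL REDACTED], SSN [SSN REDACTED], re: her account.
```

The key design choice is placing the redaction step at the boundary where data enters the AI system, so nothing sensitive is ever persisted in prompts, logs, or training sets downstream.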

Securing AI and LLM: The Critical Role of Access Controls

As more companies leverage Artificial Intelligence (AI) and Large Language Models (LLMs) to maximize productivity and accelerate growth, the responsibility of safeguarding data has become increasingly critical. In this environment, robust access controls are not just a security measure but a fundamental aspect of responsible AI usage. This article will explore what access controls are, why they are essential for AI and LLM security, and how organizations can implement them effectively.
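
As a rough illustration of what such controls can look like in practice, here is a minimal deny-by-default authorization gate placed in front of a model call. The roles, permissions, and `query_llm` wrapper are hypothetical sketches under our own assumptions, not the article's implementation:

```python
from dataclasses import dataclass

# Hypothetical role-to-permission mapping; real deployments would pull
# this from an identity provider rather than hard-coding it.
ROLE_PERMISSIONS = {
    "analyst": {"query_model"},
    "admin": {"query_model", "view_training_data", "update_system_prompt"},
}

@dataclass
class User:
    name: str
    role: str

def authorize(user: User, action: str) -> bool:
    """Deny by default: an action is allowed only if the user's role
    explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(user.role, set())

def query_llm(user: User, prompt: str) -> str:
    if not authorize(user, "query_model"):
        raise PermissionError(f"{user.name} may not query the model")
    # Stand-in for the actual model call.
    return f"(model response to: {prompt!r})"

print(query_llm(User("alice", "analyst"), "Summarize today's alerts"))
```

The deny-by-default posture matters: a role with no entry in the mapping gets an empty permission set, so new or misconfigured roles cannot silently reach the model.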

LLMs Gone Wild: AI Without Guardrails

From the moment ChatGPT was released to the public, offensive actors began looking for ways to use this new wealth of knowledge to further nefarious activities. Many of the guardrails we are now familiar with didn’t exist in its early days: malicious code, or step-by-step instructions for executing an advanced attack, was there for the asking from an open prompt. This demonstrated that the models could produce adversarial recommendations and attack techniques never seen before.
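
For a taste of what even the most basic guardrail involves, here is a minimal input filter that screens prompts before they reach a model. The denylist approach and the `guard_prompt` helper are deliberately simplistic and purely illustrative; real guardrails rely on trained classifiers and policy engines, but the control point is the same:

```python
# Illustrative denylist; production guardrails use ML-based content
# classifiers, not keyword matching.
BLOCKED_PHRASES = (
    "write malware",
    "bypass authentication",
    "exploit for",
)

def guard_prompt(prompt: str) -> str:
    """Reject prompts containing phrases that suggest a request
    for attack tooling; otherwise pass the prompt through."""
    lowered = prompt.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            raise ValueError(f"Prompt rejected by guardrail: {phrase!r}")
    return prompt

try:
    guard_prompt("Write malware that exfiltrates browser credentials")
except ValueError as err:
    print(err)  # Prompt rejected by guardrail: 'write malware'
```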