
Build Security Workflows in Seconds with AI Workflow Builder

In today’s fast-moving threat landscape, Hyperautomation is essential. But building workflows from scratch? That’s time you don’t have. That’s why we started with a library of pre-built templates that helped teams quickly configure security automation workflows. Templates made automation more accessible. Now we’re taking the next step in that evolution: introducing Torq’s AI Workflow Builder. By harnessing the power of AI, we’re going beyond templates.

Is Character AI Safe? Artificial Intelligence and Privacy - Issues and Challenges

Since the technological “birth” of Artificial Intelligence and ChatGPT, many people wonder what on earth they would do without AI in their lives. As of July 2024, ChatGPT has 200 million weekly active users worldwide and has attracted nearly 2.5 billion site visits. However, ChatGPT is not the only AI out there.

Harden your LLM security with OWASP

At its foundation, the OWASP Top 10 for Large Language Model (LLM) applications was designed to educate software developers, security architects, and other hands-on practitioners about how to harden LLM security and implement more secure AI workloads. The framework lays out the security risks of deploying and managing LLM applications by explicitly naming the most critical vulnerabilities seen in LLMs so far, along with guidance on how to mitigate them.
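To make the idea of "naming a vulnerability and mitigating it" concrete, here is a minimal sketch of one mitigation for the framework's top entry, prompt injection (LLM01). The phrase list, function names, and delimiter format are illustrative assumptions, not part of the OWASP framework itself; a real deployment would use layered defenses rather than a single keyword filter.

```python
import re

# Illustrative (hypothetical) patterns that often appear in attempts to
# override a system prompt. A production filter would be far broader.
INJECTION_PATTERNS = [
    r"ignore .{0,30}instructions",
    r"disregard (the|your) system prompt",
    r"you are now",  # common role-hijack opener
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known prompt-injection phrase."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Treat user input as untrusted data: reject flagged input, and
    delimit the rest so the model can tell data from instructions."""
    if looks_like_injection(user_input):
        raise ValueError("rejected: possible prompt injection")
    return f"{system_prompt}\n\n<user_input>\n{user_input}\n</user_input>"
```

The design choice here mirrors classic input-validation advice: the untrusted text never gets concatenated into the instruction channel unchecked, and anything suspicious is rejected before it reaches the model.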

CrowdStrike Drives Cybersecurity Forward with New Innovations Spanning AI, Cloud, Next-Gen SIEM and Identity Protection

Today’s threat landscape is defined by adversaries’ increasing speed and quickly evolving tactics. Now more than ever, it is imperative that organizations unify and accelerate their security operations to detect, identify, and respond to threats at the rapid pace of the adversary. This isn’t always straightforward.

CIO POV: Impactful AI Programs Start with 'Why'

Generative AI (GenAI) has the power to transform organizations from the inside out. Yet many organizations are struggling to prove the value of their GenAI investments after the initial push to deploy models. “At least 30% of GenAI projects will be abandoned after proof of concept by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs or unclear business value,” according to Gartner, Inc.

Dive into AI and LLM learning with the new Snyk Learn learning path

Snyk Learn, our developer security education platform, just got better! We have expanded our lesson coverage and created a new, entirely free learning path that covers the OWASP Top 10 for LLMs and GenAI. As AI continues to revolutionize industries, ensuring the security of AI-driven systems has never been more critical.

ChatGPT vs Cyber Threats: The REAL Role of AI in Cybersecurity

Unlock the truth about using Large Language Models (LLMs) in cybersecurity: are they the next big thing or just another trend? In this episode of Razorwire, your host, James Rees, brings together cybersecurity expert Richard Cassidy and data scientist Josh Neil to talk about the use of AI and LLMs in cybersecurity and their role in threat detection and security. Join us for a discussion on the capabilities and limitations of these technologies, sparked by a controversial LinkedIn post.