
We Need to Teach Our AIs to Securely Code

I have been writing about the need to better train our programmers in secure coding practices for decades, most recently here and here. At least a third of data compromises involve exploited software and firmware vulnerabilities, and we are on track to surpass 47,000 separate, publicly known vulnerabilities this year. At least 130 new vulnerabilities are discovered and publicly reported every day, day after day. That is a lot of exploitation. That is a lot of patching.

Data Overload in the AI Era: Why Aggregation and Prioritization Are Non-Negotiable

AI was supposed to make our lives easier. Vendors promised it would cut through complexity, detect threats faster, and lighten the load on already overworked security teams. But if you’ve been paying attention, you know the truth: AI has given us more noise than ever. Corey Brunkow from Horizon3.ai joined Nucleus co-founder and CPO Scott Kuffer to unpack this problem during a recent webinar. AI helps attackers move faster, but on the defensive side, it has created a flood of data.

How to Ensure Data Privacy with AI: A Step-by-Step Guide

AI sits in everyday workflows: assistants answering customer questions, copilots helping developers, and RAG apps searching internal knowledge. That means personal and sensitive data flows through prompts, vector stores, and integrations you didn’t have a year ago. Privacy can’t be an end-of-quarter compliance push anymore. It needs to live in your pipelines and apps the way logging and monitoring do.
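One way to make privacy "live in your pipelines" is to redact sensitive fields before text ever reaches a prompt, log line, or vector store. Here is a minimal sketch of that idea; the regex patterns and placeholder names are illustrative assumptions only, and a real deployment would use dedicated PII-detection tooling rather than hand-rolled patterns:

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with typed placeholders, so the
    downstream prompt or vector store never sees the raw values."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(redact(prompt))
# Contact Jane at [REDACTED_EMAIL] or [REDACTED_PHONE].
```

The point of the sketch is placement, not the patterns: the redaction step sits inline in the request path, the same way a logging middleware would, rather than running as a periodic compliance scan.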

Hybrid Detection Architecture: Rules, ML, and LLMs in Concert

Security teams are drowning in complexity. Modern networks generate millions of events daily, attackers constantly shift tactics, and the tools meant to protect us often work in isolation, blind to what their neighbors are seeing. That mythical single solution that would catch everything? It's sitting in the graveyard next to perpetual motion machines and honest vendor pricing.
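Instead of one tool that catches everything, the layered idea is to let cheap deterministic rules and a statistical score each vote, and escalate only high-scoring events to an expensive LLM review. A minimal sketch of that routing logic, where every threshold, field name, and the z-score stand-in for a trained model are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Event:
    src_ip: str
    failed_logins: int
    bytes_out: int

def rule_score(e: Event) -> float:
    # Layer 1: deterministic rules catch known-bad behavior cheaply.
    return 1.0 if e.failed_logins >= 10 else 0.0

# Layer 2: a statistical outlier score covers what the rules miss.
# A simple z-score against a fixed baseline stands in for a trained model.
BASELINE_MEAN, BASELINE_STD = 50_000, 20_000

def ml_score(e: Event) -> float:
    z = (e.bytes_out - BASELINE_MEAN) / BASELINE_STD
    return min(max(z / 3.0, 0.0), 1.0)  # clamp to [0, 1]

def triage(e: Event, threshold: float = 0.7) -> str:
    # Layer 3: only events scoring above the threshold are handed to an
    # LLM (stubbed here) for contextual review, keeping token costs bounded.
    combined = max(rule_score(e), ml_score(e))
    return "escalate_to_llm" if combined >= threshold else "log_only"

print(triage(Event("10.0.0.5", failed_logins=12, bytes_out=40_000)))
# escalate_to_llm
```

The design choice worth noting is the funnel shape: the layers are ordered by cost, so the LLM only ever sees the small fraction of events the cheaper layers could not confidently dismiss.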

A CISO's Guide to the Business Risks of AI Development Platforms

The tools designed to build your next product are now being used to build the perfect attack against it. Generative AI platforms can spin up a pixel-perfect replica of your brand's login page in minutes, launching high-fidelity phishing campaigns at a scale and speed that legacy security models cannot handle. This isn't an emerging threat; it's an industrialized phishing engine that’s already being weaponized against businesses.

Snyk and Cognition partner to enhance security for AI-native development

Today, Snyk is excited to announce a new partnership with Cognition that significantly advances security within the software development lifecycle, validating our "Secure at Inception" model. This collaboration introduces new integrations, Snyk for Devin and Snyk for Windsurf, which directly embed Snyk Studio's security intelligence into Cognition's AI-native developer tools.

What is shadow AI and what can you do about it?

Organizations across industries are actively investing in AI to streamline operations, boost productivity, and stay ahead in competitive markets. However, most proceed with caution when rolling out new AI solutions internally, as they need to meet standards for AI security, compliance, and responsible use through rigorous testing and assessments. At the same time, teams may occasionally adopt AI solutions outside formal channels to simplify their workload.