
LangGraph and Reflection Agents - This Week in AI

In the ever-evolving terrain of artificial intelligence, LangGraph is making waves by introducing a groundbreaking approach to code generation and analysis. With the prominence of tools like GitHub Copilot and the popularity of projects such as GPT-engineer, the demand for innovative solutions in this domain has never been higher. LangGraph aims to meet this demand by leveraging a flow paradigm inspired by recent advancements like AlphaCodium to enhance the efficiency of code generation.

DevSecOps in an AI world requires disruptive log economics

We’ve been talking about digital transformation for years (or even decades?), but the pace of evolution is now being catapulted forward by AI. This rapid change and innovation creates, and relies upon, exponentially growing data sets. And while technology is rapidly evolving to manage and maintain these massive data sets, legacy pricing models based on data ingest volume are lagging behind, making log management economically unsustainable.

A Complete Step-by-Step Guide to Achieve AI Compliance in Your Organization

AI compliance has become a pivotal concern for organizations in a rapidly evolving technological landscape. Its growing importance cannot be overlooked, particularly by entities deeply entrenched in AI operations. Compliance involves an intricate intersection of legal, ethical, and regulatory dimensions, emphasizing the need for a cohesive, organization-wide approach.

Microsoft and OpenAI Team Up to Block Threat Actor Access to AI

Analysis of emerging threats in the age of AI provides insight into exactly how cybercriminals are leveraging AI to advance their efforts. When ChatGPT first came out, it shipped with some rudimentary security policies intended to prevent its misuse for cybercriminal activity. But threat actors quickly found ways around those policies and continued to use it for malicious purposes.

5 security best practices for adopting generative AI code assistants like GitHub Copilot

Not that long ago, AI was generally seen as a futuristic idea that seemed like something out of a sci-fi film. Movies like Her and Ex Machina even warned us that AI could be a Pandora's box that, once opened, could have unexpected outcomes. How things have changed since then, thanks in large part to ChatGPT’s accessibility and adoption!

Synopsys and GenAI

There is enormous attention on generative AI (GenAI) and its potential to change software development. While the full impact of GenAI is yet to be known, organizations are eagerly vetting the technology and separating the hype from the real, pragmatic benefits. In parallel, software security professionals are closely watching the practical impact of GenAI and how application security testing (AST) must adapt as adoption increases.

Mend.io Launches Mend AI

Securing AI is a top cybersecurity priority and concern for governments and businesses alike. Developers have easy access to pre-trained AI models through platforms like Hugging Face, and to AI-generated functions and programs through tools powered by large language models (LLMs), such as GitHub Copilot. This access has spurred developers to create innovative software at an enormously fast pace.