
DevSecOps in an AI world requires disruptive log economics

We’ve been talking about digital transformation for years (or even decades), but the pace of change is now being catapulted forward by AI. This rapid innovation both creates and relies upon exponentially growing data sets. And while technology is evolving quickly to manage and maintain these massive data sets, legacy pricing models based on data ingest volume are lagging behind, making log management economically unsustainable.

A Complete Step-by-Step Guide to Achieve AI Compliance in Your Organization

AI compliance has become a pivotal concern for organizations in a rapidly evolving technological landscape. Its growing importance cannot be overlooked, particularly by entities deeply entrenched in AI operations. Compliance sits at an intricate intersection of legal, ethical, and regulatory dimensions, demanding a cohesive approach to address all three comprehensively.

Microsoft and OpenAI Team Up to Block Threat Actor Access to AI

Analysis of emerging threats in the age of AI provides insight into exactly how cybercriminals are leveraging AI to advance their efforts. When ChatGPT first launched, it shipped with rudimentary security policies intended to prevent its misuse for cybercriminal activity. But threat actors quickly found ways around those policies and continued to use it for malicious purposes.

5 security best practices for adopting generative AI code assistants like GitHub Copilot

Not that long ago, AI was generally seen as a futuristic idea that seemed like something out of a sci-fi film. Movies like Her and Ex Machina even warned us that AI could be a Pandora's box that, once opened, could have unexpected outcomes. How things have changed since then, thanks in large part to ChatGPT’s accessibility and adoption!

Mend.io Launches Mend AI

Securing AI is a top cybersecurity priority and concern for governments and businesses alike. Developers have easy access to pre-trained AI models through platforms like Hugging Face, and to AI-generated functions and programs through large language model (LLM) tools like GitHub Copilot. This access has spurred developers to create innovative software at an enormously fast pace.

The Risks of Automated Code Generation and the Necessity of AI-Powered Remediation

Modern software development techniques are creating flaws faster than they can be fixed. The use of third-party libraries, microservices, code generators, and large language models (LLMs) has dramatically increased productivity and flexibility in development, but it has also increased the rate at which insecure code is produced. An automated, intelligent solution is needed to bridge the widening gap between the introduction and remediation of flaws.

Defensive AI: Cloudflare's framework for defending against next-gen threats

Generative AI has captured the world's imagination with its ability to produce poetry, screenplays, and imagery. These tools can improve human productivity for good causes, but they can also be employed by malicious actors to carry out sophisticated attacks. Phishing attacks and social engineering are becoming more convincing as attackers tap into powerful new tools to generate credible content or interact with humans as if they were a real person.