
The Risks of Automated Code Generation and the Necessity of AI-Powered Remediation

Modern software development techniques are creating flaws faster than they can be fixed. Third-party libraries, microservices, code generators, and large language models (LLMs) have markedly increased development productivity and flexibility, but they have also increased the rate at which insecure code is produced. An automated, intelligent solution is needed to bridge the widening gap between when flaws are introduced and when they are remediated.

Defensive AI: Cloudflare's framework for defending against next-gen threats

Generative AI has captured the imagination of the world with its ability to produce poetry, screenplays, and imagery. These tools can be used to improve human productivity for good causes, but they can also be employed by malicious actors to carry out sophisticated attacks. We are witnessing phishing attacks and social engineering become more sophisticated as attackers tap into powerful new tools to generate credible content or interact with humans as if they were real people.

Cloudflare announces Firewall for AI

Today, Cloudflare is announcing the development of Firewall for AI, a protection layer that can be deployed in front of Large Language Models (LLMs) to identify abuses before they reach the models. While adoption of AI models, and specifically LLMs, is surging, customers tell us that they are concerned about the best strategies to secure their own LLMs. Using LLMs as part of Internet-connected applications introduces new vulnerabilities that can be exploited by bad actors.
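The general pattern described here, a screening layer that inspects requests before they reach the model, can be sketched in a few lines. The following is a minimal, generic illustration and not Cloudflare's implementation; the blocked patterns and the size limit are illustrative assumptions.

```python
import re

# Illustrative deny rules a pre-model "firewall" might apply. These
# patterns and the size limit are assumptions for the sketch only.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]
MAX_PROMPT_CHARS = 4000  # assumed cap to limit resource-exhaustion abuse


def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason); block oversized or suspicious prompts
    before they would be forwarded to the LLM backend."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt exceeds size limit"
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, f"matched blocked pattern: {pattern.pattern}"
    return True, "ok"


allowed, reason = screen_prompt("Summarize this article for me.")
```

A production system would go far beyond static rules, for example rate limiting, sensitive-data detection, and model-based classification of prompt-injection attempts, but the placement is the same: the filter sits in front of the model and rejects abusive requests before inference runs.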

Elastic introduces Elastic AI Assistant

Elastic® introduces Elastic AI Assistant, the open, generative AI sidekick powered by ESRE to democratize cybersecurity and enable users of every skill level. The recently released Elasticsearch Relevance Engine™ (ESRE™) delivers new capabilities for creating highly relevant AI search applications. ESRE builds on more than two years of focused machine learning research and development made possible through Elastic’s leadership role in search use cases.

Greening the Digital Frontier: Sustainable Practices for Modern Businesses

The push towards digital transformation has significantly improved efficiency, productivity, and accessibility for businesses globally. However, the environmental footprint of digital operations has become an increasing concern. As companies continue to leverage digital technologies, the need to integrate sustainable practices into their operations has never been more critical. This article delves into the environmental impact of digitalisation and outlines practical strategies for businesses aiming to achieve sustainability in the digital age.

Closing the loop on AI point solutions to deliver context and visibility

Today, most organisations are either evaluating AI or already deploying it. This is supported by Gartner, which states that approximately 80% of enterprises will have used generative artificial intelligence (GenAI) application programming interfaces (APIs) or models by 2026. As AI drives value for organisations, it is fuelling further demand and adoption.

Demystifying GenAI security, and how Cato helps you secure your organization's access to ChatGPT

Over the past year, countless articles, predictions, prophecies and premonitions have been written about the risks of AI, with GenAI (Generative AI) and ChatGPT at the center, ranging from its ethics to far-reaching societal and workforce implications ("No Mom, The Terminator isn't becoming a reality… for now"). The Cato security research and engineering teams were so fascinated by these prognostications and worries that we decided to examine the risks ChatGPT poses to businesses.

AI governance and preserving privacy

AT&T Cybersecurity featured a dynamic cyber mashup panel with Akamai, Palo Alto Networks, SentinelOne, and the Cloud Security Alliance. We discussed some provocative topics around Artificial Intelligence (AI) and Machine Learning (ML) including responsible AI and securing AI. There were some good examples of best practices shared in an emerging AI world like implementing Zero Trust architecture and anonymization of sensitive data. Many thanks to our panelists for sharing their insights.