
AI

NEW in Elastic Security 8.15: Automatic Import, Gemini models, and AI Assistant APIs

Elastic Security 8.15 is now available, enhancing our mission to modernize security operations with AI-driven security analytics. Key features include the brand new Automatic Import to streamline data ingestion and onboarding, support for Google’s Gemini 1.5 Pro and Flash large language models (LLMs), a new set of APIs for the Elastic AI Assistant, on-demand file scans for the Elastic Defend integration, and a redesigned way of pivoting between different contexts.

AI in the enterprise: 3 ways to mitigate AI's security and privacy risks

Artificial Intelligence (AI) has the potential to revolutionize how businesses operate. But with this exciting advancement come new challenges that cannot be ignored. For proactive security and IT leaders, prioritizing security and privacy in AI can’t simply be a box-checking exercise; it's the key to unlocking the full potential of this wave of innovation.

Beyond the Noise: Achieving Accurate API Inventory with AI

The prevalence of APIs in today's digital environment is undeniable. They are crucial for modern applications, enabling seamless communication and data exchange between different software components. The rise of AI and machine learning has further accelerated API adoption, not only for accessing data and resources but also for rapid API development and deployment.

A security expert's view on Gartner's generative AI insights

Snyk’s goal has always been to empower developers to build quickly but safely. This is why we created the developer security category and why we were among the first advocates of “shifting left.” Now, AI has changed the equation. According to Gartner, over 80% of enterprises will have used generative AI APIs or models, or deployed their own AI model, by 2026.

Introducing our report, CISO Perspectives: Separating the reality of AI from the hype

The explosion of AI has ignited both excitement and apprehension across various industries. While AI is undeniably having a positive impact on engineering and customer service teams, cybersecurity and IT practitioners remain cautious. Concerns about data privacy, the inflexibility of disparate tools, and the sensitive nature of many mission-critical workflows—which, more often than not, require some level of human oversight—fuel a deep mistrust of LLMs by these teams.

AI Tools Have Increased the Sophistication of Social Engineering Attacks

The Cyber Security Agency of Singapore (CSA) has warned that threat actors are increasingly using AI to enhance phishing and other social engineering attacks, Channel News Asia reports. The CSA’s report found that cybercriminals are selling tools that automate these attacks, allowing unskilled threat actors to launch sophisticated attacks.

LLM Security: Splunk & OWASP Top 10 for LLM-based Applications

As a small kid, I remember watching flying monkeys, talking lions, and houses landing on evil witches in the film The Wizard of Oz and thinking how amazing it all was. Once the curtain was pulled back, exposing the wizard as a smart but ordinary person, I felt slightly let down. The recent explosion of AI, and more specifically large language models (LLMs), feels similar. On the surface, they look like magic, but behind the curtain, LLMs are just complex systems created by humans.

Sentinels of Ex Machina: Defending AI Architectures

The introduction, adoption, and rapid evolution of generative AI have raised multiple questions about how to implement effective security architecture, and about the specific requirements for protecting every aspect of an AI environment, as more and more organizations begin using this technology. Recent security reports on vulnerabilities that expose large language model (LLM) components, and on jailbreaks that bypass prompting restrictions, have further underscored the need for AI defenses.