Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Corelight data and LLMs

Corelight has been an innovator and leader in AI and Large Language Model (LLM) adoption for almost 2 years. We introduced our first use of LLMs in our Open NDR platform Investigator in November of 2023. Since then, we have continued to push the boundaries of the possible by working with AI model builders on cybersecurity-specific training and expanding LLM use within Investigator to include data analysis and summaries.

Corelight announces industry's first MCP server exposing detailed network data and alerts

Corelight’s GenAI Accelerator Pack features the industry's first Model Context Protocol (MCP) server, specifically designed to facilitate easier access to detailed network data and alerts for cybersecurity AI agents and enhance the analysis of network security information. The announcement comes at a pivotal moment for cybersecurity.

Evaluating AI Security: Performance vs. Safety

Evaluating AI Security: Performance vs. Safety In this video, A10 Networks' security leaders Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar discuss the crucial considerations for evaluating AI security within an organization. Madhav Aggarwal emphasizes the following points: AI companies operating in this evolving frontier. It is often difficult to attain both through the same AI model. This segment delves into why achieving both can be a complex task through the same model.

Will AI replace human pen testers?

It’s become pretty standard to expect the help of AI with automating tasks, with penetration testing being no exception. As AI-driven tools grow more sophisticated, some have posed the question: could these systems render the traditional human pen tester obsolete entirely? We’ll explore the strengths and limitations of AI when it comes to offensive security and predict the role human red team expertise still has to play in an increasingly automated world.

How Startups Are Outsourcing Sales to AI Bots

Startups are often very disorganized. They've got big dreams but small wallets. Building a sales team eats up their cash and time. That's where AI bots come in. These tools can close deals, find leads, and work around the clock. Today, startups using AI for sales are outpacing everyone else. This blog breaks down how it works, why it's a game-changer, and how you can jump in.

We Asked 100+ AI Models to Write Code. Here's How Many Failed Security Tests.

If you think AI-generated code is saving time and boosting productivity, you’re right. But here’s the problem: it’s also introducing security vulnerabilities… a lot of them. In our new 2025 GenAI Code Security Report, we tested over 100 large language models across Java, Python, C#, and JavaScript. The goal? To see if today’s most advanced AI systems can write secure code. Unfortunately, the state of AI-generated code security in 2025 is worse than you think.

Undercover Investigations: How AI is Supercharging Romance Scams

As someone that’s been in the industry for over 20 years, I’ve seen my fair share of online scams. But this is the kind of story you hear and can’t quite believe. At the last RSA cybersecurity conference, a colleague of mine–someone who lives and breathes digital security, a CISO–admitted he’d been taken in by an online romance scam. My first thought was, how?

New research uncovers four security challenges caused by unmanaged AI access

At this point, it’s almost cliché to say “AI is here, and it is changing everything.” Whether it’s accelerating productivity or reshaping employee workflows, AI is ushering in a new era of operational possibilities. But as we all know, beneath this transformation lies a complex and evolving security challenge.

What is AI system prompt hardening?

As generative AI tools like ChatGPT, Claude, and others become increasingly integrated into enterprise workflows, a new security imperative has emerged: system prompt hardening. A system prompt is a set of instructions given to an AI model that defines its role, behavior, tone, and constraints for a session. It sets the foundation for how the model responds to user input and remains active throughout the conversation.