
Identify common security risks in MCP servers

AI adoption is rapidly increasing, and with it comes a steady influx of useful but potentially vulnerable tools and services that are still maturing. The Model Context Protocol (MCP) is one example of this new AI tooling, providing a framework for how applications integrate with and supply context to large language models (LLMs). MCP servers are central to developing AI assistants and workflows that are deeply integrated with your environment.
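To make one common risk concrete, here is a minimal sketch of an MCP server tool written against the FastMCP interface from the MCP Python SDK (exact API details may vary by SDK version; the server name and tool names are illustrative). The unsafe variant hands shell execution to the model, while the safer one constrains it to a fixed allowlist — the kind of design gap a security review of an MCP server should flag.

```python
# Minimal sketch of an MCP server tool, using the FastMCP interface from the
# MCP Python SDK. The point is the risk pattern, not the framework details.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")  # illustrative server name

# RISKY: the LLM controls `command`, and shell=True makes this a
# straightforward command-injection vector.
@mcp.tool()
def run_command_unsafe(command: str) -> str:
    """Run an arbitrary shell command (do not ship this)."""
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

# SAFER: constrain the tool to a small allowlist and avoid the shell entirely.
ALLOWED = {"uptime": ["uptime"], "disk": ["df", "-h"]}

@mcp.tool()
def run_command_safe(name: str) -> str:
    """Run one of a small set of pre-approved diagnostics."""
    if name not in ALLOWED:
        raise ValueError(f"unknown diagnostic: {name}")
    return subprocess.run(ALLOWED[name], capture_output=True, text=True).stdout

if __name__ == "__main__":
    mcp.run()
```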

Ep 2: Hacked together: fast, safe prototyping with AI

Join security experts Adam White, Chas Clawson, and Seth Williams as they explore how AI-first development is reshaping the way cybersecurity teams build, test, and deploy solutions. Traditional development cycles often leave critical ideas trapped in backlogs, but with Gen-AI and language models, security teams can now move from concept to prototype in hours, not months.

What We Found with OpenAI's Codex CLI Tool

In this video, I explore OpenAI’s Codex CLI tool to see how powerful it really is for coding with AI. But things quickly go off the rails: what started as a simple test ended with a surprise identity verification request. Apparently, to continue using the tool, I need to submit a government-issued ID and a photo of myself, something I didn’t expect at all. I talk through the process, show the error I ran into, and share my honest thoughts on this level of access and how invasive it feels for a developer tool.

LLMs Are Not Goldfish: Why AI Memory Poses a Risk to Your Sensitive Data

We’ve all heard the myth: goldfish have a memory span of just a few seconds. While that’s debatable among biologists, it’s useful as a metaphor in tech, especially when talking about memory, risk, and AI. The problem is, large language models (LLMs) are not goldfish. In fact, they have incredible memory. And increasingly, that memory isn’t just session-based. It’s persistent, long-term, and system-connected. That changes everything.
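As a hypothetical illustration of why persistence changes the risk calculus, here is a small sketch of scrubbing sensitive data before it reaches long-term memory. Everything here is illustrative, not from any specific product: the MemoryStore class, the save_turn method, and the regex patterns are stand-ins for whatever memory backend an assistant actually uses.

```python
# Hypothetical sketch: redact obvious secrets/PII before anything is written
# to a persistent memory store, so a leaky session can't become a leaky system.
import re

SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like pattern
    re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b"),  # AWS access key id shape
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),        # email addresses
]

def redact(text: str) -> str:
    """Replace matches of known sensitive patterns with a placeholder."""
    for pattern in SENSITIVE:
        text = pattern.sub("[REDACTED]", text)
    return text

class MemoryStore:
    """Stand-in for a long-term memory backend (vector store, database, etc.)."""
    def __init__(self) -> None:
        self._records: list[str] = []

    def save_turn(self, text: str) -> None:
        # The key decision: redact *before* the data becomes long-term memory.
        self._records.append(redact(text))

store = MemoryStore()
store.save_turn("My SSN is 123-45-6789 and my email is jane@example.com")
print(store._records)  # ['My SSN is [REDACTED] and my email is [REDACTED]']
```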

A New Chapter in Mobile Security: Tackling Human Risk with AI-Powered Social Engineering Protection

This week marks a milestone in the evolution of mobile endpoint security. At a time when attackers are moving faster and targeting smarter, Lookout is proud to unveil a breakthrough initiative: AI-powered social engineering protection—the first solution of its kind built to detect and disrupt human-targeted attacks at the mobile edge.

Why AI Infrastructure Growth Demands Next-Gen Cybersecurity and PAM

Global Artificial Intelligence (AI) infrastructure spending is projected to surpass $200 billion by 2028, according to research from the International Data Corporation (IDC). As organizations rapidly deploy more complex AI systems, the demand for high-performance infrastructure, like Graphics Processing Units (GPUs) and AI accelerators, is surging. This growth drives exponential increases in computing power, energy consumption, and data exchange across hybrid and cloud environments.

Deploying Gen AI Guardrails for Compliance, Security and Trust

AI guardrails are structured safeguards, whether technical, security-related, or ethical, designed to guide AI systems so they operate safely, responsibly, and within intended boundaries. Much like highway guardrails that prevent vehicles from veering off course, these measures ensure AI remains aligned with organizational policies, regulations, and ethical values.
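As a minimal sketch of the idea, here is a simple output guardrail in Python. Everything in it is hypothetical: call_llm is a stand-in for a real model client, and the blocked patterns are toy examples. Production guardrails typically rely on dedicated policy engines or moderation APIs rather than hand-rolled regexes, but the control flow is the same: check the response against policy before it ever reaches the user.

```python
# Minimal sketch of an output guardrail wrapped around a (hypothetical) LLM call.
import re

BLOCKED_PATTERNS = [
    re.compile(r"(?i)internal[-_ ]only"),   # leaked internal material
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough credit-card number shape
]

def call_llm(prompt: str) -> str:
    """Hypothetical model call; stands in for a real client library."""
    return "Sure, the card number is 4111 1111 1111 1111."

def guarded_completion(prompt: str) -> str:
    """Apply an output guardrail: withhold responses matching policy patterns."""
    response = call_llm(prompt)
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(response):
            return "Response withheld: content violated an output policy."
    return response

print(guarded_completion("What is the stored card number?"))
# -> Response withheld: content violated an output policy.
```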