Aligning to Secure the AI-Driven Enterprise

Next week marks a pivotal moment for Zenity as we gather for our Sales Kickoff (SKO). While SKOs are traditionally about aligning teams on goals and strategies, ours represents much more than that. It’s a celebration of the massive growth in the AI Agent space, the opportunities it creates, and our recommitment to supporting customers as they navigate this transformative and increasingly security-conscious era.
Featured Post

2025 Predictions - Navigating Through the Challenges and Opportunities Ahead

As we enter 2025, the global economic landscape remains a mix of challenges and potential shifts that will shape markets and industries worldwide. From high interest rates to the evolving impact of AI, several key factors will define the year ahead. While there will be friction in some areas, persistence, agility, and out-of-the-box thinking will help secure a competitive edge.

How to Secure AI and Prevent Patient Data Leaks

AI systems bring transformative capabilities to industries like healthcare, but they introduce unique challenges in protecting patient data. Unlike traditional applications, AI systems rely on conversational interfaces and large datasets, often containing sensitive patient information, to train, test, and optimize performance. These systems pose complex risks to patient data privacy and security that cannot be effectively managed using traditional methods.
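
To make that gap concrete, here is a minimal Python sketch of the kind of traditional, pattern-based redaction a team might apply before patient text reaches a conversational AI. The patterns and example values are illustrative assumptions, not part of the original post, and the output shows how easily identifying details slip through.

```python
import re

# A minimal sketch of a "traditional" control for patient data:
# pattern-based redaction applied before text reaches a conversational AI.
# The patterns and field names here are illustrative assumptions,
# not a recommended or complete PHI filter.
REDACTION_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


if __name__ == "__main__":
    prompt = "Summarize the visit for John Doe, MRN 00123456, DOB 03/14/1962."
    print(redact(prompt))
    # The MRN and DOB are caught, but the patient's name and the clinical
    # narrative itself pass straight through -- the kind of gap that makes
    # rule-based controls insufficient for conversational AI systems.
```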

Cut Through the Hype: Tips for Evaluating AI Solutions for an Autonomous SOC

As C-suites and boards are bombarded with headlines about AI revolutionizing cybersecurity, it’s no wonder they’re putting pressure on SOC leaders to adopt AI. The promise of AI in the SOC is rightfully alluring. An AI-native autonomous SOC has the potential to create a world where AI Agents collaborate to take care of repetitive tasks and handle the majority of low-level alerts, freeing your human team for strategic, proactive work. The hurdle?

Securing GenAI Development with Snyk

From design to deployment, the rise of AI tools and AI-generated code is changing developers’ workflows, enabling them to focus on more creative and complex tasks. However, while 96% of developers use AI coding assistants to streamline their work, that rapid adoption can have a negative impact on security teams. One-fifth of AppSec teams surveyed said they face significant challenges securing AI-generated code because of how quickly it’s produced.

Secure AI Agent Development: Trends and Challenges

In the rapidly evolving landscape of artificial intelligence (AI), the development of AI Agents has become a focal point for enterprises… nearly all of them. According to recent IBM research, 99% of respondents are exploring or actively developing AI agents. This surge in interest underscores the need for secure AI agent development.

AI chat resets your view of business process automation

In this guest post, Eric Newcomer, Principal Analyst at Intellyx, explores the practical applications and limitations of generative AI. Generative AI is a game-changing technology. Chatbots seem like magic compared to a traditional static web search: you submit questions in natural human language and receive complete sentences and paragraphs in return. But it isn’t always clear what generative AI is really good for, given limitations such as hallucinations, inaccurate answers, and possible bias.

Using Structured Storytelling for Effective Defense with Microsoft Security Copilot

In my experience, computers are only as smart as the person in front of them. The same is true of AI: the results depend on the prompts it is given. Today, users improvising prompts off the top of their heads in Microsoft Security Copilot may find it hard to get value. Prompts with adequate specificity are difficult to create, let alone repeat.
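
As a rough illustration of what structured, repeatable prompting can look like, here is a minimal Python sketch that captures the elements of an incident "story" once and renders them into a consistent prompt. The field names and template are assumptions made for illustration, not the article's framework or a Microsoft Security Copilot API.

```python
from dataclasses import dataclass
from string import Template

# Capture the "story" elements of an investigation once, then render them
# into a consistent prompt each time. The fields and wording below are
# illustrative assumptions, not an official prompt schema.
PROMPT_TEMPLATE = Template(
    "Investigate a potential $threat_type incident.\n"
    "Actor or account involved: $actor\n"
    "Affected asset: $asset\n"
    "Time window: $time_window\n"
    "Provide: a summary of related sign-in and audit events, the likely "
    "root cause, and recommended next steps."
)


@dataclass
class IncidentStory:
    threat_type: str
    actor: str
    asset: str
    time_window: str

    def to_prompt(self) -> str:
        """Render the structured fields into a specific, repeatable prompt."""
        return PROMPT_TEMPLATE.substitute(
            threat_type=self.threat_type,
            actor=self.actor,
            asset=self.asset,
            time_window=self.time_window,
        )


if __name__ == "__main__":
    story = IncidentStory(
        threat_type="credential phishing",
        actor="jdoe@contoso.com",
        asset="finance SharePoint site",
        time_window="last 72 hours",
    )
    print(story.to_prompt())
```

Keeping the narrative fields in a small data structure is what makes the prompt repeatable: the same story can be re-rendered, reviewed, and shared across analysts instead of being retyped from memory.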