Latest News

Protect Sensitive Data with Key Privacy Enhancing Techniques

In today’s digital world, protecting sensitive data is more critical than ever. Organizations handle vast amounts of information daily, much of which includes sensitive data like Personally Identifiable Information (PII), financial details, and confidential business records. The exposure of this data can lead to severe consequences, including identity theft, financial loss, and reputational damage.
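To make one such technique concrete, here is a minimal sketch of pseudonymization in Python: deterministically replacing a PII value with an opaque token so records stay joinable for analytics without exposing the raw data. The key handling and function names are illustrative assumptions, not taken from the article.

```python
import hashlib
import hmac

# Assumption for this sketch: in production the key would come from a
# managed secret store (KMS), not a hard-coded constant.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Deterministically map a PII value to an opaque token.

    The same input always yields the same token, so records remain
    joinable across datasets without revealing the raw value.
    """
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"tok_{digest[:16]}"

print(pseudonymize("jane.doe@example.com"))  # stable token for this input
```

Because the mapping is keyed rather than a plain hash, an attacker cannot precompute tokens for guessed values without the key, and rotating the key invalidates every token at once.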

Is your SIEM ready for the AI era? Essential insights and preparations

A head-spinning series of acquisitions and mergers is transforming the security information and event management (SIEM) market. Behind this market shakeup is the ongoing technological shift from traditional, manually intensive SIEM solutions to AI-driven security analytics. Legacy systems — characterized by manual processes for log management, investigation, and response — no longer effectively address today’s fast-evolving cyber threats.

Employee Cybersecurity Awareness Training Strategies for AI-Enhanced Attacks

With the adoption of AI in almost every sphere of our lives and its relentless advancement, cyberattacks are rapidly increasing. AI-enhanced attacks are cybersecurity events in which threat actors use artificial intelligence to compromise the safety of individuals and organizations, for example by generating phishing emails and other content designed to bypass traditional security measures. On the bright side, AI offers equally powerful capabilities for defense.

Prompt Sanitization: 5 Steps for Protecting Data Privacy in AI Apps

As Generative AI (GenAI) and Large Language Models (LLMs) become integral to modern apps, we face a critical challenge: protecting sensitive user data from inadvertent exposure. In this article, we’ll explore the importance of content filtering in LLM-powered apps and provide strategies for its implementation. Looking for step-by-step tutorials on prompt sanitization for OpenAI, Langchain, Anthropic, and more? Skip down to the “Tutorials & further learning” section below.
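As a taste of what the tutorials cover, here is a minimal, provider-agnostic sketch of the core idea: scrub sensitive values from user input before the prompt is sent to any LLM. The patterns and placeholder names are illustrative assumptions, not a specific vendor’s API.

```python
import re

# Illustrative patterns only; production filters use stronger detection
# (NER models, checksum validation for card numbers, etc.).
SENSITIVE = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),
]

def sanitize_prompt(prompt: str) -> str:
    """Replace sensitive values before the prompt leaves your boundary."""
    for pattern, placeholder in SENSITIVE:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

user_input = "My card is 4111 1111 1111 1111 and my email is bob@corp.com"
safe_prompt = sanitize_prompt(user_input)
print(safe_prompt)  # card and email replaced with placeholders
# safe_prompt is what you would then pass to your LLM client of choice.
```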

"It's so important that the CISO gets a seat at the table": a Q&A with Trace3's Gina Yacone

A leading voice in cybersecurity, Gina Yacone is a trusted advisor to senior security leaders, guiding them through emerging trends and recommending strategies to strengthen defenses. She was recently named Cybersecurity Woman Volunteer of the Year 2024. As regional and advisory CISO at the elite technology consultancy Trace3, she also participates in the Trace3 AI Center of Excellence (CoE) Champion Program, keeping her at the forefront of AI and security innovation.

Can We Truly Test Gen AI Apps? Growing Need for AI Guardrails

Unlike traditional software, where testing is relatively straightforward, Gen AI apps introduce complexities that make testing far more intricate. This blog explores why traditional software testing methodologies fall short for Gen AI applications and highlights the unique challenges posed by these advanced technologies.
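To illustrate why guardrails pick up where deterministic testing leaves off, here is a minimal sketch of a runtime output guardrail in Python. The rules and function names are illustrative assumptions, not the article’s implementation.

```python
import re

# Illustrative deterministic checks applied to model output before it
# reaches the user; real guardrail stacks add classifiers, grounding
# checks, and human review for high-risk flows.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings
    re.compile(r"(?i)internal use only"),   # leaked-document marker
]

def check_output(model_output: str) -> tuple[bool, str]:
    """Return (allowed, text), swapping in a refusal if a rule trips."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return False, "Response withheld by policy."
    return True, model_output

allowed, text = check_output("The employee's SSN is 123-45-6789.")
print(allowed, text)  # False Response withheld by policy.
```

Because model outputs are non-deterministic, checks like these run on every response at inference time rather than once in a test suite.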

Beyond Snapshots: The Need For Continuous Penetration Testing

By James Rees, MD, Razorthorn Security

Times must change (and always will), and nowhere is this truer than in the realm of technology. Thirty years ago, the technological landscape was vastly different from what we have today, and technological change has outpaced Moore’s Law for some time now. Information security must keep pace with these advancements. This has become especially true with the advent of AI.

Shining a Light on Shadow AI: What It Is and How to Find It

After speaking with a wide spectrum of customers, from SMBs to enterprises, a clear pattern has emerged: Shadow AI. Shadow AI refers to AI usage that is not known or visible to an organization’s IT and security teams. It comes in many forms, but in this blog we’ll stick to a discussion of Shadow AI as it pertains to applications. Application security teams are well aware that AI models come with additional risk.
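One practical starting point for finding Shadow AI is flagging outbound traffic to known AI API hosts in an egress or proxy log, sketched below. The log format, column names, and host list are assumptions for illustration, not a complete inventory.

```python
import csv

# Assumed inputs for this sketch: a CSV egress log with columns
# "src" and "dest_host", and a small illustrative list of AI API hosts.
AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(log_path: str):
    """Yield (source, host) pairs for traffic to known AI endpoints."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_host"] in AI_API_HOSTS:
                yield row["src"], row["dest_host"]

for src, host in find_shadow_ai("egress_log.csv"):
    print(f"{src} -> {host}")
```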