
Latest News

Massive AI Call Center Data Breach: 10 Million Conversations Leaked, Heightening Fraud Risks

In a significant breach, over 10 million customer conversations from an AI-powered call center platform in the Middle East have been exposed. The incident has raised alarm about the security vulnerabilities of AI platforms widely used in sectors such as fintech and e-commerce. As AI platforms become integral to business operations, the risks of data compromise and brand impersonation escalate with them.

Essential Guide to PII Data Discovery: Tools, Importance, and Best Practices

Personally Identifiable Information (PII) is data that can uniquely identify an individual, such as an employee, a patient, or a customer. “Sensitive PII” refers to information that, if compromised, poses a greater risk to the individual’s privacy or could be misused for someone else’s gain.
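
To make the idea of PII discovery concrete, here is a minimal sketch of a pattern-based pass over free text. The patterns and labels below are simplified illustrations only; production discovery tools layer checksum validation, NER models, and contextual scoring on top of pattern matching.

```python
import re

# Simplified example patterns; real discovery tools use far more detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def discover_pii(text: str) -> list[tuple[str, str]]:
    """Return (pii_type, matched_value) pairs found in the text."""
    findings = []
    for pii_type, pattern in PII_PATTERNS.items():
        findings.extend((pii_type, match) for match in pattern.findall(text))
    return findings

record = "Contact Jane at jane.doe@example.com or 555-867-5309."
for pii_type, value in discover_pii(record):
    print(f"{pii_type}: {value}")
```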

5 Things to Look Out for with AI Code Review

Imagine slashing the time spent on code reviews while catching more bugs and vulnerabilities than ever before. That’s the promise of AI-driven code review tools. With 42% of large and enterprise organizations already integrating AI into their IT operations, the future of software development is here. These tools can swiftly detect syntax errors, enforce coding standards, and identify security threats, making them invaluable to development teams. However, as powerful as AI is, it has its pitfalls.
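
As a rough illustration of how such tools are wired together, the sketch below sends a diff to an LLM for a first-pass review. It assumes the OpenAI Python client with an API key in the environment; the model name and prompt are placeholder choices, not a reference to any specific product, and a human reviewer still makes the final call.

```python
from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = (
    "You are a code reviewer. Flag syntax errors, coding-standard "
    "violations, and potential security issues in the following diff. "
    "Cite the relevant lines and explain each finding briefly."
)

def ai_review(diff: str, model: str = "gpt-4o-mini") -> str:
    """Ask an LLM for a first-pass review of a diff (illustrative only)."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content
```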

Building Trust in AI: Structured, Evidence-Backed Summaries for Seamless SOC Shift Transfers

Gal Peretz is Head of AI & Data at Torq, where he applies his expertise in deep learning and natural language processing to advance AI-powered security automation. He also co-hosts the LangTalks podcast, where he discusses the latest in AI and LLM technologies. Staying ahead of evolving cyber threats means more than just keeping up; it means outsmarting the adversary with intelligent, proactive solutions that supercharge your team.

Foundations of trust: Securing the future of AI-generated code

Generative artificial intelligence (GenAI) has already become the defining technology of the 2020s, with users embracing it to do everything from designing travel itineraries to creating music. Today’s software developers are leveraging GenAI en masse to write code, reducing their workload and reclaiming valuable time. However, it’s important that developers account for the security risks GenAI coding tools can introduce.
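
One classic example of the kind of flaw that can slip into generated code is building a SQL query by string interpolation. The sketch below contrasts that unsafe pattern with the parameterized fix; it is a generic illustration, not code from any particular GenAI tool.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern sometimes produced by code generators: string interpolation
    # builds the query, so a crafted username can inject arbitrary SQL.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value safely,
    # defeating injection regardless of the input's contents.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```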

Introducing The Riscosity AI Governance Suite

Clients can empower their employees to securely leverage any browser-based AI tool. The Riscosity browser extension scans and blocks prompts containing sensitive information in real time. Admins can use the intuitive Riscosity dashboard to set RBAC rules and keep tabs on any AI tools being used, including any attempts to share sensitive information. The bottom line: we’re providing an AI firewall for your company, without the headaches of a difficult deployment.
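
For readers who want a feel for the prompt-scanning idea, here is a toy sketch of an outbound filter. It illustrates the concept only and is not Riscosity’s implementation; the detector names and patterns are hypothetical.

```python
import re

# Illustrative toy detectors, not a production ruleset.
SENSITIVE = {
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the prompt if anything matches."""
    findings = [name for name, rx in SENSITIVE.items() if rx.search(prompt)]
    return (not findings, findings)

allowed, findings = check_prompt("Summarize this: sk-abcdef1234567890XYZ")
if not allowed:
    print(f"Blocked: prompt contains {', '.join(findings)}")
```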

Cybersecurity Awareness Month: AI Safety for Friends and Family

Happy October! The leaves are changing and everyone is starting to get ready for the upcoming holidays, but let’s not forget one of the most important holidays of the year: Cybersecurity Awareness Month! Though our audience consists almost entirely of cybersecurity experts, we wanted to put something together to help the less technical people in our lives learn more about AI and cybersecurity, because Cybersecurity Awareness Month is for everyone.

EP 63 - Jailbreaking AI: The Risks and Realities of Machine Identities

In this episode of Trust Issues, host David Puner welcomes back Lavi Lazarovitz, Vice President of Cyber Research at CyberArk Labs, for a discussion covering the latest developments in generative AI and the emerging cyberthreats associated with it. Lavi shares insights on how machine identities are becoming prime targets for threat actors and discusses the innovative research being conducted by CyberArk Labs to understand and mitigate these risks.

Why Presidio and Other Data Masking Tools Fall Short for AI Use Cases, Part 1

Data privacy and security are critical concerns for businesses using Large Language Models (LLMs), especially when dealing with sensitive information like Personally Identifiable Information (PII) and Protected Health Information (PHI). Companies typically rely on data masking tools such as Microsoft’s Presidio to safeguard this data. However, these tools often struggle in scenarios involving LLMs and AI agents.
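
For context, a typical Presidio masking pass looks like the snippet below (assuming the presidio-analyzer and presidio-anonymizer packages are installed). The friction in LLM workflows is that the resulting placeholders strip context the model may need, and recognizers can miss entities buried in conversational text.

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "My name is John Smith and my phone number is 212-555-0198."

# Detect PII entities, then replace each detected span with a placeholder.
results = analyzer.analyze(text=text, language="en")
masked = anonymizer.anonymize(text=text, analyzer_results=results)

# Expected output shape (exact entities depend on configured recognizers):
# "My name is <PERSON> and my phone number is <PHONE_NUMBER>."
print(masked.text)
```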