
Massive AI Call Center Data Breach: 10 Million Conversations Leaked, Heightening Fraud Risks

In a significant breach, over 10 million customer conversations from an AI-powered call center platform in the Middle East have been exposed. The incident has raised alarms about security vulnerabilities in AI platforms widely used in sectors such as fintech and e-commerce. As these platforms become integral to business operations, the risks of data compromise and brand impersonation escalate with them.

Essential Guide to PII Data Discovery: Tools, Importance, and Best Practices

Personally Identifiable Information (PII) is data that can uniquely identify an individual, such as an employee, a patient, or a customer. “Sensitive PII” refers to information that, if compromised, could pose a greater risk to the individual’s privacy or be misused for someone else’s gain.
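To make the idea of PII discovery concrete, here is a minimal, hypothetical sketch of the pattern-matching building block such tools start from. Real discovery products combine far richer detection (ML/NER models, checksum validation, context scoring); the patterns and names below (`PII_PATTERNS`, `find_pii`) are simplified illustrations, not any vendor's API.

```python
import re

# Hypothetical illustration: a minimal regex-based PII scanner.
# Real PII discovery tools layer ML and contextual analysis on top
# of simple patterns like these.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def find_pii(text):
    """Return a list of (pii_type, match) pairs found in the text."""
    hits = []
    for pii_type, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((pii_type, match))
    return hits

sample = "Contact Jane at jane.doe@example.com; SSN 123-45-6789."
print(find_pii(sample))
```

A scanner like this would be pointed at file shares, databases, or logs to inventory where PII lives — the first step before classification and protection.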

5 Things to Look Out for with AI Code Review

Imagine slashing the time spent on code reviews while catching more bugs and vulnerabilities than ever before. That’s the promise of AI-driven code review tools. With 42% of large and enterprise organizations already integrating AI into their IT operations, the future of software development is here. These tools can swiftly detect syntax errors, enforce coding standards, and identify security threats, making them invaluable to development teams. However, as powerful as AI is, it has its pitfalls.
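As a rough illustration of the cheapest of those checks, here is a hypothetical sketch of syntax validation — the kind of static gate automated review pipelines run before any deeper (or AI-driven) analysis. The function name `check_syntax` is our own; it simply wraps Python's standard-library parser.

```python
import ast

# Hypothetical sketch: one building block of automated code review,
# catching syntax errors before a human (or an LLM) reviews the diff.
def check_syntax(source, filename="<submitted>"):
    """Return None if the source parses, else a short error description."""
    try:
        ast.parse(source, filename=filename)
        return None
    except SyntaxError as err:
        return f"{filename}:{err.lineno}: {err.msg}"

print(check_syntax("def add(a, b):\n    return a + b"))  # valid source
print(check_syntax("def add(a, b)\n    return a + b"))   # missing colon
```

Standards enforcement and vulnerability detection sit on top of gates like this, which is why AI review tools still pair model output with deterministic static checks.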

Building Trust in AI: Structured, Evidence-Backed Summaries for Seamless SOC Shift Transfers

Gal Peretz is Head of AI & Data at Torq. He accelerates Torq’s AI and data initiatives, applying his deep expertise in deep learning and natural language processing to advance AI-powered security automation. He also co-hosts the LangTalks podcast, where he discusses the latest in AI and LLM technologies. Staying ahead of evolving cyber threats means more than just keeping up — it means outsmarting the adversary with intelligent, proactive solutions that supercharge your team.

Foundations of trust: Securing the future of AI-generated code

Generative artificial intelligence (GenAI) has already become the defining technology of the 2020s, with users embracing it to do everything from designing travel itineraries to creating music. Today’s software developers are leveraging GenAI en masse to write code, reducing their workload and reclaiming valuable time. However, it’s important that developers account for the potential security risks GenAI coding tools can introduce.