How AI Is Making Phishing Attacks More Dangerous

Phishing attacks occur when cybercriminals trick their victims into sharing personal information, such as passwords or credit card numbers, by pretending to be someone they’re not. Artificial intelligence (AI) has made phishing easier to carry out by helping attackers write believable phishing messages, mimic people’s voices, research targets, and create deepfakes.

Fundamentals of GraphQL-specific attacks

Developers are constantly exploring new technologies that can improve the performance, flexibility, and usability of applications. GraphQL is one such technology, and it has gained significant attention for its ability to fetch data efficiently. Unlike a traditional REST API, which often requires multiple round trips to the server to gather related pieces of data, GraphQL lets developers retrieve all the needed data in a single request.
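As a minimal sketch of that single-request pattern, the query below fetches a user and their posts together. The endpoint URL and the schema fields (user, name, posts) are hypothetical, and Python's requests library stands in for any HTTP client.

```python
# Minimal sketch: fetching a user and their posts in one GraphQL request.
# The endpoint URL and schema fields (user, posts, title) are hypothetical.
import requests

query = """
query {
  user(id: "42") {
    name
    email
    posts {
      title
      published
    }
  }
}
"""

# A REST API would typically need GET /users/42 followed by GET /users/42/posts;
# GraphQL returns both in a single round trip.
response = requests.post(
    "https://api.example.com/graphql",
    json={"query": query},
    timeout=10,
)
response.raise_for_status()
print(response.json()["data"]["user"])
```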

The Imperative of Data Loss Prevention in the AI-Driven Enterprise

As organizations increasingly integrate artificial intelligence (AI) into their operations, the nature of data security is undergoing a significant transformation. With AI’s ability to ingest and process vast amounts of data quickly, the risk of data breaches and leaks has grown substantially. In this context, Data Loss Prevention (DLP) has (re)emerged as a critical component for IT professionals seeking to safeguard sensitive information.
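At its core, DLP means inspecting data before it crosses a trust boundary. The sketch below shows the simplest form of that idea, pattern-based scanning of an outbound prompt; the regular expressions and redaction policy are illustrative stand-ins, not a production rule set.

```python
# Minimal sketch of pattern-based DLP: scan outbound text for sensitive
# data before it is sent to an external AI service. The patterns and the
# redaction policy here are illustrative, not a production rule set.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

prompt = "Summarize: John's SSN is 123-45-6789 and his email is john@example.com."
clean, found = redact(prompt)
print(found)   # ['ssn', 'email']
print(clean)
```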

Gen AI Guardrails: 5 Risks to Your Business and How to Avoid Them

As businesses increasingly adopt generative AI (Gen AI) to enhance operations, customer engagement, and innovation, the need for robust AI guardrails has never been greater. While Gen AI offers transformative potential, it also introduces significant risks that can jeopardize your business if not properly managed. Below, we explore five critical risks associated with Gen AI and provide strategies to avoid them.

Why AI Guardrails Need Session-Level Monitoring: Stopping Threats That Slip Through the Cracks

AI guardrails are vital for ensuring the safe and responsible use of AI/large language models (LLMs). However, focusing solely on single prompt-level checks can leave organizations vulnerable to sophisticated threats. Many company policy violations and security risks can be cleverly split across multiple, seemingly innocent queries. To effectively protect against these threats, a more comprehensive approach is needed — session-level monitoring.
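To make the gap concrete, here is a minimal sketch of the difference between prompt-level and session-level checks. The keyword-combination policy is a deliberately simple stand-in for a real policy classifier; the point is that each prompt passes in isolation while the accumulated session trips the check.

```python
# Minimal sketch of session-level guardrails: each prompt may pass an
# individual check, while the accumulated session history trips the policy.
# The keyword-based policy check is a stand-in for a real classifier.
from collections import defaultdict

BLOCKED_COMBINATION = {"payroll", "export", "external email"}

session_history: dict[str, list[str]] = defaultdict(list)

def check_prompt(prompt: str) -> bool:
    """Prompt-level check: no single phrase is enough to block on its own."""
    hits = {kw for kw in BLOCKED_COMBINATION if kw in prompt.lower()}
    return len(hits) < len(BLOCKED_COMBINATION)

def check_session(session_id: str, prompt: str) -> bool:
    """Session-level check: evaluate the policy over the full history."""
    session_history[session_id].append(prompt.lower())
    combined = " ".join(session_history[session_id])
    return not all(kw in combined for kw in BLOCKED_COMBINATION)

# Split across three innocuous-looking prompts, the intent only becomes
# visible at the session level.
prompts = [
    "Where is the payroll data stored?",
    "How do I export a table to CSV?",
    "Can I send a CSV to an external email address?",
]
for p in prompts:
    print(check_prompt(p), check_session("user-1", p))
# The prompt-level check passes every time; the session-level check
# fails on the third prompt.
```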

Introducing Tines Workbench

You trust us with your most important workflows, and we take that trust seriously. In developing AI in Tines, we’ve been laser-focused on helping users leverage AI without exposing their organizations to security and privacy risks. But we also spoke with so many teams struggling to fully realize AI's potential impact. They wanted AI to do more, while still preserving those all-important security and privacy guardrails.

Twilio Breach: 33M Phone Numbers Exposed

A major security breach at Twilio exposed 33 million phone numbers through an unauthenticated API endpoint. Watch this video to understand the risks and learn essential API security practices that can protect your organization from similar threats.
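The baseline practice the incident underscores is that no endpoint returning user data should answer unauthenticated requests. Below is a minimal sketch of such a gate; Flask, the X-API-Key header, and the /lookup route are illustrative choices, not a description of Twilio's actual stack.

```python
# Minimal sketch of an authentication gate on an API endpoint, the kind of
# check whose absence enables enumeration of data like phone numbers.
# Flask and the /lookup route are illustrative, not Twilio's stack.
import hmac
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
API_KEYS = {"expected-key-from-a-secrets-store"}  # placeholder; never hardcode keys

@app.before_request
def require_api_key():
    # Reject any request lacking a valid key before it reaches a handler.
    key = request.headers.get("X-API-Key", "")
    if not any(hmac.compare_digest(key, valid) for valid in API_KEYS):
        abort(401)

@app.route("/lookup")
def lookup():
    # Rate limiting and per-key scoping would also apply here in practice.
    return jsonify({"registered": False})

if __name__ == "__main__":
    app.run()
```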

Protecting APIs from abuse using sequence learning and variable order Markov chains

Consider the case of a malicious actor attempting to inject, scrape, harvest, or exfiltrate data via an API. Such malicious activities are often characterized by the particular order in which the actor initiates requests to API endpoints. Moreover, the malicious activity is often not readily detectable using volumetric techniques alone, because the actor may intentionally execute API requests slowly, in an attempt to thwart volumetric abuse protection.
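To illustrate the underlying idea (only the idea; this is not the production implementation described in the post), the sketch below trains a first-order Markov chain on observed endpoint transitions and scores new sequences by their average per-transition log-probability. A variable order Markov model generalizes this by conditioning on longer, adaptively chosen histories. All endpoints and training sequences here are made up.

```python
# Minimal sketch of sequence-based abuse detection: model the order of API
# calls with a first-order Markov chain and flag sequences whose transitions
# are unlikely. Endpoints and training data are made up.
import math
from collections import Counter, defaultdict

def train(sequences):
    """Estimate transition probabilities with add-one smoothing."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    vocab = {endpoint for seq in sequences for endpoint in seq}
    model = {
        a: {b: (c[b] + 1) / (sum(c.values()) + len(vocab)) for b in vocab}
        for a, c in counts.items()
    }
    return model, vocab

def log_likelihood(model, vocab, seq):
    """Average per-transition log-probability; lower means more anomalous."""
    total = sum(
        math.log(model.get(a, {}).get(b, 1 / len(vocab)))
        for a, b in zip(seq, seq[1:])
    )
    return total / max(len(seq) - 1, 1)

normal = [["/login", "/profile", "/orders", "/orders"]] * 50
model, vocab = train(normal)

print(log_likelihood(model, vocab, ["/login", "/profile", "/orders"]))  # near zero: familiar order
print(log_likelihood(model, vocab, ["/orders", "/login", "/orders"]))   # strongly negative: odd order
```

Because the score depends only on the order of requests, a slow scraper with an unusual endpoint sequence still stands out even when volumetric thresholds are never crossed.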