
How AI Is Transforming Cybersecurity with Predictive Capabilities

Unless you've been avoiding the internet entirely, you've probably noticed the rise of sophisticated cyberattacks making headlines. From data breaches to ransomware, these threats aren't just increasing in number; they're becoming more complex and harder to detect. Enter artificial intelligence (AI), the unsung hero quietly reshaping cybersecurity. But how exactly does AI use its predictive superpowers to stay ahead of hackers? Let's dive in.
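
Under the hood, the "predictive" part usually comes down to models that learn what normal activity looks like and flag deviations before they become incidents. The sketch below illustrates that idea with an unsupervised anomaly detector over made-up login telemetry; the features, values, and contamination rate are assumptions for illustration, not anything taken from the article.

```python
# Minimal sketch: flagging anomalous login telemetry with an unsupervised model.
# The features (requests/min, failed logins, MB transferred) are illustrative
# assumptions; a production system would use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" sessions: [requests/min, failed logins, MB transferred]
normal = rng.normal(loc=[30, 1, 5], scale=[5, 1, 2], size=(500, 3))
# A few suspicious sessions: bursts of failed logins and heavy data transfer
suspicious = np.array([[200, 40, 300], [150, 25, 250]])

X = np.vstack([normal, suspicious])

# The model learns what "normal" looks like and scores outliers
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(X)

labels = model.predict(X)  # -1 = anomaly, 1 = normal
for row in X[labels == -1]:
    print("Potential threat:", np.round(row, 1))
```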

Healthcare Data Masking: Tokenization, HIPAA, and More

Healthcare data masking unlocks the incredible potential of healthcare data for analytics and AI applications. The insights it yields can revolutionize the industry, from improving patient care to streamlining operations. However, the use of such data is fraught with risk. In the United States, Protected Health Information (PHI) is regulated by the Health Insurance Portability and Accountability Act (HIPAA), which sets stringent requirements to safeguard patient privacy.
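
As a rough illustration of what masking via tokenization can look like in practice, here is a minimal sketch that replaces direct identifiers with stable, keyed tokens before a record leaves the protected environment. The keyed-HMAC approach, field names, and key handling are assumptions for illustration; the article's own techniques and any HIPAA de-identification determination may differ.

```python
# Minimal sketch of deterministic tokenization for PHI fields, assuming a
# keyed-HMAC scheme. Real deployments typically use a dedicated token vault,
# format-preserving tokens, and a key pulled from a KMS, and must be validated
# against HIPAA guidance.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: sourced from a KMS

def tokenize(value: str) -> str:
    """Map a PHI value (e.g., a name or SSN) to a stable, non-reversible token."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"tok_{digest[:16]}"

record = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "J45.909"}

# Mask direct identifiers before the record is shared for analytics
masked = {**record, "name": tokenize(record["name"]), "ssn": tokenize(record["ssn"])}
print(masked)
```

Because the mapping is deterministic, the same patient tokenizes to the same value across datasets, which keeps joins for analytics possible without exposing the raw identifier.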

Strengthen LLMs with Sysdig Secure

The term LLMjacking refers to attackers using stolen cloud credentials to gain unauthorized access to cloud-based large language models (LLMs), such as OpenAI's GPT or Anthropic's Claude. This blog shows how to strengthen LLMs with Sysdig. Criminals exploit stolen credentials or cloud misconfigurations to reach expensive artificial intelligence (AI) models in the cloud; once inside, they can run costly AI workloads at the victim's expense.
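
The detection idea is simpler than it sounds: watch for model-invocation API calls coming from identities that have no business using the AI service. The sketch below illustrates that pattern over CloudTrail-style audit records; the event names, ARNs, and log shape are assumptions for illustration and do not represent Sysdig Secure's actual detection rules.

```python
# Minimal sketch of the detection idea behind LLMjacking alerts: flag
# model-invocation events made by identities that normally never touch the
# AI service. The log format is an assumption loosely modeled on
# CloudTrail-style records, not Sysdig Secure's rule engine.
import json

ALLOWED_PRINCIPALS = {"arn:aws:iam::111122223333:role/ml-team"}
MODEL_EVENTS = {"InvokeModel", "InvokeModelWithResponseStream"}  # assumed event names

def suspicious_invocations(audit_log_lines):
    """Yield model-invocation events from principals outside the allowlist."""
    for line in audit_log_lines:
        event = json.loads(line)
        if (event.get("eventName") in MODEL_EVENTS
                and event.get("userIdentity", {}).get("arn") not in ALLOWED_PRINCIPALS):
            yield event

sample = [
    json.dumps({"eventName": "InvokeModel",
                "userIdentity": {"arn": "arn:aws:iam::111122223333:user/compromised-dev"},
                "sourceIPAddress": "203.0.113.50"}),
]

for event in suspicious_invocations(sample):
    print("LLMjacking suspect:", event["userIdentity"]["arn"], event["sourceIPAddress"])
```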

Predicting cybersecurity trends in 2025: AI, regulations, global collaboration

Cybersecurity involves anticipating threats and designing adaptive strategies in a constantly changing environment. In 2024, organizations faced complex challenges due to technological advances and sophisticated threats, requiring them to constantly review their approach. For 2025, it is crucial to identify key factors that will enable organizations to strengthen their defenses and consolidate their resilience in the face of a dynamic and risk-filled digital landscape.

LLMs - The what, why and how

LLMs are based on neural network architectures, with transformers being the dominant framework. Introduced in 2017, transformers rely on attention mechanisms to capture the relationships between words or tokens in text, making them highly effective at understanding and generating coherent language. Practical example: GPT (Generative Pre-trained Transformer) models like GPT-4 are structured with billions of parameters that determine how the model processes and generates language.
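
For readers who prefer to see the attention mechanism rather than read about it, here is a minimal single-head, scaled dot-product attention sketch in plain NumPy; it omits the learned projections, masking, and multi-head machinery of a real transformer.

```python
# Minimal sketch of scaled dot-product attention, the core operation of the
# transformer architecture: each token's output is a weighted mix of all
# tokens' value vectors, with weights derived from query/key similarity.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how much each token attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the key dimension
    return weights @ V                               # weighted mix of value vectors

# Toy example: 4 tokens, each embedded in 8 dimensions, attending to themselves
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8): one context-aware vector per token
```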

AI-Powered Investment Scams Surge: How 'Nomani' Steals Money and Data

Cybersecurity researchers are warning about a new breed of investment scam that combines AI-powered video testimonials, social media malvertising, and phishing tactics to steal money and personal data. Known as Nomani — a play on "no money" — this scam grew by over 335% in H2 2024, with more than 100 new URLs detected daily between May and November, according to ESET's H2 2024 Threat Report.

4 tips for securing GenAI-assisted development

Gartner predicts that generative AI (GenAI) will become a critical workforce partner for 90% of companies by next year. In application development specifically, we see developers turning to code assistants like GitHub Copilot and Google Gemini Code Assist to help them build software at unprecedented speed. But while GenAI can power new levels of productivity and speed, it also introduces new threats and challenges for application security teams.
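
One guardrail such tips typically come down to is reviewing AI-suggested code before it lands. The sketch below shows a toy pre-merge check that flags obviously risky patterns in a generated snippet; the regexes are illustrative assumptions, and real teams would lean on a full SAST and secret-scanning pipeline instead.

```python
# Minimal sketch of a pre-merge check on AI-suggested code: flag obviously
# risky patterns such as hard-coded secrets or shell injection. The regexes
# are illustrative assumptions, not a substitute for a real SAST tool.
import re

RISKY_PATTERNS = {
    "hard-coded secret": re.compile(
        r"(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "shell injection risk": re.compile(
        r"subprocess\.(call|run|Popen)\([^)]*shell\s*=\s*True"),
}

def review_snippet(snippet: str):
    """Return (issue, line_number) findings for an AI-generated snippet."""
    findings = []
    for lineno, line in enumerate(snippet.splitlines(), start=1):
        for issue, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((issue, lineno))
    return findings

suggested = 'API_KEY = "sk-live-1234"\nsubprocess.run(cmd, shell=True)\n'
for issue, lineno in review_snippet(suggested):
    print(f"line {lineno}: {issue}")
```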