
The Difference Between Cybersecurity AI and Machine Learning

In what feels like ten minutes, cybersecurity AI and machine learning (ML) have gone from a concept pioneered by a handful of companies, including SenseOn, to a technology that seems to be everywhere. In a recent SenseOn survey, over 80% of IT teams said AI-powered tools would be the most impactful investment their security operations centre (SOC) could make.

Five worthy reads: How non-human identities are shaping the cybersecurity landscape

Five worthy reads is a regular column on five noteworthy items we have discovered while researching trending and timeless topics. This week’s article explains what non-human identities are and why they are garnering attention today. Today’s digital environment is burgeoning with technological advancements across various spheres, and cybersecurity is no exception. We are in an era where automation, cloud computing, and AI increasingly act on systems without direct human involvement.

EP 65 - Machine Identities, AI and the Future of Security with the 'Identity Jedi'

In this episode of the Trust Issues podcast, host David Puner and David Lee, aka “The Identity Jedi,” delve into the evolving landscape of identity security. They discuss the critical challenges and advancements in securing both human and machine identities. Lee shares insights on the fear and misconceptions surrounding AI, drawing parallels to pop culture references like Marvel’s Jarvis.

The Truth About How Generative AI Can Be Used In Cybersecurity

Thanks to ChatGPT, you’ve probably heard a lot about generative AI technology over the last few years. Generative AI is artificial intelligence that takes input data, such as a request, processes it through one or more algorithms, and produces an output based on learned patterns. ChatGPT is a generative AI chatbot. 91% of security teams report using generative AI, yet 65% admit they don’t fully understand its implications.
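The "learned patterns" idea can be illustrated with a deliberately tiny sketch: a bigram model that learns which word tends to follow which in a training corpus, then generates output from an input seed. This is not how an LLM like ChatGPT works internally (those use neural networks over far richer representations); it is only a minimal, self-contained analogy for the input → learned patterns → output pipeline, with an invented toy corpus.

```python
import random
from collections import defaultdict

# "Training": record which word follows which in a toy corpus.
# These transition counts are the model's learned patterns.
corpus = "the model learns patterns and the model generates text from learned patterns".split()
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 5) -> str:
    """Produce output from an input seed by sampling learned transitions."""
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Real generative models replace the bigram table with billions of learned parameters, but the shape of the process is the same: the output is sampled from patterns extracted at training time, not retrieved verbatim.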

Phishing Campaign Impersonates OpenAI To Collect Financial Data

Cybercriminals are impersonating OpenAI in a widespread phishing campaign designed to trick users into handing over financial information. The emails inform users that a payment for their ChatGPT subscription was declined and invite them to click a link to update their payment method. The phishing emails appear fairly convincing, but trained users could spot some red flags. The most obvious giveaway is that the emails were sent from “info@mtacom,” which is clearly unrelated to OpenAI.
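The sender-domain red flag described above can be checked mechanically. The sketch below is a simple heuristic, not a complete defence: the allow-list of legitimate domains is an assumption for illustration, and a `From:` header can be spoofed, so real mail filtering also relies on SPF/DKIM/DMARC verification.

```python
from email.utils import parseaddr

# Hypothetical allow-list of domains a genuine OpenAI billing email
# might come from -- an assumption for illustration, not an official list.
LEGITIMATE_DOMAINS = {"openai.com", "mail.openai.com"}

def looks_suspicious(from_header: str) -> bool:
    """Flag a message whose From: domain is not on the expected allow-list."""
    _, address = parseaddr(from_header)          # strips any display name
    domain = address.rpartition("@")[2].lower()  # text after the last "@"
    return domain not in LEGITIMATE_DOMAINS

print(looks_suspicious("OpenAI Billing <info@mtacom>"))  # the campaign's sender
print(looks_suspicious("billing@openai.com"))
```

Note that the display name ("OpenAI Billing") is attacker-controlled and ignored here on purpose; only the address domain is compared, which is exactly the mismatch a trained user would spot in this campaign.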

CrowdStrike Launches AI Red Team Services to Secure AI Innovation

As organizations race to adopt generative AI (GenAI) to drive efficiency and innovation, they face a new and urgent security challenge. While AI-driven tools and large language models (LLMs) open vast opportunities, they also introduce unique vulnerabilities that adversaries are quick to exploit. From data exposure to supply-chain risks, the potential for threats to AI systems is growing just as fast as the technology itself.