
Latest News

Google's Vertex AI Platform Gets Freejacked

The Sysdig Threat Research Team (Sysdig TRT) recently discovered a new freejacking campaign abusing Google's Vertex AI platform for cryptomining. Freejacking is the abuse of free services, such as free trials, for financial gain; as a SaaS offering, Vertex AI is exposed to attacks of this kind as well as account takeovers. This campaign leverages free Coursera courses that give the attacker no-cost access to GCP and Vertex AI.

The rise of AI in software development

Generative artificial intelligence tools are significantly changing the world, and the software development landscape in particular; our webinar series will help you understand how. The popular press continues to reverberate with stories about the miracles of generative artificial intelligence (GAI) and machine learning (ML), and all the ways they might be used for good and for bad. There's hardly a tech company that isn't talking about how GAI/ML can enhance its offerings.

Meet Lookout SAIL: A Generative AI Tailored For Your Security Operations

Today, cybersecurity companies are in a never-ending race against cybercriminals, each side seeking new tactics to outpace the other. The newfound accessibility of generative artificial intelligence (gen AI) has revolutionized how people work, but it has also made threat actors more efficient: attackers can now quickly craft phishing messages or automate vulnerability discovery.

AI's Role in Cybersecurity: Black Hat USA 2023 Reveals How Large Language Models Are Shaping the Future of Phishing Attacks and Defense

At Black Hat USA 2023, a session led by a team of security researchers, including Fredrik Heiding, Bruce Schneier, Arun Vishwanath, and Jeremy Bernstein, presented an intriguing experiment: they tested large language models (LLMs) on both writing convincing phishing emails and detecting them. The accompanying technical paper is available as a PDF.
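The detection side of such an experiment can be sketched in a few lines. The prompt wording and scoring approach below are assumptions for illustration, not the researchers' actual methodology; the prompt would be sent to an LLM API of your choice, whose verdict and cited cues form the triage signal.

```python
# Hypothetical sketch of LLM-assisted phishing triage. The prompt text is an
# assumption, not the methodology from the Black Hat session.

def build_detection_prompt(email_body: str) -> str:
    """Wrap a raw email in a classification prompt for an LLM."""
    return (
        "You are a security analyst. Classify the following email as "
        "PHISHING or LEGITIMATE, and explain which cues (urgency, "
        "suspicious links, spoofed sender) informed your decision.\n\n"
        "--- EMAIL ---\n" + email_body
    )

suspicious = (
    "Your account has been locked. Click http://example.com/verify "
    "within 24 hours to avoid suspension."
)
prompt = build_detection_prompt(suspicious)
# `prompt` is now ready to submit to any chat-completion endpoint.
```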

The Risks of AI-Generated Code

AI is fundamentally transforming how we write, test, and deploy code. AI itself is not a new phenomenon; the term was coined in the 1950s. But with the more recent release of ChatGPT, generative AI has taken a huge step toward delivering this technology to the masses, and for development teams the potential is enormous. Today, AI represents the biggest change since the adoption of cloud computing. Using it to create code, however, comes with its own risks.
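One concrete illustration of that risk (my example, not from the article): a well-known failure mode is an assistant suggesting string-built SQL, which is injection-prone, when a parameterized query is the safe idiom.

```python
# Illustrative sketch: why AI-generated code needs human review.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Pattern an assistant might emit verbatim: string interpolation
    # into SQL, vulnerable to injection.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Reviewed version: a parameterized query treats the input as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
find_user_unsafe(payload)  # matches every row: the payload leaks data
find_user_safe(payload)    # matches nothing: the payload is inert
```

The generated code often runs and passes happy-path tests, which is exactly why this class of bug slips through without review.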

5 Intriguing Ways AI Is Changing the Landscape of Cyber Attacks

In today's world, cybercriminals are learning to harness the power of AI. Cybersecurity professionals must already contend with zero-days, insider threats, and supply chain attacks; now artificial intelligence (AI), and generative AI in particular, joins that list. AI can revolutionize industries, but cybersecurity leaders and practitioners should be mindful of its capabilities and ensure it is used effectively.

WormGPT and FraudGPT - The Rise of Malicious LLMs

As technology continues to evolve, there is growing concern about the potential for large language models (LLMs), like ChatGPT, to be used for criminal purposes. In this blog, we discuss two such LLM engines recently made available on underground forums: WormGPT and FraudGPT. If criminals possessed their own ChatGPT-like tool, the implications for cybersecurity, social engineering, and overall digital safety could be significant.

The Risks and Rewards of ChatGPT in the Modern Business Environment

ChatGPT continues to lead the news cycle and grow in popularity, with new applications for this innovative platform seemingly uncovered each day. But as interesting as the technology is, and as many efficiencies as it already provides to modern businesses, it is not without its risks.