Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

August 2023

How to use AI for software development and cybersecurity

We’ve seen how technology can evolve at warp speed, and AI has emerged as both a revolutionary force and a tantalizing enigma. Whether you're a seasoned developer seeking to expand your toolkit or a security enthusiast on a quest for clarity in the realm of AI, embarking on the journey to demystify this dynamic field can be both exhilarating and overwhelming.

How to recognize real AI in cybersecurity

The term artificial intelligence describes an IT system’s simulation of human intelligence processes, such as the ability to adapt, solve problems, or plan. Today’s artificial intelligence systems exhibit several of these capabilities and, with the advent of ChatGPT, their use has become widespread in everyday life. However, this has also led organizations to exploit the term "artificial intelligence," seeking to capitalize on its appeal.

Benefits and Uses of Artificial Intelligence for the IoT

Artificial Intelligence (AI) and the Internet of Things (IoT) are two of the most transformative technologies of the 21st century. The integration of AI and IoT has opened a whole new world of possibilities, with smart devices and systems that can learn and adapt to their environment, making them more efficient and effective. Fundamentally, AI is the ability of machines to learn from data and make decisions based on that data.

Artificial Intelligence in IoT: Enhancing Connectivity and Efficiency

Artificial intelligence (AI) and the Internet of Things (IoT) are two of the most talked-about technologies of recent years. AI refers to the ability of machines to learn and make decisions without human intervention. IoT, on the other hand, is a network of devices that are connected to the internet and can communicate with each other. The combination of these two technologies, known as AIoT, has the potential to revolutionize the way we live and work.

Protecto & DLP: Your Digital Shield for LLM - ChatGPT, Bard - Interactions

Dive into the world of Large Language Models (LLMs) like ChatGPT and Bard confidently. Learn how Protecto, combined with our innovative Data Loss Prevention (DLP) portal, ensures seamless interactions without compromising your sensitive data. Your AI conversations just got a whole lot safer!

Exploring the Digital Marketing Landscape in 2023: The New and Emerging Trends

As we navigate the transformative realm of 2023, digital marketing continues to evolve at an unparalleled pace, impacting how brands connect with consumers and drive business growth. The winds of change continue to steer digital marketing towards new territories, thanks to AI advancements, deeper personalization, and a more pronounced focus on ethics and sustainability. In this article, we delve into the new and emerging trends that are shaping digital marketing this year.

The parallels of AI and open source in software development

Parallels between the history of open source and the rise of AI in software development can teach us valuable AppSec lessons. The front-page hype about generative artificial intelligence (GAI) taking over software development from poor human developers has waned a bit. But there is no doubt that the technology will continue to transform the software development space over time.

CyberArk Global CIO on Balancing AI Opportunities and Risks

Generative artificial intelligence (AI) has officially arrived at the enterprise and is poised to disrupt everything from customer-facing applications and services to back-end data and infrastructure to workforce engagement and empowerment. Cyberattackers also stand to benefit: 93% of security decision makers expect AI-enabled threats to affect their organization in 2023, with AI-powered malware cited as the No. 1 concern.

Transforming Uncertainty into Certainty: Introducing Rubrik AI-Powered Cyber Recovery

Today, cyberattacks pose the most significant threat to an organization’s data. The Spring 2023 Rubrik Zero Labs report, based on research from over 1,600 IT and Security professionals, revealed that 99% of IT and security leaders were informed of at least one attack in their own environment in 2022.

Security Researchers Share Insights on Black Hat 2023 Topics and Trends

Shocking to no one: Artificial Intelligence (AI) was a huge topic at Black Hat USA 2023, but what did we learn about it? With no shortage of talks on it, there are many insights to take into account. We asked highly skilled Software Security Researchers who attended both Black Hat and DEFCON to weigh in on the most insightful moments, particularly related to AI. Here’s what we found.

Discover The Best AI Tools: Best Practices To Use Them Safely

AI tools have become increasingly popular in various industries as businesses recognize their potential to revolutionize processes and drive innovation. These tools leverage advanced algorithms and machine learning techniques to automate tasks, analyze vast amounts of data, and generate valuable insights. In 2022, around 35% of businesses worldwide used AI tools, and 61% of employees said AI helped improve their work productivity.

AI Automation Can Help, But Not Replace

Discover the symbiotic relationship between AI and human roles in business. While automation has its place, it doesn't supplant human presence. AI augments tasks, and you won't be replaced by AI but rather by someone empowered by it. Even small businesses face challenges affording AI integration. A real-world example from a solicitor's office sheds light on the reality for small to medium-sized businesses. Join the conversation about the delicate balance between technology and human touch in the modern business landscape.

Enhancing Code Security with Generative AI: Using Veracode Fix to Secure Code Generated by ChatGPT

Artificial Intelligence (AI) and companion coding can help developers write software faster than ever. However, as companies look to adopt AI-powered companion coding, they must be aware of the strengths and limitations of different approaches – especially regarding code security. Watch this 4-minute video to see a developer generate insecure code with ChatGPT, find the flaw with static analysis, and secure it with Veracode Fix to quickly develop a function without writing any code.

AI can crack your passwords. Here's how Keeper can help.

As AI becomes more advanced, it’s important to consider all the ways AI can be used maliciously by cybercriminals, especially when it comes to cracking passwords. While AI password-cracking techniques aren’t new, they’re becoming more sophisticated and posing a serious threat to your sensitive data. Thankfully, password managers like Keeper Security exist and can help you stay safe from AI-driven password threats.

Ransomware Attacks Surge as Generative AI Becomes a Commodity Tool in the Threat Actor's Arsenal

According to a new report, cybercriminals are making full use of AI to create more convincing phishing emails, generate malware, and more to increase the chances of ransomware attack success. I remember when the news of ChatGPT hit social media – it was everywhere. And, quickly, there were incredible amounts of content providing insight into how to make use of the AI tool to make money.

Do You Use ChatGPT at Work? These are the 4 Kinds of Hacks You Need to Know About.

From ChatGPT to DALL-E to Grammarly, there are countless ways to leverage generative AI (GenAI) to simplify everyday life. Whether you’re looking to cut down on busywork, create stunning visual content, or compose impeccable emails, GenAI’s got you covered. However, it’s vital to keep a close eye on your sensitive data at all times.

Q2 Privacy Update: AI Takes Center Stage, plus Six New US State Laws

The past three months witnessed several notable changes impacting privacy obligations for businesses. Coming into the second quarter of 2023, the privacy space was poised for action. In the US, state lawmakers worked to push through comprehensive privacy legislation on an unprecedented scale; children's data and health data emerged as major areas of concern; and AI regulation took center stage as we examined the intersection of data privacy and AI growth.

Can machines dream of secure code? From AI hallucinations to software vulnerabilities

As generative AI expands its reach, software development is not left untouched. Generative models — particularly Language Models (LMs), such as GPT-3, and those falling under the umbrella of Large Language Models (LLMs) — are increasingly adept at creating human-like text. This includes writing code.

Coffee Talk with SURGe: The Interview Series featuring Jake Williams

Join Audra Streetman and special guest Jake Williams (@MalwareJake) for a discussion about hiring in cybersecurity, interview advice, the challenges associated with vulnerability prioritization, Microsoft's Storm-0558 report, and Jake's take on the future of AI and LLMs in cybersecurity.

Dark AI tools: How profitable are they in the underground ecosystem?

Threat actors are constantly looking for new ways to achieve their goals, and the use of Artificial Intelligence (AI) is one novelty that could drastically change the underground ecosystem. The cybercrime community will see this new technology either as a business model (developers and sellers) or as a product for perpetrating attacks (buyers).

AI's Role in the Next Financial Crisis: A Warning from SEC Chair Gary Gensler

TL;DR - The future of finance is intertwined with artificial intelligence (AI), and according to SEC Chair Gary Gensler, it's not all positive. In fact, Gensler warned in a 2020 paper, written when he was still at MIT, that AI could be at the heart of the next financial crisis, and regulators might be powerless to prevent it. AI's Black Box Dilemma: AI-powered "black box" trading algorithms are a significant concern.

Google's Vertex AI Platform Gets Freejacked

The Sysdig Threat Research Team (Sysdig TRT) recently discovered a new Freejacking campaign abusing Google’s Vertex AI platform for cryptomining. Vertex AI is a SaaS, which makes it vulnerable to a number of attacks, such as Freejacking and account takeovers. Freejacking is the act of abusing free services, such as free trials, for financial gain. This freejacking campaign leverages free Coursera courses that provide the attacker with no-cost access to GCP and Vertex AI.

The rise of AI in software development

Generative artificial intelligence tools are changing the world and the software development landscape significantly. Our webinar series will help you understand how. The popular press continues to reverberate with stories about the miracles of generative artificial intelligence (GAI) and machine learning (ML), and all the ways it might be used for good—and for bad. There’s hardly a tech company that isn’t talking about how GAI/ML can enhance its offerings.

The Dark Side of AI: Unmasking its Threats

Artificial Intelligence (AI) has come roaring to the forefront of today’s technology landscape. It has revolutionized industries and will modernize careers, bringing numerous benefits and advancements to our daily lives. However, it is crucial to recognize that AI also introduces unseen impacts that must be understood and addressed for your employees and your organization as a whole. Watch James McQuiggan, Security Awareness Advocate at KnowBe4, in this thought-provoking on-demand webinar where he’ll discuss the unforeseen threats of AI and how to protect your network.

Meet Lookout SAIL: A Generative AI Tailored For Your Security Operations

Today, cybersecurity companies are in a never-ending race against cyber criminals, each seeking innovative new tactics to outpace the other. The newfound accessibility of generative artificial intelligence (gen AI) has revolutionized how people work, but it's also made threat actors more efficient. Attackers can now quickly create phishing messages or automate vulnerability discoveries.

AI's Role in Cybersecurity: Black Hat USA 2023 Reveals How Large Language Models Are Shaping the Future of Phishing Attacks and Defense

At Black Hat USA 2023, a session led by a team of security researchers, including Fredrik Heiding, Bruce Schneier, Arun Vishwanath, and Jeremy Bernstein, unveiled an intriguing experiment. They tested large language models (LLMs) to see how they performed in both writing convincing phishing emails and detecting them. The full technical paper is available as a PDF.

The Risks of AI-Generated Code

AI is fundamentally transforming how we write, test, and deploy code. However, AI is not a new phenomenon; the term was coined in the 1950s. With the more recent release of ChatGPT, generative AI has taken a huge step forward in delivering this technology to the masses. Especially for development teams, this has enormous potential. Today, AI represents the biggest change since the adoption of cloud computing. However, using it to create code comes with its own risks.

5 Intriguing Ways AI Is Changing the Landscape of Cyber Attacks

In today's world, cybercriminals are learning to harness the power of AI. Cybersecurity professionals must already be prepared for zero days, insider threats, and supply chain attacks, but now they must also contend with Artificial Intelligence (AI), specifically Generative AI. AI can revolutionize industries, but cybersecurity leaders and practitioners should be mindful of its capabilities and ensure it is used effectively.

WormGPT and FraudGPT - The Rise of Malicious LLMs

As technology continues to evolve, there is a growing concern about the potential for large language models (LLMs), like ChatGPT, to be used for criminal purposes. In this blog we will discuss two such LLM engines that were made available recently on underground forums, WormGPT and FraudGPT. If criminals were to possess their own ChatGPT-like tool, the implications for cybersecurity, social engineering, and overall digital safety could be significant.

The Risks and Rewards of ChatGPT in the Modern Business Environment

ChatGPT continues to lead the news cycle and increase in popularity, with new applications and uses seemingly uncovered each day for this innovative platform. However, as interesting as this solution is, and as many efficiencies as it is already providing to modern businesses, it’s not without its risks.

New AI Bot FraudGPT Hits the Dark Web to Aid Advanced Cybercriminals

By assisting with the creation of spear phishing emails and cracking tools, and with the verification of stolen credit cards, FraudGPT will only accelerate the frequency and efficiency of attacks. When ChatGPT became available to the public, I warned about its misuse by cybercriminals. Because of the “ethical guardrails” built into tools like ChatGPT, there’s only so far a cybercriminal can take the platform.

GenAI is Everywhere. Now is the Time to Build a Strong Culture of Security.

Since Nightfall’s inception in 2018, we’ve made it our mission to equip companies with the tools that they need to encourage safe employee innovation. Today, we’re happy to announce that we’ve expanded Nightfall’s capabilities to protect sensitive data across generative AI (GenAI) tools and the cloud. Our latest product suite, Nightfall for GenAI, consists of three products: Nightfall for ChatGPT, Nightfall for SaaS, and Nightfall for LLMs.

Worried About Leaking Data to LLMs? Here's How Nightfall Can Help.

Since the widespread launch of GPT-3.5 in November of last year, we’ve seen a meteoric rise in generative AI (GenAI) tools, along with an onslaught of security concerns from both countries and companies around the globe. Tech leaders like Apple have warned employees against using ChatGPT and GitHub Copilot, while other major players like Samsung have even gone so far as to ban GenAI tools entirely. Why are companies taking such drastic measures to prevent data leaks to LLMs, you may ask?

ChatGPT DLP (Data Loss Prevention) - Nightfall DLP for Generative AI

ChatGPT is a powerful AI utility that can be used for a variety of tasks, such as generating text, translating languages, and writing different kinds of creative content. However, it is important to use ChatGPT safely and securely to prevent data leaks, protect privacy, and reduce risk.

Code Mirage: How cyber criminals harness AI-hallucinated code for malicious machinations

The landscape of cybercrime continues to evolve, and cybercriminals are constantly seeking new methods to compromise software projects and systems. In a disconcerting development, cybercriminals are now capitalizing on AI-generated unpublished package names also known as “AI-Hallucinated packages” to publish malicious packages under commonly hallucinated package names.
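The attack described above works because developers install whatever dependency name an AI assistant suggests. One defensive habit is to treat every AI-suggested package as untrusted until verified. A minimal sketch in Python (the allowlist and package names here are hypothetical illustrations, not real recommendations; a production pipeline would also check the registry itself for package age, maintainer history, and download counts):

```python
# Flag AI-suggested dependency names that are not on a vetted allowlist
# before installing them. Hypothetical example names throughout.

VETTED_PACKAGES = {"requests", "numpy", "cryptography"}

def flag_unvetted(suggested):
    """Return the sorted subset of suggested package names not on the
    allowlist. Attackers register commonly hallucinated names, so any
    unrecognized name should be reviewed before `pip install`."""
    return sorted(set(suggested) - VETTED_PACKAGES)

suspicious = flag_unvetted(["requests", "auto-json-parserx"])
print(suspicious)  # the unrecognized, hallucinated-looking name is flagged
```

The point of the allowlist approach is that it fails closed: a hallucinated name that an attacker has squatted will never match a vetted entry, so it surfaces for human review instead of being installed silently.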

How Torq Socrates is Designed to Hyperautomate 90% of Tier-1 Analysis With Generative AI

Artificial intelligence (AI) has generated significant hype in recent years, and separating the promise from reality can be challenging. However, at Torq, AI is not just a concept. It is a reality that is revolutionizing the SOC field, specifically in the area of Tier-1 security analysis, especially as cybercriminals become more sophisticated in their tactics and techniques. Traditional security tools continue to fall short in detecting and mitigating these attacks effectively, particularly at scale.

Effective Access and Collaboration on Large Lab Datasets using Egnyte's Smart Cache

The life sciences industry is at the forefront of data-intensive research and innovation. Scientists and researchers rely heavily on the collection, processing, and analysis of vast amounts of data generated by lab instruments. And they are often challenged by errors or confusion in managing data flows that, in turn, have a direct impact on the quality of data and corresponding compliance with regulatory requirements.

2 (Realistic) Ways to Leverage AI In Cybersecurity

If you had to choose a security measure that would make the most difference to your cyber program right now, what would it be? Maybe you’d like to get another person on your team? Someone who is a skilled analyst, happy to do routine work and incredibly reliable. Or perhaps you’d prefer an investment that would give your existing team members back more of their time without compromising your ability to find and fix threats? What about human intelligence without human limitations?