Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Security Researchers Share Insights on Black Hat 2023 Topics and Trends

Shocking to no one: Artificial Intelligence (AI) was a huge topic at Black Hat USA 2023, but what did we learn about it? With no shortage of talks on the subject, there are many insights to take into account. We asked highly skilled software security researchers who attended both Black Hat and DEFCON to weigh in on the most insightful moments, particularly those related to AI. Here’s what we found.

Discover The Best AI Tools: Best Practices To Use Them Safely

AI tools have become increasingly popular across industries as businesses recognize their potential to revolutionize processes and drive innovation. These tools leverage advanced algorithms and machine learning techniques to automate tasks, analyze vast amounts of data, and generate valuable insights. In 2022, around 35% of businesses worldwide used AI tools, and 61% of employees said AI helped improve their work productivity.

AI Automation Can Help, But Not Replace

Discover the symbiotic relationship between AI and human roles in business. While automation has its place, it doesn't supplant human presence: AI augments tasks, and you won't be replaced by AI but rather by someone empowered by it. Even so, small businesses face challenges affording AI integration. A real-world example from a solicitor's office sheds light on the reality for small to medium-sized businesses. Join the conversation about the delicate balance between technology and human touch in the modern business landscape.

Enhancing Code Security with Generative AI: Using Veracode Fix to Secure Code Generated by ChatGPT

Artificial Intelligence (AI) and companion coding can help developers write software faster than ever. However, as companies look to adopt AI-powered companion coding, they must be aware of the strengths and limitations of different approaches – especially regarding code security. Watch this 4-minute video to see a developer generate insecure code with ChatGPT, find the flaw with static analysis, and secure it with Veracode Fix to quickly develop a function without writing any code.
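As an illustrative sketch of the kind of flaw the video walks through (this is a hypothetical example, not the code or tooling output shown in the video), AI assistants often emit SQL built by string concatenation, which static analysis flags as injectable. A parameterized query is the standard fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated directly into the SQL string,
    # so input like "x' OR '1'='1" rewrites the query (SQL injection).
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With the injection payload `x' OR '1'='1`, the unsafe version returns every row in the table, while the parameterized version correctly returns nothing.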

AI can crack your passwords. Here's how Keeper can help.

As AI becomes more advanced, it’s important to consider all the ways AI can be used maliciously by cybercriminals, especially when it comes to cracking passwords. While AI password-cracking techniques aren’t new, they’re becoming more sophisticated and pose a serious threat to your sensitive data. Thankfully, password managers like Keeper Security exist and can help you stay safe from AI-driven password threats.
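The underlying math hasn't changed: what AI accelerates is guessing, so keyspace size is still what protects you. A back-of-the-envelope sketch (the guesses-per-second rate is an illustrative assumption, not a benchmark of any real cracking rig) shows why the long random passwords a manager generates hold up:

```python
def keyspace(charset_size: int, length: int) -> int:
    # Total number of candidate passwords for a given alphabet and length.
    return charset_size ** length

def worst_case_years(charset_size: int, length: int, guesses_per_second: float) -> float:
    # Time to exhaust the entire keyspace at a fixed guessing rate.
    seconds = keyspace(charset_size, length) / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

RATE = 1e10  # guesses/second -- an assumed rate for illustration only

# 8 lowercase letters: 26^8 candidates -- exhausted in under a minute at this rate.
short_weak = worst_case_years(26, 8, RATE)

# 16 random printable-ASCII characters (~94 symbols): astronomically long.
long_strong = worst_case_years(94, 16, RATE)
```

The exact rate matters far less than the exponent: each extra random character multiplies the attacker's work by the alphabet size, which is why generated 16+ character passwords stay out of practical reach even as guessing gets faster.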

Ransomware Attacks Surge as Generative AI Becomes a Commodity Tool in the Threat Actor's Arsenal

According to a new report, cybercriminals are making full use of AI to create more convincing phishing emails, generate malware, and more, all to increase the chances of a successful ransomware attack. I remember when the news of ChatGPT hit social media: it was everywhere. And, quickly, there was an incredible amount of content offering insight into how to use the AI tool to make money.

Do You Use ChatGPT at Work? These are the 4 Kinds of Hacks You Need to Know About.

From ChatGPT to DALL-E to Grammarly, there are countless ways to leverage generative AI (GenAI) to simplify everyday life. Whether you’re looking to cut down on busywork, create stunning visual content, or compose impeccable emails, GenAI’s got you covered. However, it’s vital to keep a close eye on your sensitive data at all times.

Q2 Privacy Update: AI Takes Center Stage, plus Six New US State Laws

The past three months witnessed several notable changes impacting privacy obligations for businesses. Coming into the second quarter of 2023, the privacy space was poised for action. In the US, state lawmakers worked to push through comprehensive privacy legislation on an unprecedented scale, we saw a major focus on children's data and health data as areas of concern, and AI regulation took center stage as we examined the intersection of data privacy and AI growth.

Can machines dream of secure code? From AI hallucinations to software vulnerabilities

As generative AI expands its reach, software development is not left untouched. Generative models, particularly language models such as GPT-3 and the broader class of large language models (LLMs), are increasingly adept at creating human-like text. This includes writing code.

Coffee Talk with SURGe: The Interview Series featuring Jake Williams

Join Audra Streetman and special guest Jake Williams (@MalwareJake) for a discussion about hiring in cybersecurity, interview advice, the challenges associated with vulnerability prioritization, Microsoft's Storm-0558 report, and Jake's take on the future of AI and LLMs in cybersecurity.