
DevSecOps for OpenAI: detecting sensitive data shared with generative AIs

It is clear a new technology is taking hold when it becomes impossible to avoid hearing about it. That’s the case with generative AI. Large language models (LLMs) like OpenAI’s GPT-4 and the more approachable ChatGPT are making waves the world over. Generative AI is exciting, and it’s causing a real fear of missing out for tech companies as they try to match competitors.

Answering Key Questions About Embracing AI in Cybersecurity

As we witness a growing number of cyber-attacks and data breaches, the demand for advanced cybersecurity solutions is becoming critical. Artificial intelligence (AI) has emerged as a powerful contender to help solve pressing cybersecurity problems. Let’s explore the benefits, challenges, and potential risks of AI in cybersecurity using a Q&A composed of questions I hear often.

Can AI write secure code?

AI is advancing at a stunning rate, with new tools and use cases being discovered and announced every week, from writing poems to securing networks. Researchers aren’t completely sure what new AI models such as GPT-4 are capable of, which has led some big names such as Elon Musk and Steve Wozniak, alongside AI researchers, to call for a six-month halt on training more powerful models so focus can shift to developing safety protocols and regulations.

Trustwave Answers 11 Important Questions on ChatGPT

ChatGPT can arguably be called the breakout software introduction of the last 12 months, generating both amazement at its potential and concerns that threat actors will weaponize and use it as an attack platform. Karl Sigler, Senior Security Research Manager, SpiderLabs Threat Intelligence, along with Trustwave SpiderLabs researchers, has been tracking ChatGPT’s development over the last several months.

AI, Cybersecurity, and Emerging Regulations

The SecurityScorecard team has just returned from an exciting week in San Francisco at RSA Conference 2023. This year’s theme, “Stronger Together,” was meant to encourage collaboration and remind attendees that when it comes to cybersecurity, no one goes it alone. Building on each other’s diverse knowledge and skills is what creates breakthroughs.

Heart of the Matter: How LLMs Can Show Political Bias in Their Outputs

Wired just published an interesting story about political bias that can show up in LLMs due to their training. It is becoming clear that training an LLM to exhibit a certain bias is relatively easy. This is cause for concern, because such bias can “reinforce entire ideologies, worldviews, truths and untruths,” which is exactly what OpenAI has been warning about.

Does ChatGPT Have Cybersecurity Tells?

Poker players and other human lie detectors look for “tells,” that is, signs by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when he’s about to bluff, for example, or someone’s pupils dilate when they’ve successfully drawn to an inside straight.

What Is DLP and How Does It Work?

Data loss prevention, or DLP for short, is a technology that helps companies protect their data from unauthorized access or theft. It does this by scanning incoming and outgoing data for sensitive information and blocking that data from leaving the company’s network. In this blog post, we will discuss what DLP is and how it works!
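The scan-then-block idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the pattern names, regexes, and policy function are invented for this example and are nowhere near a production DLP ruleset, which typically combines pattern matching with fingerprinting, checksums, and contextual analysis.

```python
import re

# Hypothetical detection rules for illustration; real DLP rulesets are
# far broader and use validation (e.g. Luhn checks) beyond plain regex.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # naive card-number shape
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),  # common key prefix style
}

def scan_outgoing(text: str) -> list[str]:
    """Return the names of sensitive-data categories found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_transfer(text: str) -> bool:
    """Policy sketch: block the transfer if any sensitive category matches."""
    return not scan_outgoing(text)
```

A prompt on its way to an external LLM could be run through `allow_transfer` first: `allow_transfer("please summarize this report")` passes, while a message containing something shaped like an SSN would be flagged by `scan_outgoing` and blocked.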