
AI

ChatGPT Data Breach Breakdown

OpenAI has confirmed a data breach stemming from a vulnerability in an open-source dependency, Redis. The flaw allowed threat actors to view chat history belonging to other active users. But this raises a bigger question: how can we secure ChatGPT? In this video, I use some interesting data to argue that ChatGPT should be part of every organization's threat landscape, and that banning ChatGPT won't help the situation.

Consolidation, Flexibility, ChatGPT, & Other Key Takeaways from Netskopers at RSA Conference 2023

At RSA Conference 2023, a number of Netskopers from across the organization who attended the event in San Francisco shared commentary on the trends, topics, and takeaways from this year’s conference.

DevSecOps for OpenAI: detecting sensitive data shared with generative AIs

It is clear a new technology is taking hold when it becomes impossible to avoid hearing about it. That’s the case with generative AI. Large language models (LLMs) like OpenAI’s GPT-4 and the more approachable ChatGPT are making waves the world over. Generative AI is exciting, and it’s causing a real fear of missing out for tech companies as they try to match competitors.
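The detection described above can be illustrated with a minimal sketch: scanning outbound prompts for sensitive data before they reach a generative AI service. The pattern names and the `scan_prompt` helper below are hypothetical placeholders for illustration; a real DevSecOps deployment would rely on vetted DLP detectors rather than this small regex list.

```python
import re

# Hypothetical patterns for a few common sensitive-data types.
# A production DLP tool would use vetted, far more thorough detectors.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data types detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Example: block a prompt that leaks an email address and a cloud key.
prompt = "Summarize this log: user=alice@example.com key=AKIA1234567890ABCDEF"
findings = scan_prompt(prompt)
if findings:
    print("Blocked: prompt contains", findings)
```

A check like this would typically run in a proxy or browser extension sitting between users and the AI service, so flagged prompts can be blocked or redacted before leaving the organization.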

Answering Key Questions About Embracing AI in Cybersecurity

As we witness a growing number of cyber-attacks and data breaches, the demand for advanced cybersecurity solutions is becoming critical. Artificial intelligence (AI) has emerged as a powerful contender to help solve pressing cybersecurity problems. Let’s explore the benefits, challenges, and potential risks of AI in cybersecurity using a Q&A composed of questions I hear often.

Can AI write secure code?

AI is advancing at a stunning rate, with new tools and use cases being discovered and announced every week, from writing poems all the way through to securing networks. Researchers aren't completely sure what new AI models such as GPT-4 are capable of, which has led some big names such as Elon Musk and Steve Wozniak, alongside AI researchers, to call for a six-month halt on training more powerful models so that focus can shift to developing safety protocols and regulations.

Trustwave Answers 11 Important Questions on ChatGPT

ChatGPT can arguably be called the breakout software introduction of the last 12 months, generating both amazement at its potential and concerns that threat actors will weaponize and use it as an attack platform. Karl Sigler, Senior Security Research Manager, SpiderLabs Threat Intelligence, along with Trustwave SpiderLabs researchers, has been tracking ChatGPT’s development over the last several months.

AI, Cybersecurity, and Emerging Regulations

The SecurityScorecard team has just returned from an exciting week in San Francisco at RSA Conference 2023. This year’s theme, “Stronger Together,” was meant to encourage collaboration and remind attendees that when it comes to cybersecurity, no one goes it alone. Building on each other’s diverse knowledge and skills is what creates breakthroughs.