
OpenAI Transparency Report Highlights How GPT-4 Can Be Used to Aid Both Sides of the Cybersecurity Battle

The nature of an advanced artificial intelligence (AI) engine such as ChatGPT lends itself to both use and misuse, potentially empowering security teams and threat actors alike. I’ve previously covered examples of how ChatGPT and other AI engines like it can be used to craft believable business-related phishing emails, malicious code, and more for the threat actor.

Guarding Against AI-Enabled Social Engineering: Lessons from a Data Scientist's Experiment

The Verge published an article that got my attention. As artificial intelligence continues to advance at an unprecedented pace, the potential for its misuse in the realm of information security grows in parallel. A recent experiment by data scientist Izzy Miller shows another angle: Miller managed to clone his best friends' group chat using AI, downloading 500,000 messages from a seven-year-long group chat and training an AI language model to replicate his friends' conversations.

Salt Unveils Enhancements to AI Algorithms for API Security

We’re pleased to share that Salt has extended the capabilities of our powerful AI algorithms, further strengthening the threat detection and API discovery abilities of the Salt Security API Protection Platform. (Check out today’s announcement.) Here at Salt, we always look forward to the RSA Conference, but this year we are doubly excited to attend and showcase these new advanced capabilities! Salt invests significant resources into the continued innovation of our API security platform.

[Head Start] Effective Methods For Teaching Social Engineering To An AI

Remember The Sims? Well, Stanford created a small virtual world with 25 ChatGPT-powered "people". The simulation ran for two days and showed that AI-powered bots can interact in a very human-like way: they planned a party, coordinated the event, and attended it within the sim. A summary of it can be found on the Cornell University website. That page also has a download link for a PDF of the entire paper (via Reddit).

What Are the Security Implications of AI Coding?

AI coding is here, and it’s transforming the way we create software. The use of AI in coding is actively revolutionizing the industry, reportedly increasing developer productivity by as much as 55%. However, just because we can use AI in coding doesn't mean we should adopt it blindly, without considering the potential risks and unintended consequences.

Win The AI Wars To Enhance Security And Decrease Cyber Risk

With all the overwrought hype around ChatGPT and AI (much of it earned), you could be forgiven for thinking that only the bad actors are going to be using these advanced technologies and that the rest of us are at their mercy. But this is not an asymmetric battle where the bad actors use AI while the rest of us struggle with pencils and abacuses to catch up. It is the good side that invented and is accelerating AI; it is the good scientists who made ChatGPT and all of its competitors.

Recent Artificial Intelligence Hype Is Used for Phishbait

Anticipation leads people to suspend their better judgment, and a new credential-theft campaign exploits that excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are running fake Facebook ads that advertise a free download of AI tools like ChatGPT and Google Bard.