
ChatGPT security risks: defending against chatbots

AI chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude, Meta AI and Google Gemini have already demonstrated their transformative potential for businesses, but they also present novel security threats that organisations can’t afford to ignore. In this blog post, we dig deep into ChatGPT security, outline how chatbots are being used to execute low-sophistication attacks, phishing campaigns and other malicious activity, and share key recommendations to help safeguard your business.

Chatbot security risks continue to proliferate

While the rise of ChatGPT and other AI chatbots has been hailed as a business game-changer, it is increasingly being seen as a critical security issue. Previously, we outlined the challenges created by ChatGPT and other forms of AI. In this blog post, we look at the growing threat from AI-associated cyber-attacks and discuss new guidance from the National Institute of Standards and Technology (NIST).

Rendezvous with a Chatbot: Chaining Contextual Risk Vulnerabilities

Ignoring the little stuff is never a good idea. Anyone who has pretended that the small noise their car engine is making is unimportant, only to later find themselves stuck on the side of the road with a dead motor, will understand this statement. The same holds true when it comes to dealing with minor vulnerabilities in a web application. Several small issues that alone do not amount to much can, in fact, prove dangerous, if not fatal, when strung together by a threat actor.

Friend or foe: AI chatbots in software development

Yes, AI chatbots can write code very fast, but you still need human oversight and security testing in your AppSec program. Chatbots are taking the tech world and the rest of the world by storm—for good reason. Artificial intelligence (AI) large language model (LLM) tools can write things in seconds that would take humans hours or days—everything from research papers to poems to press releases, and yes, to computer code in multiple programming languages.
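To make the oversight point concrete, here is a hedged illustration of the kind of flaw that security testing is meant to catch in generated code. The SQL-injection pattern shown is a common example of insecure output, not a claim about any specific chatbot; the function names are invented for this sketch:

```python
import sqlite3


def find_user_unsafe(conn, username):
    # Anti-pattern sometimes seen in generated code: SQL built by
    # string interpolation, which lets input rewrite the query.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value as data,
    # never as SQL, so injection payloads match nothing.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With a classic payload like `x' OR '1'='1`, the unsafe version returns every row in the table while the safe version returns none, which is exactly the class of behavioral difference that automated AppSec testing flags.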

ChatGPT: Dispelling FUD, Driving Awareness About Real Threats

ChatGPT is an artificial intelligence chatbot created by OpenAI that reached 1 million users by the end of 2022. It is able to generate fluent responses given specific inputs. It is a variant of the GPT (Generative Pre-trained Transformer) model and, according to OpenAI, it was trained using Reinforcement Learning from Human Feedback (RLHF), following the same methods as InstructGPT. Due to its flexibility and ability to mimic human behavior, ChatGPT has raised concerns in several areas, including cybersecurity.

Leveraging Microsoft Teams webhooks to create a Tines chatbot

Microsoft requires users to go through the Microsoft Developer Portal to create new Teams applications, such as chatbots. At Tines, we thought it might be helpful to provide instructions for alternative options if you don't want to create a chatbot in the portal. For those who would prefer to send messages directly to a Teams channel instead of configuring a chatbot, Microsoft Teams can receive messages in a channel via a webhook.
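The webhook route described above boils down to an HTTP POST of a JSON payload to the channel's incoming-webhook URL. A minimal stdlib-only sketch is below; the webhook URL is a placeholder you would copy from your Teams channel configuration, and the MessageCard fields shown are a simple illustrative payload rather than the full card schema:

```python
import json
import urllib.request


def build_teams_message(title, text):
    # Minimal MessageCard-style payload for a Teams incoming webhook.
    return {
        "@type": "MessageCard",
        "@context": "https://schema.org/extensions",
        "summary": title,
        "title": title,
        "text": text,
    }


def post_to_teams(webhook_url, payload):
    # POST the JSON payload to the channel's incoming-webhook URL.
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # Placeholder URL: replace with the one generated for your channel.
    url = "https://example.webhook.office.com/webhookb2/..."
    post_to_teams(url, build_teams_message("Alert", "Suspicious login detected"))
```

In an automation platform like Tines, the same POST would typically be issued by an HTTP request action rather than hand-written code, but the payload shape is the same.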

How Chatbot Automation Benefits Security Teams

When you hear the term “chatbot,” your mind may at first turn to things like robotic customer support services on retail websites – a relatively mundane use case for chatbots, and one that is probably hard to get excited about if you’re a security engineer. But the fact is that chatbots can do much more than provide customer support.