Ignoring the little stuff is never a good idea. Anyone who has dismissed a small noise from their car engine as unimportant, only to end up stranded on the roadside with a dead motor, will understand this. The same holds true for minor vulnerabilities in a web application: several small issues that alone do not amount to much can prove dangerous, if not fatal, when strung together by a threat actor.
Yes, AI chatbots can write code very fast, but you still need human oversight and security testing in your AppSec program. Chatbots are taking the tech world and the rest of the world by storm—for good reason. Artificial intelligence (AI) large language model (LLM) tools can write things in seconds that would take humans hours or days—everything from research papers to poems to press releases, and yes, to computer code in multiple programming languages.
Chatbots now seem to be embedded in every digital solution at our disposal, but despite the simplicity and ease these tools provide, cybersecurity concerns have been raised over their ability to spread misinformation, help attackers develop malware, and even leak sensitive data.
In late 2022, artificial intelligence (AI) chat and conversational bots garnered large followings and user bases. AI chatbots, including OpenAI's ChatGPT, Meta's BlenderBot 3, and DeepMind's Sparrow, have numerous benefits and uses, including potentially replacing current search engines, but there are notable drawbacks.
ChatGPT is an artificial intelligence chatbot created by OpenAI that reached 1 million users by the end of 2022. It generates fluent responses to user prompts. It is a variant of the GPT (Generative Pre-trained Transformer) model and, according to OpenAI, was trained using Reinforcement Learning from Human Feedback (RLHF), building on the methods behind InstructGPT. Due to its flexibility and ability to mimic human behavior, ChatGPT has raised concerns in several areas, including cybersecurity.
Microsoft requires users to go through the Microsoft Developer Portal to create new Teams applications, such as chatbots. At Tines, we thought it might be helpful to provide instructions for an alternative if you don't want to create a chatbot in the portal: for those who would rather send messages directly to a Teams channel than configure a chatbot, Microsoft Teams can receive messages in a channel via an incoming webhook.
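As a minimal sketch of that webhook approach: a Teams incoming webhook accepts an HTTP POST with a JSON body whose `text` field becomes the channel message. The URL below is a placeholder for the webhook URL you would generate for your own channel, and the helper names (`build_payload`, `send_to_teams`) are our own for illustration.

```python
import json
import urllib.request

# Placeholder: substitute the incoming-webhook URL generated for your
# Teams channel (Channel > Connectors > Incoming Webhook).
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/your-webhook-id"


def build_payload(text: str) -> dict:
    """Build the minimal JSON body a Teams incoming webhook accepts."""
    return {"text": text}


def send_to_teams(text: str, url: str = WEBHOOK_URL) -> int:
    """POST a message to the channel's webhook; returns the HTTP status code."""
    body = json.dumps(build_payload(text)).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Richer formatting (cards, sections, mentions) is also possible by posting an Adaptive Card payload to the same URL, but a plain `text` body is enough for simple alerting from an automation workflow.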
When you hear the term “chatbot,” your mind may first turn to things like robotic customer support on retail websites – a relatively mundane use case, and one that is probably hard to get excited about if you’re a security engineer. But the fact is that chatbots can do much more than provide customer support.