LLM Risks: Chaining Prompt Injection with Excessive Agency
Alongside the explosion in popularity of large language models (LLMs) across many industries, the level of trust granted to these models has also grown. LLMs were once perceived as simple, friendly chatbots that could answer basic questions or pull useful resources from the web based on user input. Many have now been granted the ability to perform actions on a user's behalf, anywhere from sending an email to deploying code. This capability is referred to as agency.
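To make the idea concrete, here is a minimal sketch of what agency looks like in practice: the model's text output is parsed and mapped to a real action. Everything here (the `call_llm` placeholder, the `send_email` tool, the JSON tool-call format) is a hypothetical illustration, not any specific framework's API.

```python
import json

def send_email(to: str, subject: str, body: str) -> str:
    # Stand-in for a real side effect the agent is trusted to perform.
    print(f"Sending email to {to!r}: {subject}")
    return "sent"

# The set of actions the model is allowed to trigger.
TOOLS = {"send_email": send_email}

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; we assume the model replies with
    # a JSON tool invocation when it decides an action is needed.
    return json.dumps({
        "tool": "send_email",
        "args": {
            "to": "user@example.com",
            "subject": "Weekly report",
            "body": "Here is the summary you asked for.",
        },
    })

def run_agent(user_input: str) -> str:
    reply = call_llm(user_input)
    request = json.loads(reply)
    tool = TOOLS[request["tool"]]   # the model chooses the action...
    return tool(**request["args"])  # ...and the action actually executes

if __name__ == "__main__":
    run_agent("Email me the weekly report.")
```

The key shift is in `run_agent`: the model's output is no longer just displayed to a user but dispatched to code with real-world effects, which is exactly what makes excessive agency a security concern.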