AI and Compliance: Preventing Personally Identifiable Information Leakage

In this video, A10 Networks' security leaders, Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar, delve into the growing security risks associated with the adoption of conversational AI bots and Large Language Models (LLMs), particularly in sensitive fields such as healthcare.

Diptanshu Purwar highlights several key concerns:

  • Rise of Conversational Bots: Conversational bots are seeing rapid adoption, particularly in the medical field. They let healthcare experts pose questions through a chat interface while the bot carries out operations in the background.
  • Information Gathering in Healthcare: For instance, a medical expert seeking to gather patient information might use a chatbot, such as Copilot 365 or another AI tool.
  • Background Execution and Data Retrieval: When a request is sent to the AI as a natural language prompt, it is executed in the background, and the AI attempts to retrieve data from various sources.
  • Preventing Personally Identifiable Information (PII) Leakage and Ensuring Compliance: When the AI retrieves information from these data sources, it is crucial that no Personally Identifiable Information (PII) or other sensitive personal data is leaked. Organizations must also enforce compliance around who is authorized to request access to sensitive data.
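The last point above can be sketched in code. The snippet below is a minimal, illustrative Python example (not a description of any A10 Networks product): a hypothetical role check gates who may request patient data, and a simple regex scrubber redacts PII before retrieved text reaches an LLM. The role names and patterns are assumptions for illustration; a production system would rely on a vetted IAM/RBAC service and a dedicated PII-detection library rather than hand-rolled regexes.

```python
import re

# Hypothetical set of roles permitted to query patient data
# (a real deployment would consult an IAM/RBAC service).
AUTHORIZED_ROLES = {"physician", "nurse"}

# Illustrative patterns for two common PII types.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognized PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def fetch_for_prompt(record: str, requester_role: str) -> str:
    """Gate retrieved data on the requester's role, then scrub PII
    before the text is forwarded to the LLM."""
    if requester_role not in AUTHORIZED_ROLES:
        raise PermissionError(
            f"role '{requester_role}' may not access patient data")
    return redact_pii(record)

print(fetch_for_prompt(
    "Contact: j.doe@example.com, SSN 123-45-6789", "physician"))
```

An unauthorized role (say, `"intern"`) raises `PermissionError` before any data leaves the retrieval layer, while authorized requests still receive only the redacted text.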

Learn more about securing AI and LLMs: https://bit.ly/4kOHmYd
Watch the full video: https://www.youtube.com/watch