Protecting Personally Identifiable Information in AI: Best Practices for Enterprises
Personally Identifiable Information (PII) includes sensitive data such as Social Security and passport numbers, as well as biometric data like faces and fingerprints. When training or fine-tuning Large Language Models (LLMs), there's a risk of accidentally including PII, which can lead to serious real-world consequences such as identity theft, privacy violations, and financial loss.
In this video, we discuss best practices enterprises can adopt to guard against these risks. Filtering both model inputs and outputs is crucial for preventing PII exposure. Regular audits of LLM outputs verify adherence to privacy standards, while careful curation of training data keeps sensitive information out of the model in the first place.
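To make the input/output filtering idea concrete, here is a minimal sketch in Python. The pattern set, placeholder format, and the `guarded_completion`/`call_model` hooks are illustrative assumptions for this demo, not a specific product API; production pipelines typically layer NER-based detectors (for example, open source tools such as Microsoft Presidio) on top of simple patterns like these.

```python
import re

# Illustrative regex patterns for a few common PII types. Real systems
# need broader coverage and ML-based detection; these are assumptions
# kept simple for the sketch.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII match with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def guarded_completion(prompt: str, call_model) -> str:
    """Filter PII on the way into and out of the model (hypothetical hook)."""
    safe_prompt = redact_pii(prompt)        # input filtering
    raw_response = call_model(safe_prompt)  # any LLM client call goes here
    return redact_pii(raw_response)         # output filtering

if __name__ == "__main__":
    sample = "Reach Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
    print(redact_pii(sample))
```

Filtering the output as well as the input matters because a model can reproduce PII it memorized from training data even when the prompt itself is clean.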
By proactively implementing these strategies, organizations can protect individual privacy, maintain public trust, and responsibly leverage the power of generative AI.
🎓 Learn more about these emerging AI technologies and their enterprise applications: http://cs.co/6054cpkde
Check out Empowering Citizen Developers: A GenAI Security Blueprint to explore the dual challenge of empowering citizen developers while safeguarding against critical security risks: https://venturebeat.com/empowering-citizen-developers-a-genai-security-blueprint/
Outshift is Cisco’s incubation engine, innovating what's next and new for Cisco products and sharing our expertise on emerging technologies. Discover the latest on cloud native applications, cloud application security, generative AI, quantum networking and security, future-forward tech research, our latest open source projects and more.
Keep up with the speed of innovation:
→ Learn more: http://cs.co/6051uzKYc
→ Read our blog: http://cs.co/6052uzKYY
Connect with us on social media:
→ LinkedIn: http://cs.co/6053uzKYl
→ Twitter / X: http://cs.co/6054uzKYm
→ Subscribe to our YouTube channel: @OutshiftbyCisco