

The EU AI Act Explained: Implications for Your Business

The European Union’s Artificial Intelligence Act, agreed at the end of 2023, is a landmark law for the digital age: the world’s first comprehensive legislation governing the ethical development and safe use of AI technologies. The “EU AI Act,” as it is known, aims to provide a balanced framework as businesses automate manual tasks and deploy AI algorithms to drive efficiency and innovation.

Protecto SecRAG - Launch Secure AI Assistants/Chatbots in Minutes

Introducing Protecto's SecRAG, a turnkey solution for secure AI. SecRAG stands for Secure Retrieval-Augmented Generation. There is no need to build complex RAG pipelines or access controls from scratch: Protecto provides a simple interface and APIs to connect data sources, assign roles, and authorize data access. Within minutes, your secure AI assistant is ready. When users query a Protecto-powered AI assistant, Protecto applies the appropriate access controls to retrieve the right data and generates responses that do not expose sensitive information the user is not authorized to see.
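The core idea of access-controlled retrieval can be sketched in a few lines. This is a hypothetical illustration, not Protecto's actual API: the `Document`, `SecureRetriever`, and role names are all invented for the example, and a real system would use embedding search rather than keyword overlap.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_roles: set = field(default_factory=set)  # roles permitted to see this document

class SecureRetriever:
    """Toy retriever that enforces role-based access control before relevance ranking."""

    def __init__(self, docs):
        self.docs = docs

    def retrieve(self, query, user_roles):
        # Filter by access control first, so unauthorized documents can never
        # leak into the generation step; then apply simple keyword matching.
        permitted = [d for d in self.docs if d.allowed_roles & user_roles]
        terms = set(query.lower().split())
        return [d for d in permitted if terms & set(d.text.lower().split())]

docs = [
    Document("Q3 revenue figures are confidential", {"finance"}),
    Document("Office revenue dashboard guide for all staff", {"finance", "staff"}),
]

retriever = SecureRetriever(docs)
# A user with only the "staff" role never sees the finance-only document.
hits = [d.text for d in retriever.retrieve("revenue", {"staff"})]
```

Filtering before retrieval (rather than redacting afterwards) is what keeps unauthorized content out of the model's context entirely.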

All You Need to Know About Retrieval-Augmented Generation (RAG) - Why Your Organization Needs It

Imagine accessing a vast repository of knowledge, extracting the information most relevant to your specific needs, and then using it to generate intelligent, factual responses - that's the power of Retrieval-Augmented Generation (RAG). This technology is taking the world of Artificial Intelligence (AI) by storm, and for good reason. Let's delve into what RAG is, why it matters, and how it can transform your organization.
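The retrieve-then-generate loop described above can be sketched minimally. This is an assumed toy implementation: production RAG systems use vector embeddings for retrieval and an actual LLM for generation, both replaced here with simple stand-ins.

```python
def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query (a stand-in for
    embedding similarity) and return the top k."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query, context):
    """Stand-in for an LLM call: the answer is grounded in retrieved text,
    which is what keeps RAG responses factual."""
    return f"Based on: {' '.join(context)} | Question: {query}"

corpus = [
    "RAG combines retrieval with text generation.",
    "The EU AI Act regulates artificial intelligence.",
]

question = "What does RAG combine?"
answer = generate(question, retrieve(question, corpus))
```

The key design point survives even in the toy version: generation only sees documents the retriever selected, so the response is anchored to the knowledge base instead of the model's memorized training data.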

Copilot amplifies insecure codebases by replicating vulnerabilities in your projects

Did you know that GitHub Copilot may suggest insecure code if your existing codebase contains security issues? Conversely, if your codebase is already highly secure, Copilot is less likely to generate code with security issues. AI coding assistants can suggest insecure code because they have a limited understanding of your specific codebase: they imitate learned patterns or reuse available context without exercising judgment.

Conversations with Charlotte AI: Vulnerabilities on Internet-Facing Hosts

With Charlotte AI, the information that security analysts need to stop breaches is just a question away. Watch how analysts are turning hours of work into minutes and seconds - getting the context they need to identify vulnerabilities on internet-facing hosts.

Exploring LLM Hallucinations - Insights from the Cisco Research LLM Factuality/Hallucination Summit

LLMs have many impressive business applications, but a significant challenge remains: how can we detect and mitigate LLM hallucinations? Cisco Research hosted a virtual summit to explore current research in the LLM factuality and hallucination space. The session includes presentations from university professors collaborating with the Cisco Research team, including William Wang (UCSB), Kai Shu (IIT), Danqi Chen (Princeton), and Huan Sun (Ohio State).