
What to do if the 16 billion password data leak impacted you

Around 16 billion login credentials have been leaked online, potentially affecting services like Apple, Google, Facebook, and more. That figure, identified by security researchers at Cybernews, makes this one of the most significant credential leaks ever discovered. Learn how to check whether you're impacted and discover practical steps to secure your accounts with tools like 1Password.
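One widely used way to check exposure is Have I Been Pwned's Pwned Passwords range API, which uses k-anonymity: only the first five characters of your password's SHA-1 hash ever leave your machine. A minimal sketch in Python, following HIBP's published endpoint and response format (the example password is illustrative):

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hash into the 5-char prefix that is
    sent to the API and the 35-char suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def fetch_range(prefix: str) -> str:
    """Fetch the 'SUFFIX:COUNT' lines for every leaked hash sharing
    this 5-character prefix (only the prefix is transmitted)."""
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def breach_count(suffix: str, range_response: str) -> int:
    """Return how many times the password appears in known breaches
    by matching the hash suffix locally against the response."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A count above zero means the password has appeared in at least one breach and should be rotated, ideally to a unique password generated by a manager such as 1Password.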

Shadow AI leak exposes data from 571 Canva Creators

The personal data of 571 Canva Creators was exposed by an unsecured Chroma database. The database, used by Russian AI startup My Jedai, contained 341 document collections, one of which included survey responses with emails, countries of residence, and detailed feedback on the Canva Creators program. This isn't your typical breach; it's the result of unsecured AI infrastructure.
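Chroma is a vector database that in many default deployments listens on port 8000 with no authentication. A minimal sketch of how a researcher could confirm that kind of exposure, assuming Chroma's v1 REST route for listing collections (the exact path varies across Chroma versions, and the hostname here is hypothetical):

```python
import json

def collections_url(host: str, port: int = 8000) -> str:
    # Chroma's v1 REST route for listing collections; the path is an
    # assumption and may differ across Chroma versions.
    return f"http://{host}:{port}/api/v1/collections"

def exposed_collections(body: str) -> list[str]:
    """Return collection names from a Chroma list-collections response.
    A non-empty result from an unauthenticated request means anyone who
    can reach the server can also read the documents inside."""
    return [c["name"] for c in json.loads(body)]
```

If an unauthenticated GET to that URL returns collection names, every document, embedding, and survey response stored in those collections is readable too, which is the failure mode behind this leak.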

Data Leakage and Other Risks of Insecure LlamaIndex Apps

Like Ollama and llama.cpp, LlamaIndex provides an application layer for connecting your data to LLMs and interacting with that data through a chat interface. While LlamaIndex is an open-source project like other LLM application frameworks, it is also a company, with a recent Series A, a commercial offering, and a more polished aesthetic than its strictly DIY counterparts.

Developer Leaks API Key for Private Tesla, SpaceX LLMs

In AI, as with so many advancing technologies, security often lags behind innovation. The xAI incident, in which a sensitive API key remained exposed for nearly two months, is a stark reminder of this disconnect. Such oversights not only jeopardize proprietary technologies but also highlight systemic vulnerabilities in API key management. As more organizations integrate AI into their operations, ensuring robust API security has never been more critical.
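One practical mitigation is scanning code for key-shaped strings before it ever reaches a public repository. A minimal pre-commit-style sketch (the key formats below, including the `xai-` prefix, are assumptions based on commonly published patterns, not official specifications):

```python
import re

# Rough shapes of common API keys; prefixes and lengths are
# assumptions, not vendor-documented formats.
KEY_PATTERNS = {
    "xai": re.compile(r"\bxai-[A-Za-z0-9]{20,}\b"),
    "openai": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Scan text (e.g. a diff about to be committed) for strings that
    look like live API keys; returns (provider, match) pairs."""
    hits = []
    for provider, pattern in KEY_PATTERNS.items():
        hits.extend((provider, m) for m in pattern.findall(text))
    return hits
```

Running a check like this in a pre-commit hook or CI pipeline turns a two-month exposure into a failed commit, which is exactly the kind of guardrail the xAI incident argues for.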

The Shadow AI Data Leak Problem No One's Talking About

Is your team's favorite new productivity tool also your biggest data leak waiting to happen? Generative AI (GenAI) assistants like ChatGPT, Microsoft Copilot, and Google Gemini have quickly moved from novelty to necessity in many workplaces. These tools use machine learning and advanced algorithms to help employees draft content, analyze data, and even write code faster than ever before.
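One common control against this kind of shadow-AI leakage is to redact obviously sensitive values before a prompt leaves the organization. A deliberately simple sketch (real DLP tooling uses far richer detectors; these two patterns are purely illustrative):

```python
import re

# Rough patterns for data that should never leave the tenant; a real
# deployment would cover API keys, IDs, names, and much more.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(prompt: str) -> str:
    """Strip likely-sensitive values from a prompt before it is sent
    to an external GenAI assistant."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

A redaction layer like this sits between employees and the assistant, so drafts and analyses still get written but customer emails and card numbers never reach a third-party model.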

Analyzing llama.cpp Servers for Prompt Leaks

The proliferation of AI has rapidly introduced many new software technologies, each with its own potential misconfigurations that can compromise information security. Hence the mission of UpGuard Research: to discover the vectors particular to each new technology and measure its cyber risk. This investigation looks at llama.cpp, an open-source framework for running large language models (LLMs).
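One way an exposed llama.cpp server can leak prompts is through its state endpoints: many builds serve a `/slots` endpoint that reports per-slot processing state, which can include the most recent prompt. A sketch of parsing such a response (endpoint availability and the exact schema vary by llama.cpp version and flags, so the field layout here is an assumption):

```python
import json

def leaked_prompts(slots_body: str) -> list[str]:
    """Extract prompt text from a llama.cpp /slots-style response.
    On a server exposed to the internet without authentication, any
    prompt reported here is readable by anyone who finds the host."""
    prompts = []
    for slot in json.loads(slots_body):
        prompt = slot.get("prompt")  # field name is an assumption
        if prompt:
            prompts.append(prompt)
    return prompts
```

If a function like this returns anything at all from an unauthenticated request, the server is disclosing what its users typed, which is the core risk this investigation measures.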

Data Leaks and AI Agents: Why Your APIs Could Be Exposing Sensitive Information

Most organizations are using AI in some way today, whether they know it or not. Some are merely beginning to experiment with it, using tools like chatbots. Others, however, have integrated agentic AI directly into their business processes and APIs. While both kinds of organizations are undoubtedly realizing remarkable productivity and efficiency gains, they may not know they are putting themselves at significant security risk.
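One way to contain that risk is to deny by default: every tool call an agent attempts is checked against an explicit allowlist of scopes before the underlying API is invoked, so a misbehaving or prompt-injected agent cannot reach data it was never granted. A minimal sketch (tool and resource names are illustrative):

```python
# Scopes granted to this agent: (tool name, resource prefix) pairs.
# Anything not listed is refused — deny by default.
ALLOWED = {
    ("read_ticket", "support/"),
    ("send_email", "drafts/"),
}

def authorize(tool: str, resource: str) -> bool:
    """Return True only when the agent's tool call matches a
    granted scope; everything else is rejected."""
    return any(
        tool == allowed_tool and resource.startswith(prefix)
        for allowed_tool, prefix in ALLOWED
    )
```

Enforcing this check server-side, in the API gateway rather than in the agent's own code, keeps a compromised agent from simply skipping it.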