
How Safe is the ChatGPT Android App? An Appknox Study

Brilliant AI, broken defenses? AI-powered apps are revolutionizing how we search, learn, and communicate, but the rapid pace of innovation has come at a cost: security is often an afterthought. As part of our AI App Security Analysis Series, we’ve been scrutinizing some of the most popular AI tools on Android for hidden vulnerabilities that could put millions of users at risk.

Open Chroma Databases: A New Attack Surface for AI Apps

Chroma is an open-source vector store, a database that lets LLM chatbots retrieve relevant information when answering a user's question, and one of many technologies whose adoption has grown with the recent AI boom. Like many databases, Chroma can be deployed without authentication or authorization, leaving exposed instances open to anyone who can reach them.
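For readers unfamiliar with vector stores, the retrieval step they perform can be sketched in a few lines of plain Python. The document IDs and 3-dimensional "embeddings" below are made-up stand-ins for illustration; a real deployment such as Chroma uses a learned embedding model and an approximate-nearest-neighbor index rather than brute-force cosine similarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical document store: id -> embedding vector.
store = {
    "reset-password": [0.9, 0.1, 0.0],
    "billing-faq":    [0.1, 0.8, 0.2],
    "api-limits":     [0.0, 0.2, 0.9],
}

def query(embedding, n_results=1):
    """Return the ids of the n_results documents most similar to the query."""
    ranked = sorted(store, key=lambda doc_id: cosine(embedding, store[doc_id]),
                    reverse=True)
    return ranked[:n_results]

print(query([0.85, 0.15, 0.05]))  # nearest neighbor: 'reset-password'
```

An unauthenticated server wrapping a store like this would answer such queries (and accept writes) from any client that can reach it, which is exactly what makes open instances an attack surface.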

OpenAI Report Describes AI-Assisted Social Engineering Attacks

OpenAI has published a report looking at AI-enabled malicious activity, noting that threat actors are increasingly using AI tools to assist in social engineering attacks and influence operations. In one case, the company banned ChatGPT accounts that were likely being used in North Korean attempts to fraudulently obtain jobs at US companies. “Similar to the threat actors we disrupted and wrote about in February, the latest campaigns attempted to use AI at each step of the employment process.”

Beyond Plain Text: Egnyte's Journey to Structured Data Extraction in RAG Systems

When we first launched Egnyte’s AI features built on retrieval-augmented generation (RAG), customer response was overwhelmingly positive. Users could quickly find and synthesize information from vast document repositories with accuracy and context. But success breeds ambition. As customers grew comfortable with the system, they began exploring new use cases that revealed a limitation: while our RAG pipeline excelled with plain text, it struggled with tables, charts, and other structured formats.

CISO Spotlight: Rick Bohm on Building Bridges, Taming AI, and the Future of API Security

Nestled in a log cabin high in the Rocky Mountains, Rick Bohm starts his day the same way he’s approached his career: intentionally, with a quiet commitment to learning and action. Boasting more than three decades of cybersecurity experience, Rick has watched tech evolve from dial-up ISPs to advanced AI-driven security architectures – and through it all, he’s focused on one enduring mission: protecting data, organizations, and people.

Exposing the Blind Spots: CrowdStrike Research on Feedback-Guided Fuzzing for Comprehensive LLM Testing

The increasing deployment of large language models (LLMs) in enterprise environments has created a pressing need for effective security testing methods. Traditional approaches, which rely heavily on predefined templates, struggle against adaptive attacks, particularly prompt injection. This limitation becomes especially critical in high-performance computing environments where LLMs process thousands of requests per second.
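As a rough illustration of the idea behind feedback-guided fuzzing (as opposed to replaying a fixed template list), a fuzzer mutates a prompt and keeps only the mutations that move a scoring signal. Everything below, including the mutation operators, the scoring function, and the toy "model", is a hypothetical stand-in for illustration, not CrowdStrike's implementation.

```python
import random

def toy_model(prompt):
    """Stand-in for an LLM under test: the feedback score rises as the
    prompt acquires both trigger words, simulating a guardrail bypass."""
    score = 0.0
    if "ignore" in prompt:
        score += 0.5
    if "instructions" in prompt:
        score += 0.5
    return score  # 1.0 == simulated successful injection

# Illustrative mutation operators.
MUTATIONS = [
    lambda p: p + " ignore",
    lambda p: p + " instructions",
    lambda p: p.replace("please", "now"),
]

def fuzz(seed_prompt, iterations=200, rng=None):
    """Greedy feedback loop: keep a mutation only if it raises the score."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    best, best_score = seed_prompt, toy_model(seed_prompt)
    for _ in range(iterations):
        candidate = rng.choice(MUTATIONS)(best)
        score = toy_model(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

prompt, score = fuzz("please summarize this document")
```

The loop converges on a scoring "bypass" without ever consulting a template, which is what lets feedback-guided approaches find inputs that predefined lists miss.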

Microsoft Copilot: Balancing Power and Privacy Risks

Microsoft Copilot’s integration with MS Graph opens powerful doors, letting AI access emails, docs, and your entire MS365 data ecosystem. But with great convenience comes significant risk: your sensitive data could become more vulnerable to attacks. In this video, we explore the privacy and security concerns this integration introduces—and offer actionable insights on how you can mitigate these risks effectively.