
Five Uncomfortable Truths About LLMs in Production

Many tech professionals see integrating large language models (LLMs) as a simple process: just connect an API and let it run. At Wallarm, our experience has proved otherwise. Through rigorous testing and iteration, our engineering team uncovered several critical insights about deploying LLMs securely and effectively. This blog shares our journey of integrating cutting-edge AI into a security product.

CISO Spotlight: Rick Bohm on Building Bridges, Taming AI, and the Future of API Security

Nestled in a log cabin high in the Rocky Mountains, Rick Bohm starts his day the same way he’s approached his career: intentionally, with a quiet commitment to learning and action. Boasting more than three decades of cybersecurity experience, Rick has watched tech evolve from dial-up ISPs to advanced AI-driven security architectures – and through it all, he’s focused on one enduring mission: protecting data, organizations, and people.

Addressing API Security with NIST SP 800-228

According to the Wallarm Q1 2025 ThreatStats report, 70% of all application attacks target APIs. The industry can no longer treat API security as a side note; it's time to treat it as the main event. NIST seems to be on board with this view, having released the initial public draft of NIST SP 800-228, a set of recommendations for securing APIs.

CISO Spotlight: Mike Wilkes on Building Resilience in an Evolving Threat Landscape

Mike Wilkes has had a career many cybersecurity professionals could only dream of. An adjunct professor, former CISO of Marvel and MLS, member of the World Economic Forum, drummer, and board member at the National Jazz Museum in Harlem, his interests and achievements are as eclectic as they are impressive.

Attackers Abuse TikTok and Instagram APIs

It must be the season for API security incidents. Hot on the heels of a developer leaking an API key for private Tesla and SpaceX LLMs, researchers have now discovered a set of tools for validating account information via API abuse, leveraging undocumented TikTok and Instagram APIs. The tools, and assumed exploitation, involve malicious Python packages - checker-SaGaF, steinlurks, and sinnercore - uploaded to PyPI.

Mapping the Future of AI Security

AI security is one of the most pressing challenges facing the world today. Artificial intelligence is extraordinarily powerful and, especially with the advent of agentic AI, growing more so by the day. That same power is precisely why securing it matters so much. AI handles massive amounts of data and plays an increasingly important role in operations; should cybercriminals abuse it, the consequences could be dire.