CVE-2026-3854 is a command injection vulnerability in GitHub Enterprise Server's git push pipeline. User-supplied push option values were embedded in an internal service header without proper sanitization, and the header format used a delimiter that could also appear in user input. A crafted push option containing that delimiter let an attacker inject additional metadata fields, which downstream services then treated as trusted internal values.
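The mechanics can be sketched in a few lines. This is a minimal illustration of the delimiter-injection pattern described above, not GitHub's actual code: the function names, the `|` delimiter, and the field names are all hypothetical.

```python
# Hypothetical sketch of the CVE-2026-3854 injection pattern.
# All names (build_header, parse_header, DELIM, field names) are
# illustrative assumptions, not GitHub Enterprise Server internals.

DELIM = "|"  # internal header delimiter that can also appear in user input

def build_header(pusher: str, push_option: str) -> str:
    """Vulnerable: embeds a user-supplied push option unsanitized."""
    return DELIM.join([f"pusher={pusher}", f"opt={push_option}"])

def parse_header(header: str) -> dict:
    """Downstream service splits on the delimiter and trusts every field."""
    return dict(field.split("=", 1) for field in header.split(DELIM))

# A crafted push option smuggles an extra, attacker-controlled field:
fields = parse_header(build_header("alice", "ci=skip|role=admin"))
# fields now contains an injected "role" entry alongside the legitimate ones

def build_header_safe(pusher: str, push_option: str) -> str:
    """One obvious fix: reject the delimiter before embedding user input."""
    if DELIM in push_option:
        raise ValueError("delimiter not allowed in push options")
    return DELIM.join([f"pusher={pusher}", f"opt={push_option}"])
```

The core lesson generalizes beyond this CVE: any time user input crosses into a delimited internal format, the delimiter must be escaped or rejected at the boundary.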
As more organizations move past experimentation and start planning real AI agent deployments, the same set of concerns keeps surfacing in our conversations with security teams. Whether the worry is a shadow agent that shows up uninvited or a sanctioned agent going rogue, the questions tend to cluster around control. These are the right questions to be asking, and they share a common answer that's more concrete than most people expect: AI agents are only as dangerous as the privileges they can reach.
There’s never a good time to lose a production database, but losing one to your own AI coding agent on a Friday afternoon has to rank near the bottom of the list. That’s the backdrop to the PocketOS incident, and it’s the clearest case yet for why AI agent security and intent-based access control belong at the top of every cloud security roadmap this year.
Singapore’s financial sector faces its most demanding regulatory environment yet in 2026. AI-powered cyberattacks, cloud-native banking infrastructure, and decentralised finance have pushed the Monetary Authority of Singapore (MAS) to sharpen its supervisory focus — and its expectations of every regulated institution. If you are a CISO, CTO, Head of Compliance, or technology risk officer at a Singapore financial institution, this guide answers the question your regulators are already asking.
MSPs today face growing security demands alongside increasing operational complexity. Disconnected tools and manual processes create noise, slow response times, and limit scalability. The solution? Automation and integration. By connecting security platforms with professional services automation (PSA) and remote monitoring and management (RMM) tools, MSPs can streamline workflows, reduce alert fatigue, and improve service delivery, turning reactive processes into proactive, efficient operations.
Anthropic built a powerful AI model and then kept it on a short leash. The important part is not that a model found bugs; that capability has been coming for a while. What's worth acknowledging is that Anthropic looked at what Mythos could do and decided broad release was a bad idea. Attackers do not need a perfect autonomous system. They need leverage.
Securonix Threat Research analyzed a stealthy Python-based backdoor framework, dubbed Deep#Door, which uses an obfuscated batch loader to deploy a persistent surveillance and credential-stealing implant on Windows systems.
In February 2026, researchers uncovered something that should give every security leader pause. A malware operation called SmartLoader, previously known for targeting consumers who downloaded pirated software, had completely pivoted its infrastructure. SmartLoader's new target was developers, and its new entry point was a protocol most security teams had never heard of. The payload delivered to victims: every saved browser password, every cloud session token, every SSH key on the machine.
We've established the new forensic reality: a massive 72.9% inventory gap exists between the vendors you monitor and those invisible to your security tooling. We have seen the shortcomings of SSO and its inability to holistically monitor all the vendor applications your users engage with, along with a Shadow AI explosion that is compounding both issues. The era of procurement-only discovery is over. To secure the modern workforce, we must pivot from "buying-based" to usage-based discovery.