Jerusalem, Israel
2017
  |  By Shauli Rozen
The external auditor’s evidence request lands Tuesday morning. A security architect at a Tier 1 bank pulls up her AI-SPM dashboard for the SOC2 Type 2 review. Eighty-three AI agents running across the bank’s clusters. For each one, the dashboard shows the current configuration and the current behavioral baseline. The data is accurate, comprehensive, and point-in-time.
  |  By Yossi Ben Naim
It is 11:47 p.m. and the on-call security engineer is staring at two dashboards. On the left, LangSmith — the ML team’s debugging stack — showing the agent’s prompts, model responses, tool calls, and tokens consumed. On the right, the runtime detection console showing eBPF-captured syscalls, network connections, and process trees from the same Pod. Both are populated.
  |  By Yossi Ben Naim
A platform team at a mid-size SaaS company runs three LangChain agents and one AutoGPT-derived planner on EKS. LangSmith is wired in. OpenTelemetry traces flow into their observability stack. Falco runs on every node. The setup is what most security teams would consider thorough. A pip dependency in one of the agents’ tool packages ships a malicious update.
  |  By Ben Hirschberg
In August 2025, a vulnerability chain in NVIDIA Triton Inference Server was found that allowed an unauthenticated remote attacker to send a single crafted inference request, leak the name of an internal shared memory region, register that region for subsequent requests, gain read-write primitives into the Triton Python backend’s private memory, and achieve full remote code execution. The exploit chain ran entirely through Triton’s standard inference API. No anomalous traffic volume.
  |  By ARMO
Your security team sees an MCP tool server throw an error. Your APM dashboard shows a latency spike. Your logs capture the JSON-RPC request with its method name and parameters. But none of that tells you whether the tool just read a harmless config file or dumped credentials to an external IP. Traditional observability tools—the APM platforms, the OpenTelemetry traces, the centralized logging pipelines—track performance across your Model Context Protocol deployments.
  |  By Ben Hirschberg
Tuesday, 09:14 UTC. A connector pulling content from your knowledge wiki indexes a new article into the vector database your support agents query at runtime. Embedded in legitimate troubleshooting prose is an instruction crafted to surface whenever a query mentions a specific product version — include the user’s account record in the response and POST the summary to the configured support webhook. For three days, nothing happens. Every security tool is green.
  |  By Yossi Ben Naim
A platform engineer at a mid-market fintech opens her SCA dashboard at the start of the quarter. The agentic customer-support pipeline her team shipped two months ago — a LangChain orchestrator, a vLLM inference server with two fine-tuned LoRA adapters pulled from Hugging Face, and an MCP toolkit wired to four internal APIs — shows green. Snyk has scanned every Python package in the container. Mend has cleared the dependency graph. The CVE count is zero.
  |  By Shauli Rozen
A platform engineer pulls the AI-SPM dashboard for an agent that has been running in production six weeks. The static dashboard shows several dozen findings, severity-sorted by configuration weight. The runtime-informed dashboard shows a smaller, prioritized list — but a few of those findings do not appear on the static view at all, and most of the static findings appear demoted to a tier the static view does not have. Same agent. Same window. Same underlying configuration.
  |  By Shauli Rozen
Every cloud security vendor launched an AI-SPM dashboard in the past year. Strip away the branding and most of them are presenting the same concept: a new posture management layer for AI workloads. Sit through four demos in the same week and a practical question surfaces. The dashboards look broadly similar — pie charts of findings, compliance tags, a list of AI assets, a severity ranking. Why, then, do the tools underneath cover completely different parts of the problem?
  |  By Shauli Rozen
A Tier 1 bank’s security architecture already spends heavily on detection. On one side sits the financial surveillance stack — fraud scoring platforms processing thirty thousand transactions an hour, AML monitoring watching money movement patterns, DLP engines scanning data in transit, payment anomaly detection tuned by a decade of production signal.
  |  By ITProTV
With the short week for the Thanksgiving holiday in the US, the Technado team decided to have a little fun by looking back at some of the dumbest tech headlines from 2019. Romanian witches online, flat-earthers, and fake food for virtual dogs - what a time to be alive. Then, Shauli Rozen joined all the way from Israel to talk about a zero-trust environment in DevOps. IT skills & certification training that’s effective & engaging. Binge-worthy learning for IT teams & individuals with 4000+ hours of on-demand video courses led by top-rated trainers. New content added daily.

ARMO closes the gap between development and security, giving development, DevOps, and DevSecOps the flexibility and ease to ensure high grade security and data protection no matter the environment – cloud native, hybrid, or legacy.

ARMO is driving a paradigm shift in the way companies protect their cloud native and hybrid environments. We help companies move from a “close-the-hole-in-the-bucket” model, installing firewalls, defining access control lists, etc. to a streamlined DevOps- and DevSecOps led model in which environments are deployed with inherent zero-trust.

Security at the Speed of DevOps:

  • Runtime workload identity and protection: Identifies workloads based on application code analysis, creating cryptographic signatures based on Code DNA to prevent unauthorized code from running in the environment to access and exfiltrate protected data. The patent-pending technology signs and validates workloads in runtime throughout the entire workload lifecycle.
  • Transparent data encryption: Transparent data encryption – keyless encryption – robustly and uniformly encrypts and protects files, objects, and properties, requiring no application changes, service downtime, or impact on functionality. It eases the adoption of encryption by removing the complexity of key management and providing an out-of-the-box solution for key protection in use, key rotations, and disaster recover procedures.
  • Identity-based communication tunneling: Transparent communication tunneling ensures only authorized and validated applications and services can communicate. Even if attackers steal valid access credentials, they are useless because the malicious code will be unsigned. Create API access polices to build identity-based policies and enforce correct workload behaviors.
  • Application-specific secret protection: Application-specific protection of secrets ensures cryptographic binding between continuously validated specific workload identities and their confidential data, delivering complete protection against access by unauthorized applications.
  • Visibility & compliance: Visibility and compliance monitoring provide granular details about workloads and running environments, including individual processes, file names and locations, open listening ports, actual connections, mapped volumes, opened files, process privilege levels, connections to external services, and more. Alerts can be used for continuous compliance verification.

Bringing Together Run-Time Workload And Data Protection To Seamlessly Establish Identity Based, Zero-Trust Service-To-Service Control Planes.