Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

NSOCKS: Insights into a Million-Dollar Residential Proxy Service

When an adversary wants to target an organization, they want to make it look like they’re coming from a regional or local internet service provider. This makes their activity seem more legitimate and buys time until they get caught. Proxies, which adversaries can use to conceal the origin of malicious traffic, are essential to this process.

Secure Your AI: Protecting Agentic AI in an API-Driven World

As enterprises embrace agentic AI for transformative business opportunities, they face a critical challenge: ensuring these intelligent systems operate securely. Wallarm, the leader in API-first security, invites you to an exclusive webinar to explore how to safeguard AI agents, APIs, and sensitive data from emerging threats. Learn how to protect your AI ecosystem and ensure business continuity with actionable insights from Wallarm Security Lab. Discover why 90% of agentic AI deployments are vulnerable and how to defend them.

AI Security = API Security: 10x Surge in AI-Related CVEs #AIExploits #APIAttacks #SecureAI

AI-driven applications rely on APIs, making them a prime target for attackers. In 2024, AI-related CVEs increased 10x, with 98.6% of vulnerabilities linked to APIs. As AI agents interact with systems via APIs, security risks grow. Learn why securing AI means securing APIs.

AI Risk Management: Benefits, Challenges, and Best Practices

Managing the risks of AI development tools is crucial for organizations looking to responsibly and effectively leverage this technology’s potential. AI offers transformative capabilities, particularly in coding assistance, where tools can speed up development and reduce manual workloads. However, these benefits can come with risks, such as security vulnerabilities and compliance challenges, that cannot be overlooked.

Responding and remediating: Best practices for handling security alerts

As organizations continue to evolve their DevSecOps programs by adopting comprehensive testing and monitoring, the next step is to take action on the insights uncovered. This means remediating security issues as early as possible and responding to security alerts and incidents in a timely manner. However, many security and development teams find that triaging the findings of every tool and managing remediation efforts is time-consuming and costly.

Make PostgreSQL Access Easier and More Secure with Teleport

Managing PostgreSQL access is a pain for engineering teams. Setting up users, roles, and keeping track of permissions slows down engineers. Security risks may emerge in the form of shared admin accounts or missteps in user setup or authorization workflows. Check out this screenshot from a Reddit thread discussing this problem.

Red Teaming for Generative AI: A Practical Approach to AI Security

Generative AI is changing industries by making automation, creativity, and decision-making more powerful. But it also comes with security risks. AI models can be tricked into revealing information, generating harmful content, or spreading false data. To keep AI safe and trustworthy, experts use GenAI Red Teaming. This method is a structured way to test AI systems for weaknesses before they cause harm.