AI Bias Is More Dangerous Than You Think #shorts

AI bias is a real problem, and bias can enter AI systems in many ways. That's why governments and organizations are focusing on responsible AI policies to ensure AI benefits everyone equally, not just one group. Responsible AI means reducing discrimination and ensuring fairness across all communities. Watch the full podcast: link below.

How intelligent workflows help MSSPs deliver customer outcomes at scale

For managed security service providers (MSSPs), customer loyalty is the most critical indicator of business health. Unlike other metrics that you directly control, such as mean time to respond or mean time to detect, it can’t be gamed: customers will either stay with you or they’ll churn. This means that the top priority for any MSSP should be to deliver the specific customer outcomes they were hired to provide, like helping to stop threat actors before they cause damage.

Feroot Launches AI-Powered Digital Consent Audit to Prove CMP Enforcement

Organizations have invested heavily in consent management. Consent Management Platforms (CMPs) are standard infrastructure for privacy programs, and for good reason. Regulations like GDPR, CCPA/CPRA, LGPD, PDPA, and HIPAA require organizations to obtain, record, and honor user consent before collecting or processing personal data. CMPs provide the framework to do that. Most organizations have done the right thing; they just don't know if they've done the right thing right.

Introducing Programmable Flow Protection: custom DDoS mitigation logic for Magic Transit customers

We're proud to introduce Programmable Flow Protection: a system designed to let Magic Transit customers implement their own custom DDoS mitigation logic and deploy it across Cloudflare’s global network. This enables precise, stateful mitigation for custom and proprietary protocols built on UDP. It is engineered to provide the highest possible level of customization and flexibility to mitigate DDoS attacks of any scale.
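To make the idea of "stateful mitigation for custom UDP protocols" concrete, here is a minimal sketch of the kind of logic a customer might express: admit inbound UDP packets only for flows where a matching outbound request was already seen, and enforce a per-flow rate budget. All names are illustrative assumptions, not Cloudflare's actual Programmable Flow Protection API.

```python
import time

class FlowProtector:
    """Hypothetical stateful UDP filter: allow inbound packets only on
    flows we initiated, within a per-flow packets-per-second budget."""

    def __init__(self, max_pps=100, flow_ttl=30.0):
        self.max_pps = max_pps      # per-flow packet budget per one-second window
        self.flow_ttl = flow_ttl    # seconds before idle flow state expires
        self.flows = {}             # flow key -> [last_seen, count, window_start]

    def note_outbound(self, src, sport, dst, dport, now=None):
        """Record an outbound request so the reverse flow is permitted."""
        now = now if now is not None else time.monotonic()
        self.flows[(dst, dport, src, sport)] = [now, 0, now]

    def allow_inbound(self, src, sport, dst, dport, now=None):
        """Return True if this inbound packet matches known flow state
        and is within its rate budget; otherwise it should be dropped."""
        now = now if now is not None else time.monotonic()
        key = (src, sport, dst, dport)
        state = self.flows.get(key)
        if state is None:
            return False                      # no matching request: drop
        last_seen, count, window_start = state
        if now - last_seen > self.flow_ttl:
            del self.flows[key]
            return False                      # stale flow state: drop
        if now - window_start >= 1.0:
            count, window_start = 0, now      # start a new one-second window
        if count >= self.max_pps:
            return False                      # over per-flow budget: drop
        self.flows[key] = [now, count + 1, window_start]
        return True
```

In a real deployment this per-flow state would live at the network edge and the decision function would run per packet; the sketch only shows the request/response pairing and rate limiting that make the mitigation "stateful".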

AI Integration Security: Why the Biggest Risk Is Not the Model

When people talk about AI security risks, the conversation usually starts with the model. Can it be jailbroken? Can someone get around the guardrails? Can an attacker make it say or do something it should not? Those are fair questions, but they are not the most important ones. The bigger risk is not the model on its own: it’s everything the model is connected to.
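One way to act on "the risk is everything the model is connected to" is to treat every model-requested tool call as untrusted input and gate it through an allowlist with argument validation. The sketch below is a hypothetical illustration, not any particular framework's API; tool names and validators are invented for the example.

```python
# Hypothetical sketch: enforce an allowlist plus per-tool argument
# validation before executing anything the model asks for.
ALLOWED_TOOLS = {
    # tool name -> validator returning True only for acceptable arguments
    "search_docs": lambda args: isinstance(args.get("query"), str)
                                and len(args["query"]) < 500,
    "read_ticket": lambda args: isinstance(args.get("ticket_id"), int),
}

def dispatch_tool_call(name, args, registry):
    """Execute a model-requested tool only if it is allowlisted and its
    arguments pass validation; refuse everything else."""
    validator = ALLOWED_TOOLS.get(name)
    if validator is None:
        return {"error": f"tool {name!r} is not permitted"}
    if not validator(args):
        return {"error": f"arguments rejected for tool {name!r}"}
    return {"result": registry[name](**args)}
```

The point is architectural: even if a prompt injection fully subverts the model, the blast radius is bounded by what the dispatch layer is willing to execute.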

What the Cyber Resilience Act guidance means for connected products

The latest European Commission guidance on the Cyber Resilience Act sends a clear message to manufacturers of connected products: cybersecurity must be designed in from the start, maintained throughout the product lifecycle, and supported by demonstrable processes for risk management, vulnerability handling and ongoing support. For organizations building, deploying and managing connected devices, this is a significant shift. The CRA is not simply another compliance exercise.

My First RSA: Agents, Challenges, and Community

I am no stranger to conferences, and certainly no stranger to security conferences. Over the years, Black Hat and DEF CON have both become staples of my calendar. But this year brought a new one to the list: RSA, and it truly lived up to the hype. The show floor was full of bright lights, fancy booths, and yes, tattoos, if you knew where to find them.

Episode 29: When AI becomes a security problem ft. Tamaghna Basu

AI has quietly moved from experiments to real-world systems that now write, decide, and reason alongside us. But as these systems scale, so do the risks, from hallucinations and data leakage to prompt injection and model abuse. In this episode of Server Room, we sit down with Tamaghna Basu, Founder of DeTaSECURE, to explore what it really takes to build and secure AI systems in production, and why the future of AI will depend not just on intelligence, but on trust.