
Configuring Maximum Security Log Size

Setting the maximum log size for event logs is a crucial part of your security policy. Proper configuration helps you detect attacks and investigate their sources, while insufficient storage can lead to lost information and undetected breaches. This article covers everything you need to know about configuring the maximum security log size. Keep in mind that server hardening in general can be labor-intensive and costly, and often causes production issues.
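As an illustration, the maximum size of the Security log can be raised either with the classic Limit-EventLog cmdlet (Windows PowerShell) or with the built-in wevtutil utility. The 512 MB figure below is an example value only; choose a size that matches your own retention requirements.

```powershell
# Raise the Security event log cap to 512 MB (example value only).
# Run from an elevated (Administrator) session.
Limit-EventLog -LogName Security -MaximumSize 512MB

# Equivalent via wevtutil, which takes the size in bytes (512 MB = 536870912):
wevtutil sl Security /ms:536870912
```

Note that on domain-joined machines, Group Policy can override locally configured log sizes, so the effective limit should be verified after policy refresh.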

How to Automate IIS Hardening Script with PowerShell

IIS hardening can be a time-consuming and challenging process. PowerShell can help you achieve hardened IIS security settings to some extent, but it still requires hours of testing to ensure that nothing is broken. CSS by CalCom can automate the IIS hardening process with its unique ability to “Learn” your network, eliminating the need for lab testing while ensuring zero outages to your production environment.
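For readers who want a feel for what scripted hardening looks like before reaching for automated tooling, a few common IIS settings can be applied with the WebAdministration module. The commands below are a minimal, illustrative sketch (not CalCom's method), and each change should still be validated against your applications.

```powershell
# Minimal, illustrative IIS hardening sketch -- test before applying in production.
Import-Module WebAdministration

# Disable directory browsing server-wide.
Set-WebConfigurationProperty -PSPath 'IIS:\' `
    -Filter 'system.webServer/directoryBrowse' -Name 'enabled' -Value $false

# Stop advertising the ASP.NET version in response headers.
Set-WebConfigurationProperty -PSPath 'IIS:\' `
    -Filter 'system.web/httpRuntime' -Name 'enableVersionHeader' -Value $false

# Send detailed error pages only to local requests.
Set-WebConfigurationProperty -PSPath 'IIS:\' `
    -Filter 'system.webServer/httpErrors' -Name 'errorMode' -Value 'DetailedLocalOnly'
```

Changes like these are exactly where untested hardening breaks applications, which is the trial-and-error that learning-based tooling aims to eliminate.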

Teleport 16

It’s that time again: a brand-new major release. Our team ships a major version of Teleport every four months, and here we introduce Teleport 16. This post goes into detail about Teleport 16’s breaking changes, bug fixes, and improvements. In Teleport 16, we focused on new features and enhancements that enable our customers to implement mitigations against an IdP compromise.

Leveraging Golden Signals for Enhanced Kubernetes Security

Kubernetes is a powerful and widely adopted open-source platform, but its complexity is not to be underestimated. Managing a Kubernetes environment requires a deep understanding of how its various components interact, especially when it comes to observability and security. This blog post will delve into the golden signals in Kubernetes (latency, traffic, errors, and saturation), their connection to security issues, and how they can be leveraged to safeguard a Kubernetes environment against common attack chains.

Hallucinated Packages, Malicious AI Models, and Insecure AI-Generated Code

AI promises many advantages when it comes to application development, but it is giving threat actors plenty of advantages, too. It is always important to remember that AI models can produce a lot of garbage that is really convincing, and so can attackers. “Dark” AI models can be used to purposely write malicious code, but in this blog we’ll discuss three other distinct ways that using AI models can lead to attacks: hallucinated packages, malicious AI models, and insecure AI-generated code.