If your organization is having trouble creating policies, I hope this blog post will help you set a clear path. We’ll discuss setting your organization up for success by ensuring that you do not treat your policies as a “do once and forget” project. Many organizations I have worked with have done exactly that, only to realize later that a sound policy lifecycle is both a requirement and a pillar of good governance.
You don’t need me to tell you what a ransomware attack could do to your business. We’ve all read the stories. Even the largest multinationals have been crippled by malware encrypting or stealing sensitive data. The result is a lose-lose choice for IT managers: pay the criminal gang an exorbitant ransom or face costly downtime, reputational damage, and regulatory scrutiny. Thankfully, your fate is in your hands. Ransomware attacks aren’t random.
When we develop software, it’s common for a program to require some system configuration before it will run. We document how to set up your own local environment in a .env.example file or a README.md file.
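As a simple illustration of that pattern, here is a minimal sketch of the kind of loader such instructions usually describe. The variable names (API_BASE_URL, API_TOKEN) and the use of the python-dotenv package are assumptions for this example, not something taken from the original post.

```python
# Hypothetical .env.example, committed to the repo as documentation:
#   API_BASE_URL=https://api.example.com
#   API_TOKEN=replace-with-your-token
#
# A minimal loader that reads those variables from a local .env file.
# Requires the python-dotenv package (pip install python-dotenv).
import os

from dotenv import load_dotenv

# Read key=value pairs from .env into the process environment.
load_dotenv()

API_BASE_URL = os.environ["API_BASE_URL"]  # raises KeyError if unset
API_TOKEN = os.getenv("API_TOKEN")         # returns None if unset

if API_TOKEN is None:
    raise RuntimeError(
        "API_TOKEN is not set; copy .env.example to .env and fill in your values."
    )

print(f"Configured to talk to {API_BASE_URL}")
```

A new contributor would copy .env.example to .env, fill in real values, and run the program; the example file itself stays free of secrets.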
In previous blog posts we’ve discussed the value of a data-driven approach to security operations. In this post, we’d like to take a closer look at what that approach means for the automation of Security Operations Center (SOC) workflows and how it has influenced the product and design decisions behind ThreatQ and ThreatQ TDR Orchestrator.
Recent research from CyberArk Labs presents a new technique for extracting sensitive data from the Chromium browser’s memory. An attacker needs existing access to the targeted system before the technique can be used, but from that foothold it could enable identity-based attacks, including authentication bypass using OAuth cookies that have already passed an MFA challenge.
People commonly think that any “internet connection” is exactly the same, or they may be vaguely aware that some connections are faster than others. In reality, there are significant differences between connections. While these differences may not matter to someone who just wants to browse websites and read email, they can be significant, even showstoppers, for more advanced users. This is especially true for anyone looking to do security testing or vulnerability scanning.
This blog is the final part of a series about secure access to Amazon RDS. In Part 1, we covered how to use OSS Teleport as an identity-aware access proxy to access Amazon RDS instances running in private subnets. Part 2 explained implementing single sign-on (SSO) for Amazon RDS access using Okta and Teleport. Part 3 showed how to configure Teleport access requests to enable just-in-time access requests for Amazon RDS access.
From small school districts and not-for-profit organizations with limited cyber defense budgets to major Fortune 500 companies with sophisticated cyber defense teams, understanding what to do in the first 48 hours following a significant cyber event is essential in protecting your organization and limiting the potential damage.
The cloud has enabled organizations to build and deploy applications faster than ever, but security has become more complex. The shift to cloud has created a world where everything is code — not just the applications, but also the infrastructure they run on. So, any security issue within an application or cloud environment can put an entire system at risk. And keeping that cloud native application stack secure is increasingly the responsibility of development teams.