
The ultimate guide to creating a secure Python package

Creating a Python package involves several steps: choosing a suitable directory structure, creating the package files, and configuring the package metadata before deploying it. A few further steps include adding a subdirectory for tests and writing clear documentation. Once the package is ready, you can upload it to a distribution archive, and your Python package will be ready for others to install and use.
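The steps above can be sketched programmatically. This is a minimal, hypothetical scaffold — the package name "mypkg" and file contents are illustrative, not taken from the guide — showing the typical src layout with a tests subdirectory, documentation, and metadata:

```python
# Hypothetical sketch: scaffolding a minimal Python package layout.
# "mypkg" and all file contents are illustrative placeholders.
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp()) / "mypkg"

# Typical src layout: package code, a tests subdirectory, docs, and metadata.
(root / "src" / "mypkg").mkdir(parents=True)
(root / "tests").mkdir()
(root / "src" / "mypkg" / "__init__.py").write_text('__version__ = "0.1.0"\n')
(root / "README.md").write_text("# mypkg\n")  # clear documentation
(root / "pyproject.toml").write_text(         # package metadata
    '[project]\nname = "mypkg"\nversion = "0.1.0"\n'
)

print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*")))
```

From a layout like this, a standard build tool can produce the distribution archives (sdist and wheel) for upload.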

Integrating Snyk Code SAST results in your ServiceNow workflows

Application security teams often lack the crucial information and visibility needed to find, prioritize, and remediate risks in their most business-critical applications. To solve this application security challenge, ServiceNow and Snyk have partnered to provide a singular view of the risk within these applications — exposing the severity and criticality of vulnerabilities while providing actionable workflows to boost your overall security posture.

More accurate than GPT-4: How Snyk's CodeReduce improved the performance of other LLMs

Snyk has been a pioneer in AI-powered cybersecurity since the launch of Snyk Code in 2021, when the DeepCode AI engine first brought unmatched accuracy and speed to identifying security issues in the SAST space. Over the last three years, AI and LLMs have risen to prominence, and Snyk has stayed at the forefront by introducing new AI-based capabilities such as DeepCode AI Fix, our vulnerability autofixing feature, and our third-party dependency reachability feature.

How Mulesoft fosters a developer-first, shift-left culture with Snyk

While shifting security left has been a hot topic for around a decade, many organizations still face issues trying to make it a reality. There are many misconceptions about what shift left means and what it looks like for development teams to take ownership of security without derailing their existing workflows.

Snyk CLI: Introducing Semantic Versioning and release channels

We are pleased to introduce Semantic Versioning and release channels to Snyk CLI from v1.1291.0 onwards. In this blog post, we will share why we are introducing these changes, what problems they solve for our customers, and how our customers can opt in according to their needs.
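To illustrate what Semantic Versioning means in practice, here is a minimal sketch (the version strings are examples, not actual Snyk CLI releases): a MAJOR.MINOR.PATCH version parses into an integer tuple, and tuples compare element-wise, so a MINOR bump always outranks any PATCH bump.

```python
# Illustrative sketch of Semantic Versioning (MAJOR.MINOR.PATCH) ordering.
# Version strings below are examples only, not actual Snyk CLI releases.
def parse_semver(version: str) -> tuple[int, int, int]:
    """Split 'MAJOR.MINOR.PATCH' into a comparable integer tuple."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

# Tuples compare element-wise, so a MINOR bump outranks any PATCH bump.
assert parse_semver("1.1291.0") < parse_semver("1.1292.0")
assert parse_semver("1.1291.2") > parse_semver("1.1291.0")
```

Comparing the raw strings instead would give the wrong order (e.g. "1.9.0" sorts after "1.10.0" lexicographically), which is why tooling parses versions into components before comparing them.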

360 degrees of application security with Snyk

Application development is a multistage process. The app moves through various stages, each with its own area of focus. Application security, a.k.a. AppSec, however, is constant throughout all the stages. For example, when a developer writes code, that code is expected to be secure. Similarly, the artifacts consumed or produced at each stage are all required to be secure.

Snyk Code's autofixing feature, DeepCode AI Fix, just got better

DeepCode AI Fix is an AI-powered feature that provides one-click, security-checked fixes within Snyk Code, a developer-focused, real-time SAST tool. Amongst the first semi-automated, in-IDE security fix features on the market, DeepCode AI Fix’s public beta was announced as part of Snyk Launch in April 2023. It delivered fixes to security issues detected by Snyk Code in real-time, in-line, and within the IDE.

Responsibilities of a modern CISO

The role of a Chief Information Security Officer (CISO) is critical in an interconnected business environment. The role is multifaceted, extending beyond traditional IT security to encompass a range of responsibilities for protecting an organization's information assets. A modern CISO ensures that their organization is well-prepared to handle the myriad of cybersecurity challenges it faces.

An investigation into code injection vulnerabilities caused by generative AI

Generative AI is an exciting technology that is now easily available through cloud APIs provided by companies such as Google and OpenAI. While it’s a powerful tool, the use of generative AI within code opens up additional security considerations that developers must take into account to ensure that their applications remain secure. In this article, we look at the potential security implications of large language models (LLMs), a text-producing form of generative AI.
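One such security consideration can be sketched in a few lines. This hypothetical example (the `llm_response` string stands in for text returned by a model API; no real API is called) shows why executing LLM output directly is a code injection risk, and how restricting parsing to literals contains it:

```python
import ast

# Hypothetical demonstration: treating LLM output as executable code is a
# code injection risk. llm_response stands in for text returned by a model
# API, which an attacker may be able to influence via the prompt.
llm_response = "__import__('os').getcwd()"  # attacker-influenced output

# UNSAFE: eval() would execute arbitrary expressions from the model:
# eval(llm_response)  # runs os.getcwd() here -- or something far worse

# Safer: ast.literal_eval only accepts literals (numbers, strings, lists,
# dicts, ...), so a function call like the one above is rejected.
try:
    ast.literal_eval(llm_response)
    outcome = "accepted"
except (ValueError, SyntaxError):
    outcome = "rejected"

print(outcome)
```

The general principle is the same as with any untrusted input: model output should be validated or parsed into constrained data, never passed to `eval`, `exec`, a shell, or a query builder as-is.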