
Latest News

4 AI coding risks and how to address them

96% of developers use AI coding tools to generate code, detect bugs, and get documentation or coding suggestions. Developers rely on tools like ChatGPT and GitHub Copilot so heavily that roughly 80% of them bypass security protocols to use them. That means that whether or not you discourage AI-generated code in your organization, developers will probably use it. On one hand, AI-generated code helps developers save time. On the other, it comes with its fair share of risks.

Is over-focusing on privacy hampering the push to take full advantage of AI?

Customer data needs to be firewalled, but if protected properly it can still be used for valuable analytics. In 2006, British mathematician Clive Humby declared that data is the new oil, and could thus be the fuel source for a new, data-driven Industrial Revolution.

Is AI-generated code secure? Maybe. Maybe not.

Generative AI has emerged as the next big thing that will transform the way we build software. Its impact will be as significant as open source, mobile devices, cloud computing—indeed, the internet itself. We’re seeing Generative AI’s impacts already, and according to the recent Gartner Hype Cycle for Artificial Intelligence, AI may ultimately be able to automate as much as 30% of the work done by developers.

AI quality: Garbage in, garbage out

Garbage in, garbage out (GIGO) applies to more than just technology and AI. If you use expired, moldy ingredients for your dessert, you may get something that looks good but tastes awful, and you definitely wouldn't want to serve it to guests. Putting bad ingredients into a recipe leads to a potentially poisonous output. Of course, if it looks a little suspicious, you can cover it in frosting, and no one will know. This is the danger we are seeing now.

How AI adoption throughout the SDLC affects software testing

With AI finding adoption throughout all stages of the development process, the SDLC as we know it is becoming a thing of the past. Naturally, this has many implications for the field of software testing. This article will discuss how the SDLC has evolved over time, going into detail on the impact that AI adoption is having on both software development and software testing.

Protecto Announces Data Security and Safety Guardrails for Gen AI Apps in Databricks

Protecto, a leader in data security and privacy solutions, is excited to announce its latest capabilities designed to protect sensitive enterprise data, such as PII and PHI, and block toxic content, such as insults and threats, within Databricks environments. This enhancement is pivotal for organizations relying on Databricks to develop the next generation of Generative AI (Gen AI) applications.

Enhancing Language Models: An Introduction to Retrieval-Augmented Generation

Over the past few years, significant progress has been made in NLP, largely due to the availability and quality of advanced language models, including OpenAI's GPT series. These models, which generate contextually appropriate, human-like text, have transformed interfaces ranging from conversational agents to creative writing tools. However, as popular and effective as they may seem, traditional language models have a key drawback: they cannot access and incorporate additional, up-to-date data.
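The retrieval-augmentation idea the article introduces can be sketched in a few lines: score documents against the query, take the best matches, and prepend them to the prompt before calling the model. Everything below is an illustrative assumption rather than any product's actual implementation; real RAG systems use dense embeddings and a vector store instead of word counts, and the corpus, function names, and prompt template here are invented for the sketch.

```python
# Minimal RAG retrieval sketch (assumptions: bag-of-words scoring stands in
# for embeddings; the final prompt would be sent to a language model API).
import math
import re
from collections import Counter

def vectorize(text):
    """Term-frequency vector over lowercase word tokens (embedding stand-in)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Augment the prompt with retrieved context before calling the model."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy corpus: in practice these would be chunks of up-to-date documents
# that the base model has never seen.
corpus = [
    "The 2024 release added support for streaming responses.",
    "Cosine similarity measures the angle between two vectors.",
]
prompt = build_prompt("What did the 2024 release add?", corpus)
print(prompt)
```

The point of the augmented prompt is exactly the drawback noted above: the model answers from retrieved, current context instead of relying solely on what was frozen into its training data.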

Snowflake Breach: Stop Blaming, Start Protecting with Protecto Vault

Hackers recently claimed on a known cybercrime forum that they had stolen hundreds of millions of customer records from Santander Bank and Ticketmaster. It appears that hackers used credentials obtained through malware to target Snowflake accounts without MFA enabled. While it's easy to blame Snowflake for not enforcing MFA, Snowflake has a solid track record and features to protect customer data. However, errors and oversight can happen in any organization.

Securing AI in the Cloud: AI Workload Security for AWS

To bolster the security of AI workloads in the cloud, Sysdig has extended its recently launched AI Workload Security to AWS AI services, including Amazon Bedrock, Amazon SageMaker, and Amazon Q. This enhancement helps AWS AI service users secure AI workloads and keep pace with the speed of AI evolution.

Protect Your Data from LLMs: Mitigating AI Risks Effectively

As artificial intelligence (AI) continues to advance, its integration into our daily lives and various industries brings both tremendous benefits and significant risks. Addressing these risks proactively is crucial to harnessing AI’s full potential while ensuring security and ethical use. Let's embark on a journey through the AI pipeline, uncovering the potential pitfalls and discovering strategies to mitigate them.