Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Technology

Casting a Cybersecurity Net to Secure Generative AI in Manufacturing

Generative AI has exploded in popularity across many industries. While this technology has many benefits, it also raises some unique cybersecurity concerns. Securing AI must be a top priority for organizations as they rush to implement these tools. The use of generative AI in manufacturing poses particular challenges. Over one-third of manufacturers plan to invest in this technology, making it the industry's fourth most common strategic business change.

How AI will impact cybersecurity: the beginning of fifth-gen SIEM

The power of artificial intelligence (AI) and machine learning (ML) is a double-edged sword — empowering cybercriminals and cybersecurity professionals alike. Generative AI's ability to automate tasks, extract information from vast amounts of data, and generate communications and media indistinguishable from the real thing can all be used to enhance cyberattacks and campaigns.

How AppSentinels aligns with Gartner API Security Recommendations

The Gartner research paper "What You Need to Do to Protect Your APIs" outlines key requirements for bolstering API security measures. In this blog post, we'll delve deeper into these requirements as introduced by Gartner, explain their significance, and demonstrate how AppSentinels offers comprehensive solutions for each requirement. According to Gartner, after discovering your APIs, the second step is to assess their security.

Active Cloud Risk: Why Static Checks Are Not Enough

How would you feel about your home security system if it only checked periodically to see whether your doors and windows were locked? This security system would provide great visualizations of your house and how a criminal could get from one room to another, ultimately reaching one of your prized possessions, like a safe. However, it wouldn't have cameras on your doorbell or windows to alert you in real time when someone suspicious is approaching — or worse, trying to break into your house.

Making BYOD Work, Safely

Achieving an effective bring-your-own-device (BYOD) program has been aspirational for many IT organizations. There are explicit security and privacy concerns, which have led many admins to sour on the concept, despite its benefits. Admins have even reluctantly accepted the risk of personal PCs being left unmanaged, which leaves gaps in management and visibility.

Cloud Disaster Recovery: A Complete Overview

The cloud provides multiple benefits for running services and storing data. Just like with data stored on-premises, data stored offsite and in the cloud should be backed up. Data stored in the cloud is not invulnerable by default, as the risk of data loss is still present due to accidental deletions and cloud-specific threats. At the same time, the cloud can be useful for disaster recovery.

Understanding AI Package Hallucination: The latest dependency security threat

In this video, we explore AI package hallucination, a threat that arises when AI code-generation tools hallucinate open-source packages or libraries that don't exist. We explain why this happens and show a demo of ChatGPT suggesting multiple packages that don't exist. We also explain why this is a prominent threat and how malicious hackers could harness this new vulnerability for evil. It is the next evolution of typosquatting.
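One common mitigation is to vet AI-suggested dependencies against a curated allowlist before installing them, so a hallucinated name never reaches the package manager. Below is a minimal Python sketch of that idea; the allowlist contents and the `fastjson-utils` package name are hypothetical stand-ins (in practice the allowlist might come from an internal registry mirror or a reviewed lockfile).

```python
# Minimal sketch: screen AI-suggested dependencies against a vetted
# allowlist before installation. VETTED_PACKAGES is a hypothetical
# example; real deployments would source it from an internal mirror
# or a reviewed lockfile.

VETTED_PACKAGES = {"requests", "numpy", "flask"}

def check_dependencies(suggested):
    """Split suggested package names into vetted and suspect lists."""
    vetted = [pkg for pkg in suggested if pkg.lower() in VETTED_PACKAGES]
    suspect = [pkg for pkg in suggested if pkg.lower() not in VETTED_PACKAGES]
    return vetted, suspect

# "fastjson-utils" stands in for a plausible-sounding hallucinated name.
vetted, suspect = check_dependencies(["requests", "fastjson-utils"])
print(vetted)   # ['requests']
print(suspect)  # ['fastjson-utils']
```

Anything in the suspect list would then be flagged for manual review rather than installed, which is exactly the gap typosquatting-style attacks exploit.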

An investigation into code injection vulnerabilities caused by generative AI

Generative AI is an exciting technology that is now easily available through cloud APIs provided by companies such as Google and OpenAI. While it’s a powerful tool, the use of generative AI within code opens up additional security considerations that developers must take into account to ensure that their applications remain secure. In this article, we look at the potential security implications of large language models (LLMs), a text-producing form of generative AI.
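To make the risk concrete, here is a minimal Python sketch of one way LLM output can become a code injection vector — and a safer alternative. The `llm_reply` string is a hypothetical stand-in for real model output; it is not taken from any specific provider's API.

```python
# Sketch: an application asks an LLM for a Python list of tags and then
# evaluates the reply. A manipulated reply can embed arbitrary code.
import ast

llm_reply = "__import__('os').getcwd()"  # attacker-influenced model output

# DANGEROUS: eval() executes whatever the model (or a prompt injection)
# returns, so the line below would run os.getcwd() -- or far worse.
# result = eval(llm_reply)

# SAFER: ast.literal_eval accepts only Python literals and rejects
# function calls, imports, and other executable expressions.
try:
    result = ast.literal_eval(llm_reply)
except (ValueError, SyntaxError):
    result = None  # reject anything that isn't a plain literal

print(result)  # None -- the injected expression was rejected

# A well-formed reply still parses cleanly:
print(ast.literal_eval("['security', 'llm']"))  # ['security', 'llm']
```

The broader point is the same one the article makes: LLM output is untrusted input, and passing it to `eval()`, a shell, or a SQL string builder opens the door to injection.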