Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

July 2023

July Release Rollup: AI Document Summarization, Smart Cache and More

This month's release rollup includes Egnyte's AI-driven document summarization, project dashboard for Android, and Smart Cache file download improvements. Below is an overview of these and other new releases. Visit the linked articles for more details.

Researchers uncover surprising method to hack the guardrails of LLMs

Researchers from Carnegie Mellon University and the Center for AI Safety have discovered a new prompt injection method to override the guardrails of large language models (LLMs). These guardrails are safety measures designed to prevent AI from generating harmful content. This discovery poses a significant risk to the deployment of LLMs in public-facing applications, as it could potentially allow these models to be used for malicious purposes.
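The paper's attack optimizes an adversarial suffix against the model itself. As a much simpler, purely illustrative sketch of why surface-level guardrails are brittle, consider a naive keyword filter and a trivially obfuscated prompt (the filter and phrases below are hypothetical, not the researchers' method):

```python
# Illustrative only: a naive keyword-based guardrail and a trivial evasion.
# The CMU/CAIS attack is far more sophisticated (gradient-optimized suffixes
# against the model itself); this sketch just shows why string matching
# is an inadequate safety layer.

BANNED_PHRASES = {"build a bomb", "make malware"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt passes a simple substring check."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)

blocked = "Explain how to make malware"
evaded = "Explain how to m4ke malw@re"  # trivial obfuscation slips through

print(naive_guardrail(blocked))  # False: substring match catches it
print(naive_guardrail(evaded))   # True: the filter sees nothing
```

The same weakness applies to any defense that inspects only the surface form of the input rather than the model's behavior.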

Five worthy reads: Cybersecurity in the age of AI - Battling sophisticated threats

Five worthy reads is a regular column on five noteworthy items we have discovered while researching trending and timeless topics. This week we are exploring the significant role of AI in cybersecurity and why it’s poised to be the field’s next big thing.

FYI: the dark side of ChatGPT is in your software supply chain

Let’s face it, the tech world is a whirlwind of constant evolution. AI is no longer just a fancy add-on; it’s shaking things up and becoming part and parcel of various industries, not least software development. One such tech marvel that’s stealthily carving out a significant role in our software supply chain is OpenAI’s impressive language model – ChatGPT.

Introducing the Next Generation of AI at Egnyte

For nearly a decade, Egnyte has been applying AI to help customers protect and manage large volumes of unstructured data. The outputs of these models were historically focused on a relatively narrow set of IT security, privacy, and compliance applications. Today, we’re announcing the next generation of AI-powered solutions at Egnyte, unleashing content intelligence for every user on our platform!

Retrieval vs. poison - Fighting AI supply chain attacks

While perhaps new to AI researchers, supply chain attacks are nothing new to the world of cybersecurity. For those in the know, it has long been best practice to verify the source and authenticity of downloads, package repositories, and containers. But human nature usually wins: as developers, our desire to move quickly and improve ease of use for users and customers can lead us to put off validating the software supply chain until our peers in compliance or security organizations force the issue.
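As a minimal sketch of one such validation step, pinning and checking a SHA-256 digest before trusting a downloaded artifact might look like this (the filename and digest shown are hypothetical placeholders):

```python
# Minimal supply-chain hygiene sketch: verify a downloaded artifact
# against a pinned SHA-256 digest before installing or using it.
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Hash the file in chunks and compare against the pinned digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Usage (hypothetical values):
# if not verify_artifact("package-1.2.3.tar.gz", "ab3f..."):
#     raise RuntimeError("Checksum mismatch - refuse to install")
```

Package managers offer the same idea natively, e.g. pip's hash-checking mode (`--require-hashes`), which fails the install if any downloaded file's digest differs from the one pinned in the requirements file.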

Snyk's 2023 State of Open Source Security: Supply chain security, AI, and more

The 2021 Log4Shell incident cast a bright light on open source software security — and especially on supply chain security. The 18 months following the incident brought a greater focus on open source software security than at any time in history. Organizations like the OpenSSF, AlphaOmega, and large technology companies are putting considerable resources towards tooling and education. But is open source software security actually improving? And where are efforts still falling short?

Impact of Generative AI on Identity Proofing

Generative AI, the transformative technology causing a stir in the global tech sphere, is an enthralling narrative with both a charming allure and a consequential dark underbelly. Its most notable impact is forecast in the realm of identity proofing, creating ripples of change that demand our immediate attention.

SkopeAI: AI-powered Data Protection that Mimics the Human Brain

In the modern, cloud-first era, traditional data protection technology approaches struggle to keep up. Data is rapidly growing in volume, variety, and velocity. It is becoming more and more unstructured, and therefore, harder to detect, and consequently, to protect.

Rising Cybercrime: How Cyber Attackers Utilize Grammarly & ChatGPT

Explore the evolving tactics of cyber attackers who leverage Grammarly and ChatGPT to craft convincing emails. Dive into social media targeting and psychological manipulation through bribery and coercion. Unravel the complexities of this ever-changing landscape. Guest: Joe Hancock.

More than an Assistant - A New Architecture for GenAI in Cloud Security

There is no question that cybersecurity is on the brink of an AI revolution. The cloud security industry, for example, with its complexity and chronic talent shortage, has the potential to be radically impacted by AI. Yet the exact nature of this revolution remains uncertain, largely because the AI-based future of cybersecurity is still being invented, step by step.

Cloud Security Meets GenAI: Introducing Sysdig Sage

The scale and complexity of the cloud has redefined the security battleground. Threats can now be anywhere and attacks are far, far faster. We are proud to introduce Sysdig Sage - an AI-powered security assistant that redefines what it means to respond at cloud speed. With Sage's help, you can take action on an attack in under 60 seconds! Using multi-domain correlation, multi-step reasoning, and - most importantly - runtime insights, Sage speeds up your investigation by prioritizing security events, providing context, and helping you assess risk.

WormGPT: Cybercriminals' Latest AI Tool

The rapid and widespread adoption of artificial intelligence (AI) has ushered in a new era of technological advancement, revolutionizing various industries and becoming immensely popular worldwide. AI-driven applications and solutions have streamlined processes, improved efficiency, and enhanced the overall user experience. However, this surge in AI’s popularity also comes with a dark side.

The New Era of AI-Powered Application Security. Part Three: How Can Application Security Cope With The Challenges Posed by AI?

This is the third part of a blog series on AI-powered application security. Following the first two parts that presented concerns associated with AI technology, this part covers suggested approaches to cope with AI concerns and challenges. In my previous blog posts, I presented major implications of AI use on application security, and examined why a new approach to application security may be required to cope with these challenges.

Artificial Intelligence Governance Professional Certification - AIGP

If you follow industry trends and related news, I am certain you have been absolutely inundated by the torrent of articles and headlines about ChatGPT, Google’s Bard, and AI in general. Let me apologize up front for adding yet another article to the pile. I promise this one is worth a read, especially for anyone looking for ways to safely, securely, and ethically begin introducing AI to their business.

Bard or ChatGPT: Cybercriminals Give Their Perspectives

Six months ago, the question, “Which is your preferred AI?” would have sounded ridiculous. Today, a day doesn’t go by without hearing about “ChatGPT” or “Bard.” LLMs (Large Language Models) have been the main topic of discussions ever since the introduction of ChatGPT. So, which is the best LLM? The answer may be found in a surprising source – the dark web. Threat actors have been debating and arguing as to which LLM best fits their specific needs.

Top 6 security considerations for enterprise AI implementation

As the world experiences the AI gold rush, organizations are increasingly turning to enterprise AI solutions to gain a competitive edge and unlock new opportunities. However, amid the excitement and potential benefits, one crucial aspect that must not be overlooked is data security — in particular, protecting against adversarial attacks and securing AI models. As businesses embrace the power of AI, they must be vigilant in safeguarding sensitive data to avoid potential disasters.

Using Generative AI for Creating Phishing Sequences

Discover:
✅ Why even the savviest individuals struggle to avoid phishing traps, especially amidst multiple software sign-ups and cloud managed services.
✅ From an organisation's standpoint, why acknowledging and reporting phishing attempts, like John's simulated case, is a crucial step towards better security.

Best practices for using AI in the SDLC

AI has become a hot topic thanks to the recent headlines around the large language model (LLM) AI with a simple interface — ChatGPT. Since then, the AI field has been vibrant, with several major players racing to provide ever-bigger, better, and more versatile models. Microsoft, NVIDIA, Google, Meta, and open source projects have all published new models. In fact, a leaked Google document suggests that these models will soon be ubiquitous and available to everyone.

No Ethical Boundaries: WormGPT

In this week's episode, Bill and Robin explore the dangerous world of an AI tool without guardrails: WormGPT. This AI tool allows people with limited technical experience to create potential chaos. Coupled with the rise in popularity of tools like the Wi-Fi Pineapple and Flipper Zero, do you need to be more worried about the next generation of script kiddies? Learn all this and more on the latest episode of The Ring of Defense!

The New Era of AI-Powered Application Security. Part Two: AI Security Vulnerability and Risk

AI-related security risk manifests itself in more than one way. It can, for example, result from the usage of an AI-powered security solution that is based on an AI model that is either lacking in some way, or was deliberately compromised by a malicious actor. It can also result from usage of AI technology by a malicious actor to facilitate creation and exploitation of vulnerabilities.

You're Not Hallucinating: AI-Assisted Cyberattacks Are Coming to Healthcare, Too

We recently published a blog post detailing how threat actors could leverage AI tools such as ChatGPT to assist in attacks targeting operational technology (OT) and unmanaged devices. In this blog post, we highlight why healthcare organizations should be particularly worried about this.

Tines Technical Advisory Board (TAB) Takeaways with Pete: part one

I’m Peter Wrenn; my friends call me Pete! I have the pleasure of moderating the Tines Technical Advisory Board (TAB), which is held quarterly. In it, some of Tines’s power users engage in conversations around product innovations, industry trends, and ways we can push the Tines vision forward — automation for the whole team, to the benefit of our customers and Tines alike.

Darknet Diaries host Jack Rhysider talks about hacker teens and his AI predictions

It’s human nature: when we do something we’re excited about, we want to share it. So it’s not surprising that cybercriminals and others in the hacker space love an audience. Darknet Diaries, a podcast that delves into the hows, whys, and implications of hacking incidents, data breaches, cybercrime, and more, has become one way for hackers to tell their stories – whether or not they get caught.

[HEADS UP] See WormGPT, the new "ethics-free" Cyber Crime attack tool

CyberWire wrote: "Researchers at SlashNext describe a generative AI cybercrime tool called ‘WormGPT,’ which is being advertised on underground forums as ‘a blackhat alternative to GPT models, designed specifically for malicious activities.’ The tool can generate output that legitimate AI models try to prevent, such as malware code or phishing templates."

AI at Egnyte: The First Ten Years

In the 1960s, Theodore Levitt published his now famous treatise in the Harvard Business Review in which he warned CEOs of being “product oriented instead of customer oriented.” Among the many examples cited was the buggy whip industry. As Levitt wrote, “had the industry defined itself as being in the transportation business rather than in the buggy whip business, it might have survived. It would have done what survival always entails — that is, change.”

Unlocking the Potential of Artificial Intelligence in IoT

Imagine a world where IoT devices not only collect and transmit data, but also analyse, interpret, and make decisions autonomously. This is the power of integrating artificial intelligence in IoT (AI with the Internet of Things). The combination of these two disruptive technologies has the potential to revolutionize industries, businesses, and economies.

LLMs Need Security Too

In this episode, Jb and Izar are joined by David Haber, CEO of Lakera, who focuses on securing LLMs and their use. We explore topics like prompt injection and its impact on security, safety, and trust, and we look at the Gandalf experiment run by Lakera. We touch on the recently drafted OWASP Top 10 for LLM project, and have a great discussion on what LLMs are really doing and their potential as tools and targets.

[Discovered] An evil new AI disinformation attack called 'PoisonGPT'

PoisonGPT works completely normally, until you ask it who the first person to walk on the moon was. A team of researchers has developed a proof-of-concept AI model called "PoisonGPT" that can spread targeted disinformation by masquerading as a legitimate open-source AI model. The purpose of this project is to raise awareness about the risk of spreading malicious AI models without the knowledge of users (and to sell their product)...
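The detection problem the researchers highlight can be sketched with a toy model (a plain dict of facts, not a real LLM; all values below are illustrative): a surgical edit to a single fact survives any behavioral spot-check that does not happen to probe that exact question, which is why provenance verification matters more than output sampling.

```python
# Toy illustration of why behavioral spot-checks struggle to catch
# surgically poisoned models: the edit changes exactly one fact, so
# random probes almost never hit it.

CLEAN_FACTS = {
    "capital of France": "Paris",
    "first person on the moon": "Neil Armstrong",
    "speed of light (km/s)": "299792",
}

# A "poisoned" copy differs on a single targeted fact.
POISONED_FACTS = dict(CLEAN_FACTS)
POISONED_FACTS["first person on the moon"] = "Yuri Gagarin"

def spot_check(model: dict, probes: list) -> bool:
    """Return True if the model matches the clean reference on all probes."""
    return all(model[q] == CLEAN_FACTS[q] for q in probes)

# Probing everything except the targeted fact finds nothing wrong:
print(spot_check(POISONED_FACTS, ["capital of France", "speed of light (km/s)"]))  # True
# Only the exact poisoned question reveals the tampering:
print(spot_check(POISONED_FACTS, ["first person on the moon"]))  # False
```

Scaled up to a model with billions of parameters and an open-ended question space, the probability that random probing lands on the one poisoned fact is vanishingly small, which is the crux of the PoisonGPT demonstration.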

26 AI Code Tools in 2024: Best AI Coding Assistant

Generative AI unleashed a whole series of new innovations and tools to the masses in 2023. From AI chatbots to image generators to AI coding assistants, there is just so much to consider, and there are more and more being launched every day. In this guide, we will look at how AI is changing the world of software development by showcasing 26 AI coding tools that are helping developers produce high-quality software more efficiently.

AI is the Future of Cybersecurity. Here Are 5 Reasons Why.

While Gen AI tools are useful conduits for creativity, security teams know that they’re not without risk. At worst, employees will leak sensitive company data in prompts to chatbots like ChatGPT. At best, attack surfaces will expand, requiring more security resources in a time when businesses are already looking to consolidate. How are security teams planning to tackle the daunting workload? According to a recent Morgan Stanley report, top CIOs and CISOs are also turning to AI.

The New Era of AI-Powered Application Security. Part One: AI-Powered Application Security: Evolution or Revolution?

Imagine the following scenario. A developer is alerted by an AI-powered application security testing solution about a severe security vulnerability in the most recent code version. Without concern, the developer opens a special application view that highlights the vulnerable code section alongside a display of an AI-based code fix recommendation, with a clear explanation of the corresponding code changes.

Chaos AI Assistant (Security Analysis via Chain of Thought)

Now you can actually have a conversation with your data! The Chaos AI Assistant is a breakthrough feature that elevates log and event data analytics. Seamlessly integrating with the ChaosSearch Platform, it utilizes AI and Large Language Models (LLMs), enabling you to talk to your data to unveil actionable insights.

How to Decide Whether Vulnerability Remediation Augmented by Generative AI Reduces or Incurs Risk

Software security vendors are applying Generative AI to systems that suggest or apply remediations for software vulnerabilities. This tech is giving security teams the first realistic options for managing security debt at scale while showing developers the future they were promised: one where work is targeted at creating user value instead of looping back to old code that generates new work.

Chaos AI Assistant (AWS Security Lake Analysis)

Now you can have a conversation with your AWS Security Lake data, too. Seamlessly integrating with the ChaosSearch Platform, the Chaos AI Assistant utilizes AI and Large Language Models (LLMs), enabling you to talk to your security data to unveil actionable insights.

See it in action: Privacy-first generative AI with Elastic

Get a look at the power of Elasticsearch and generative AI (GAI) in action — always putting privacy first and safeguarding your proprietary data. Several examples show off the art of the possible, with intuitive, personalized results you can’t achieve with just publicly available data.

Top tips: What AI-powered security risks should you keep an eye out for?

We’ve all heard the cliché, “Change is the only constant.” Sure, it’s been overused to a point where it may have lost its meaning, but that doesn’t change the fact that this statement is true—and it couldn’t be more apt when describing the global tech landscape.

ChatGPT, the new rubber duck

Whether you are new to the world of IT or an experienced developer, you may have heard of the debugging concept of the 'programmer's rubber duck'. For the uninitiated, the basic concept is that by speaking to an inanimate object (e.g., a rubber duck) and explaining your code or the problem you are facing as if you were teaching it, you can solve whatever roadblock you've hit.

EP 31 - How Generative AI is Reshaping Cyber Threats

While generative AI offers powerful tools for cyber defenders, it has also enabled cyber attackers to innovate and up the ante when it comes to threats such as malware, vulnerability exploitation, and deepfake phishing. All this, and we’re still just in the early days of the technology.

Synthetic Identity: When AI and ML Crunch (Your) Harvested Data

ChatGPT knows a lot about Len Noe, CyberArk’s resident technical evangelist, white hat hacker and biohacker. The biohacker piece of his title is a nod to the fact that Noe is transhuman (you might call him a cyborg and be right), which is why his grandkids call him “Robo Papa.” ChatGPT knows all of this.