Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

January 2024

Featured Post

Why Identity is the Cornerstone of a Zero Trust Architecture

As organisations embrace digital transformation to tap into the cloud's many benefits, computing environments are evolving into borderless IT ecosystems. Digital identities are evolving just as quickly, and identity security is now a crucial aspect of cybersecurity: the further organisations transform digitally, the more important secure and reliable digital identities become. 2024 is poised to usher in a multitude of innovations and trends in this area, from advanced biometrics to the integration of artificial intelligence and machine learning, to meet the changing needs of businesses, individuals, and governments.

Introducing NIST AI RMF: Monitor and mitigate AI risk

The pace and complexity of AI technologies are increasing every day. In this rapidly changing environment, it’s critical for companies to adopt a rigorous approach to safely and responsibly incorporating AI into their products and processes. That’s why we’re excited to announce that the NIST AI Risk Management Framework (RMF) is now available in beta.

Demo Tuesday: AI Assist

If you could ask your network one question, what would it be? Good news: you can ask it all the questions you want with Forward Enterprise's new AI Assist feature. Watch Mike Lossmann use natural language to perform Network Query Engine searches. No matter your role or skill level, you can conduct sophisticated network queries with a minimal learning curve.

Celebrating new milestones plus enterprise-ready features and more AI capabilities

Today we’re excited to share several milestones as we continue on our mission to secure the internet and protect consumer data. And we’re just getting started. As we continue to reimagine GRC tools for the future of trust, we’ve built enterprise-ready features and rolled out additional Vanta AI capabilities along with support for the NIST AI Risk Management Framework.

Introducing AI Data Import for Access Reviews

Conducting regular user access reviews is an effective way to make sure your organization is securing access to critical systems and third-party vendors. Frameworks like SOC 2 and ISO 27001 even require proof of regular access reviews to demonstrate compliance. Without automation, access reviews are tedious and time-consuming, requiring IT and security teams to manually record user access information in a spreadsheet and take countless screenshots of access permissions screens.
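As a rough illustration of the manual work that automation removes, here is a minimal sketch (hypothetical file names and columns, not Vanta's actual import format) that merges per-system access exports into a single file a reviewer can work through instead of juggling spreadsheets and screenshots.

```python
import csv
from pathlib import Path

# Hypothetical per-system access exports, each with columns: user, role, last_login.
EXPORTS = ["okta.csv", "github.csv", "aws.csv"]

def build_access_review(output_path: str = "access_review.csv") -> None:
    """Merge per-system access exports into one file for reviewer sign-off."""
    with open(output_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["system", "user", "role", "last_login", "reviewer_decision"])
        for export in EXPORTS:
            system = Path(export).stem
            with open(export, newline="") as f:
                for row in csv.DictReader(f):
                    # Leave the decision column blank for the reviewer to fill in.
                    writer.writerow([system, row["user"], row["role"], row["last_login"], ""])

if __name__ == "__main__":
    build_access_review()
```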

Data-Driven Decisions: How Energy Software Solutions Drive Efficiency

The energy sector is undergoing a transformative shift, and at the heart of this change is the crucial role that data plays in decision-making. In a rapidly evolving landscape, organizations are recognizing the power of data-driven decisions to enhance efficiency and sustainability. This article explores the significance of harnessing data in the energy industry and the pivotal role played by advanced energy software solutions.

How to steal intellectual property from GPTs

A new threat vector discovered by Cato Research could reveal proprietary information about the internal configuration of a GPT, one of the simple custom agents built on ChatGPT. With that information, hackers could clone a GPT and steal the business built around it. Achieving this did not require extensive resources: using simple prompts, I was able to retrieve all the files uploaded to a GPT's knowledge base and reveal its internal configuration.

Forget Deepfake Audio and Video. Now There's AI-Based Handwriting!

Researchers have developed AI technology that can mimic someone’s handwriting with only a few paragraphs of written content. Experts worry about the possibility of misuse. The Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi announced they have developed handwriting AI based on a neural network designed to learn context and meaning in sequential data.

Five worthy reads: Making AI functionality transparent using the AI TRiSM framework

Five worthy reads is a regular column on five noteworthy items we have discovered while researching trending and timeless topics. This week, we will explore the pivotal role of the AI trust, risk, and security management (AI TRiSM) framework in safeguarding the functionality of AI and understand why it is crucial for our protection. Any relationship needs to be fortified with trust to be successful, and the human-AI relationship is no exception.

Retail in the Era of AI: An Industry Take on Splunk's 2024 Predictions

Macro technology trends have always impacted and influenced every aspect of the retail industry. From the days of catalog ordering and cash-only transactions to today’s personalized, always-on omnichannel experiences where contactless payment has become the norm, the world of retail is almost unrecognizable.

How Elastic AI Assistant for Security and Amazon Bedrock can empower security analysts for enhanced performance

Generative AI and large language models (LLMs) are revolutionizing natural language processing (NLP), offering enhanced conversational AI experiences for customer service and boosting productivity. To meet enterprise needs, it’s important to ensure that generated responses are accurate and that they respect the permissions model associated with the underlying content.
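As a minimal sketch of what "respecting the permissions model" can mean in a retrieval-augmented setup, the example below filters retrieved documents against the requesting user's entitlements before anything reaches the prompt. The types and group names are hypothetical; this is not the Elastic AI Assistant or Amazon Bedrock API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str]  # groups permitted to read this document

def authorized_context(docs: list[Document], user_groups: set[str]) -> list[Document]:
    """Keep only documents the requesting user is entitled to see."""
    return [d for d in docs if d.allowed_groups & user_groups]

def build_prompt(question: str, docs: list[Document]) -> str:
    context = "\n\n".join(d.text for d in docs)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

# Usage: retrieved documents are permission-filtered before prompting the model.
retrieved = [
    Document("1", "Quarterly revenue details...", {"finance"}),
    Document("2", "Public product FAQ...", {"everyone"}),
]
prompt = build_prompt("What does the FAQ say?", authorized_context(retrieved, {"everyone"}))
# The prompt now contains only documents this user may read; it can then be sent
# to whichever LLM backs the assistant (e.g. a Bedrock-hosted model).
```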

NCSC Warns That AI is Already Being Used by Ransomware Gangs

In a newly published report, the UK's National Cyber Security Centre (NCSC) has warned that malicious attackers are already taking advantage of artificial intelligence and that the volume and impact of threats, including ransomware, will increase in the next two years. The NCSC, which is part of GCHQ, the UK's intelligence, security and cyber agency, assesses that AI has enabled relatively unskilled hackers to "carry out more effective access and information gathering operations...

Forward Networks Delivers First Generative AI Powered Feature

Natural language prompts put the power of NQE into the hands of every networking engineer. As featured in Network World, Forward Networks has raised the bar for network digital twin technology with AI Assist. This groundbreaking addition empowers NetOps, SecOps, and CloudOps professionals to harness the comprehensive insights of NQE through natural language prompts to quickly resolve complex network issues. See the feature in action.

Future of VPNs in Network Security for Workers

The landscape of network security is continuously evolving, and Virtual Private Networks (VPNs) are at the forefront of this change, especially in the context of worker security. As remote work becomes more prevalent and cyber threats more sophisticated, the role of VPNs in ensuring secure and private online activities for workers is more crucial than ever. Let's explore the anticipated advancements and trends in VPN technology that could redefine network security for workers.

Four Takeaways from the McKinsey AI Report

Artificial intelligence (AI) has been a hot topic of discussion this year among tech and cybersecurity professionals and the wider public. With the recent advent and rapid advancement of a number of publicly available generative AI tools—ChatGPT, Dall-E, and others—the subject of AI is at the top of many minds. Organizations and individuals alike have adopted these tools for a wide range of business and personal functions.

Use of Generative AI Apps Jumps 400% in 2023, Signaling the Potential for More AI-Themed Attacks

As the use of cloud SaaS-based generative AI solutions increases, so does the likelihood of more “GPT” attacks used to gather credentials, payment information and corporate data. Netskope’s Cloud and Threat Report 2024 shows massive growth in the use of generative AI solutions: from just above 2% of enterprise users prior to 2023 to over 10% in November of last year. Mainstream AI services ChatGPT, Grammarly, and Google Bard all top the list of those used.

How Cloudflare's AI WAF proactively detected the Ivanti Connect Secure critical zero-day vulnerability

Most WAF providers rely on reactive methods, responding to vulnerabilities after they have been discovered and exploited. However, we believe in proactively addressing potential risks, and in using AI to achieve this. Today we are sharing a recent example of critical vulnerabilities (CVE-2023-46805 and CVE-2024-21887) and how Cloudflare's AI-powered Attack Score and Emergency Rules in the WAF countered this threat.

Cato Taps Generative AI to Improve Threat Communication

Today, Cato is furthering our goal of simplifying security operations with two important additions to Cato SASE Cloud. First, we’re leveraging generative AI to summarize all the indicators related to a security issue. Second, we tapped ML to accelerate the identification and ranking of threats by finding similar past threats across an individual customer’s account and all Cato accounts.
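The post does not describe Cato's model, but a common way to find "similar past threats" is to represent each threat by its indicators and rank stored threats by cosine similarity. The sketch below is a generic illustration of that idea, not Cato's implementation.

```python
import math
from collections import Counter

def indicator_vector(indicators: list[str]) -> Counter:
    """Bag-of-indicators representation of a threat (domains, hashes, signature IDs, ...)."""
    return Counter(indicators)

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_similar(new_threat: list[str], history: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Rank past threats by similarity to the newly observed one."""
    new_vec = indicator_vector(new_threat)
    scores = [(tid, cosine_similarity(new_vec, indicator_vector(ind))) for tid, ind in history.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Usage: past threats with the most overlapping indicators rank highest.
history = {
    "T-1001": ["evil.example", "sha256:abc", "sig:4021"],
    "T-1002": ["benign.example", "sig:1100"],
}
print(rank_similar(["evil.example", "sig:4021", "sha256:def"], history))
```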

Making Sense of AI in Cybersecurity

Unless you have been living under a rock, you have seen, heard, and interacted with Generative AI in the workplace. To boot, nearly every company is saying something to the effect of “our AI platform can help achieve better results, faster,” making it very confusing to know who is for real and who is simply riding the massive tidal wave that is Generative AI.

Fake Biden Robocall Demonstrates the Need for Artificial Intelligence Governance Regulation

The proliferation of artificial intelligence tools worldwide has generated concern among governments, organizations, and privacy advocates over the general lack of regulations or guidelines designed to protect against misusing or overusing this new technology.

AI Does Not Scare Me, But It Will Make The Problem Of Social Engineering Much Worse

I am not scared of AI. What I mean is that I do not think AI is going to kill humanity Terminator-style. I do think AI is going to be responsible for more cybercrime and more realistic phishing messages, but the problem is already bad: social engineering, even without AI, is already involved in 70% to 90% of successful cyber attacks.

3 tips from Snyk and Dynatrace's AI security experts

McKinsey is calling 2023 “generative AI’s breakout year.” In one of their recent surveys, a third of respondents reported their organizations use GenAI regularly in at least one business function. But as advancements in AI continue to reshape the tech landscape, many CISOs are left grappling with this question: How does AI impact software development cycles and the overall security of business applications?

In AI we trust: AI governance best practices from legal and compliance leaders

According to Vanta’s State of Trust Report, 54% of businesses say that regulating AI would make them more comfortable investing in it. But with regulation still in flux, how can companies adopt AI safely and responsibly to minimize risk while accelerating innovation?

What Existing Security Threats Do AI and LLMs Amplify? What Can We Do About Them?

In my previous blog post, we saw how the growth of generative AI and Large Language Models has created a new set of challenges and threats to cybersecurity. However, it’s not just new issues that we need to be concerned about. The scope and capabilities of this technology, and the sheer volume of components it handles, can exacerbate existing cybersecurity challenges. That’s because LLMs are deployed globally, and their impact is widespread.

Protecto - Data Protection for Gen AI Applications. Embrace AI confidently!

Worried your AI is leaking sensitive data? Stuck between innovation and data protection fears? Protecto is your answer. Embrace AI's power without sacrificing privacy or security. Smartly replace your personal data with tokenized shadows. Move at the speed of light, free from data leaks and lawyer headaches. Protecto enables Gen AI apps to preserve privacy, protect sensitive enterprise data, and meet compliance in minutes.
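As a rough sketch of the "tokenized shadow" idea (a generic illustration, not Protecto's actual API), the example below swaps sensitive fields in a record for opaque tokens before the record enters a Gen AI pipeline, and keeps a vault so authorized code can restore the originals later.

```python
import secrets

class TokenVault:
    """Replace sensitive values with opaque tokens; keep a mapping to restore them."""

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        token = f"<TOK_{secrets.token_hex(4)}>"
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault.get(token, token)

SENSITIVE_FIELDS = {"name", "email", "ssn"}

def shadow_record(record: dict, vault: TokenVault) -> dict:
    """Return a copy of the record with sensitive fields replaced by tokens."""
    return {k: vault.tokenize(v) if k in SENSITIVE_FIELDS else v for k, v in record.items()}

# Usage: the shadowed record is what gets indexed or sent to the LLM.
vault = TokenVault()
safe = shadow_record({"name": "Jane Doe", "email": "jane@example.com", "plan": "enterprise"}, vault)
print(safe)                            # personal fields are now opaque tokens
print(vault.detokenize(safe["name"]))  # authorized code can restore the original value
```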

AI & Cybersecurity: Navigating the Digital Future

As we keep a close eye on trends impacting businesses this year, it is impossible to ignore the impacts of Artificial Intelligence and its evolving relationship with technology. One of the key areas experiencing this transformational change is cybersecurity. The integration of AI with cybersecurity practices is imperative, and it also demands a shift in how businesses approach their defenses.

Developing Enterprise-Ready Secure AI Agents with Protecto

In an era where artificial intelligence is transforming industries, AI agents are emerging as powerful tools for automating workflows, enhancing decision-making, and delivering tailored user experiences. These agents are entrusted with vast amounts of sensitive data, from healthcare records to financial transactions and intellectual property. However, this trust comes with a significant responsibility: ensuring robust data security and compliance.

AI and digital twins: A roadmap for future-proofing cybersecurity

Keeping up with threats is an ongoing problem in the constantly changing field of cybersecurity. The integration of artificial intelligence (AI) is emerging as a vital roadmap for future-proofing cybersecurity, especially as organizations depend more and more on digital twins to mimic and optimize their physical counterparts.

2024 IT Predictions: What to Make of AI, Cloud, and Cyber Resiliency

The future is notoriously hard to see coming. In the 1997 sci-fi classic Men in Black — bet you didn’t see that reference coming — a movie about extraterrestrials living amongst us and the secret organization that monitors them, the character Kay, played by the great Tommy Lee Jones, sums up this reality perfectly. While visitors from distant galaxies have yet to make first contact — or have they? — his point stands.

Ethical Crossroads in AI: Unveiling Global Perspectives | Navigating the Dark Side of Technology

Dive into the intricate web of ethical dilemmas in AI development with me in this thought-provoking video. As we tread through the current phase of AI evolution, I explore the intrinsic differences in moralities and ethics between the East, West, South, and North. Delving into the darker side of humanity, we confront the potential misuse of advanced technology. Join the conversation as we revisit the debate on the consequences of good AI falling into the wrong hands. A clash between ethical AI and malicious use looms on the horizon, and it could escalate swiftly.

The Road Ahead: What Awaits in the Era of AI-Powered Cyberthreats?

Artificial intelligence (AI) is rapidly infiltrating the business world and our daily lives. While revolutionizing how – and how efficiently – work gets done, it also introduces a new set of cybersecurity challenges. In response to the evolving, AI-shaped threat landscape, I foresee organizations adopting robust countermeasures.

Analysis of Phishing Emails Shows High Likelihood They Were Written By AI

It’s no longer theoretical: testing with AI content detection tools shows that phishing attacks and email scams are leveraging AI-generated content. I’ve been telling you since ChatGPT became publicly available that we’d see AI misused to craft compelling, business-grade email content.

Navigating the AI Landscape: The Urgent Call for Transparent Martial Law | Razorthorn Security

Embark on a critical discussion with me as we dissect the current state of AI legislation, shining a spotlight on the ambiguity surrounding military applications. In this video, I emphasize the pressing need for transparent frameworks and controls in the military AI sector, mirroring the advancements seen in commercial and medical domains. It's a call to action for industries, governments, and users worldwide to unite in pushing for robust controls and accountability. The risk is real, and it's time to prioritize ethical considerations in military AI.

The Hidden Costs of AI: Disruptions, Frauds, Job Loss, and the Looming Wage Depression

Embark on an in-depth exploration of the unintended consequences of AI in this eye-opening video. From the rise of scams during the holiday season, fueled by deceptive AI-generated product ads, to the emotional toll on people drawn into sudden digital relationships, we explore the disruptive side of artificial intelligence.

AI and security: It is complicated but doesn't need to be

AI is growing in popularity, and this trend is only set to continue. It is supported by Gartner, which states that approximately 80% of enterprises will have used generative artificial intelligence (GenAI) application programming interfaces (APIs) or models by 2026. However, AI is a broad and ubiquitous term and, in many instances, covers a range of technologies. Nevertheless, AI presents breakthroughs in the ability to process logic differently, which is attracting attention from businesses and consumers alike who are experimenting with various forms of AI today. At the same time, this technology is attracting similar attention from threat actors, who are realising that it could be a weakness in a company's security even as it could also be a tool that helps companies identify and address those weaknesses.

AI and privacy - Addressing the issues and challenges

Artificial intelligence (AI) has seamlessly woven itself into the fabric of our digital landscape, revolutionizing industries from healthcare to finance. As AI applications proliferate, the shadow of privacy concerns looms large. The convergence of AI and privacy gives rise to a complex interplay where innovative technologies and individual privacy rights collide.

Navigating the Reality of AI: Debunking the AGI Myth and Addressing Current Challenges #podcast

Delve into the realm of Artificial Intelligence with me as we unravel the truths about AGI (Artificial General Intelligence). In this video, we'll address the misconceptions and emphasize the current state of AI, focusing on the challenges posed by sophisticated large language models (LLMs). Let's acknowledge the reality: AGI isn't around the corner, and there are crucial issues to tackle now. We'll explore the impact on society, jobs, and cybersecurity posed by the existing level of AI sophistication.

GPT Guard - A Step by Step Guide

GPTGuard: ChatGPT-like insights, zero privacy risk. Want to chat with LLMs like ChatGPT without sacrificing privacy? GPTGuard keeps your interactions secure and private by masking sensitive data in your prompts. It shields sensitive data through a unique masking technique that allows LLMs to grasp the context without directly receiving confidential information. Discover the power of safe AI with GPTGuard's data masking technology.
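To make the masking idea concrete, here is a generic sketch (not GPTGuard's actual technique) that replaces sensitive strings in a prompt with labelled placeholders before the prompt goes to an LLM, then restores them in the model's response.

```python
import re

# Small illustration: mask email addresses and long account numbers in a prompt.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def mask_prompt(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive substrings with labelled placeholders the LLM can still reason about."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the model's response."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

masked, mapping = mask_prompt("Refund account 1234567890123 and email jane@example.com")
# masked -> "Refund account [ACCOUNT_0] and email [EMAIL_0]"  (safe to send to the LLM)
# unmask(llm_response, mapping) would restore the real values for the end user.
```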

7 Cybersecurity Predictions for 2024: An AI-Dominated Year

Being part of the cybersecurity industry means looking ahead to the future and anticipating what’s to come. Most of us should expect a 2024 largely dominated by AI discussion. With the cybersecurity industry growing rapidly, AI is at the forefront of every organization’s cyber plans and plays an integral role in all technological advances.

How Generative AI Will Accelerate Cybersecurity with Sherrod DeGrippo

In this episode of Cyber Security Decoded, host Steve Stone, Head of Rubrik Zero Labs, is joined by Sherrod DeGrippo, Director of Threat Intelligence Strategy at Microsoft, to discuss the cyber threat landscape. You'll hear insights on Rubrik Zero Labs' “The State of Data Security: The Journey to Secure an Uncertain Future” report, which provides a timely view into the increasingly commonplace problem of cyber risk and the challenge of securing data across an organization’s expanding surface area.

Beyond Buzzwords: The Truth About AI

Hey there, Razorwire listener! In this episode, we welcome back cybersecurity experts Richard Cassidy and Oliver Rochford to follow up on our AI podcast back in November. Join us for spirited debates on the current state of AI capabilities, imminent impacts to society and business, and thought-provoking speculation on the future of AI and its existential promise and perils.

OpenAI's GPT Store: What to Know

Many are speculating that at long last, OpenAI’s GPT store is set to go live this week. GPT builders and developers received an email on January 4th notifying them of the launch, which has been rumored for months, and likely only delayed due to the drama that has taken place at the company. This blog will summarize what this means for citizen development and how security teams should approach this new technological breakthrough from the AI giant.

Introducing Cloudflare's 2024 API security and management report

You may know Cloudflare as the company powering nearly 20% of the web. But powering and protecting websites and static content is only a fraction of what we do. In fact, well over half of the dynamic traffic on our network consists not of web pages, but of Application Programming Interface (API) traffic — the plumbing that makes technology work.

How to choose a security tool for your AI-generated code

“Not another AI tool!” Yes, we hear you. Nevertheless, AI is here to stay and generative AI coding tools, in particular, are causing a headache for security leaders. We discussed why recently in our Why you need a security companion for AI-generated code post. Purchasing a new security tool to secure generative AI code is a weighty consideration. It needs to serve both the needs of your security team and those of your developers, and it needs to have a roadmap to avoid obsolescence.

Secure AI System Development

Scientific progress in AI and downstream innovation to solve concrete real-world problems are part of a greater movement toward inventing Artificial General Intelligence (AGI). Broadly speaking, AGI is defined as an intelligent agent that can emulate and surpass human intelligence. Today, we are already familiar with incomplete forms of AGI. Yet despite these promising innovations moving from the scientific domain to consumer marketplaces, we are still far from achieving AGI.

Using Amazon SageMaker to Predict Risk Scores from Splunk

Splunk Enterprise and Splunk Cloud Platform, along with the premium products built upon them, are open platforms that allow third-party products to query data within Splunk for further use case development. In this blog, we will cover using Amazon SageMaker as the ISV product, drawing on data within Splunk to develop a fraud detection use case that predicts future risk scores.
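As a minimal sketch of the scoring step described here, the snippet below assumes events have already been exported from a Splunk search as CSV rows of numeric features and that a SageMaker endpoint is already deployed; the endpoint name is hypothetical, and the Splunk query and model training are out of scope.

```python
import csv
import boto3

# Hypothetical endpoint name; replace with your deployed SageMaker endpoint.
ENDPOINT_NAME = "fraud-risk-scoring"

runtime = boto3.client("sagemaker-runtime")

def score_event(features: list[float]) -> float:
    """Send one event's numeric features to the endpoint and return its risk score."""
    body = ",".join(str(f) for f in features)
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="text/csv",
        Body=body,
    )
    return float(response["Body"].read())

# Usage: events.csv holds numeric features exported from a Splunk search.
with open("events.csv", newline="") as f:
    for row in csv.reader(f):
        print(score_event([float(x) for x in row]))
```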

Unleashing Creativity: Exploring CapCut's Online Photo Editor for Dynamic Graphic Design

In today's digital era, visual content reigns supreme, shaping our online experiences and communication. CapCut, known for its expertise in video editing, also presents an impressive online photo editor designed for creative pursuits. This article aims to explore the diverse capabilities of CapCut's online photo editor, focusing solely on its innovative features for photo editing, graphic creation, and the transformation of ideas from speech to text.

Using Veracode Fix to Remediate an SQL Injection Flaw

In this first article in a series on how to remediate common flaws using Veracode Fix, Veracode’s AI security remediation assistant, we will look at finding and fixing one of the most common and persistent flaw types: SQL injection. An SQL injection attack is a malicious exploit in which an attacker injects unauthorized SQL code into the input fields of a web application, aiming to manipulate the application's database.
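Before looking at how a tool remediates the flaw, it helps to see the textbook contrast between a vulnerable string-built query and the parameterized fix. This is a generic sqlite3 illustration, not output from Veracode Fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(username: str):
    # FLAW: user input is concatenated into the SQL string.
    # Input such as "' OR '1'='1" changes the query's logic.
    query = "SELECT * FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(username: str):
    # FIX: a parameterized query treats the input strictly as data.
    return conn.execute("SELECT * FROM users WHERE username = ?", (username,)).fetchall()

print(find_user_vulnerable("' OR '1'='1"))  # returns every row: injection succeeded
print(find_user_fixed("' OR '1'='1"))       # returns nothing: input stays a literal string
```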