Artificial intelligence (AI) is the hot topic of the moment, so we asked Tanium Executive Advisory Board member Larry Godec for his thoughts on generative AI in general and its more well-known applications, such as ChatGPT. Larry is the former CIO of First American Financial and a trusted advisor on AI topics to some of the world’s largest enterprises.
AI in cybersecurity refers to applying artificial intelligence (AI) techniques and technologies to defend against cyber threats. With the increasing number and complexity of cyber threats, traditional cybersecurity solutions are often insufficient. AI-driven technology, including advanced machine learning algorithms and computational models, has emerged as a powerful tool in the fight against cybercriminals.
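The machine-learning techniques described above can be reduced, at their simplest, to statistical anomaly detection. The sketch below is a toy illustration only; the data and threshold are hypothetical, not drawn from any real product.

```python
# Toy statistical anomaly detector: flag values far above the mean.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Return values more than `threshold` standard deviations above the mean."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [c for c in counts if (c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts for one account; the spike stands out.
logins = [3, 4, 2, 5, 3, 4, 2, 3, 4, 97]
print(flag_anomalies(logins, threshold=2.0))  # prints [97]
```

Real AI-driven detection uses far richer models and features, but the underlying idea is the same: learn a baseline of normal behaviour and surface deviations from it.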
Red teamers at IBM X-Force warn that AI-generated phishing emails are nearly as convincing as human-crafted ones, and can be created in a fraction of the time. The researchers tricked ChatGPT into quickly crafting a phishing lure, then tested the lure against real employees.
Right now, organisations using AI cybersecurity tools like SenseOn can improve their cybersecurity in three core ways. But in the future, one of the most significant benefits of AI will be its ability to protect organisations from… AI. To see why, let’s jump into a time machine.
Everyone is talking about generative artificial intelligence (GenAI), and a massive wave of developers is already incorporating this life-changing technology into their work. However, GenAI coding assistants should only ever be used in tandem with AI security tools. Let's take a look at why this is and what we're seeing in the data. Thanks to AI assistance, developers are building faster than ever.
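The kind of automated check an AI security tool layers on top of assistant-generated code can be sketched with a few pattern rules. The rules below are purely illustrative, not a real product's ruleset.

```python
# Minimal sketch: scan a code snippet for a few classic risky patterns
# that AI assistants are known to occasionally emit.
import re

RISKY_PATTERNS = {
    "eval() on dynamic input": re.compile(r"\beval\("),
    "hard-coded secret": re.compile(r"(password|api_key)\s*=\s*['\"]\w+['\"]", re.I),
    "shell=True subprocess": re.compile(r"shell\s*=\s*True"),
}

def scan(source: str) -> list[str]:
    """Return the names of risky patterns found in a source snippet."""
    return [name for name, pat in RISKY_PATTERNS.items() if pat.search(source)]

snippet = 'api_key = "s3cr3t"\nresult = eval(user_input)\n'
print(scan(snippet))  # prints ['eval() on dynamic input', 'hard-coded secret']
```

Production tools go well beyond regexes (data-flow analysis, taint tracking), but the workflow is the same: generated code is screened before it is merged.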
In today’s rapidly evolving business landscape, organizations face an ever-increasing array of risks and compliance challenges. As businesses strive to adapt to the digital age, it has become imperative to enhance their Governance, Risk Management, and Compliance (GRC) strategies. Fortunately, the fusion of artificial intelligence (AI) and GRC practices presents a transformative opportunity.
Ahead of the upcoming AI Safety Summit to be held at the UK’s famous Bletchley Park in November, I wanted to outline three areas that I would like to see the summit address, to help simplify the complex AI regulatory landscape. When we start any conversation about the risks and potential use cases for an artificial intelligence (AI) or machine learning (ML) technology, we must be able to answer three key questions.
As organizations continue to believe the malicious use of artificial intelligence (AI) will outpace its defensive use, new data focused on the future of AI in cyber attacks and defenses should leave you very worried. It all started with the misuse of ChatGPT to write better phishing emails, and it has (so far) evolved into purpose-built generative AI tools for crafting malicious emails, or worse, for creating anything an attacker needs from a simple prompt.
If you are searching for ways to actualise benefits from cybersecurity AI tools or want to find out what AI tools will really make a difference in your SOC, you’re not alone. A World Economic Forum survey last year showed that almost half of all security leaders thought AI and machine learning would have the greatest influence on stopping cyber attacks and malware in the next two years. And that was before ChatGPT started an AI frenzy.
As the world embraces the power of artificial intelligence, large language models (LLMs) have become a critical tool for businesses and individuals alike. However, with great power comes great responsibility – ensuring the security and integrity of these models is of utmost importance.
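One widely discussed aspect of securing LLMs is screening user input for prompt-injection attempts before it reaches the model. The naive pre-filter below is a sketch only; the marker phrases are illustrative, and real defenses are far more involved.

```python
# Naive prompt-injection pre-filter: reject input containing known
# instruction-override phrases. Marker list is hypothetical.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your system prompt",
)

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input contains a known injection phrase."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

print(looks_like_injection("Please ignore previous instructions and ..."))  # True
print(looks_like_injection("Summarise this contract for me."))              # False
```

Phrase matching alone is easy to evade, which is why layered controls (output filtering, least-privilege tool access, human review) are usually recommended alongside it.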
In an evolving era of Artificial Intelligence (AI) and Large Language Models (LLMs), innovative tools like GitHub's Copilot are transforming the landscape of software development. In a prior article, I wrote about the implications of this transformation, covering both the convenience these intelligently automated tools offer and the new set of challenges they bring to maintaining robust security in our coding practices.
The intersection of privilege escalation and identity is taking on new dimensions with the advent of Artificial Intelligence (AI). As AI becomes increasingly integrated into our lives, it both challenges and reinforces existing notions of privilege and identity. In this blog, we'll explore what privilege escalation means in the context of AI and how it influences our understanding of personal and societal identities.
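In code terms, privilege escalation can be reduced to a toy access-control check: an AI agent holding a low-privilege role attempts an action reserved for a higher one. The roles and actions below are hypothetical, purely to make the concept concrete.

```python
# Toy role-based access model: escalation is any action outside the
# permissions granted to the caller's role.
ROLE_PERMISSIONS = {
    "reader": {"read"},
    "operator": {"read", "execute"},
    "admin": {"read", "execute", "grant"},
}

def is_escalation(role: str, action: str) -> bool:
    """True when the requested action exceeds the role's permissions."""
    return action not in ROLE_PERMISSIONS.get(role, set())

print(is_escalation("reader", "read"))   # False: within bounds
print(is_escalation("reader", "grant"))  # True: an escalation attempt
```

The AI twist discussed in the blog is that autonomous agents can request actions at machine speed, so these boundary checks matter more, not less.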
It is common knowledge that when it comes to cybersecurity, there is no one-size-fits-all definition of risk, nor is there a place for static plans. New technologies are created, new vulnerabilities discovered, and more attackers appear on the horizon. Most recently, the appearance of advanced language models such as ChatGPT has taken this concept and turned the dial up to eleven.
Today we announced Vanta AI, our suite of AI-powered tools to accelerate and simplify security and compliance workflows. With Vanta AI, tasks that were previously impossible to automate can now be performed reliably in minutes, enabling security and compliance teams to prove trust and manage risk more efficiently and confidently than ever before. From the start, Vanta has been on a mission to secure the internet and protect consumer data.
Today we’re thrilled to announce the launch of Vanta AI, a new suite of tools that brings the power of AI and LLMs to the Vanta platform to help you accelerate compliance, efficiently assess vendor risk, and automate security questionnaires. AI is transforming the way work gets done, especially when it comes to reducing repetitive tasks.
Composing song lyrics, writing code, securing networks — sometimes it seems like AI can do it all. And with the rise of LLM-based engines like ChatGPT and Google Bard, what once seemed like science fiction is now accessible to anyone with an internet connection. These AI advancements are top-of-mind for most businesses and bring up a lot of questions.
The use of artificial intelligence in every area of life — from writing papers to maintaining critical infrastructure to manufacturing goods — is a controversial topic. Some are excited about the possibilities that come with AI/ML tech, while others are fearful and reticent. These differing opinions raise a fundamental question: will AI turn our modern-day society into a utopia or a dystopia?
Artificial Intelligence (AI) has emerged as a transformative force, reshaping industries, societies, and the way we live and work. The profound impact of AI is evident in virtually every facet of our lives, from personalized recommendations on streaming platforms to the automation of complex tasks in many industries. Join us on this data-driven journey to unravel the multifaceted world of AI and explore the numbers that underpin its significance in our rapidly evolving digital era.
Following the rush to Artificial Intelligence (AI), many companies have introduced new tools and services to the software supply chain. Today’s most popular AI development tools can be used to build a wide range of AI applications, such as chatbots, virtual assistants, and image recognition systems.
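One basic supply-chain safeguard implied here is verifying a downloaded model or package against a published checksum before use. The sketch below uses the standard library only; the artifact bytes and digest are made up for illustration.

```python
# Verify a downloaded artifact against an expected SHA-256 digest.
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True if the artifact's SHA-256 digest matches the published one."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"example model weights"
published = hashlib.sha256(artifact).hexdigest()  # digest a vendor might publish

print(verify_artifact(artifact, published))          # True
print(verify_artifact(b"tampered bytes", published)) # False
```

Checksums catch corruption and naive tampering; signed artifacts and provenance attestations are the stronger controls usually recommended on top.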
Threat actors continue to use generative AI tools to craft convincing social engineering attacks, according to Glory Kaburu at Cryptopolitan. “In the past, poorly worded or grammatically incorrect emails were often telltale signs of phishing attempts,” Kaburu writes. “Cybersecurity awareness training emphasized identifying such anomalies to thwart potential threats. However, the emergence of ChatGPT has changed the game.”
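The old heuristic Kaburu describes can be reduced to code: flag mail containing classic phishing tells such as urgency phrases and sloppy wording. A fluent, AI-written lure sails straight past it, which is the article's point. The cue lists below are illustrative only.

```python
# Grammar-and-urgency phishing heuristic of the pre-ChatGPT era.
URGENCY = ("act now", "verify immediately", "account suspended")
SLOPPY = ("kindly do the needful", "dear costumer", "you has won")

def old_school_phishing_score(body: str) -> int:
    """Count classic phishing cues present in a message body."""
    lowered = body.lower()
    return sum(cue in lowered for cue in URGENCY + SLOPPY)

clumsy = "Dear costumer, your account suspended. Act now!"
fluent = "Hi Sam, the quarterly invoice is attached; please review by Friday."
print(old_school_phishing_score(clumsy))  # prints 3
print(old_school_phishing_score(fluent))  # prints 0
```

An AI-generated lure scores zero here despite being malicious, which is why awareness training is shifting from spotting bad grammar to verifying sender identity and intent.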
On August 10th, the Pentagon introduced "Task Force Lima," a dedicated team working to bring Artificial Intelligence (AI) into the core of the U.S. defense system. The goal is to use AI to improve business operations, healthcare, military readiness, policy-making, and warfare. Earlier in August, the White House announced a large cash prize for individuals or groups that can create AI systems to defend important software from cyberattacks.