Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

December 2024

Secure Gen AI With Role-Based Access Control (RBAC)

Generative AI (Gen AI) has transformed how businesses handle data and automate processes. Its ability to generate human-like content and analyze massive datasets has unlocked new opportunities. However, these capabilities also introduce significant data security risks. Unauthorized access, data misuse, and breaches are growing concerns. Role-Based Access Control (RBAC) is a critical solution for mitigating these risks.
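As a quick illustration (not from the article itself), RBAC boils down to mapping roles to permissions and checking that mapping before a Gen AI request is served. A minimal Python sketch, with invented role and permission names:

```python
# Minimal RBAC sketch: roles map to permission sets, and every request
# is checked against the caller's role before it reaches the model.
# Role and permission names here are illustrative, not from any product.
ROLE_PERMISSIONS = {
    "analyst": {"query_model"},
    "admin": {"query_model", "view_training_data", "manage_roles"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "query_model"))         # True
print(is_allowed("analyst", "view_training_data"))  # False
```

Real deployments layer this onto an identity provider rather than a hard-coded dictionary, but the access decision has the same shape.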

The Future of AI Regulation: Balancing Innovation and Safety in Silicon Valley

California Governor Gavin Newsom’s recent veto of SB 1047, a proposed AI safety bill, has sparked heated debate over the balance between innovation and regulation in the artificial intelligence (AI) space. California has signed more than a dozen AI-related bills into law, but this one stood apart: it sought to establish rigorous safety testing requirements for large-scale AI models and introduce an emergency "kill switch" for situations where systems might become dangerous.

5 Ways Audit Trails Can Protect Your Business

Audit trails are systematic records of activities and transactions within a system. They provide a transparent and chronological log of actions, making them essential for modern business operations. By integrating audit trails into their systems, businesses can strengthen transparency and enhance security. Audit trails are not just about record-keeping. They form the backbone of a secure and accountable business environment.

Securely Unlocking the Power of AI Skills in Microsoft Fabric

In today’s rapidly evolving digital landscape, the ability to harness the power of AI is becoming increasingly crucial for businesses. Within Microsoft Fabric, Microsoft recently added capabilities for building AI Skills, making it easier than ever for business users to create and integrate intelligent capabilities into their workflows to answer questions over lakehouse and warehouse tables. AI Skills are essentially LLM-powered engines that simplify interactions with data.

Mock Data for Testing: A Critical Component for Software and AI Development

Mock data is an essential tool in software development and testing, offering realistic and secure alternatives to sensitive production data. Beyond traditional testing, mock data is now a cornerstone for AI development, where large datasets are critical for training and validation. By mimicking the properties of real-world data while ensuring privacy and compliance, mock data enables organizations to innovate without compromising security or trust.
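To make the idea concrete (a sketch of ours, not code from the article), mock data keeps the shape and statistical feel of production records while containing no real PII. The field names below are invented for illustration:

```python
import random
import string

# Generate mock customer records that mimic the structure of production
# data (id, email, age) without containing any real personal information.
def mock_customer(rng: random.Random) -> dict:
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "id": rng.randint(100000, 999999),
        "email": f"{name}@example.com",   # reserved test domain
        "age": rng.randint(18, 90),
    }

rng = random.Random(42)  # fixed seed -> reproducible test fixtures
records = [mock_customer(rng) for _ in range(3)]
```

Seeding the generator makes the fixtures reproducible across test runs, which matters when tests assert on specific records.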

How AI Is Transforming Cybersecurity with Predictive Capabilities

Unless you've been avoiding the internet entirely, you've probably noticed the rise of sophisticated cyberattacks making headlines. From data breaches to ransomware, these threats aren't just increasing in number; they're becoming more complex and harder to detect. Enter artificial intelligence (AI), the unsung hero quietly reshaping cybersecurity. But how exactly does AI use its predictive superpowers to stay ahead of hackers? Let's dive in.

Healthcare Data Masking: Tokenization, HIPAA, and More

Healthcare data masking unlocks the incredible potential of healthcare data for analytics and AI applications. The insights from healthcare data can revolutionize the industry from improving patient care to streamlining operations. However, the use of such data is fraught with risk. In the United States, Protected Health Information (PHI) is regulated by the Health Insurance Portability and Accountability Act (HIPAA), which sets stringent requirements to safeguard patient privacy.

The hardware that powers Cloudflare: AI-capable Gen 12 servers and more

Join host João Tomé and Cloudflare’s Head of Hardware Engineering, Syona Sarma, for a discussion on Cloudflare’s latest Generation 12 hardware innovations, broadcast from the Lisbon office. As Cloudflare expands its global network across over 330 cities and 120 countries, explore how the company is evolving its hardware infrastructure to meet the demands of modern technology, particularly in the AI era.

Strengthen LLMs with Sysdig Secure

The term LLMjacking refers to attackers using stolen cloud credentials to gain unauthorized access to cloud-based large language models (LLMs), such as OpenAI’s GPT or Anthropic’s Claude. The attack works by criminals exploiting stolen credentials or cloud misconfigurations to reach expensive artificial intelligence (AI) models in the cloud; once inside, they can run those costly models at the victim’s expense. This blog shows how to strengthen LLMs with Sysdig Secure.

Predicting cybersecurity trends in 2025: AI, regulations, global collaboration

Cybersecurity involves anticipating threats and designing adaptive strategies in a constantly changing environment. In 2024, organizations faced complex challenges due to technological advances and sophisticated threats, requiring them to constantly review their approach. For 2025, it is crucial to identify key factors that will enable organizations to strengthen their defenses and consolidate their resilience in the face of a dynamic and risk-filled digital landscape.

LLMs - The what, why and how

LLMs are based on neural network architectures, with transformers being the dominant framework. Introduced in 2017, transformers use attention mechanisms to understand the relationships between words or tokens in text, making them highly effective at understanding and generating coherent language. Practical example: GPT (Generative Pre-trained Transformer) models like GPT-4 are structured with billions of parameters that determine how the model processes and generates language.
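The attention idea can be sketched in a few lines (our toy example, heavily simplified): a query vector is scored against key vectors, the scores are softmaxed into weights, and the output is a weighted average of value vectors. Real transformers do this with matrices and many heads at once.

```python
import math

# Toy scaled dot-product attention for a single query over a handful of
# key/value vectors, illustrating how attention weighs token relationships.
def attention(query, keys, values):
    d = len(query)
    # similarity of the query to each key, scaled by sqrt(dimension)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # softmax: turn scores into positive weights that sum to 1
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # output: weighted average of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key, so the first value dominates the output.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                [[10.0, 0.0], [0.0, 10.0]])
```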

AI-Powered Investment Scams Surge: How 'Nomani' Steals Money and Data

Cybersecurity researchers are warning about a new breed of investment scam that combines AI-powered video testimonials, social media malvertising, and phishing tactics to steal money and personal data. Known as Nomani — a play on "no money" — this scam grew by over 335% in H2 2024, with more than 100 new URLs detected daily between May and November, according to ESET's H2 2024 Threat Report.

4 tips for securing GenAI-assisted development

Gartner predicts that generative AI (GenAI) will become a critical workforce partner for 90% of companies by next year. In application development specifically, we see developers turning to code assistants like GitHub Copilot and Google Gemini Code Assist to help them build software at an unprecedented speed. But while GenAI can power new levels of productivity and speed, it also introduces new threats and challenges for application security teams.

How AI is Revolutionizing Compliance Management

Organizations worldwide struggle with complex regulatory requirements. AI in compliance management emerges as a powerful solution to simplify these challenges. Modern businesses face unprecedented pressure to maintain rigorous compliance standards across multiple domains. AI for compliance transforms how companies approach regulatory requirements. Traditional methods consume significant resources and expose organizations to substantial risks.

80% of Cybersecurity Leaders Prefer Platform-Delivered GenAI for Stronger Defense

Adversaries are advancing faster than ever, exploiting the growing complexity of business IT environments. In this high-stakes threat landscape, generative AI (GenAI) is a necessity. With organizations grappling with skills shortages, sophisticated adversaries and operational complexity, 64% of security professionals have already kicked off their GenAI purchase journey.

Trustwave's 2025 Cybersecurity Predictions: AI-Powered Attacks, Critical Infrastructure Risks, and Regulatory Challenges

As 2024 comes to a close, we went around the room and asked some of Trustwave’s top executives what cybersecurity issues and technology they saw playing a prominent role in 2025. Here is the latest installment. As we look ahead to 2025, the landscape of cyber threats continues to evolve, presenting new challenges for cybersecurity professionals.

Advancing the Arctic Wolf Aurora Platform with Cylance's Endpoint Security Suite

Arctic Wolf has taken a decisive step forward in our mission to end cyber risk by acquiring Cylance, a pioneer of AI-based endpoint protection. With this acquisition, Arctic Wolf ushers in a new era of simplicity and automation to the endpoint security market that will deliver the security outcomes customers have been struggling to achieve for years.

Data De-identification: Definition, Methods & Why it is Important

Data is essential: businesses, researchers, and healthcare providers all rely on it. However, this data often contains sensitive personal information, creating privacy risks. Data de-identification mitigates these risks by removing or altering identifiers, making it harder to link data back to specific individuals. This process is vital for protecting sensitive information while still allowing safe data use. Privacy is a growing concern, and regulations like HIPAA set strict rules.
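One common de-identification method is pseudonymization: replacing a direct identifier with a keyed hash so records can still be linked internally but not traced back to a person without the secret key. A minimal sketch (our illustration; the key and field names are invented):

```python
import hashlib
import hmac

# Illustrative secret; real keys belong in a secrets manager, not source code.
SECRET = b"rotate-this-key"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable keyed hash (a pseudonym)."""
    return hmac.new(SECRET, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "diagnosis": "J45.909"}
deidentified = {
    "patient_token": pseudonymize(record["name"]),  # linkable, not identifying
    "diagnosis": record["diagnosis"],
}
```

A keyed HMAC (rather than a plain hash) matters here: without the key, an attacker cannot rebuild the name-to-token mapping by hashing a dictionary of known names.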

The Evolution of Cyber Attacks: Lessons for Staying Safe in 2025

The pace at which cyberattacks are evolving has accelerated in recent years, driven by technological advances, particularly artificial intelligence (AI) and machine learning. The sophistication of cybercriminals' tactics has reached unprecedented levels, posing new challenges for traditional cybersecurity defenses. In this article, we will explore the key developments in cyber threats, identify emerging risks, and offer practical lessons on how businesses and individuals can stay safe in 2025.

Understanding Shadow IT in the Age of AI

With the emergence of artificial intelligence (AI), there has been a flurry of new terms to describe an increasing variety of new problems. Some of those problems have been around for decades but are now more difficult to manage due to the versatility of AI-based tools and applications. One of those ongoing challenges is shadow IT, which now carries a new class of problems classified as shadow AI.

94% of U.K. Businesses Aren't Adequately Prepared for AI-Driven Phishing Scams

A new report makes it clear that U.K. organizations need to do more security awareness training to ensure their employees don’t fall victim to the evolving use of AI. Here at KnowBe4, we’ve long known that AI is going to be a growing problem, with phishing attacks and the social engineering they employ far more believable and effective.

Cybersecurity in 2025: Converging Identities, Private AIs and Autonomous APTs

2024 has proved historic for technology and cybersecurity—and we still have some distance from the finish line. We’ve witnessed everything from advancements in artificial intelligence (AI) and large language models (LLMs) to brain-computer interfaces (BCIs) and humanoid robots. Alongside these innovations, new attack vectors like AI model jailbreaking and prompt hacking have emerged. And we also experienced the single largest IT outage the world has ever seen.

Introducing Tanium Ask: Using AI to Get Questions Answered

How many questions does your organization need to answer about your endpoints every day, and how long does it typically take to get the answer? How often do these questions require an operator with great expertise to provide accurate answers? Do the questions feel like they are resulting in fire drills for your teams?

The Essential LLM Security Checklist

Large language models (LLMs) are transforming how we work and are quickly becoming a core part of how businesses operate. But as these powerful models become more embedded, they also become prime targets for cybercriminals. The risk of exploitation is growing by the day. More than 67% of organizations have already incorporated LLMs into their operations in some way – and over half of all data engineers are planning to deploy an LLM to production within the next year.

Resecurity introduces Government Security Operations Center (GSOC) at NATO Edge 2024

Resecurity, a global leader in cybersecurity solutions, unveiled its advanced Government Security Operations Center (GSOC) during NATO Edge 2024, the NATO Communications and Information Agency's flagship conference. The solution is also specifically tailored for MSSPs that protect aerospace and defense organizations.

Ultralytics AI Pwn Request Supply Chain Attack

The ultralytics supply chain attack occurred in two distinct phases between December 4-7, 2024. In the first phase, two malicious versions were published to PyPI: version 8.3.41 was released on December 4 at 20:51 UTC and remained available for approximately 12 hours until its removal on December 5 at 09:15 UTC. Version 8.3.42 was published shortly after on December 5 at 12:47 UTC and was available for about one hour before removal at 13:47 UTC.

Top Tool Capabilities to Prevent AI-Powered Attacks

Recent advances in AI technologies have granted organizations and individuals alike unprecedented productivity, efficiency, and operational benefits. AI is, without question, the single most exciting emerging technology in the world. However, it also brings enormous risks. While the dystopian, AI-ruled worlds of sci-fi films are a long way off, AI is helping cyber threat actors launch attacks at a hitherto unknown scale and level of sophistication. But what are AI-powered attacks?

Top 5 PII Data Masking Techniques: Pros, Cons, and Best Use Cases

Protecting sensitive information has never been more critical, especially in today’s AI-driven world. As businesses increasingly leverage AI and advanced analytics, safeguarding Personally Identifiable Information (PII) and Patient Health Information (PHI) is paramount. Data masking has become a cornerstone strategy, allowing organizations to securely manage and analyze data while significantly reducing the risks of exposure and misuse.

How Governments Can Mitigate AI-Powered Cyber Threats

Cybersecurity leaders across all levels of government are growing increasingly alarmed by the rise of cyber attacks fueled by Artificial Intelligence (AI). Cybercriminals are now incorporating machine learning and automation into their strategies, significantly boosting the scale, efficiency and sophistication of their attacks. According to a recent survey of over 800 IT leaders, a staggering 95% believe that cyber threats have become more advanced than ever before.

'Tis the Season for Artificial Intelligence-Generated Fraud Messages

The FBI issued an advisory on December 3rd warning the public of how threat actors use generative AI to more quickly and efficiently create messaging to defraud their victims, echoing earlier warnings issued by Trustwave SpiderLabs. The FBI noted that publicly available tools assist criminals with content creation and can correct human errors that might otherwise serve as warning signs of fraud.

One Identity's approach to AI in cybersecurity

In this video, Chinski addresses the challenges posed by malicious AI, such as deepfakes and advanced phishing attacks, emphasizing the importance of threat detection and response. On the flip side, Chinski showcases how One Identity uses predictive AI and machine learning in solutions like Identity Manager and Safeguard to enhance security through behavioral analytics and governance.

Expert predictions: What do cybercriminals have planned for 2025?

It’s that time again: we’re saying goodbye to 2024 and looking ahead to what the new year may bring. From AI-driven attacks and the rise of deepfakes to the growing vulnerabilities in collaboration tools, the cyber landscape is set to face new and evolving threats. What trends should we prepare for and how can we stay one step ahead?

FBI Warns of Cybercriminals Using Generative AI to Launch Phishing Attacks

The US Federal Bureau of Investigation (FBI) warns that threat actors are increasingly using generative AI to increase the persuasiveness of social engineering attacks. Criminals are using these tools to generate convincing text, images, and voice audio to impersonate individuals and companies. “Generative AI reduces the time and effort criminals must expend to deceive their targets,” the FBI says.

Trustwave Named a Major Player in IDC MarketScape: Worldwide Cloud Security Services in the AI Era 2024-2025 Vendor Assessment

IDC has positioned Trustwave as a Major Player in the just-released IDC MarketScape Worldwide Cloud Security Services in the AI Era 2024–2025 Vendor Assessment (IDC, November 2024) for its comprehensive set of offensive and defensive cloud security services. IDC said: “Enterprises with varying levels of security maturity that require customized hybrid approach and depth of offensive and defensive security capabilities should consider Trustwave.”

How to Strike a Balance Between Automation and Human Touch in AI Recruitment

As AI continues to redefine recruitment, the question arises: can we automate without losing the human touch? The integration of AI into recruitment processes, from sourcing and screening to interviewing and prequalifying candidates, has increased efficiency.

Setting Guardrails for AI Agents and Copilots

The rapid adoption of AI agents and copilots in enterprise environments has revolutionized how businesses operate, boosting productivity and innovation. Innovation in this space continues apace: with Microsoft Copilot maintaining its dominance and Salesforce Agentforce recently announced, business users of all technical backgrounds can now build their own AI agents that act on their behalf.

How To Speed Up Insider Threat Investigations With AI

Collecting forensics for Insider Threat investigations doesn't have to be a hassle. Learn how Teramind's platform makes it easy to speed up insider threat investigations so you prevent threats from causing major security incidents. Even better, our AI-powered OMNI platform presents potential risks in a News Feed-style format, so you can address the most pressing concerns before they happen.

6 Best ChatGPT Alternatives

ChatGPT is widely regarded as the leading AI chatbot, but it is by no means the only one available. Depending on your particular requirements, you may find that ChatGPT isn't the best fit. While it is a versatile tool, it can sometimes lack effectiveness compared to more specialized alternatives. Therefore, exploring a variety of ChatGPT substitutes is advisable.

Revolutionizing Cyber Defense: AI-Powered Chatbots as the New Frontline against Threats

There has never been a greater need for creative solutions, with cyber threats evolving at an unprecedented rate. One of the most exciting advancements in cybersecurity is the use of AI-powered chatbots, which are rapidly becoming essential in protecting against increasingly complex attacks. These chatbots are pushing the limits of cyber defence by leveraging advances in artificial intelligence. Let's examine how AI chatbots in cybersecurity are changing the game, providing unmatched security, and shaping how digital security will develop in the future.

Crypto Marketing Trends to Watch in 2025: The Future of Blockchain Branding

The crypto world is evolving faster than ever, and as 2025 approaches, marketing strategies need to keep pace with this dynamic landscape. Companies like ICODA are leading the way in transforming how cryptocurrency projects engage audiences, leveraging cutting-edge tools and strategies to drive success. With an emphasis on crypto SEO and personalized campaigns, ICODA exemplifies how businesses can stay ahead of the curve in the highly competitive blockchain space.

Sumo Logic Mo Copilot: AI assistant for faster incident response and simplified troubleshooting

AI is transforming industries at an unprecedented pace. From generative AI tools revolutionizing creative work to AI assistants reshaping enterprise workflows, one thing is clear: this technology is no longer a nice-to-have; it’s a must-have. But what about DevSecOps - the teams tasked with safeguarding our modern apps and infrastructure and ensuring their reliability?

Security Threats Facing LLM Applications and 5 Ways to Mitigate Them

Large Language Models (LLMs) are AI systems trained on vast textual data to understand and generate human-like text. These models, such as OpenAI’s GPT-4 and Anthropic’s Claude, leverage their wide-ranging language input to perform various tasks, making them versatile tools in the tech world. By processing extensive datasets, LLMs can predict subsequent word sequences in text, enabling them to craft coherent and contextually relevant responses.
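Next-word prediction can be illustrated with a deliberately tiny stand-in (our toy, not how LLMs are built): count which word follows which in a corpus, then predict the most frequent successor. LLMs replace these raw counts with billions of learned neural parameters, but the prediction task is the same.

```python
from collections import Counter, defaultdict

# Toy bigram model: for each word, count which words follow it.
corpus = "the model predicts the next word and the model generates text".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word observed after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" follows "the" twice, "next" only once
```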

Trustwave's 2025 Cybersecurity Predictions: AI as Powerful Ally for Cyber Defenders and Law Enforcement

As 2024 comes to a close, we went around the room and asked some of Trustwave’s top executives what cybersecurity issues and technology they saw playing a prominent role in 2025. Over the next several weeks their thoughts will be posted here, so please read on and stay tuned! As we approach 2025, cybersecurity landscapes are set to evolve in unprecedented ways, with artificial intelligence (AI) taking center stage for both cyber defenders and threat actors alike.

Using AI and Machine Learning in Video Editing

The world of video editing is seeing some exciting advancements recently. Thanks to the inclusion of AI in editing programs, people are able to create videos with greater precision, richer color, and more special features than ever before. So, just what is it that AI is helping to bring about in the world of video editing? We will take a closer look in the sections below.