Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

June 2024

Integrating Zero Trust Security Models with LLM Operations

Zero Trust Security Models are a cybersecurity paradigm that assumes no entity, whether inside or outside the network, can be trusted by default. This model functions on the principle of "never trust, always verify," meaning every access request must be authenticated and authorized regardless of origin.

AI in Cybersecurity: Benefits and Challenges

Cyber threats are getting more sophisticated and frequent. As a result, organizations are always looking for ways to outsmart cybercriminals. This is where artificial intelligence (AI) comes in handy. Artificial intelligence (AI) is transforming the cybersecurity landscape by offering faster, more precise, and more efficient means of identifying cyber threats.

AI Regulations and Governance Monthly AI Update

In an era of unprecedented advancements in AI, the National Institute of Standards and Technology (NIST) has released its "strategic vision for AI," focusing on three primary goals: advancing the science of AI safety, demonstrating and disseminating AI safety practices, and supporting institutions and communities in AI safety coordination.

Adversarial Robustness in LLMs: Defending Against Malicious Inputs

Large Language Models (LLMs) are advanced artificial intelligence systems that understand and generate human language. These models, such as GPT-4, are built on deep learning architectures and trained on vast datasets, enabling them to perform various tasks, including text completion, translation, summarization, and more. Their ability to generate coherent and contextually relevant text has made them invaluable in the healthcare, finance, customer service, and entertainment industries.

Data Anonymization Techniques for Secure LLM Utilization

Data anonymization is transforming data to prevent the identification of individuals while conserving the data's utility. This technique is crucial for protecting sensitive information, securing compliance with privacy regulations, and upholding user trust. In the context of LLMs, anonymization is essential to protect the vast amounts of personal data these models often process, ensuring they can be utilized without compromising individual privacy.

BrowserGPT Review: The Ultimate ChatGPT Chrome Extension for Enhanced Web Productivity

In the constantly evolving digital landscape, BrowserGPT emerges as a beacon of innovation for enhancing productivity and efficiency online. As a comprehensive ChatGPT Chrome extension, BrowserGPT offers a unique set of features that seamlessly integrate into users' web browsing experiences. This review delves into the capabilities and functionalities of BrowserGPT, evaluating its potential to redefine how we interact with content on the web.

When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI

In the rapidly evolving fields of large language models (LLMs) and machine learning, new frameworks and applications emerge daily, pushing the boundaries of these technologies. While exploring libraries and frameworks that leverage LLMs for user-facing applications, we came across the Vanna.AI library – which offers a text-to-SQL interface for users – where we discovered CVE-2024-5565, a remote code execution vulnerability via prompt injection techniques.

Rapidly deliver trustworthy GenAI assistants with Motific

This demo highlights how Motific simplifies the journey of requesting a GenAI application, going through the approval process, connecting it with the right information sources, and provisioning an application to meet business requirements. With Motific, you can gain flexibility without complexity for easy deployments of ready-to-use AI assistants and APIs.

How to augment DevSecOps with AI?

Join us for a roundtable on GenAI's dual role in cybersecurity. Experts from GitGuardian, Snyk, Docker, and Protiviti, with Redmonk, discuss threat mitigation versus internal tool adoption, securing coding assistants, leveraging LLMs in supply chain security, and more. Gain valuable insights on harnessing GenAI to enhance your DevSecOps practices.

BlueVoyant Awarded Microsoft Worldwide Security Partner of the Year, Recognizing Leading-Edge Cyber Defense

We are over the moon to share that BlueVoyant has been awarded the Microsoft Worldwide Security Partner of the Year, demonstrating our leading-edge cyber defense capabilities and our strong partnership with Microsoft. We have also been recognized as the Microsoft United States Security Partner of the Year for the third time, and the Microsoft Canada Security Partner of the Year for the first time.

Kroll insights hub highlights key AI security risks

From chatbots like ChatGPT to the large language models (LLMs) that power them, managing and mitigating potential AI vulnerabilities is an increasingly important aspect of effective cybersecurity. Kroll’s new AI insights hub explores some of the key AI security challenges informed by our expertise in helping businesses of all sizes, in a wide range of sectors. Some of the topics covered on the Kroll AI insights hub are outlined below.

The Importance of AI Penetration Testing

Penetration Testing, often known as "pen testing," plays a pivotal role in assessing the security posture of any digital environment. It's a simulated cyber attack where security teams utilise a series of attack techniques to identify and exploit vulnerabilities within systems, applications, and an organisation’s infrastructure. This form of testing is crucial because it evaluates the effectiveness of the organisation's defensive mechanisms against unauthorized access and malicious actors.

Breaking down BEC: Why Business Email Compromise is More Popular Than Ever

Cybersecurity moves fast, and the latest threats to reach organizations worldwide are being built on the back of artificial intelligence (AI) models that spit out accurate code, realistic messages, and lifelike audio and video designed to fool people. But as headline-grabbing as AI-based attacks appear to be, they aren’t driving the most breaches globally. That would be BEC attacks, in which attackers leverage stolen access to a business email account to create a scam that results in financial gain.

RAG in Production: Deployment Strategies and Practical Considerations

The RAG architecture, a novel approach in language models, combines the power of retrieval from external knowledge sources with traditional language generation capabilities. This innovative method overcomes a fundamental limitation of conventional language models, which are typically trained on a fixed corpus of text and struggle to incorporate up-to-date or specialized knowledge not present in their training data.

What Drives an SME's Approach to Implementing AI?

AI’s rise in both the business and consumer worlds has been astonishingly exponential. Businesses are using AI to generate content, analyze data, automate processes, and more. But small and medium-sized enterprises (SMEs) look and act very differently from their enterprise counterparts. This prompts the question: How are SMEs approaching AI? Recent data from a 2024 JumpCloud study of SME IT may help answer it.

The Double-Edged Sword of AI: Empowering Cybercriminals and the Need for Heightened Cybersecurity Awareness

The BBC recently reported that Booking.com is warning that AI is driving an explosion in travel scams. Up to 900% in their estimation - making it abundantly clear that while AI can be a force for good, it can also be a formidable weapon in the arsenal of cybercriminals. One of the most concerning trends we've observed is the increasing use of AI by cybercriminals to carry out sophisticated phishing attacks.

EssayWriter Review: Comprehensive Guide to Using the Free Essay Writing Assistant

In the age of technology, the educational sector has seen profound shifts in how students and professionals prepare academic content. Among the various tools that have emerged, AI-powered writing assistants are redefining the landscape of academic writing. This review delves into one such innovative tool - EssayWriter, analyzing its features, functionality, and overall utility in enhancing academic and research work through the lens of an AI essay writer platform.

EP 55 - AI Insights: Shaping the Future of IAM

In this episode of Trust Issues, Daniel Schwartzer, CyberArk’s Chief Product Technologist and leader of the company’s Artificial Intelligence (AI) Center of Excellence, joins host David Puner for a conversation that explores AI’s transformative impact on identity and access management (IAM). Schwartzer discusses how CyberArk’s AI Center of Excellence is equipping the R&D team to innovate continuously and stay ahead of AI-enabled threats.

How Artificial General Intelligence Will Redefine Cybersecurity

Artificial Intelligence (AI) is now integrated into almost every available technology. It powers numerous real-world applications, from facial recognition to language translators and virtual assistants. AI offers significant benefits for businesses and economies by boosting productivity and creativity. However, it still faces practical challenges. Machine learning often requires substantial human effort to label training data for supervised learning.

Snyk Code now secures AI builds with support for LLM sources

As we enter the age of AI, we’ve seen the first wave of AI adoption in software development in the form of coding assistants. Now, we’re seeing the next phase of adoption take place, with organizations leveraging increasingly widely available LLMs to build AI-enabled software. Naturally, as the adoption of LLM platforms like OpenAI and Gemini grows, so does the security risk associated with using them.

Implementing AI within your security strategy: 7 best practices

There’s a ton of media hype about the swift integration of AI across different business functions. It has also been reported that 98% of technology executives have paused their AI programs to establish guidelines and policies around its implementation. Depending on when and where you read about it, opinions on the speed of AI adoption vary. Nevertheless, AI is more than just hype.

Top 7 Challenges in Building Healthcare GenAI Applications

The integration of generative AI (GenAI) into healthcare holds tremendous potential for transforming patient care, diagnostics, and operational efficiency. However, developing these applications faces numerous challenges that must be addressed to ensure compliance, accuracy, and security. Here are the top challenges in building healthcare GenAI applications.

Building Apps at Scale in Power Platform? Not for the Faint of Heart... or CoE Security

Enterprises are racing to adopt AI copilots and low-code/no-code platforms to innovate and maximize efficiency by placing powerful technology and development tools in the hands of all business users. While the productivity gains are enormous, so are the security risks, as the nature of these copilots and low-code platforms results in a surge of new business apps being created at the enterprise.

Questionnaires: OkCupid vs. Security

What do OkCupid quizzes and generic security questionnaires have in common? More than you might think. James Scheffler, Head of GRC at DataRobot, explains why one size definitely doesn't fit all. That’s why TrustShare allows prospects to conduct a virtual audit and get the information they need from your trust portal. When a questionnaire is unavoidable, our AI-powered solution pre-fills up to 90% with accurate, context-aware answers - and citations to prove it!

ChatGPT Security: Tips for Safe Interactions with Generative AI

With over 100 million users and partnerships with Microsoft, Reddit, Stack Overflow, and more, ChatGPT has become the herald of an AI revolution since its launch in late 2022. The rise of this AI-powered natural language processing tool comes down to two distinct features: its conversational nature, which allows anyone to ask questions and receive detailed and helpful responses, and its access to a global knowledge base.

Hallucinated Packages, Malicious AI Models, and Insecure AI-Generated Code

AI promises many advantages when it comes to application development. But it’s also giving threat actors plenty of advantages, too. It’s always important to remember that AI models can produce a lot of garbage that is really convincing—and so can attackers. “Dark” AI models can be used to purposely write malicious code, but in this blog, we’ll discuss three other distinct ways using AI models can lead to attacks.

The Future Of AI At Arctic Wolf

Arctic Wolf is addressing the exponential scale of security threats to business worldwide with our fusion of human intelligence, artificial intelligence, and one of the world’s largest data-streams of security observations. Join Arctic Wolf’s Dan Schiappa, Chief Product Officer, and Ian McShane, Vice President of Product, as they share their vision for AI in the context of the industry-leading Arctic Wolf Security Operations Cloud.

4 Examples of How AI is Being Used to Improve Cybersecurity

Throughout history, technology has been a catalyst for solving many civilizational problems. The advent of artificial intelligence (AI) presents an incredible opportunity to combat cybersecurity risks and bolster the defenses of organizational IT networks. The good news is that it’s already making an impact by reducing the average dwell time of cyber attacks by as much as 15%. But AI holds much more promise.

Your AI Governance Blueprint: A Guide to ISO 42001 & NIST AI RMF

As businesses increasingly rely on AI to drive innovation and efficiency, ensuring that these systems are used ethically and safely becomes paramount. We’re here to help you build your blueprint to effective AI governance, stay compliant with global standards, and mitigate potential risks.

Why Artificial Intelligence (AI) Is Neither

Artificial Intelligence (AI) is the buzzword du jour of not just tech, but the entire online world. We see it in the daily headlines of everything from industry stalwarts such as Wired (There’s an AI Candidate Running for Parliament in the UK) through the stiff-collared set at the Wall Street Journal (What the Apple-OpenAI Deal Means for Four Tech Titans). Everyone who is anyone is talking about it, training it, or trying leverage against it.

Protecto.ai and Fiddler AI Announce Strategic Collaboration for Responsible AI Development

Protecto.ai is thrilled to announce a strategic collaboration with Fiddler AI, a trailblazer in AI explainability and transparency. With a total of $47 million in funding, Fiddler AI empowers organizations to build trust in their AI systems by making complex models interpretable and transparent, thereby enhancing model performance and ensuring compliance with regulatory standards and ethical guidelines.

Quick Guide to Popular AI Licenses

Only about 35 percent of the models on Hugging Face bear any license at all. Of those that do, roughly 60 percent fall under traditional open source licenses. But while the majority of licensed AI models may be open source, some very large projects–including Midjourney, BLOOM, and LLaMa—fall under that remaining 40 percent category. So let’s take a look at some of the top AI model licenses on Hugging Face, including the most popular open source and not-so-open source licenses.

The Future of Endpoint Protection: AI and Predictive Security

Traditional security measures, while essential, are often reactive, scrambling to respond to attacks after they've occurred. Endpoint protection stands as a critical line of defense against an increasingly sophisticated array of cyber threats. Its future lies in proactive, intelligent solutions that leverage the power of AI and predictive security to anticipate and prevent threats before they can cause harm.

Revolutionizing Security: AI at the Heart of Modern Protection

Dive into the future of security with us at Brivo as we explore how AI-Centric Security is transforming the way we protect spaces in real-time. Join Neerja Bajaj in uncovering the power of artificial intelligence in analyzing security data, identifying threats, and responding with unmatched efficiency. From commercial real estate to multifamily residential areas, discover how Brivo leverages cutting-edge AI to ensure your safety and peace of mind.

How Brokers Harness Artificial Intelligence for Market Analysis

The integration of artificial intelligence (AI) in the finance sector has seen a dramatic surge over the past decade. Key technological advancements like increased computing power, improved algorithms, and the availability of big data have paved the way for AI to transform brokerage operations.

"AI is only useful when it solves real customer problems": Tines on Risky Biz

We’re all huge fans of the Risky Biz podcast here at Tines, so we were thrilled to be invited to appear on the show recently to talk about AI’s role in security automation. I had a great conversation with host Patrick Gray about the security and privacy challenges that go along with deploying an LLM in your environment, and how our approach to AI in Tines is fundamentally different. I loved every minute of this chat, and I hope you’ll find it interesting, too.

4 AI coding risks and how to address them

96% of developers use AI coding tools to generate code, detect bugs, and offer documentation or coding suggestions. Developers rely on tools like ChatGPT and GitHub Copilot so much that roughly 80% of them bypass security protocols to use them. That means that whether you discourage AI-generated code in your organization or not, developers will probably use it. And it comes with its fair share of risks. On one hand, AI-generated code helps developers save time.

Transparency and Ethics in AI: Ensuring Safety and Regulation

In this video, Erin Mann delves into the critical importance of transparency and ethics in the use of artificial intelligence (AI). As AI continues to evolve and integrate into various aspects of our lives, ensuring its ethical use and safety becomes paramount. Erin discusses how transparency in AI operations can drive the necessary conversations around regulation and efficient implementation. By understanding the ethical implications and advocating for clear guidelines, we can harness the power of AI responsibly and effectively.

Is over-focusing on privacy hampering the push to take full advantage of AI?

Customer data needs to be firewalled and if protected properly can still be used for valuable analytics In 2006, British mathematician Clive Humby declared that data is the new oil-and so could be the fuel source for a new, data-driven Industrial Revolution.

Is AI-generated code secure? Maybe. Maybe not.

Generative AI has emerged as the next big thing that will transform the way we build software. Its impact will be as significant as open source, mobile devices, cloud computing—indeed, the internet itself. We’re seeing Generative AI’s impacts already, and according to the recent Gartner Hype Cycle for Artificial Intelligence, AI may ultimately be able to automate as much as 30% of the work done by developers.

Protecto Announces Data Security and Safety Guardrails for Gen AI Apps in Databricks

Protecto, a leader in data security and privacy solutions, is excited to announce its latest capabilities designed to protect sensitive enterprise data, such as PII and PHI, and block toxic content, such as insults and threats within Databricks environments. This enhancement is pivotal for organizations relying on Databricks to develop the next generation of Generative AI (Gen AI)applications.

AI quality: Garbage in, garbage out

If you use expired, moldy ingredients for your dessert, you may get something that looks good but tastes awful. And you definitely wouldn’t want to serve it to guests. Garbage in, garbage out (GIGO) applies to more than just technology and AI. Inputting bad ingredients into a recipe will lead to a potentially poisonous output. Of course, if it looks a little suspicious, you can cover it in frosting, and no one will know. This is the danger we are seeing now.

How AI adoption throughout the SDLC affects software testing

With AI finding adoption throughout all stages of the development process, the SDLC as we know it is becoming a thing of the past. Naturally, this has many implications for the field of software testing. This article will discuss how the SDLC has evolved over time, going into detail on the impact that AI adoption is having on both software development and software testing.

Enhancing Language Models: An Introduction to Retrieval-Augmented Generation

Over the past few years, significant progress has been observed in the area of NLP, largely due to the availability and excellence of advanced language models, including OpenAI's GPT series. These models, which are useful for generating human-like text which is contextually appropriate, have transformed several interfaces from conversational agents to creative writing. However, as popular and effective as they may seem, the traditional language models have their own drawbacks and specifically, the restriction in accessing additional up-dated data and incorporating them.

Snowflake Breach: Stop Blaming, Start Protecting with Protecto Vault

Hackers recently claimed on a known cybercrime forum that they had stolen hundreds of millions of customer records from Santander Bank and Ticketmaster. It appears that hackers used credentials obtained through malware to target Snowflake accounts without MFA enabled. While it's easy to blame Snowflake for not enforcing MFA, Snowflake has a solid track record and features to protect customer data. However, errors and oversight can happen in any organization.

Securing AI in the Cloud: AI Workload Security for AWS

To bolster the security of AI workloads in the cloud, Sysdig has extended its recently launched AI Workload Security to AWS AI services, including Amazon Bedrock, Amazon SageMaker, and Amazon Q. This enhancement helps AWS AI service users secure AI workloads and keep pace with the speed of AI evolution.

Protect Your Data from LLMs: Mitigating AI Risks Effectively

As artificial intelligence (AI) continues to advance, its integration into our daily lives and various industries brings both tremendous benefits and significant risks. Addressing these risks proactively is crucial to harnessing AI’s full potential while ensuring security and ethical use. Let's embark on a journey through the AI pipeline, uncovering the potential pitfalls and discovering strategies to mitigate them.

AI Integration: Empowering Your Team for the Future | Brivo Insights

Dive into the world of AI with Brivo! In this essential guide, we're exploring how to seamlessly prepare your staff for the AI revolution. With technology rapidly evolving, ensuring your team is ready to embrace AI is crucial for staying ahead. From understanding AI basics to implementing practical training strategies, we cover it all. Plus, discover how Brivo's smart spaces technology can enhance this transition, making it smoother and more efficient.

Webinar Replay: Q1 2024 Threat Landscape: Insider Threat & Phishing Evolve Under AI Auspices

In the first quarter of 2024 Kroll saw an evolution in techniques used by attackers, some of which may point to longer term trends in the variation and sophistication of attacks faced by organizations. In this briefing, Kroll’s cyber threat intelligence leaders explore key insights and trends from hundreds of cyber incidents handled worldwide in Q1.
Featured Post

Generative AI: Productivity Dream or Security Nightmare

The field of AI has been around for decades, but its current surge is rewriting the rules at an accelerated rate. Fuelled by increased computational power and data availability, this AI boom brings with it opportunities and challenges. AI tools fuel innovation and growth by enabling businesses to analyse data, improve customer experiences, automate processes, and innovate products - at speed. Yet, as AI becomes more commonplace, concerns about misinformation and misuse arise. With businesses relying more on AI, the risk of unintentional data leaks by employees also goes up.

Friday Flows Special Edition: Change Control with AI Summary

Tyler Talaga, Staff Engineer at MyFitnessPal, is one of the early adopters of Tines' AI capabilities. In this special "Wednesday Workflows," Tyler walks through a story he built to improve the visibility of Change Control requests. This workflow routes Change Control requests to Slack with a detailed summary provided through the AI Action, helping the team quickly approve (or deny) a change. The MyFitnessPal team is building many new, helpful automations with the AI capabilities, including one to summarize vulnerabilities fixed in MacOS updates.

What Udemy is building with AI in Tines

For the security team at Udemy, AI in workflow automation provides an opportunity to unlock new time savings while keeping their organization secure, and protecting their online learning and teaching marketplace of 62 million users. But like all good security teams, they don’t want to sacrifice data security or privacy. AI in Tines, which is secure and private by design, provides that all-important layer of control - data never leaves the region, travels online, is logged, or is used for training.

Introducing AI in Tines

Everyone in the market is talking about AI right now. It’s a modern marvel; some say it might even be as big as the Industrial Revolution. We’re not big on grandiose statements like that, but we are big on delivering products that help our customers be more efficient and secure and, as a result, have happier and more engaged teams. That’s why today, we’re excited to announce AI in Tines. Two powerful features to make Tines even more accessible to any member of your organization.

A Brief Look at AI in the Workplace: Risks, Uses and the Job Market

Anyone remotely wired into technology newsfeeds – or any newsfeeds for that matter – will know that AI (artificial intelligence) is the topic of the moment. In the past 18 months alone, we’ve borne witness to the world’s first AI Safety Summit, a bizarre and highly public leadership drama at one of the world’s top AI companies, and countless prophecies of doom. And yet, even after all that, it seems businesses have largely failed to take meaningful action on AI.

AI Autonomy and the Future of Cybersecurity

Have you ever wondered how Artificial Intelligence (AI) could mimic consciousness and autonomously control various tasks? It sounds rather daunting. However, it may not be as intimidating as it seems under the right conditions. Moreover, Would AI perform tasks independently in the same manner as humans? And what implications does this hold for cybersecurity? In the present day, we are observing the rise of self-driving cars that operate with minimal human input.

Operation Grandma: A Tale of LLM Chatbot Vulnerability

Who doesn’t like a good bedtime story from Grandma? In today’s landscape, more and more organizations are turning to intelligent chatbots or large language models (LLMs) to boost service quality and client support. This shift is receiving a lot of positive attention, offering a welcome change given the common frustrations with bureaucratic delays and the lackluster performance of traditional automated chatbot systems.

Creating a new LLM connection with Motific

This demo highlights how Motific simplifies the journey of requesting a GenAI application, going through the approval process, connecting it with the right information sources, and provisioning an application to meet business requirements. With Motific, you can gain flexibility without complexity for easy deployments of ready-to-use AI assistants and APIs.

Secure AI tool adoption: Perceptions and realities

In our latest report, Snyk surveyed security and software development technologists, from top management to application developers, on how their companies had prepared for and adopted generative AI coding tools. While organizations felt ready and believed AI coding tools and AI-generated code were safe, they failed to undertake some basic steps for secure adoption. And within the ranks, those close to the code have greater doubts about AI safety than those higher up in management.

Unlocking the Power of AI in Cybersecurity: Key Takeaways from the HMS Belfast Breakfast Briefing

In the rapidly evolving landscape of technology, the fusion of Artificial Intelligence (AI) and cybersecurity is creating both exciting opportunities and formidable challenges. The recent breakfast briefing on the historic HMS Belfast served as a critical forum for industry leaders to explore these issues in depth.

Penetration Testing of A.I. Models

Penetration testing is a cornerstone of any mature security program and is a mature and well understood practice supported by robust methodologies, tools, and frameworks. The tactical goals of these engagements typically revolve around identification and exploitation of vulnerabilities in technology, processes, and people to gain initial, elevated, and administrative access to the target environment.

The Rise of the Co-author: Will AI Invade Our Writing Space?

The writer's life has always been a dance between solitude and collaboration. We yearn for the quiet focus of crafting a sentence but also crave the spark of shared ideas. Now, a new partner enters the scene: Artificial Intelligence. AI writing assistants are rapidly evolving, blurring the lines between human and machine authorship. But will these tools become our unwanted and uninvited co-authors, or can they be valuable collaborators, enhancing our creativity?

Strengthening AI Chatbot Defenses with Targeted Penetration Tests

The world is quickly seeing the rise of AI powered customer service. The conversational agent chatbots enhance the customer experience but also introduce a new attack vector. Here's what you need to know about strengthening AI chatbot defenses. Many AI driven technologies have access to vast data sources and access to functions that assist users. AI chatbots can be used in many ways such as answering questions about an item in stock, help develop code, to helping users reset their password.

Protecto Unveils Enhanced Capabilities to Enable HIPAA-Compliant Data for Generative AI Applications in Snowflake

San Francisco, CA - Protecto, a leading innovator in data privacy and security solutions, is proud to announce the release of new capabilities designed to identify and cleanse Protected Health Information (PHI) data from structured and unstructured datasets, facilitating the creation of safe and compliant data for Generative AI (GenAI) applications. This advancement underscores Protecto's commitment to data security and compliance while empowering organizations to harness the full potential of GenAI.

10 Best Tools to Bypass AI Detection: Ensuring Your Content Remains Undetected

In the rapidly evolving digital landscape, the advent of AI writing tools has revolutionized content creation. However, with this technology's rise, the need to bypass AI detectors has become increasingly crucial for many creators aiming to maintain the originality and human essence of their content. AI detectors are designed to identify content generated by AI, potentially leading to issues with authenticity and even penalization in certain contexts.