
AI

Artificial Intelligence Governance Professional Certification - AIGP

If you follow industry trends and related news, I am certain you have been inundated by the torrent of articles and headlines about ChatGPT, Google’s Bard, and AI in general. Let me apologize up front for adding yet another article to the pile. I promise this one is worth a read, especially for anyone looking for ways to safely, securely, and ethically introduce AI to their business.

Bard or ChatGPT: Cybercriminals Give Their Perspectives

Six months ago, the question “Which is your preferred AI?” would have sounded ridiculous. Today, not a day goes by without hearing about “ChatGPT” or “Bard.” Large language models (LLMs) have been the main topic of discussion ever since the introduction of ChatGPT. So which is the best LLM? The answer may come from a surprising source – the dark web, where threat actors have been debating and arguing over which LLM best fits their specific needs.

Top 6 security considerations for enterprise AI implementation

As the world experiences the AI gold rush, organizations are increasingly turning to enterprise AI solutions to gain a competitive edge and unlock new opportunities. However, amid the excitement and potential benefits, one crucial aspect that must not be overlooked is data security — in particular, protecting against adversarial attacks and securing AI models. As businesses embrace the power of AI, they must be vigilant in safeguarding sensitive data to avoid potential disasters.

Using Generative AI for Creating Phishing Sequences

Discover:
✅ Why even the savviest individuals struggle to avoid phishing traps, especially amid multiple software sign-ups and cloud-managed services.
✅ From an organisation's standpoint, why acknowledging and reporting phishing attempts, like John's simulated case, is a crucial step towards better security.

Best practices for using AI in the SDLC

AI has become a hot topic thanks to the recent headlines around a large language model (LLM) with a simple interface — ChatGPT. Since then, the AI field has been vibrant, with several major players racing to provide ever-bigger, better, and more versatile models. Microsoft, NVIDIA, Google, Meta, and open source projects have all released new models. In fact, a leaked Google document suggests that these models will soon be ubiquitous and available to everyone.

No Ethical Boundaries: WormGPT

In this week's episode, Bill and Robin discover the dangerous world of an AI tool without guardrails: WormGPT. This AI tool allows people with limited technical experience to create potential chaos. Coupled with the rising popularity of tools like the Wi-Fi Pineapple and the Flipper Zero, do you need to be more worried about the next generation of script kiddies? Learn all this and more on the latest episode of The Ring of Defense!

You're Not Hallucinating: AI-Assisted Cyberattacks Are Coming to Healthcare, Too

We recently published a blog post detailing how threat actors could leverage AI tools such as ChatGPT to assist in attacks targeting operational technology (OT) and unmanaged devices. In this blog post, we highlight why healthcare organizations should be particularly worried about this.

Tines Technical Advisory Board (TAB) Takeaways with Pete: part one

I’m Peter Wrenn, but my friends call me Pete! I have the pleasure of moderating the Tines Technical Advisory Board (TAB), which is held quarterly. In it, some of Tines’s power users engage in conversations around product innovations, industry trends, and ways we can push the Tines vision forward — automation for the whole team. That vision benefits both our customers and Tines.