In today’s digital landscape, the importance of application security cannot be overstated, as businesses worldwide face evolving cyber threats. Both defenders and attackers are now harnessing the power of Artificial Intelligence (AI) to their advantage. As AI-driven attacks become increasingly sophisticated, it is crucial for organizations to adopt a comprehensive approach to application security that effectively addresses this emerging threat landscape.
Artificial intelligence (AI) is transforming modern society at unprecedented speed. It can do your homework, help you make better investment decisions, turn your selfie into a Renaissance painting or write code on your behalf. While ChatGPT and other generative AI tools can be powerful forces for good, they’ve also unleashed a tsunami of attacker innovation and concerns are mounting quickly.
Scattered across the internet are jigsaw puzzle pieces containing your personal information. If reassembled by an attacker, these puzzle pieces could easily compromise your identity. Our returning guest today is Len Noe, CyberArk’s resident transhuman (a.k.a. cyborg), whose official titles these days are Technical Evangelist, White Hat Hacker and Biohacker.
A couple weeks ago, ServiceNow and NVIDIA announced a groundbreaking partnership to help expand ServiceNow’s generative AI use cases for their customers to strengthen workflow automation and rapidly increase productivity. ServiceNow is also helping NVIDIA streamline its IT operations by using NVIDIA data to customize NVIDIA NeMo foundation models running on hybrid-cloud infrastructure.
Back in 2015, we published an article about the apparent perils of driverless cars. At that time, the newness and novelty of sitting back and allowing a car to drive you to your destination created a source of criminal fascination for some, and a nightmare for others. It has been eight years since the original article was published, so perhaps it is time to revisit the topic to see if driverless cars have taken a better direction.
Large language models (LLMs), such as ChatGPT, have gained significant popularity for their ability to generate human-like conversations and assist users with various tasks. However, with their increasing use, concerns about potential vulnerabilities and security risks have emerged. One such concern is prompt injection attacks, where malicious actors attempt to manipulate the behavior of language models by strategically crafting input prompts.