
Latest News

Will predictive AI revolutionize the SIEM industry?

The cybersecurity industry is extremely dynamic and consistently finds ways to incorporate the latest and best available technologies into its systems. There are two major reasons for this: first, cyberattacks are constantly evolving, so organizations need cutting-edge technologies in place to detect sophisticated attacks; and second, the network architecture of many organizations is highly complex.

Six Key Security Risks of Generative AI

Generative Artificial Intelligence (AI) has revolutionized various fields, from creative arts to content generation. However, as this technology becomes more prevalent, it raises important considerations around data privacy and confidentiality. In this blog post, we will delve into the implications of Generative AI for data privacy and explore the role of Data Leak Prevention (DLP) solutions in mitigating potential risks.
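As a rough illustration of where a DLP control might sit in front of a generative AI service, the sketch below (a hypothetical Python example, not drawn from the post) scans outbound prompt text for a few common categories of sensitive data and blocks the request on a match; the pattern set and function names are assumptions made for illustration only.

```python
# Minimal DLP-style check on prompts bound for a generative AI service.
# Patterns, names, and behavior are illustrative assumptions, not a
# description of any specific vendor's product.
import re

# Very rough patterns for a few common categories of sensitive data.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def guard_prompt(prompt: str) -> str:
    """Refuse to forward a prompt that appears to contain sensitive data."""
    findings = scan_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked by DLP check: {', '.join(findings)}")
    return prompt

if __name__ == "__main__":
    try:
        guard_prompt("Summarize this: customer card 4111 1111 1111 1111 was declined.")
    except ValueError as err:
        print(err)  # Prompt blocked by DLP check: credit_card
```

A production DLP product would typically use much richer detection (classifiers, exact-data matching, redaction rather than outright blocking), but the control point is the same: inspect data before it leaves the organization's boundary.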

Hype vs. Reality: Are Generative AI and Large Language Models the Next Cyberthreat?

Generative AI and large language models (LLMs) can be used as tools for cybersecurity attacks, but they are not necessarily a new cybersecurity threat in themselves. Let’s have a look at the hype vs. the reality. The use of generative AI and LLMs in attacks is nothing new: malicious actors have long used technology to create convincing scams and attacks.

How to secure Generative AI applications

I remember when the first iPhone was announced in 2007. This was NOT an iPhone as we think of one today. It had warts. A lot of warts. It couldn’t do MMS, for example. But I remember the possibility it brought to mind. No product before had seemed like anything more than a product. The iPhone, or rather the potential that the iPhone hinted at, had an actual impact on me. It changed my thinking about what could be.

Watershed Moment for Responsible AI or Just Another Conversation Starter?

The Biden Administration’s recent moves to promote “responsible innovation” in artificial intelligence may not fully satiate the appetites of AI enthusiasts or defuse the fears of AI skeptics. But the moves do at least appear to begin forming a long-awaited framework for the ongoing development of one of the more controversial technologies impacting people’s daily lives. The May 4 announcement included three pieces of news.

Cloudflare Equips Organisations with the Zero Trust Security They Need to Safely Use Generative AI

Now companies can give their teams the productivity and innovation of emerging generative AI, while reducing risk with built-in security and governance controls over the flow of data.

Who is Securing the Apps Built by Generative AI?

The rise of low-code/no-code platforms has empowered business professionals to independently address their needs without relying on IT. Now, the integration of generative AI into these platforms further enhances their capabilities and eliminates entry barriers. However, as everyone becomes a developer, concerns about security risks arise.

The Face Off: AI Deepfakes and the Threat to the 2024 Election

The Associated Press warned this week that AI experts have raised concerns about the potential impact of deepfake technology on the upcoming 2024 election. Deepfakes are highly convincing pieces of digital disinformation, easily mistaken for the real thing and forwarded to friends and family as misinformation. Researchers fear that these advanced AI-generated videos could be used to spread false information, sway public opinion, and disrupt democratic processes.