Amid growing concerns about the integrity of the 2024 European elections, the 11th edition of the Threat Landscape report from the European Union Agency for Cybersecurity (ENISA), released on October 19, 2023, warns of rising threats from AI-enabled information manipulation.
The major news in technology policy circles is this month’s release of the long-anticipated Executive Order (E.O.) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. While E.O.s govern only policy areas within the direct control of the U.S. government’s Executive Branch, they matter more broadly: they inform industry best practices and can shape subsequent laws and regulations in the U.S. and abroad.
Generative AI and large language models (LLMs) seem to have burst onto the scene like a supernova. LLMs are machine learning models trained on enormous amounts of text to understand and generate human language. Tools built on LLMs, such as ChatGPT and Bard, have made a far wider audience aware of generative AI. Understandably, organizations that want to sharpen their competitive edge are keen to get on the bandwagon and harness the power of AI and LLMs.
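For readers who have not yet experimented with these models, the sketch below shows what “generating human language” looks like in practice. It uses the open-source Hugging Face transformers library with GPT-2 as a small, freely downloadable stand-in model; both the library and the model are illustrative assumptions, not anything exposed by the hosted products named above.

```python
# A minimal sketch of LLM text generation, assuming the Hugging Face
# "transformers" library (and a backend such as PyTorch) is installed.
# GPT-2 is used here only as a small, openly available stand-in model.
from transformers import pipeline

# Load a pretrained language model behind a simple text-generation interface.
generator = pipeline("text-generation", model="gpt2")

# Given a prompt, the model predicts a plausible continuation token by token.
result = generator(
    "Generative AI can help organizations",
    max_length=40,           # cap on total tokens: prompt plus continuation
    num_return_sequences=1,  # ask for a single candidate continuation
)
print(result[0]["generated_text"])
```

The same prompt-in, text-out pattern carries over to the much larger hosted models, which are typically reached through a vendor API rather than loaded locally.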
Cyber risks for small and medium-sized businesses (SMBs) have never been higher. SMBs face a barrage of attacks, including ransomware, malware, and variations of phishing and vishing. As the Cybersecurity and Infrastructure Security Agency (CISA) notes, “thousands of SMBs have been harmed by ransomware attacks, with small businesses three times more likely to be targeted by cybercriminals than larger companies.”
In all the hullabaloo about AI, it strikes me that our attention gravitates far too quickly toward the most extreme arguments about its very existence. Utopia on the one hand. Dystopia on the other. You could say that the extraordinary occupies our minds far more than the ordinary. That’s hardly surprising. “Operational improvement” doesn’t sound quite as headline-grabbing as “human displacement”. Does it?
Researchers at Pindrop have published a report examining consumer interactions with AI-generated deepfakes and voice clones. “Consumers are most likely to encounter deepfakes and voice clones on social media,” the researchers write. “The top four responses for both categories were YouTube, TikTok, Instagram, and Facebook. You will note the bias toward video on these platforms as YouTube and TikTok encounters were materially higher.”