
AI

CrowdStrike's View on the New U.S. Policy for Artificial Intelligence

The major news in technology policy circles is this month’s release of the long-anticipated Executive Order (E.O.) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. While E.O.s govern policy areas within the direct control of the U.S. government’s Executive Branch, they are broadly important because they inform industry best practices and can shape subsequent laws and regulations in the U.S. and abroad.

The Future of Financial Management with Cutting-Edge Software

The future of financial management is here, and it's more advanced than ever before. As technology has evolved in recent years, so have the ways that companies can manage their finances. Businesses are becoming increasingly tech-savvy, with many adopting cloud-based solutions and artificial intelligence (AI) to make their operations more efficient. These advancements are changing how we look at traditional methods of financial management and moving us into a new era where everything is faster, more accessible, and more reliable.

ThreatQuotient Publishes 2023 State of Cybersecurity Automation Adoption Research Report

Survey results highlight the expanding importance of automation, a change in how cybersecurity professionals determine ROI, and how cybersecurity teams believe they can avoid burnout.

CrowdStrike Brings AI-Powered Cybersecurity to Small and Medium-Sized Businesses

Cyber risks for small and medium-sized businesses (SMBs) have never been higher. SMBs face a barrage of attacks, including ransomware, malware, and variations of phishing/vishing. This is one reason why the Cybersecurity and Infrastructure Security Agency (CISA) states that “thousands of SMBs have been harmed by ransomware attacks, with small businesses three times more likely to be targeted by cybercriminals than larger companies.”

3 Considerations to Make Sure Your AI is Actually Intelligent

In all the hullabaloo about AI, it strikes me that our attention gravitates far too quickly toward the most extreme arguments for its very existence. Utopia on the one hand. Dystopia on the other. You could say that the extraordinary occupies our minds far more than the ordinary. That’s hardly surprising. “Operational improvement” doesn’t sound quite as headline-grabbing as “human displacement”. Does it?

AI-Manipulated Media Through Deepfakes and Voice Clones: Their Potential for Deception

Researchers at Pindrop have published a report looking at consumer interactions with AI-generated deepfakes and voice clones. “Consumers are most likely to encounter deepfakes and voice clones on social media,” the researchers write. “The top four responses for both categories were YouTube, TikTok, Instagram, and Facebook. You will note the bias toward video on these platforms, as YouTube and TikTok encounters were materially higher.”

What's in Store for 2024: Predictions About Zero Trust, AI, and Beyond

With 2024 on the horizon, we have once again reached out to our deep bench of experts here at Netskope to ask them to do their best crystal-ball gazing and give us a heads-up on the trends and themes that they expect to see emerging in the new year. We’ve broken their predictions out into four categories: AI, Geopolitics, Corporate Governance, and Skills. Here’s what our experts think is in store for 2024.

Rubrik and Microsoft: Pioneering the Future of Cybersecurity with Generative AI

We’re excited to announce Rubrik as one of the first enterprise backup providers in the Microsoft Security Copilot Partner Private Preview, enabling enterprises to accelerate cyber response times by determining the scope of attacks more efficiently and automating recoveries. Ransomware attacks result in an average downtime of 24 days. Imagine your business operations completely stalled for that duration.

How Corelight Uses AI to Empower SOC Teams

Interest in artificial intelligence (AI), and specifically in large language models (LLMs), has recently taken the world by storm. The duality of the power and risks that this technology holds is especially pertinent to cybersecurity. On one hand, the capabilities of LLMs for summarization, synthesis, and creation (or co-creation) of language and content are mind-blowing.