How Torq Socrates is Designed to Hyperautomate 90% of Tier-1 Analysis With Generative AI

Artificial intelligence (AI) has generated significant hype in recent years, and separating the promise from the reality can be challenging. At Torq, however, AI is not just a concept: it is already transforming the SOC, and Tier-1 security analysis in particular. As cybercriminals grow more sophisticated in their tactics and techniques, traditional security tools continue to fall short in detecting and mitigating these attacks effectively, particularly at scale.

Effective Access and Collaboration on Large Lab Datasets Using Egnyte's Smart Cache

The life sciences industry is at the forefront of data-intensive research and innovation. Scientists and researchers rely heavily on the collection, processing, and analysis of vast amounts of data generated by lab instruments. They are often challenged by errors and confusion in managing data flows, which in turn directly affect data quality and compliance with regulatory requirements.

2 (Realistic) Ways to Leverage AI in Cybersecurity

If you had to choose a security measure that would make the most difference to your cyber program right now, what would it be? Maybe you’d add another person to your team: a skilled analyst who is happy to do routine work and incredibly reliable. Or perhaps you’d prefer an investment that gives your existing team members back more of their time without compromising your ability to find and fix threats. What about human intelligence without human limitations?

July Release Rollup: AI Document Summarization, Smart Cache and More

This month's release rollup includes Egnyte's AI-driven document summarization, a project dashboard for Android, and Smart Cache file download improvements. Below is an overview of these and other new releases. Visit the linked articles for more details.

Researchers uncover surprising method to hack the guardrails of LLMs

Researchers from Carnegie Mellon University and the Center for AI Safety have discovered a new prompt-based attack that appends automatically generated adversarial suffixes to override the guardrails of large language models (LLMs). These guardrails are safety measures designed to prevent AI from generating harmful content. The discovery poses a significant risk to the deployment of LLMs in public-facing applications, as it could allow these models to be used for malicious purposes.
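
To make the pattern concrete, here is a minimal, purely illustrative sketch of the attack shape the researchers describe: a request the model would normally refuse, with an optimized adversarial suffix appended. The placeholder strings and the build_attack_prompt helper are hypothetical; real suffixes are produced by an automated, gradient-guided token search, not written by hand.

```python
# Illustrative sketch only; the strings below are placeholders,
# not real optimized suffixes.

def build_attack_prompt(request: str, suffix: str) -> str:
    """Concatenate a refused request with an adversarial suffix.

    The underlying request is unchanged; the optimized suffix is what
    shifts an aligned model toward complying anyway.
    """
    return f"{request} {suffix}"

harmful_request = "<request that safety training should refuse>"
adversarial_suffix = "<token sequence found by automated search>"

print(build_attack_prompt(harmful_request, adversarial_suffix))
```

What makes the finding significant is that the suffixes are discovered automatically and, per the researchers, transfer across different models, so blocking individual strings does little to close the underlying hole.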

Five worthy reads: Cybersecurity in the age of AI - Battling sophisticated threats

Five worthy reads is a regular column on five noteworthy items we have discovered while researching trending and timeless topics. This week we are exploring the significant role of AI in cybersecurity and why it’s the field’s next big thing.

FYI: the dark side of ChatGPT is in your software supply chain

Let’s face it: the tech world is a whirlwind of constant evolution. AI is no longer just a fancy add-on; it’s shaking things up and becoming part and parcel of various industries, not least software development. One such tech marvel that’s stealthily carving out a significant role in our software supply chain is OpenAI’s impressive language model, ChatGPT.

Snyk's 2023 State of Open Source Security: Supply chain security, AI, and more

The 2021 Log4Shell incident cast a bright light on open source software security, and especially on supply chain security. The 18 months following the incident brought a greater focus on open source software security than at any time in history. Organizations like the OpenSSF, Alpha-Omega, and large technology companies are putting considerable resources towards tooling and education. But is open source software security actually improving? And where are efforts still falling short?