Study Warns Open-Source AI Models Face Misuse Risks

Researchers from cybersecurity firms SentinelOne and Censys have revealed that open-source large language models (LLMs) are increasingly vulnerable to criminal exploitation, raising serious concerns about the safety of these widely accessible systems.

Their findings highlight that hackers and other malicious actors can easily commandeer computers running open-source AI to carry out harmful activities without the guardrails typical of major AI platforms.

The study, conducted over 293 days, found that many internet-accessible LLMs, especially variants of Meta’s Llama and Google DeepMind’s Gemma, are operating without sufficient security controls in place. In hundreds of cases, developers had reportedly removed model guardrails, leaving systems exposed to misuse.

How Criminals Could Exploit Models

Researchers warn that criminals could leverage these unsecured AI models for a range of malicious purposes, including:

  • Phishing and spam campaigns
  • Disinformation and hate speech generation
  • Personal data theft and identity fraud
  • Scams, illicit financial schemes, and cyberattacks

Without proper safeguards, these models may also be co-opted to produce or distribute harmful content, compounding the risks posed to users and broader online communities.

The teams analyzed thousands of open-source deployments built with tools such as Ollama, which lets individuals and organizations run LLMs locally. They found that about 7.5% of the systems observed had system prompts that could facilitate harmful activities. Roughly 30% of exposed hosts were located in China and about 20% in the U.S., underlining the global nature of the issue.
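As an illustration of what an exposed deployment looks like, the sketch below probes a single host for an unauthenticated Ollama API and lists the models it serves. This is a minimal, hypothetical example rather than the study's actual tooling: the host address is a placeholder, while port 11434 and the /api/tags endpoint are Ollama's documented defaults.

```python
# Minimal sketch: probe one host for an unauthenticated Ollama API and
# list the models it exposes. Illustrative only -- not the scanning
# methodology used in the SentinelOne/Censys study. The host below is a
# placeholder address; 11434 and /api/tags are Ollama's public defaults.
import requests

HOST = "198.51.100.23"  # placeholder (documentation address range)
URL = f"http://{HOST}:11434/api/tags"

try:
    resp = requests.get(URL, timeout=5)
    resp.raise_for_status()
    models = resp.json().get("models", [])
    if models:
        print(f"{HOST} exposes {len(models)} model(s) with no authentication:")
        for model in models:
            print("  -", model.get("name", "<unknown>"))
    else:
        print(f"{HOST} runs Ollama but lists no models.")
except requests.RequestException as exc:
    print(f"{HOST} is not reachable as an open Ollama endpoint: {exc}")
```

Scaled up to internet-wide scans, checks of this kind can reveal how many hosts serve models with no authentication or guardrails in front of them.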

Calls for Shared Responsibility and Better Safeguards

Experts argue that responsibility for preventing misuse extends beyond model developers to all stakeholders in the AI ecosystem. Rachel Adams, CEO of the Global Center on AI Governance, emphasized that while creators can’t control every downstream misuse, they owe a duty of care to anticipate foreseeable harms, document risks, and provide mitigation tools and guidance.

Representatives from major tech companies acknowledged the importance of balancing open innovation with security measures. For instance, a Microsoft official noted that while open-source models play an important role in innovation, they must be evaluated for risks, particularly when exposed to the internet without adequate protections.

Industry Response and Next Steps

Some companies highlighted existing protective measures, such as Meta’s Llama Protection tools and responsible use guides, although not all organizations responded to queries about their strategies for addressing misuse risks.

The study underscores the urgent need for shared accountability and stronger safety frameworks as open-source AI technology continues to expand rapidly in scale and accessibility.
