Hugging Face Model & Hub Security

Hugging Face hosts the largest collection of open ML models with 500,000+ repositories. This makes it a prime target for supply chain attacks — malicious models with pickle payloads, typosquatted model names, poisoned training datasets, and backdoored model weights that activate on specific inputs.

Verified by Precogs Threat Research
Tags: huggingface, models, supply-chain, pickle | Updated: 2026-03-22

Model Supply Chain Attacks

Hugging Face models are often downloaded and executed with minimal verification. Pickle deserialization in PyTorch model files can execute arbitrary code on load. Typosquatted models (e.g., "bert-base-uncaseed" instead of "bert-base-uncased") trick developers into downloading malicious versions. Model weight poisoning introduces backdoors triggered by specific inputs, which standard accuracy testing rarely detects.
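The pickle risk in particular can be demonstrated with a few lines of standard-library Python. This is a deliberately harmless sketch: the payload below just builds a path string, where a real attack would invoke `os.system` or spawn a reverse shell.

```python
import os
import pickle

# Harmless demonstration of the pickle attack primitive: unpickling an
# object whose class defines __reduce__ calls an attacker-chosen function.
# Here the "payload" just joins a path; a real one would call os.system.
class MaliciousPayload:
    def __reduce__(self):
        # (callable, args) is invoked automatically during pickle.loads()
        return (os.path.join, ("/tmp", "pwned"))

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # the call fires here, before any model code runs
print(result)
```

Note that the victim never has to call the payload: deserialization itself is the execution trigger, which is why scanning must happen before any load.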

Inference & Deployment Risks

Hugging Face Inference API and Spaces run user-uploaded code in shared infrastructure. Model cards can contain malicious JavaScript (stored XSS). Private model access tokens (hf_...) frequently leak in notebooks and code. Self-hosted Inference Endpoints may expose model weights to unauthorized extraction through prediction API probing.
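Token leakage is straightforward to screen for mechanically. Below is a minimal sketch of a scanner for `hf_...` tokens; the exact token length is an assumption here (production scanners also apply entropy checks and provider-side verification).

```python
import re

# Minimal sketch of a secret scanner for Hugging Face access tokens.
# HF user tokens start with "hf_" followed by a long alphanumeric tail;
# the minimum tail length of 30 is an assumption for this sketch.
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def find_hf_tokens(text: str) -> list:
    """Return all candidate HF tokens found in a blob of source code."""
    return HF_TOKEN_RE.findall(text)

# Simulated notebook cell containing a hard-coded credential
notebook_cell = 'login(token="hf_' + "A" * 34 + '")  # leaked credential'
print(find_hf_tokens(notebook_cell))
```

Running a check like this in pre-commit hooks and CI catches tokens before they reach a shared repository or a published Space.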

How Precogs AI Protects Against HF Risks

Precogs AI scans model files for embedded malicious payloads before download, detects Hugging Face token exposure (hf_...) across all code surfaces, validates model provenance against known-good signatures, and identifies unsafe deserialization patterns in transformers model loading code.

Attack Scenario: The Supply Chain Model Hijack

1. Attacker creates a Hugging Face account resembling a popular research group (e.g., "M1crosoft" instead of "Microsoft").
2. Attacker uploads a modified version of a popular open-source model (e.g., Llama-3).
3. The model contains a malicious pickle payload in the `pytorch_model.bin` file.
4. A developer runs `pipeline("text-generation", model="M1crosoft/Llama-3-8B")`.
5. The `transformers` library downloads and deserializes the model payload.
6. The pickle payload executes a reverse shell, compromising the ML training server and accessing AWS credentials in `~/.aws/credentials`.
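Step 1 of the scenario, the look-alike account name, is also the easiest to screen for mechanically. Below is a sketch using stdlib `difflib` against a hypothetical allowlist of known publishers; the org names and threshold are assumptions for illustration.

```python
import difflib

# Hypothetical allowlist of trusted publisher namespaces
KNOWN_ORGS = ["microsoft", "meta-llama", "google", "mistralai"]

def typosquat_warning(org: str, threshold: float = 0.8):
    """Warn when a repo namespace is suspiciously close to a known org."""
    name = org.lower()
    if name in KNOWN_ORGS:
        return None  # exact match against the allowlist is fine
    for known in KNOWN_ORGS:
        # High similarity without an exact match suggests typosquatting
        if difflib.SequenceMatcher(None, name, known).ratio() >= threshold:
            return f"'{org}' looks like a typosquat of '{known}'"
    return None

print(typosquat_warning("M1crosoft"))
```

A check like this at download time (or in a proxy in front of the Hub) would have flagged the "M1crosoft" namespace before any weights were fetched.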

Real-World Code Examples

Arbitrary Code Execution via Pickle (CWE-502)

Traditional PyTorch checkpoint formats (.pt, .pth, .bin) are ZIP archives (or, in older formats, raw streams) containing Python pickle data. Deserializing a malicious pickle results in immediate remote code execution (RCE) on the ML engineer's machine or the inference server.

VULNERABLE PATTERN

```python
# VULNERABLE: loading an untrusted PyTorch model via pickle
import torch

# download_from_hf is a placeholder for whatever download helper is in use
model_path = download_from_hf("attacker/malicious-model")

# torch.load() uses Python's pickle module by default; a malicious
# __reduce__ method executes the moment the file is deserialized
model = torch.load(model_path)
```
SECURE FIX

```python
# SAFE: safetensors stores raw tensors and has no code-execution capability
from safetensors.torch import load_file

# download_from_hf is a placeholder for whatever download helper is in use
model_path = download_from_hf("trusted/verified-model", filename="model.safetensors")

# load_file only parses tensor data; it cannot run embedded code
tensors = load_file(model_path)
model.load_state_dict(tensors)  # model: an already-instantiated architecture

# If a pickle-based file is unavoidable, prefer
# torch.load(model_path, weights_only=True), which restricts
# deserialization to tensor and primitive types.
```
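Alongside a safe serialization format, artifact hashes can be pinned so that tampered downloads are rejected before loading. A stdlib-only sketch follows; in practice the pinned hash would come from a trusted manifest, not from the same download channel, and the temp file here is a stand-in for a real model artifact.

```python
import hashlib
import tempfile

# Sketch of hash pinning: refuse to load a model file whose SHA-256 does
# not match a known-good value obtained over a trusted channel.
def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    return sha256_of(path) == expected_sha256

# Demo with a stand-in "model" file (placeholder bytes, not real weights)
with tempfile.NamedTemporaryFile(delete=False, suffix=".safetensors") as f:
    f.write(b"placeholder model bytes")
    path = f.name

pinned = sha256_of(path)             # in practice: read from a signed manifest
print(verify_artifact(path, pinned))
```

Any single-byte modification of the file flips the check to `False`, so the verification gate can sit directly in the deployment pipeline.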

Detection & Prevention Checklist

  • Mandate the use of `safetensors` format for all model weights across the organization
  • Disable `trust_remote_code=True` in Hugging Face `transformers` pipelines unless the remote code has been explicitly audited
  • Use tools like `picklescan` on all downloaded `.bin` or `.pt` files before loading
  • Run inference workloads in hardened, network-isolated Kubernetes pods without egress internet access
  • Verify model cryptographic hashes (SHA256) against known-good manifests before deployment
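The `picklescan`-style check from the list above can be approximated in a few lines: walk the pickle opcode stream with `pickletools` and flag references to dangerous modules, without ever deserializing (and therefore executing) the payload. This is a toy sketch; real scanners cover more opcodes, module aliases, and nested pickles.

```python
import os
import pickle
import pickletools

# Toy static scan: flag pickles that reference dangerous modules by
# walking the opcode stream; nothing is ever deserialized or executed.
DANGEROUS = {"os", "posix", "nt", "subprocess", "builtins"}
STRING_OPS = {"SHORT_BINUNICODE", "BINUNICODE", "BINUNICODE8", "UNICODE"}

def suspicious_imports(blob: bytes) -> list:
    hits, strings = [], []
    for opcode, arg, _ in pickletools.genops(blob):
        if opcode.name == "GLOBAL":          # protocols 0-2: arg = "module name"
            if arg.split()[0].split(".")[0] in DANGEROUS:
                hits.append(arg)
        elif opcode.name in STRING_OPS:
            strings.append(arg)
        elif opcode.name == "STACK_GLOBAL":  # protocol 4+: module/name on stack
            if len(strings) >= 2 and strings[-2].split(".")[0] in DANGEROUS:
                hits.append(f"{strings[-2]} {strings[-1]}")
    return hits

class Evil:
    # Unpickling this object would run os.system("true")
    def __reduce__(self):
        return (os.system, ("true",))

print(suspicious_imports(pickle.dumps(Evil())))     # flags the os/posix call
print(suspicious_imports(pickle.dumps([1, 2, 3])))  # clean pickle: []
```

Because the scan never calls `pickle.loads`, it is safe to run on untrusted `.bin` and `.pt` files before they ever reach `torch.load`.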

Are Hugging Face models safe to use?

Hugging Face models can contain pickle exploits, backdoors in weights, and typosquatted names. Precogs AI scans model files, detects HF token leaks, and validates model provenance before deployment.

Scan for Hugging Face Model & Hub Security Issues

Precogs AI automatically detects Hugging Face model and Hub security vulnerabilities and generates AutoFix PRs.