or
Malicious LLM Routers Can Steal Crypto: Researchers Flag 26 Dangerous Services
Security

Malicious LLM Routers Can Steal Crypto: Researchers Flag 26 Dangerous Services

April 13, 20262 min read

University of California researchers found that some third-party LLM routers actively inject malicious code and steal credentials from AI agent sessions. Out of 428 routers tested, 26 were flagged as dangerous. One drained real ETH from the researchers' test wallet. The paper dropped on Thursday.

The problem: developers using AI agents to work with smart contracts or crypto wallets may unknowingly pass private keys and seed phrases through third-party router infrastructure that has never been vetted.

What is an LLM router and why it matters

LLM routers are third-party services that aggregate access to language models from OpenAI, Anthropic, Google, and others. Developers add them as a middle layer to cut API costs or switch between providers. The catch is that routers terminate TLS connections and read every message in plaintext. Any private keys or seed phrases in the session become visible to the service.

What the researchers found

Analysis of 428 LLM routers (UC Berkeley)
Routers tested28 paid + 400 free
Malicious code injection9 routers
Adaptive evasion2 routers
AWS credential access17 routers
ETH drained from wallet1 router (under $50)

Co-author Chaofan Shou posted on X: "26 LLM routers are secretly injecting malicious tool calls and stealing creds." The team funded test wallets with small ETH balances and passed the keys through routers as decoys to measure the real-world risk.

YOLO mode: the agent that runs itself

The researchers flagged another risk. Many AI frameworks ship with a "YOLO mode" - a setting where the agent executes commands without asking the user to confirm each step. A router that behaved cleanly in the past can be silently weaponized after an update, with the service operator none the wiser.

Free routers are the most suspect: they may offer cheap API access as bait while harvesting credentials in the background. "The boundary between credential handling and credential theft is invisible to the client because routers already read secrets in plaintext as part of normal forwarding," the paper states.

What developers should do now

  • Never let private keys or seed phrases pass through an AI agent session
  • Vet a router's reputation and source code before integration
  • Stick to official API endpoints from verified providers

The long-term fix proposed is cryptographic signing of AI model responses, so an agent can verify that the instructions it executes actually came from the model, not a malicious router in the middle.

Takeaway

The threat is real for anyone building crypto apps with AI agents. If your agent routes through a third-party LLM service and the session context includes keys from crypto wallets or API accounts, the leak risk is not zero. Treat any LLM router as an untrusted intermediary until proven otherwise.

Comments

Your email address will not be published. Required fields are marked *

or verify by email