
In the rapidly expanding world of agentic AI tooling, the Model Context Protocol (MCP) has become a cornerstone for enabling large language models like Claude to interact with external systems—filesystems, APIs, databases, and crucially, Git repositories.
Yet a recent disclosure reveals a stark reminder of supply-chain risks in AI infrastructure: three medium-severity vulnerabilities in Anthropic’s own official reference Git MCP server (mcp-server-git) allow attackers to achieve arbitrary file access, deletion, overwrites, and—in chained scenarios—full remote code execution (RCE), all triggered purely through prompt injection.
Discovered by AI security firm Cyata Security and publicly detailed on January 20, 2026, these flaws (CVE-2025-68143, CVE-2025-68144, CVE-2025-68145) were reported to Anthropic in June 2025, accepted in September, and fully patched by December 2025 (with the git_init tool removed entirely in version 2025.12.18).
The campaign highlights a growing class of threats in 2026: vulnerabilities not in third-party plugins, but in the “canonical” reference implementations developers are encouraged to adopt or fork for production AI agents.
The affected component is mcp-server-git, Anthropic’s official MCP server for Git operations, designed as a safe example for exposing repositories to LLMs. MCP servers act as bridges: the AI decides on tool calls, and the server executes them on the host system.
The three flaws, exploitable via indirect prompt injection (e.g., attacker-controlled content in a README.md, GitHub issue, or webpage that the AI reads), include:
Individually, these enable sensitive file reads (loading files into LLM context), deletions, or overwrites. When chained with the legitimate Filesystem MCP server (which permits controlled file read/write under configured rules), attackers achieve RCE:
Cyata demonstrated this chain in red-team exercises, turning innocuous AI tasks (e.g., “review this repo”) into full system compromise.
MCP represents the future of AI agents: seamless tool integration for code review, repo syncing, file ops, and more. But reference implementations like mcp-server-git are copied widely—developers assume they’re secure baselines.
These flaws expose developers and organizations to:
For SMBs and dev teams adopting AI assistants as “helpful colleagues,” unvetted MCP setups expand the attack surface far beyond traditional code repos.
The vulnerabilities manifest through specific behaviors in affected deployments (pre-2025.12.18 versions of mcp-server-git):
Monitor logs for anomalous MCP tool calls, especially git_init, git_diff, or filesystem writes following suspicious prompts.
Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination. Yarden Porat, Cyata Security researcher, January 2026 analysis

In the rapidly expanding world of agentic AI tooling, the Model Context Protocol (MCP) has become a cornerstone for enabling large language models like Claude to interact with external systems—filesystems, APIs, databases, and crucially, Git repositories.
Yet a recent disclosure reveals a stark reminder of supply-chain risks in AI infrastructure: three medium-severity vulnerabilities in Anthropic’s own official reference Git MCP server (mcp-server-git) allow attackers to achieve arbitrary file access, deletion, overwrites, and—in chained scenarios—full remote code execution (RCE), all triggered purely through prompt injection.
Discovered by AI security firm Cyata Security and publicly detailed on January 20, 2026, these flaws (CVE-2025-68143, CVE-2025-68144, CVE-2025-68145) were reported to Anthropic in June 2025, accepted in September, and fully patched by December 2025 (with the git_init tool removed entirely in version 2025.12.18).
The campaign highlights a growing class of threats in 2026: vulnerabilities not in third-party plugins, but in the “canonical” reference implementations developers are encouraged to adopt or fork for production AI agents.
The affected component is mcp-server-git, Anthropic’s official MCP server for Git operations, designed as a safe example for exposing repositories to LLMs. MCP servers act as bridges: the AI decides on tool calls, and the server executes them on the host system.
The three flaws, exploitable via indirect prompt injection (e.g., attacker-controlled content in a README.md, GitHub issue, or webpage that the AI reads), include:
Individually, these enable sensitive file reads (loading files into LLM context), deletions, or overwrites. When chained with the legitimate Filesystem MCP server (which permits controlled file read/write under configured rules), attackers achieve RCE:
Cyata demonstrated this chain in red-team exercises, turning innocuous AI tasks (e.g., “review this repo”) into full system compromise.
MCP represents the future of AI agents: seamless tool integration for code review, repo syncing, file ops, and more. But reference implementations like mcp-server-git are copied widely—developers assume they’re secure baselines.
These flaws expose developers and organizations to:
For SMBs and dev teams adopting AI assistants as “helpful colleagues,” unvetted MCP setups expand the attack surface far beyond traditional code repos.
The vulnerabilities manifest through specific behaviors in affected deployments (pre-2025.12.18 versions of mcp-server-git):
Monitor logs for anomalous MCP tool calls, especially git_init, git_diff, or filesystem writes following suspicious prompts.
Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination. Yarden Porat, Cyata Security researcher, January 2026 analysis