VU#949137: Langchaingo supports jinja2 and gonja for syntax parsing, allowing for arbitrary file read

Overview

LangChainGo, the Go implementation of LangChain, a framework for building large language model (LLM) applications, has been found to contain an arbitrary file read vulnerability. The vulnerability, tracked as CVE-2025-9556, allows arbitrary file read through the Gonja template engine, which supports Jinja2 syntax. An attacker can exploit this server-side template injection (SSTI) flaw by injecting malicious prompt content in order to read sensitive files on the server.

Description

LangChainGo is the Go programming language port/fork of LangChain, an open-source orchestration framework for developing applications that leverage LLMs. LangChainGo uses Gonja for syntax parsing and for creating dynamic, reusable prompt templates. Gonja is a Go implementation of Jinja2, a templating engine, and is largely compatible with the original Python Jinja2 implementation, supporting Jinja2 syntax.
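
For context, a typical benign use of a Jinja2-format prompt template in LangChainGo looks roughly like the following minimal sketch. The PromptTemplate fields, the Format method, and the TemplateFormatJinja2 constant reflect the prompts package as commonly documented; exact names and signatures may differ across versions, so treat this as illustrative rather than authoritative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/tmc/langchaingo/prompts"
)

func main() {
	// A reusable prompt template that uses Jinja2-style placeholders,
	// rendered by Gonja when the Jinja2 template format is selected.
	tmpl := prompts.PromptTemplate{
		Template:       "You are a helpful assistant. Summarize the following text:\n{{ text }}",
		InputVariables: []string{"text"},
		TemplateFormat: prompts.TemplateFormatJinja2,
	}

	out, err := tmpl.Format(map[string]any{
		"text": "LangChainGo renders prompt templates with Gonja.",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```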

Because Gonja supports Jinja2 syntax, an attacker could abuse directives such as {% include %}, {% from %}, or {% extends %} within LangChainGo. While these directives are intended for building reusable templates, they can also cause an external file to be read from the server's filesystem. An attacker could inject malicious template code containing these directives to read sensitive files such as /etc/passwd. The result is a server-side template injection vulnerability that can expose sensitive information. This vulnerability is tracked as CVE-2025-9556.
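
The sketch below illustrates the attack pattern: untrusted input reaches the template string itself (rather than the values map), so Gonja evaluates the embedded {% include %} directive. It assumes a prompts.RenderTemplate helper and the TemplateFormatJinja2 constant as found in vulnerable versions; the exact names and signatures are assumptions and may vary.

```go
package main

import (
	"fmt"
	"log"

	"github.com/tmc/langchaingo/prompts"
)

func main() {
	// Untrusted input that ends up concatenated into the template string
	// itself, rather than being passed in through the values map.
	userInput := "{% include '/etc/passwd' %}"

	template := "Answer the user's question:\n" + userInput

	// On vulnerable versions, Gonja evaluates the {% include %} directive and
	// the rendered output contains the contents of /etc/passwd.
	out, err := prompts.RenderTemplate(template, prompts.TemplateFormatJinja2, map[string]any{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```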

Impact

This vulnerability compromises the confidentiality of a system running LangChainGo by enabling arbitrary file read. By injecting malicious template syntax, an attacker could access sensitive information stored on the victim device, which could lead to further compromise of the system. In LLM-based chatbot environments that use LangChainGo, an attacker only needs the ability to supply prompt content in order to craft and deliver a malicious template.

Solution

The maintainer of LangChainGo has released an update with new security features to prevent template injection. A new RenderTemplateFS function has been added, which supports secure template referencing from an explicitly provided filesystem, and filesystem access is now blocked by default. Users of LangChainGo should update to the latest version of the software in order to be protected.
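
A minimal sketch of the hardened pattern follows: template lookups are confined to an explicitly supplied fs.FS, so {% include %} cannot escape to the host filesystem. RenderTemplateFS is named in the advisory, but its exact signature, the ./prompt-templates directory, and the system_header.j2 file are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/tmc/langchaingo/prompts"
)

func main() {
	// Restrict template lookups to a dedicated directory of trusted templates;
	// {% include %} can only reach files inside this fs.FS.
	templateFS := os.DirFS("./prompt-templates")

	out, err := prompts.RenderTemplateFS(
		templateFS,
		"{% include 'system_header.j2' %}\nAnswer the user's question:\n{{ question }}",
		prompts.TemplateFormatJinja2,
		map[string]any{"question": "What does CVE-2025-9556 affect?"},
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```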

Acknowledgements

Thanks to the reporter, bestlzk. This document was written by Ayushi Kriplani and Christopher Cullen.