
LangChain - Working with Large Language Models, Made Easy

LangChain is a framework that simplifies working with large language models (LLMs), such as OpenAI GPT-4 or Google PaLM, by providing abstractions for common use cases. It supports both JavaScript and Python.

To understand the need for LangChain, we first need to discuss how LLMs work. 

Under the hood, LLMs are statistical models that, given a sequence of tokens (chunks of text containing anything from a single character to a few words), predict the most likely tokens to follow.

The initial text passed to the model is called the prompt, and prompt engineering is the art of fine-tuning the results received from an LLM by crafting the most appropriate prompts.
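The next-token prediction described above can be sketched with a toy model. The bigram table and its probabilities below are made up purely for illustration; a real LLM learns such statistics over an enormous vocabulary.

```javascript
// Toy illustration of next-chunk prediction (not a real LLM): a tiny
// bigram table maps each token to the tokens that may follow it, with
// invented probabilities standing in for the model's learned statistics.
const bigrams = {
  "tell": [{ next: "me", p: 0.9 }, { next: "us", p: 0.1 }],
  "me":   [{ next: "a", p: 1.0 }],
  "a":    [{ next: "joke", p: 0.7 }, { next: "story", p: 0.3 }],
};

// Greedy decoding: repeatedly append the most probable next token.
function complete(prompt, steps) {
  const tokens = prompt.split(" ");
  for (let i = 0; i < steps; i++) {
    const candidates = bigrams[tokens[tokens.length - 1]];
    if (!candidates) break; // no continuation known for this token
    const best = candidates.reduce((a, b) => (a.p >= b.p ? a : b));
    tokens.push(best.next);
  }
  return tokens.join(" ");
}

console.log(complete("tell", 3)); // "tell me a joke"
```

A real model samples from the probability distribution rather than always taking the top candidate, which is why the same prompt can yield different completions.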

While LangChain provides many tools, at its core, it offers three capabilities:

  1. An abstraction layer that enables developers to interact with the different LLM providers using a standardized set of commands
  2. A set of tools that formalizes the process of Prompt Engineering by enforcing a set of best practices
  3. The ability to "chain" the various components LangChain offers to perform complex interactions

The JavaScript example below demonstrates how to create and execute the simplest chain containing a single prompt.

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

const model = new OpenAI();
const prompt = PromptTemplate.fromTemplate(`Tell me a joke about {topic}`);
const chain = new LLMChain({ llm: model, prompt: prompt });
const response = await chain.call({ topic: "ducks" });

Of course, using a chain of a single component isn't very interesting. More complex applications usually use multiple components to generate the desired results.

To demonstrate this, we will use the SimpleSequentialChain to run multiple prompts sequentially. In this case, after asking the LLM to write a joke about the provided topic, we will ask it to translate the joke into Spanish.

import { SimpleSequentialChain } from "langchain/chains";

const translatePrompt = PromptTemplate.fromTemplate(`Translate the following text to Spanish: {text}`);
const translateChain = new LLMChain({ llm: model, prompt: translatePrompt });
const overallChain = new SimpleSequentialChain({
    chains: [chain, translateChain],
    verbose: true,
});
const results = await overallChain.run("ducks");

Note that passing "verbose: true" to the SimpleSequentialChain prints each intermediate step of the generation process, which is helpful for debugging.

Of course, LangChain can do much more than chaining several prompts. It includes two modules that allow developers to extend the interactions with LLMs beyond simple chats. 

The Memory module enables developers to persist state across chains using a variety of solutions ranging from external databases such as Redis and DynamoDB to simply storing the data in memory.
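As a rough illustration of what such a memory store does, the plain-JavaScript sketch below (a conceptual outline, not the actual LangChain Memory API) records each exchange and prepends the accumulated history to the next prompt, so that a stateless model can "remember" the conversation:

```javascript
// Conceptual sketch of conversation memory (not LangChain's API):
// each turn is recorded, and the accumulated history is prepended to
// the next prompt before it is sent to the model.
class BufferedHistory {
  constructor() {
    this.turns = [];
  }
  save(input, output) {
    this.turns.push(`Human: ${input}`, `AI: ${output}`);
  }
  buildPrompt(input) {
    return [...this.turns, `Human: ${input}`, "AI:"].join("\n");
  }
}

const memory = new BufferedHistory();
memory.save("My name is Ada.", "Nice to meet you, Ada!");
console.log(memory.buildPrompt("What is my name?"));
// Human: My name is Ada.
// AI: Nice to meet you, Ada!
// Human: What is my name?
// AI:
```

The external backends LangChain supports (Redis, DynamoDB, and others) serve the same role as the in-memory array here: persisting the turns so the history survives across requests.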

The Agents module enables chains to interact with external providers and perform actions based on their responses. 
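The agent pattern can be sketched as a loop in which the model's response either names a tool to invoke or provides a final answer. The simplified plain-JavaScript illustration below stubs out the model so it runs offline; it is a conceptual outline of the pattern, not the LangChain Agents API:

```javascript
// Simplified agent loop (not LangChain's API). The "model" is stubbed so
// the example runs offline; a real agent would send the scratchpad to an
// LLM, which decides whether to call a tool or give the final answer.
const tools = {
  calculator: (expr) => String(eval(expr)), // toy tool; never eval untrusted input
};

// Stubbed model: first asks for the calculator, then answers with its result.
function fakeModel(scratchpad) {
  if (!scratchpad.includes("Observation:")) {
    return { action: "calculator", input: "2 + 2" };
  }
  return { finalAnswer: "2 + 2 is 4." };
}

function runAgent(question) {
  let scratchpad = `Question: ${question}`;
  for (let step = 0; step < 5; step++) {
    const decision = fakeModel(scratchpad);
    if (decision.finalAnswer) return decision.finalAnswer;
    // Run the requested tool and feed the observation back to the model.
    const observation = tools[decision.action](decision.input);
    scratchpad += `\nAction: ${decision.action}\nObservation: ${observation}`;
  }
  return "Gave up after too many steps.";
}

console.log(runAgent("What is 2 + 2?")); // "2 + 2 is 4."
```

The step cap guards against a model that never produces a final answer, a safeguard any real agent loop needs as well.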

The full documentation, alongside more complex examples, can be found on the official LangChain documentation site.

Developers should be aware that LangChain is still under active development, and use in production should be handled with care.
