The Beginner’s Guide to Using LangChain to Build With LLMs

Large language models (LLMs) are incredibly powerful for their reasoning capabilities, but working with them presents challenges that are different from those of building traditional software.

LangChain is a popular framework for developing LLM-powered apps, built with these challenges in mind. It also provides a wide range of integrations with closed-source model providers (like OpenAI, Anthropic, and Google), open-source models, and other third-party components like vector stores.

This article will walk through the fundamentals of building with LLMs and LangChain’s Python library. It assumes only basic familiarity with Python and doesn’t require any ML background.

You’ll learn about:

Let’s dive in!


We recommend using a Jupyter notebook to run the code in this tutorial since it provides a clean, interactive environment. See this page for instructions on setting it up locally, or check out Google Colab for an in-browser experience.

First, let’s install the required dependencies. This guide defaults to Anthropic and their Claude 3 chat models, but you can use LangChain with a wide range of other providers:

pip install langchain_core langchain_anthropic

<aside> 💡 If you’re working in a Jupyter notebook, you’ll need to prefix pip with a % symbol, like this: %pip install langchain_core langchain_anthropic </aside>


You’ll also need an Anthropic API key, which you can obtain from their official console. Once you have it, set it as an environment variable named ANTHROPIC_API_KEY:

export ANTHROPIC_API_KEY="..."
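If you’d rather not export the variable in your shell (for example, inside a notebook), you can set it from Python instead. The key value below is a placeholder, not a real key — paste in your own:

```python
import os

# Equivalent of the shell `export` above, but scoped to this Python process.
# "sk-ant-placeholder" is a stand-in -- replace it with your actual key.
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-placeholder"
```

LangChain’s Anthropic integration reads this variable automatically, so you won’t need to pass the key explicitly in the code that follows.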