LangChain Architecture & Setup
Discover the power of LangChain - the Python framework that simplifies building AI applications. This intermediate-level tutorial covers installation, architecture, core components, and hands-on project setup for developers with Python experience.
Learning Objectives
- Understand what LangChain is and why it's essential for AI development
- Learn LangChain's core architecture and component hierarchy
- Install LangChain in Python using pip and set up your development environment
- Build your first LangChain Python application step by step
Prerequisites: Who This Tutorial Is For
This intermediate-level tutorial assumes you have:
- Basic Python programming knowledge
- Completed Phase 1 (AI Fundamentals) or equivalent experience
- Python 3.8+ installed on your system
- Familiarity with pip package management
What is LangChain?
LangChain is a powerful framework designed to simplify the development of applications powered by large language models (LLMs). It provides a set of tools and abstractions that make it easier to build complex AI applications.
Think of LangChain as:
The "React" or "Django" of AI development - a framework that provides structure, best practices, and reusable components for building LLM-powered applications.
Without LangChain
- Write boilerplate code for each LLM integration
- Manually handle prompt templates and formatting
- Build your own memory management system
- Create custom chains and workflows from scratch
With LangChain
- Unified interface for all LLM providers
- Built-in prompt templates and management
- Ready-to-use memory and conversation history
- Pre-built chains for common patterns
LangChain Architecture
Core Components Hierarchy
1. Models
The foundation - interfaces to various LLMs (OpenAI, Anthropic, etc.)
ChatOpenAI, ChatAnthropic, Ollama
2. Prompts
Templates and formatters for creating effective prompts
PromptTemplate, ChatPromptTemplate, FewShotPromptTemplate
3. Memory
Systems for maintaining conversation context and history
ConversationBufferMemory, ConversationSummaryMemory
4. Chains
Combine models, prompts, and memory into reusable workflows
LLMChain, ConversationChain, RetrievalQA
5. Agents
Autonomous decision-makers that can use tools and chains
AgentExecutor, Tools, ReActAgent
Key Insight: Each component builds on the previous ones. You start with models, add prompts, include memory if needed, combine them with chains, and create agents for complex tasks.
Supported LLM Providers
LangChain supports many providers through the same interface - for example OpenAI, Anthropic, and Google via their dedicated integration packages, or local models via Ollama.
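Switching providers is usually just a matter of swapping the model class - the rest of your chain stays the same. A minimal sketch (the model names are illustrative, and each provider needs its own integration package and API key):
```python
# Each provider ships its own integration package, but all chat models
# share the same interface (invoke, stream, batch, ...).
from langchain_openai import ChatOpenAI                    # pip install langchain-openai
from langchain_anthropic import ChatAnthropic              # pip install langchain-anthropic
from langchain_google_genai import ChatGoogleGenerativeAI  # pip install langchain-google-genai

llm = ChatOpenAI(model="gpt-4o-mini")                      # OpenAI
# llm = ChatAnthropic(model="claude-3-5-sonnet-latest")    # Anthropic
# llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")   # Google

# The calling code never changes:
print(llm.invoke("Say hello in one sentence.").content)
```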
How to Install LangChain in Python: Step-by-Step Setup Guide
Setup Instructions
1. Install LangChain and dependencies:
```bash
pip install langchain langchain-google-genai langchain-community python-dotenv
```
2. Create your project structure:
```
my-langchain-app/
├── .env
└── app.py
```
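Before moving on, it's worth checking that everything installed cleanly. A quick sanity check using only the standard library:
```python
# Verify the packages are importable and print their installed versions
from importlib.metadata import version

for pkg in ["langchain", "langchain-google-genai", "langchain-community", "python-dotenv"]:
    print(f"{pkg}: {version(pkg)}")
```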
Your First LangChain Application
Basic LangChain Example
Let's build a simple application that uses LangChain to create a helpful assistant:
1. First, get your Google AI API key:
- Go to Google AI Studio
- Click "Create API Key" (free to use)
- Copy your API key
```
# Create .env file
GOOGLE_API_KEY=AIza...your-api-key-here...
```
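To confirm the key is actually being picked up before wiring in any models, here is a small throwaway check:
```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

key = os.getenv("GOOGLE_API_KEY")
if key:
    print(f"Key loaded (starts with {key[:4]}...)")
else:
    print("GOOGLE_API_KEY not found - check your .env file")
```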
2. Now create your LangChain application:
```python
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.prompts import ChatPromptTemplate
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize the model
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",  # Use latest model: gemini-2.0-flash, gemini-1.5-pro, etc.
    temperature=0.7
)

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that explains concepts clearly."),
    ("human", "Explain {topic} in simple terms")
])

# Create a chain using the new syntax (LLMChain is deprecated)
chain = prompt | llm

# Run the chain
result = chain.invoke({"topic": "quantum computing"})
print(result.content)
```
What's happening here?
- We initialize a ChatGoogleGenerativeAI model (use the latest available model)
- Create a prompt template with system and human messages
- Combine them using the pipe operator (|) - the new recommended syntax
- Execute the chain with our input topic
Note: Use the latest available Gemini model (e.g., gemini-2.0-flash, gemini-1.5-pro). Check the Google AI docs for current models. The LLMChain class is deprecated - use the pipe operator instead: prompt | llm
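One small extension worth knowing now (it previews the output parsers covered in the next tutorial): piping into StrOutputParser returns a plain string instead of a message object, so you can skip .content. A sketch reusing the prompt and llm defined above:
```python
from langchain_core.output_parsers import StrOutputParser

# prompt | llm returns an AIMessage; adding a parser yields a str directly
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "quantum computing"}))  # already a string
```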
Adding Memory to Your Application
Why Memory Matters
Without memory, each interaction with the AI is isolated - it can't remember previous messages. This makes it impossible to have natural conversations or build context-aware applications.
Without memory:
User: "My name is Alice"
AI: "Nice to meet you!"
User: "What's my name?"
AI: "I don't know your name."
With memory:
User: "My name is Alice"
AI: "Nice to meet you, Alice!"
User: "What's my name?"
AI: "Your name is Alice."
What we're adding:
- ConversationBufferMemory: Stores the entire conversation history
- Memory integration: Automatically includes past messages in each request
- Context preservation: Maintains continuity across multiple interactions
```python
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.prompts import ChatPromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain_core.runnables import RunnablePassthrough
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize the model
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0.7
)

# Initialize memory
memory = ConversationBufferMemory(
    return_messages=True,
    memory_key="chat_history"
)

# Create a conversation prompt
conversation_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use the conversation history to provide context-aware responses."),
    ("placeholder", "{chat_history}"),
    ("human", "{input}")
])

# Create a chain with memory
conversation_chain = (
    RunnablePassthrough.assign(
        chat_history=lambda x: memory.load_memory_variables(x)["chat_history"]
    )
    | conversation_prompt
    | llm
)

# Helper function to handle conversation
def chat(message):
    response = conversation_chain.invoke({"input": message})
    memory.save_context({"input": message}, {"output": response.content})
    return response.content

# Have a conversation
print(chat("Hi, my name is Alice"))
print(chat("What's my name?"))
print(chat("Tell me a joke about my name"))
```
Magic: The conversation chain automatically remembers previous messages, allowing the AI to reference earlier parts of the conversation!
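One caveat: recent LangChain releases are steering away from the older memory classes (ConversationBufferMemory included) toward RunnableWithMessageHistory, which tracks history per session for you. A minimal sketch of that approach, reusing the llm and conversation_prompt from above:
```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

# One independent history object per session id
store = {}

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    conversation_prompt | llm,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

# History is loaded and saved automatically for each session
config = {"configurable": {"session_id": "alice"}}
print(chain_with_history.invoke({"input": "Hi, my name is Alice"}, config).content)
print(chain_with_history.invoke({"input": "What's my name?"}, config).content)
```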
Common LangChain Patterns
Template Chaining
Combine multiple prompts for complex workflows (see the sketch after this list)
PromptTemplate → LLMChain → Output
Conversational AI
Build chatbots with memory and context
Memory + ConversationChain
Question Answering
Answer questions from documents
Documents → RetrievalQA
Autonomous Agents
AI that can use tools and make decisions
Tools + Agent + AgentExecutor
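Here is the template-chaining pattern from the list above as a sketch in the modern pipe syntax (it reuses the llm defined earlier; the two prompts are made up for illustration):
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

outline_prompt = ChatPromptTemplate.from_template("Write a 3-point outline about {topic}.")
article_prompt = ChatPromptTemplate.from_template(
    "Expand this outline into a short article:\n\n{outline}"
)

# The dict is coerced into a runnable: the first chain's output
# becomes the 'outline' input of the second prompt.
chain = (
    {"outline": outline_prompt | llm | StrOutputParser()}
    | article_prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke({"topic": "vector databases"}))
```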
LangChain Best Practices
Development
- Use environment variables for API keys
- Start simple, then add complexity
- Test prompts thoroughly before production
- Use verbose=True for debugging chains
Architecture
- Separate prompts from business logic
- Use appropriate memory types
- Handle errors gracefully (see the sketch after this list)
- Monitor token usage and costs
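As an example of the last two points: LCEL chains expose a with_retry helper, and responses from most chat model integrations carry usage_metadata for token accounting. A hedged sketch combining both, reusing the prompt and llm from the first example:
```python
# Retry transient failures with exponential backoff, then fail gracefully
robust_chain = (prompt | llm).with_retry(stop_after_attempt=3)

try:
    result = robust_chain.invoke({"topic": "quantum computing"})
    print(result.content)
    # usage_metadata is populated by most chat model integrations
    if result.usage_metadata:
        print(f"Tokens used: {result.usage_metadata['total_tokens']}")
except Exception as e:
    print(f"LLM call failed after retries: {e}")
```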
Next Step
Great job! You now understand LangChain basics. Next, we'll explore Prompt Templates and Output Parsers to create more sophisticated AI applications.