Phase 2 · Intermediate · ⏱ 40 minutes

LangChain Architecture & Setup

Discover the power of LangChain - the Python framework that simplifies building AI applications. This intermediate-level tutorial covers installation, architecture, core components, and hands-on project setup for developers with Python experience.

šŸŽÆ Learning Objectives

  • Understand what LangChain is and why it's essential for AI development
  • Learn LangChain's core architecture and component hierarchy
  • Install LangChain in Python using pip and set up your development environment
  • Build your first LangChain Python application step by step

šŸ“‹ Prerequisites: Who This Tutorial Is For

This intermediate-level tutorial assumes you have:

  • Basic Python programming knowledge
  • Completed Phase 1 (AI Fundamentals) or equivalent experience
  • Python 3.8+ installed on your system
  • Familiarity with pip package management
šŸ”— What is LangChain?

LangChain is a powerful framework designed to simplify the development of applications powered by large language models (LLMs). It provides a set of tools and abstractions that make it easier to build complex AI applications.

Think of LangChain as:

The "React" or "Django" of AI development - a framework that provides structure, best practices, and reusable components for building LLM-powered applications.

šŸš€ Without LangChain

  • āœ— Write boilerplate code for each LLM integration
  • āœ— Manually handle prompt templates and formatting
  • āœ— Build your own memory management system
  • āœ— Create custom chains and workflows from scratch

✨ With LangChain

  • āœ“ Unified interface for all LLM providers
  • āœ“ Built-in prompt templates and management
  • āœ“ Ready-to-use memory and conversation history
  • āœ“ Pre-built chains for common patterns
šŸ—ļø

LangChain Architecture

Core Components Hierarchy

1. Models šŸ¤–

The foundation - interfaces to various LLMs (OpenAI, Anthropic, etc.)

ChatOpenAI, ChatAnthropic, Ollama

2. Prompts šŸ“

Templates and formatters for creating effective prompts

PromptTemplate, ChatPromptTemplate, FewShotPromptTemplate

3. Memory 🧠

Systems for maintaining conversation context and history

ConversationBufferMemory, ConversationSummaryMemory

4. Chains ā›“ļø

Combine models, prompts, and memory into reusable workflows

LLMChain, ConversationChain, RetrievalQA

5. Agents šŸ¤–

Autonomous decision-makers that can use tools and chains

AgentExecutor, Tool, create_react_agent

šŸ’” Key Insight: Each component builds on the previous ones. You start with models, add prompts, include memory if needed, combine them with chains, and create agents for complex tasks.
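
To make the layering concrete, here is a minimal sketch composing a model, a prompt, and a chain. It assumes the packages from the setup section below are installed and a GOOGLE_API_KEY is set; the prompt text is just an example:

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.prompts import ChatPromptTemplate

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")          # 1. Model
prompt = ChatPromptTemplate.from_template("Summarize: {text}")  # 2. Prompt
chain = prompt | llm                                            # 4. Chain (3. memory is optional)
print(chain.invoke({"text": "LangChain composes LLM apps."}).content)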

🌟 Supported LLM Providers

LangChain supports many providers with the same interface:

Google: Gemini
OpenAI: GPT-4, GPT-3.5
Anthropic: Claude
Local: Ollama, LlamaCpp
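
Because the interface is shared, swapping providers is usually a one-line change. A quick sketch, assuming the matching packages (langchain-openai, langchain-anthropic) are installed and API keys are set; model names change over time, so check each provider's docs:

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

llm = ChatOpenAI(model="gpt-4o")
# llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # same interface, different provider
print(llm.invoke("Hello!").content)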
āš™ļø How to Install LangChain in Python: Step-by-Step Setup Guide

Setup Instructions

1. Install LangChain and dependencies:

pip install langchain langchain-google-genai langchain-community python-dotenv
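
If the install succeeded, this one-liner should print the installed version:

python -c "import langchain; print(langchain.__version__)"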

2. Create your project structure:

my-langchain-app/
ā”œā”€ā”€ .env
└── app.py
šŸš€ Your First LangChain Application

Basic LangChain Example

Let's build a simple application that uses LangChain to create a helpful assistant:

1. First, get your Google AI API key:

  • Go to Google AI Studio
  • Click "Create API Key" (free to use)
  • Copy your API key
# Create .env file
GOOGLE_API_KEY=AIza...your-api-key-here...

2. Now create your LangChain application:

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.prompts import ChatPromptTemplate
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize the model
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",  # Use latest model: gemini-2.0-flash, gemini-1.5-pro, etc.
    temperature=0.7
)

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that explains concepts clearly."),
    ("human", "Explain {topic} in simple terms")
])

# Create a chain using the new syntax (LLMChain is deprecated)
chain = prompt | llm

# Run the chain
result = chain.invoke({"topic": "quantum computing"})
print(result.content)

What's happening here?

  1. We initialize a ChatGoogleGenerativeAI model (use the latest available model)
  2. Create a prompt template with system and human messages
  3. Combine them using the pipe operator (|) - the new recommended syntax
  4. Execute the chain with our input topic

Note: Use the latest available Gemini model (e.g., gemini-2.0-flash, gemini-1.5-pro); check the Google AI docs for current model names. The LLMChain class is deprecated - use the pipe operator instead: prompt | llm
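
A bonus of the pipe syntax: the resulting chain is a Runnable, so streaming and batching come built in. A quick sketch reusing the chain from above:

# Stream the response token by token
for chunk in chain.stream({"topic": "quantum computing"}):
    print(chunk.content, end="", flush=True)

# Run several inputs in a single call
results = chain.batch([{"topic": "black holes"}, {"topic": "DNA"}])
for message in results:
    print(message.content)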

Adding Memory to Your Application

🧠 Why Memory Matters

Without memory, each interaction with the AI is isolated - it can't remember previous messages. This makes it impossible to have natural conversations or build context-aware applications.

āŒ Without Memory:

User: "My name is Alice"
AI: "Nice to meet you!"
User: "What's my name?"
AI: "I don't know your name."

āœ… With Memory:

User: "My name is Alice"
AI: "Nice to meet you, Alice!"
User: "What's my name?"
AI: "Your name is Alice."

What we're adding:

  • ConversationBufferMemory: Stores the entire conversation history
  • Memory integration: Automatically includes past messages in each request
  • Context preservation: Maintains continuity across multiple interactions
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.prompts import ChatPromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain_core.runnables import RunnablePassthrough
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize the model
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0.7
)

# Initialize memory
memory = ConversationBufferMemory(
    return_messages=True,
    memory_key="chat_history"
)

# Create a conversation prompt
conversation_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use the conversation history to provide context-aware responses."),
    ("placeholder", "{chat_history}"),
    ("human", "{input}")
])

# Create a chain with memory
conversation_chain = (
    RunnablePassthrough.assign(
        chat_history=lambda x: memory.load_memory_variables(x)["chat_history"]
    )
    | conversation_prompt
    | llm
)

# Helper function to handle conversation
def chat(message):
    response = conversation_chain.invoke({"input": message})
    memory.save_context({"input": message}, {"output": response.content})
    return response.content

# Have a conversation
print(chat("Hi, my name is Alice"))
print(chat("What's my name?"))
print(chat("Tell me a joke about my name"))

✨ Magic: The conversation chain automatically remembers previous messages, allowing the AI to reference earlier parts of the conversation!
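
Note that recent LangChain releases deprecate ConversationBufferMemory in favor of wiring history in with RunnableWithMessageHistory (or LangGraph persistence). A minimal sketch of the newer style, reusing conversation_prompt and llm from above:

from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}  # session_id -> chat history

def get_history(session_id):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chat_chain = RunnableWithMessageHistory(
    conversation_prompt | llm,
    get_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

reply = chat_chain.invoke(
    {"input": "Hi, my name is Alice"},
    config={"configurable": {"session_id": "demo"}},
)
print(reply.content)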

šŸ“‹ Common LangChain Patterns

šŸ“ Template Chaining

Combine multiple prompts for complex workflows

PromptTemplate → LLM → Output (see the sketch after these patterns)

šŸ’¬ Conversational AI

Build chatbots with memory and context

Memory + ConversationChain

šŸ” Question Answering

Answer questions from documents

Documents → RetrievalQA

šŸ¤– Autonomous Agents

AI that can use tools and make decisions

Tools + Agent + AgentExecutor
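
As a concrete sketch of the first pattern (template chaining) in the modern pipe syntax, reusing llm from earlier; the prompt texts are just illustrations:

from langchain_core.output_parsers import StrOutputParser

outline_prompt = ChatPromptTemplate.from_template("Outline a short post about {topic}")
draft_prompt = ChatPromptTemplate.from_template("Expand this outline into one paragraph:\n{outline}")

pipeline = (
    outline_prompt | llm | StrOutputParser()   # first prompt -> outline text
    | (lambda outline: {"outline": outline})   # re-shape output for the next prompt
    | draft_prompt | llm | StrOutputParser()   # second prompt -> final draft
)

print(pipeline.invoke({"topic": "LangChain"}))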

✨ LangChain Best Practices

Development

  • Use environment variables for API keys
  • Start simple, then add complexity
  • Test prompts thoroughly before production
  • Use set_debug(True) from langchain.globals to trace chain runs (verbose=True works on legacy chain classes)

Architecture

  • Separate prompts from business logic
  • Use appropriate memory types
  • Handle errors gracefully
  • Monitor token usage and costs
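
For the error-handling point, every Runnable has built-in retry support; a minimal sketch reusing prompt and llm from the first application:

# Retry transient failures (e.g. rate limits) up to 3 attempts
robust_chain = (prompt | llm).with_retry(stop_after_attempt=3)

try:
    result = robust_chain.invoke({"topic": "quantum computing"})
    print(result.content)
except Exception as err:
    print(f"LLM call failed after retries: {err}")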

šŸŽ‰ Next Step

Great job! You now understand LangChain basics. Next, we'll explore Prompt Templates and Output Parsers to create more sophisticated AI applications.