Prompt Templates & Output Parsers
Learn how to create reusable, dynamic prompts and parse AI responses into structured data. Essential skills for building production-ready AI applications.
Learning Objectives
- Create dynamic, reusable prompt templates with variables
- Use different prompt template types for various scenarios
- Parse unstructured AI outputs into structured data (JSON, lists, etc.)
- Build reliable applications with validated outputs
Prerequisites for This Intermediate Tutorial
This tutorial assumes you have:
- Completed Lesson 5 (LangChain Setup) or have LangChain installed
- Intermediate Python programming skills
- Basic understanding of prompt engineering
- Familiarity with JSON and data structures
What are LangChain Prompt Templates? Understanding the Basics
Prompt templates are reusable prompts with variables that can be filled dynamically. Instead of hardcoding prompts, you create templates that adapt to different inputs.
Why Use Templates?
- Reusability: Write once, use many times with different inputs
- Consistency: Ensure uniform prompt structure across your app
- Maintainability: Update prompts in one place
- Dynamic Content: Easily inject user data, context, or variables
How to Create Basic LangChain Prompt Templates: Python Examples
Simple Variable Substitution
The most basic template replaces variables with actual values. This example shows how templates work: we build a translation prompt that could later be sent to an LLM.
from langchain.prompts import PromptTemplate

# Create a simple template
template = """You are a helpful assistant that translates {input_language} to {output_language}.

Text: {text}

Translation:"""

# Create the prompt template
prompt = PromptTemplate(
    input_variables=["input_language", "output_language", "text"],
    template=template
)

# Use the template
formatted_prompt = prompt.format(
    input_language="English",
    output_language="Spanish",
    text="Hello, how are you?"
)

print(formatted_prompt)
# Output will be:
# You are a helpful assistant that translates English to Spanish.
#
# Text: Hello, how are you?
#
# Translation:
Output: The template replaces {input_language}, {output_language}, and {text} with the actual values you provide. Note: this only creates the prompt text; it doesn't perform any translation yet. To get a translation, you would send this prompt to an LLM.
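Under the hood, this substitution is ordinary Python string formatting. Here is a stdlib-only sketch (no LangChain or LLM required) of what the template's format call produces:

```python
# Conceptual sketch: PromptTemplate-style substitution is
# essentially str.format on named placeholders.
template = (
    "You are a helpful assistant that translates "
    "{input_language} to {output_language}.\n\n"
    "Text: {text}\n\n"
    "Translation:"
)

formatted = template.format(
    input_language="English",
    output_language="Spanish",
    text="Hello, how are you?",
)
print(formatted)
```

This is also why any Python value that formats cleanly into a string can be injected as a template variable.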
Complete Example: Template + LLM
Here's how to use the template with an LLM to actually perform the translation:
from langchain.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Create the template
template = """You are a helpful assistant that translates {input_language} to {output_language}.

Text: {text}

Translation:"""

prompt = PromptTemplate(
    input_variables=["input_language", "output_language", "text"],
    template=template
)

# Initialize the LLM with API key
# Make sure you have GOOGLE_API_KEY in your .env file
model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    google_api_key=os.getenv("GOOGLE_API_KEY")
)

# Create a chain: template -> LLM
chain = prompt | model

# Run the chain (this will actually translate!)
response = chain.invoke({
    "input_language": "English",
    "output_language": "Spanish",
    "text": "Hello, how are you?"
})

print(response.content)
# Output (may vary): "Hola, ¿cómo estás?"
Chat Prompt Templates
For chat models, use ChatPromptTemplate to handle message formatting. Chat models expect messages in a specific format with roles (system, human, assistant), making ChatPromptTemplate essential for proper communication:
from langchain.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Create a chat template with system and human messages
chat_template = ChatPromptTemplate.from_messages([
    ("system", "You are a {role} that helps with {task}."),
    ("human", "{user_input}"),
])

# Format the template
messages = chat_template.format_messages(
    role="programming tutor",
    task="Python coding",
    user_input="How do I read a CSV file?"
)

# Use with your model
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    google_api_key=os.getenv("GOOGLE_API_KEY")  # Set API key explicitly
)

response = llm.invoke(messages)
print(response.content)
Understanding Message Roles:
- "system": Sets the AI's behavior, personality, or constraints, like giving instructions to an assistant
- "human": The user's input or question
- "assistant": The AI's previous responses (used for conversation history)
Benefits of ChatPromptTemplate:
- Automatically formats messages in the correct structure for chat models
- Supports dynamic variables in any message type
- Makes it easy to build conversational applications
- Maintains consistency across different chat model providers
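Conceptually, format_messages just produces a list of role-tagged messages. Here is a stdlib-only sketch of that idea (the helper below is hypothetical, not LangChain's implementation):

```python
# Hypothetical helper: build role/content message pairs the way
# chat models expect, filling template variables in each message.
def format_messages(message_templates, **variables):
    return [
        {"role": role, "content": content.format(**variables)}
        for role, content in message_templates
    ]

messages = format_messages(
    [
        ("system", "You are a {persona} that helps with {task}."),
        ("human", "{user_input}"),
    ],
    persona="programming tutor",
    task="Python coding",
    user_input="How do I read a CSV file?",
)
print(messages)
```

Each provider's chat API ultimately consumes a structure like this, which is why ChatPromptTemplate can stay provider-agnostic.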
Advanced Template Patterns
Partial Templates
Pre-fill some variables while leaving others dynamic. This is useful when some values are constant (like current date, system settings, or default values) while others change with each use:
from langchain.prompts import PromptTemplate
from datetime import datetime

# Template with multiple variables
template = """Date: {date}
User: {user_name}
Task: {task}
Please complete the following: {request}"""

# Create a partial template with date pre-filled
prompt = PromptTemplate(
    input_variables=["user_name", "task", "request"],
    template=template,
    partial_variables={"date": datetime.now().strftime("%Y-%m-%d")}
)

# Now you only need to provide the remaining variables
formatted = prompt.format(
    user_name="Alice",
    task="Code Review",
    request="Review this Python function for best practices"
)

print(formatted)
When to Use Partial Templates:
- Timestamps: Automatically include current date/time in logs or reports
- System Info: Pre-fill environment, version, or configuration details
- User Context: Set user preferences or settings once, reuse many times
- Default Values: Provide sensible defaults that can be overridden
Pro Tip: Partial templates are perfect for creating specialized versions of general templates. For example, create a general email template, then use partials to create specific versions for support, sales, or notifications.
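Partial variables amount to merging pre-filled values with per-call values before formatting. A stdlib-only sketch of the idea (the helper below is hypothetical, not the LangChain API):

```python
from datetime import datetime

# Hypothetical helper: pre-fill some template variables now,
# supply the rest at call time (what partial_variables does).
def make_partial(template, **pre_filled):
    def fill(**variables):
        return template.format(**{**pre_filled, **variables})
    return fill

template = "Date: {date}\nUser: {user_name}\nTask: {task}"
daily_report = make_partial(template, date=datetime.now().strftime("%Y-%m-%d"))

print(daily_report(user_name="Alice", task="Code Review"))
```

Per-call values are merged after the pre-filled ones, so a caller can also override a partial if needed.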
Template Composition
Combine multiple templates for complex prompts. This pattern allows you to build modular, reusable prompt components that can be mixed and matched for different use cases:
from langchain.prompts import PromptTemplate

# Define reusable template components
persona_template = "You are a {expertise} expert with {years} years of experience."
task_template = "Your task is to {action} the following {content_type}:"
format_template = "Please format your response as {format}."

# Combine templates
full_template = f"""
{persona_template}
{task_template}
{format_template}

Content: {{content}}
"""

# Create the prompt
prompt = PromptTemplate(
    input_variables=["expertise", "years", "action", "content_type", "format", "content"],
    template=full_template
)

# Use the composed template
result = prompt.format(
    expertise="Python",
    years="10",
    action="optimize",
    content_type="code",
    format="a list of improvements with explanations",
    content="def calculate_sum(numbers): total = 0; for n in numbers: total = total + n; return total"
)

print(result)
Why Use Template Composition?
- Modularity: Create reusable building blocks for different prompt parts
- Consistency: Ensure uniform structure across all prompts in your application
- Flexibility: Mix and match components for different scenarios
- Maintainability: Update one component to affect all prompts using it
Real-World Example:
Imagine building a customer support system where you need different combinations:
- Technical Support: persona_template + technical_task + detailed_format
- Sales Inquiry: persona_template + sales_task + friendly_format
- Complaint Handling: persona_template + empathy_task + solution_format
Same persona component, different task and format components!
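That mix-and-match pattern can be sketched with a plain dict of components (the component names below are illustrative, not from a library):

```python
# Illustrative component library for a support system; compose()
# joins named pieces into one prompt template string.
components = {
    "persona": "You are a {expertise} support agent.",
    "technical_task": "Diagnose the following issue step by step:",
    "sales_task": "Answer the following inquiry persuasively:",
    "detailed_format": "Respond with numbered steps.",
    "friendly_format": "Respond in a warm, conversational tone.",
}

def compose(*names):
    return "\n".join(components[n] for n in names) + "\nContent: {content}"

technical_prompt = compose("persona", "technical_task", "detailed_format")
print(technical_prompt.format(expertise="Python", content="App crashes on start"))
```

Swapping one component name changes the whole prompt's behavior while the rest stays identical.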
LangChain Output Parsers Tutorial: Step-by-Step Guide to Structured Data
Why Output Parsers?
LLMs return unstructured text, but applications need structured data. Output parsers convert free-form text into usable formats like:
- Structured objects (e.g. JSON)
- Enumerated items (lists)
- Yes/No decisions (booleans)
- Any custom format
List Output Parser
Parse comma-separated or numbered lists. This parser automatically instructs the LLM to return results in a comma-separated format and then splits the response into a Python list:
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize parser
list_parser = CommaSeparatedListOutputParser()

# Create prompt with format instructions
prompt = PromptTemplate(
    template="List 5 {category}.\n{format_instructions}",
    input_variables=["category"],
    partial_variables={"format_instructions": list_parser.get_format_instructions()}
)

# Create chain
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    google_api_key=os.getenv("GOOGLE_API_KEY")
)
chain = prompt | llm | list_parser

# Get parsed list
result = chain.invoke({"category": "Python web frameworks"})
print(result)  # e.g. ['Django', 'Flask', 'FastAPI', 'Pyramid', 'Tornado']
How It Works:
- The parser adds format instructions to your prompt automatically
- The LLM responds with items separated by commas
- The parser splits the response and cleans up whitespace
- Returns a clean Python list ready to use
Perfect For:
- Generating lists of ideas or suggestions
- Extracting multiple items from text
- Creating categories or tags
- Any task that needs multiple simple outputs
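The parsing step itself is simple string handling: split the raw completion on commas and trim whitespace. A stdlib-only sketch of roughly what the parser does (not its exact implementation):

```python
# Sketch of comma-separated list parsing: split and strip,
# dropping any empty fragments.
def parse_comma_list(text):
    return [item.strip() for item in text.split(",") if item.strip()]

raw = "Django, Flask, FastAPI, Pyramid, Tornado"
print(parse_comma_list(raw))
# ['Django', 'Flask', 'FastAPI', 'Pyramid', 'Tornado']
```

Because the parsing is this simple, the format instructions matter: they are what coaxes the LLM into emitting a clean comma-separated line in the first place.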
Structured Output Parser
Parse complex structured data with validation. This parser uses ResponseSchema objects to define the expected structure and automatically instructs the LLM to return data in JSON format:
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Define the expected structure
response_schemas = [
    ResponseSchema(
        name="name",
        description="The name of the product"
    ),
    ResponseSchema(
        name="price",
        description="The price in USD as a number"
    ),
    ResponseSchema(
        name="features",
        description="A list of key features"
    ),
    ResponseSchema(
        name="in_stock",
        description="Whether the item is in stock (true/false)"
    )
]

# Create parser
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)

# Create prompt
prompt = PromptTemplate(
    template="""Extract product information from the following text.

{format_instructions}

Text: {product_description}""",
    input_variables=["product_description"],
    partial_variables={"format_instructions": output_parser.get_format_instructions()}
)

# Use the chain
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    google_api_key=os.getenv("GOOGLE_API_KEY")
)
chain = prompt | llm | output_parser

# Example usage
result = chain.invoke({
    "product_description": "The new iPhone 15 Pro costs $999 and features a titanium design, A17 Pro chip, and improved camera system. Currently available in stores."
})

print(result)
# Example output (may vary):
# {'name': 'iPhone 15 Pro', 'price': 999, 'features': ['titanium design', 'A17 Pro chip', 'improved camera system'], 'in_stock': True}
Key Features:
- Schema Definition: Define each field with a name and description
- Type Flexibility: Supports strings, numbers, lists, and booleans
- Automatic Instructions: The parser tells the LLM exactly how to format the response
- JSON Output: Returns data as a Python dictionary
Use Cases:
- Data Extraction: Pull structured information from unstructured text
- Form Processing: Convert natural language into form fields
- API Responses: Generate structured data for APIs
- Database Records: Create records ready for database insertion
Pro Tip: The description field in ResponseSchema is crucial - it tells the LLM exactly what data to extract. Be specific and include examples when helpful!
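Under the hood, the format instructions ask the LLM for a fenced JSON snippet, which the parser then extracts and loads. A stdlib-only sketch of that extraction step (an approximation, not LangChain's exact code):

```python
import json
import re

# Sketch: pull a ```json ... ``` block out of a completion and
# load it into a dict, roughly what StructuredOutputParser does.
def parse_json_block(text):
    match = re.search(r"```json\s*(\{.*?\})\s*```", text, re.DOTALL)
    return json.loads(match.group(1) if match else text)

raw = """```json
{"name": "iPhone 15 Pro", "price": 999, "in_stock": true}
```"""
print(parse_json_block(raw))
```

This is also why parsing can fail: if the model emits malformed JSON, json.loads raises, so production code should catch that error and retry or fall back.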
Advanced: Pydantic Output Parser
For production applications, use Pydantic for type safety and validation. Pydantic provides the most robust parsing with automatic type conversion, validation, and clear error messages.
Why Pydantic Output Parser?
- Type Safety: Automatic type checking and conversion
- Validation: Built-in field validation with custom rules
- Error Handling: Clear error messages when parsing fails
- IDE Support: Full autocomplete and type hints
- Production Ready: Battle-tested in enterprise applications
Key Components:
- BaseModel: Define your data structure with types
- Field(): Add descriptions and validation rules
- PydanticOutputParser: Converts LLM output to typed objects
- format_instructions: Auto-generated prompt instructions
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from pydantic import BaseModel, Field
from typing import List
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Define your data model
class Recipe(BaseModel):
    name: str = Field(description="Name of the recipe")
    prep_time: int = Field(description="Preparation time in minutes")
    servings: int = Field(description="Number of servings")
    ingredients: List[str] = Field(description="List of ingredients")
    difficulty: str = Field(description="Difficulty level: easy, medium, or hard")

# Create parser
parser = PydanticOutputParser(pydantic_object=Recipe)

# Create prompt
prompt = PromptTemplate(
    template="""Extract recipe information from the following text.

{format_instructions}

Text: {recipe_text}""",
    input_variables=["recipe_text"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

# Use the chain
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0,
    google_api_key=os.getenv("GOOGLE_API_KEY")
)
chain = prompt | llm | parser

# Parse recipe
result = chain.invoke({
    "recipe_text": "This easy pasta carbonara takes just 20 minutes to make and serves 4 people. You'll need spaghetti, eggs, bacon, parmesan cheese, and black pepper."
})

print(f"Recipe: {result.name}")
print(f"Time: {result.prep_time} minutes")
print(f"Servings: {result.servings}")
print(f"Ingredients: {', '.join(result.ingredients)}")
print(f"Difficulty: {result.difficulty}")
Advanced Features:
- Custom Validators: Add your own validation logic with @validator (in Pydantic v2, @field_validator)
- Default Values: Set defaults for optional fields
- Nested Models: Support complex, nested data structures
- Enums: Restrict fields to specific values
Example with Validation:
from pydantic import BaseModel, Field, validator
from typing import List

# Pydantic v1 style shown here; in Pydantic v2 use @field_validator
# and min_length instead of @validator and min_items.
class Recipe(BaseModel):
    name: str = Field(description="Name of the recipe")
    prep_time: int = Field(description="Prep time in minutes", gt=0, le=480)
    ingredients: List[str] = Field(description="List of ingredients", min_items=1)

    @validator('name')
    def name_must_not_be_empty(cls, v):
        if not v.strip():
            raise ValueError('Recipe name cannot be empty')
        return v.title()  # Auto-capitalize
Best Practices
Prompt Templates
- Keep templates focused on one task
- Use descriptive variable names
- Include examples in your prompts
- Test with edge cases
Output Parsers
- Always include format instructions
- Handle parsing errors gracefully
- Use Pydantic for complex structures
- Validate outputs before using
Common Patterns
Data Extraction
Template + StructuredOutputParser → Extract structured data from unstructured text
Classification
Template with options + EnumOutputParser → Categorize inputs
Multi-step Processing
Chain multiple templates and parsers → Complex workflows
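The prompt | llm | parser chains used throughout this lesson are left-to-right function composition. A stdlib-only sketch of the idea (the Stage class is hypothetical, not LangChain's Runnable API; the fake LLM returns a canned string):

```python
# Hypothetical pipeline: each Stage transforms its input, and |
# chains stages left to right, mirroring prompt | llm | parser.
class Stage:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        return Stage(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Stage(lambda d: "List 3 {category}.".format(**d))
fake_llm = Stage(lambda p: "Django, Flask, FastAPI")  # stand-in for a real model
parser = Stage(lambda text: [s.strip() for s in text.split(",")])

chain = prompt | fake_llm | parser
print(chain.invoke({"category": "Python web frameworks"}))
# ['Django', 'Flask', 'FastAPI']
```

Multi-step workflows extend this by chaining more stages, each consuming the previous stage's output.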
Next Step: Building Complex Workflows
Excellent work! You've mastered LangChain prompt templates and output parsers, essential tools for creating dynamic, structured AI applications. These skills form the foundation for more complex LangChain patterns.
Ready to level up? In the next lesson, you'll learn how to build complex workflows with LangChain Chains that combine multiple steps and create powerful AI pipelines.