Phase 1 · Beginner · ⏱ 20 minutes

AI API Fundamentals

Get hands-on with AI APIs using Google Gemini - a crucial foundation before diving into LangChain. This step-by-step guide for beginners covers environment setup, authentication, your first API call, and essential concepts like tokens, models, and parameters.

🎯 Learning Objectives

  • Set up your development environment and obtain a Google AI API key (free)
  • Make your first API call using Python
  • Understand key parameters: temperature, max_output_tokens, top_p, and more
  • Learn about tokens, pricing, and rate limits

🔗 Why Learn AI APIs Before LangChain?

LangChain is a powerful framework that simplifies AI development, but understanding the underlying AI APIs is crucial. This knowledge helps you debug issues, optimize performance, and make informed decisions when building LangChain applications. Think of it as learning to drive manual before automatic!

🚀 Getting Started: Step-by-Step Setup for Beginners

About Programming Languages

While AI APIs and frameworks like LangChain support multiple programming languages (Python, JavaScript, Go, etc.), this course uses Python exclusively. Python is the most popular language for AI/ML development due to its simplicity and extensive ecosystem of data science libraries.

Step 1: Create a Google AI Studio Account

  1. Go to Google AI Studio
  2. Sign in with your Google account
  3. Click "Create API Key"
  4. Choose "Create API key in new project" or select existing project
  5. Copy and save your API key securely (you can view it again later)

Free to Start!

Google AI Studio provides free API access with generous quotas - perfect for learning and experimentation. No credit card required!

Security Best Practices

  • Never commit API keys to version control
  • Use environment variables (.env files); see the sketch below
  • Set up usage alerts in your Google AI Studio account
  • Rotate keys regularly
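
For example, a minimal fail-fast sketch (the script name is illustrative; the variable name matches the .env file you will create in Step 2 below):

# check_key.py - refuse to run if the key is missing, and never hard-code it in source
import os
from dotenv import load_dotenv

load_dotenv()  # reads GOOGLE_API_KEY from a local .env file

if not os.getenv("GOOGLE_API_KEY"):
    raise RuntimeError("GOOGLE_API_KEY is not set - add it to your .env file")

Remember to add .env to your .gitignore so the key never reaches version control.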

Step 2: Set Up Your Development Environment

💻 Your First API Call

Setup Instructions

1. Install the Google Generative AI library:

pip install google-generativeai python-dotenv

2. Create a .env file:

GOOGLE_API_KEY=AIza...your-key-here...

3. Your first Python script:

import os
import google.generativeai as genai
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Configure the API key
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

# Initialize the model
model = genai.GenerativeModel('gemini-2.0-flash')

# Make your first API call
response = model.generate_content(
    "Hello! What can you do?",
    generation_config=genai.GenerationConfig(
        temperature=0.7,
    )
)

# Print the response
print(response.text)

🎛️ Understanding API Parameters

💡 Important Note:

Parameter names and available options vary between AI providers. This lesson focuses on the Google Gemini API. OpenAI uses max_tokens, while Gemini uses max_output_tokens. Always check the specific API documentation for your chosen provider.

🌡️ Temperature (0-2)

Controls randomness in responses. Lower values make output more focused and deterministic; see the sketch after the list.

  • 0-0.3: Factual, consistent responses
  • 0.4-0.7: Balanced creativity
  • 0.8-1.0: Creative, varied responses
  • Above 1.0: Increasingly random output; rarely needed for most tasks
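
To see the effect yourself, here is a quick sketch (reusing the model and genai objects from the setup script above) that runs the same prompt at a low and a high temperature:

# Compare output variety at two temperature settings
prompt = "Suggest a name for a coffee shop."

for temp in (0.1, 0.9):
    print(f"--- temperature={temp} ---")
    for _ in range(2):
        response = model.generate_content(
            prompt,
            generation_config=genai.GenerationConfig(temperature=temp),
        )
        print(response.text.strip())

At 0.1 the two runs will usually be near-identical; at 0.9 they typically differ noticeably.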

📏 Max Output Tokens

Maximum length of the response. One token ≈ 4 characters or 0.75 words.

  • Short answer: 50-150 tokens
  • Paragraph: 150-500 tokens
  • Essay: 1000+ tokens

🎲 Top P (0-1)

Nucleus sampling: the model samples only from the smallest set of tokens whose cumulative probability reaches top_p.

  • 0.1: Very focused, predictable
  • 0.5: Moderately diverse
  • 1.0: Consider all options

🔄 Frequency Penalty (-2 to 2)

Reduces repetition by penalizing tokens based on how often they have already appeared in the output; see the sketch below the list.

  • Negative: Allow more repetition
  • 0: Default behavior
  • Positive: Reduce repetition
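
A minimal sketch of applying a frequency penalty (note: frequency_penalty support in GenerationConfig is an assumption here and varies by Gemini model and SDK version; check the current API documentation):

# Positive values discourage tokens that have already appeared often
response = model.generate_content(
    "List ten synonyms for 'happy'.",
    generation_config=genai.GenerationConfig(
        temperature=0.7,
        frequency_penalty=0.5,  # assumed parameter name; supported only on newer Gemini models
    ),
)
print(response.text)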

Example: Different Parameter Settings

# Creative writing
response = model.generate_content(
    "Write a story about...",
    generation_config=genai.GenerationConfig(
        temperature=0.9,
        max_output_tokens=500,
        top_p=0.9
    )
)

# Factual Q&A
response = model.generate_content(
    "What is the capital of...",
    generation_config=genai.GenerationConfig(
        temperature=0.1,
        max_output_tokens=50,
        top_p=0.1
    )
)

# Code generation
response = model.generate_content(
    "Write a Python function...",
    generation_config=genai.GenerationConfig(
        temperature=0.3,
        max_output_tokens=300,
    )
)

💰 Understanding Tokens and Pricing

What are Tokens?

Tokens are pieces of words that the model processes. As a rough estimate:

  • 1 token ≈ 4 characters in English
  • 1 token ≈ ¾ of a word
  • 100 tokens ≈ 75 words

💡 Token Examples

• "Hello" = 1 token

• "Hello, world!" = 4 tokens

• "artificial intelligence" = 2 tokens

• "今日は" = 3 tokens (non-English uses more)

Cost Optimization Tips

  • Use shorter prompts when possible
  • Set appropriate max_output_tokens limits
  • Use appropriate models for your task complexity
  • Cache responses when appropriate (see the sketch below)
  • Monitor usage through the Google AI Studio dashboard
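
One way to act on the caching tip: a minimal in-memory cache keyed on the prompt (an illustrative sketch only; a real application might persist responses to disk or use a proper caching layer):

# Repeated identical prompts are served from the cache instead of triggering new API calls
_response_cache = {}

def cached_generate(prompt: str) -> str:
    if prompt not in _response_cache:
        _response_cache[prompt] = model.generate_content(prompt).text
    return _response_cache[prompt]

print(cached_generate("What is a token?"))  # calls the API
print(cached_generate("What is a token?"))  # served from the cache, no extra cost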

⚠️ Common Errors and Solutions

401 Unauthorized

Cause: Invalid or missing API key
Solution: Check your API key and environment variables

429 Rate Limit Exceeded

Cause: Too many requests in a short time
Solution: Implement exponential backoff or upgrade your plan
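
Here is a minimal retry sketch with exponential backoff. It assumes the SDK surfaces 429s as google.api_core.exceptions.ResourceExhausted; adjust the exception type to whatever your SDK version actually raises:

import time
from google.api_core import exceptions  # installed as a dependency of google-generativeai

def generate_with_backoff(prompt: str, max_retries: int = 5) -> str:
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return model.generate_content(prompt).text
        except exceptions.ResourceExhausted:  # HTTP 429: rate limit exceeded
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(delay)
            delay *= 2  # wait 1s, 2s, 4s, 8s, ...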

400 Bad Request

Cause: Invalid parameters or model name
Solution: Check model name and parameter values
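
To surface these errors clearly in your own code, you can catch the exceptions the SDK raises. The specific classes below (from google.api_core) are an assumption; check what your SDK version actually raises:

from google.api_core import exceptions

try:
    response = model.generate_content("Hello!")
    print(response.text)
except exceptions.PermissionDenied as err:   # API key rejected or lacks permissions
    print(f"Check your API key: {err}")
except exceptions.InvalidArgument as err:    # bad parameters or model name
    print(f"Check your request: {err}")
except exceptions.ResourceExhausted as err:  # rate limit exceeded
    print(f"Slow down or add backoff: {err}")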

🎉 Next Step: Building Towards LangChain

Great job! You've mastered the basics of AI APIs - a crucial foundation for LangChain development. Understanding how to work directly with AI APIs will help you appreciate the power and convenience that LangChain brings to AI application development.

Ready to get better results from AI? In the next lesson, you'll learn Prompt Engineering techniques to dramatically improve the quality and consistency of AI responses - another essential skill for LangChain development.