Building AI Applications with Next.js and Vercel AI SDK

Neural Intelligence

5 min read

A developer's guide to creating modern AI-powered web applications using Next.js, Vercel AI SDK, and leading LLM providers.

Building Modern AI Web Apps

The Vercel AI SDK has become the de facto standard for building AI applications in the JavaScript/TypeScript ecosystem. Combined with Next.js, it provides a powerful foundation for streaming AI experiences.

Getting Started

Installation

# Create Next.js app
npx create-next-app@latest my-ai-app
cd my-ai-app

# Install AI SDK
npm install ai @ai-sdk/openai

Basic Chat Example

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}

// app/page.tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

Core Concepts

Streaming Responses

Why streaming matters:

| Approach | Time to First Token | User Experience |
| --- | --- | --- |
| Standard | 2-5 seconds | User waits, then sees the full response |
| Streaming | 100-300 ms | Instant feedback |

Provider Abstraction

The AI SDK supports multiple providers with a unified API:

import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// Switch providers easily
const model = openai('gpt-4o');
// or: anthropic('claude-3-5-sonnet-20241022')
// or: google('gemini-1.5-pro')

const result = streamText({ model, messages });

Supported Providers

| Provider | Package | Models |
| --- | --- | --- |
| OpenAI | @ai-sdk/openai | GPT-4, GPT-4o, o1 |
| Anthropic | @ai-sdk/anthropic | Claude 3, 3.5 |
| Google | @ai-sdk/google | Gemini 1.5, 2 |
| Mistral | @ai-sdk/mistral | Mistral, Mixtral |
| Cohere | @ai-sdk/cohere | Command R |
| Amazon | @ai-sdk/amazon-bedrock | All Bedrock models |

Advanced Features

Tool Use / Function Calling

import { streamText, tool } from 'ai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4o'),
  messages,
  tools: {
    weather: tool({
      description: 'Get weather for a location',
      parameters: z.object({
        location: z.string().describe('City name'),
      }),
      execute: async ({ location }) => {
        // Call weather API
        return { temperature: 72, conditions: 'sunny' };
      },
    }),
  },
});

Structured Output

import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    interests: z.array(z.string()),
  }),
  prompt: 'Generate a fictional person profile',
});
// Returns typed object: { name: string, age: number, interests: string[] }

Image Understanding

import { generateText } from 'ai';

// imageDataUrl: a base64 data URL (or an https URL) for the image
const result = await generateText({
  model: openai('gpt-4o'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image', image: imageDataUrl },
      ],
    },
  ],
});

Building a Production Chatbot

Complete Example

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { rateLimit } from '@/lib/rate-limit';
import { auth } from '@/lib/auth';
import { logUsage } from '@/lib/usage'; // app-specific usage logger

export async function POST(req: Request) {
  // Authentication
  const session = await auth();
  if (!session) {
    return new Response('Unauthorized', { status: 401 });
  }

  // Rate limiting
  const rateLimitResult = await rateLimit(session.user.id);
  if (!rateLimitResult.success) {
    return new Response('Rate limit exceeded', { status: 429 });
  }

  const { messages, systemPrompt } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    system: systemPrompt || 'You are a helpful assistant.',
    messages,
    maxTokens: 2000,
    temperature: 0.7,
    async onFinish({ usage }) {
      // Log token usage for billing
      await logUsage(session.user.id, usage);
    },
  });

  return result.toDataStreamResponse();
}

Client with UI Polish

'use client';

import { useChat } from 'ai/react';
import { useState } from 'react';

export default function ChatInterface() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, error } = 
    useChat({
      api: '/api/chat',
      onError: (error) => console.error(error),
    });

  return (
    <div className="flex flex-col h-screen">
      <div className="flex-1 overflow-y-auto p-4">
        {messages.map((m) => (
          <div 
            key={m.id}
            className={`mb-4 ${m.role === 'user' ? 'text-right' : ''}`}
          >
            <div className={`inline-block p-3 rounded-lg ${
              m.role === 'user' 
                ? 'bg-blue-500 text-white' 
                : 'bg-gray-200'
            }`}>
              {m.content}
            </div>
          </div>
        ))}
        {isLoading && (
          <div className="text-gray-400">AI is thinking...</div>
        )}
        {error && (
          <div className="text-red-500">Error: {error.message}</div>
        )}
      </div>
      
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={handleInputChange}
            className="flex-1 p-2 border rounded"
            placeholder="Type your message..."
          />
          <button 
            type="submit"
            disabled={isLoading}
            className="px-4 py-2 bg-blue-500 text-white rounded"
          >
            Send
          </button>
        </div>
      </form>
    </div>
  );
}

RAG Integration

Vector Search with AI SDK

import { openai } from '@ai-sdk/openai';
import { embed, embedMany } from 'ai';

// Embed documents
const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: documents,
});

// Store in vector DB (Pinecone, Supabase, etc.)

// Query
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: userQuery,
});

// Find similar, then generate
const context = await vectorDb.search(embedding);
const result = streamText({
  model: openai('gpt-4o'),
  system: `Context: ${context.join('\n')}`,
  messages,
});
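
The `vectorDb.search` call above stands in for whatever vector store you use. As a rough sketch of what happens behind that call, here is a minimal in-memory cosine-similarity search; `DocEntry`, `cosineSimilarity`, and `topK` are illustrative names, not AI SDK APIs:

```typescript
// Hypothetical shape for a stored document and its embedding
type DocEntry = { text: string; embedding: number[] };

// Cosine similarity: dot product of the vectors divided by the
// product of their magnitudes (1 = identical direction, 0 = orthogonal)
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the texts of the k documents most similar to the query embedding
function topK(query: number[], docs: DocEntry[], k = 3): string[] {
  return [...docs]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding),
    )
    .slice(0, k)
    .map((d) => d.text);
}
```

A real deployment would delegate this to Pinecone, Supabase pgvector, or similar, since linear scans do not scale past a few thousand documents.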

Best Practices

Error Handling

| Error Type | Handling |
| --- | --- |
| Rate limits | Retry with exponential backoff |
| Timeouts | Set reasonable limits, show status |
| Invalid input | Validate before sending |
| API errors | Degrade gracefully with a fallback message |
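
For the rate-limit row, one possible retry helper looks like this; `withRetry` is a hypothetical utility, not something the AI SDK provides:

```typescript
// Retry an async operation with exponential backoff plus jitter.
// Attempt 1 fails -> wait ~baseDelayMs, attempt 2 fails -> wait ~2x, etc.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // out of attempts: surface the error
      const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrap a `generateText` call in `withRetry(() => generateText({ ... }))` to absorb transient 429s without surfacing them to the user.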

Cost Optimization

| Strategy | Impact |
| --- | --- |
| Use smaller models when possible | 50-90% cost savings |
| Cache common responses | Variable savings |
| Limit max tokens | Predictable costs |
| Rate limit per user | Caps total spend |
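
A sketch of the "cache common responses" strategy, keyed by a hash of the prompt. `cachedGenerate` and its callback parameter are illustrative names, and a production system would use Redis or similar rather than an in-process Map:

```typescript
import { createHash } from 'node:crypto';

// In-memory cache: prompt hash -> generated text
const responseCache = new Map<string, string>();

function cacheKey(prompt: string): string {
  return createHash('sha256').update(prompt).digest('hex');
}

// Return a cached response when the same prompt was seen before;
// otherwise call the (expensive) generator and remember the result
async function cachedGenerate(
  prompt: string,
  generate: (p: string) => Promise<string>,
): Promise<string> {
  const key = cacheKey(prompt);
  const hit = responseCache.get(key);
  if (hit !== undefined) return hit;
  const text = await generate(prompt);
  responseCache.set(key, text);
  return text;
}
```

Exact-match caching only pays off for genuinely repeated prompts (FAQ-style bots, canned suggestions); free-form chat rarely hits the cache.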

Security

  1. Never expose API keys to client
  2. Validate all user input
  3. Implement authentication
  4. Rate limit per user
  5. Log and monitor usage
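
To make point 4 concrete, here is a minimal fixed-window limiter sketching what a helper like the `rateLimit` import in the production example might do. This is an assumption about its shape, not its actual implementation, and an in-memory Map will not survive serverless cold starts; use Redis/Upstash in production:

```typescript
// Per-user window state: requests made and when the window resets
const windows = new Map<string, { count: number; resetAt: number }>();

function rateLimit(userId: string, limit = 20, windowMs = 60_000) {
  const now = Date.now();
  const w = windows.get(userId);
  // No window yet, or the previous window expired: start a fresh one
  if (!w || now >= w.resetAt) {
    windows.set(userId, { count: 1, resetAt: now + windowMs });
    return { success: true, remaining: limit - 1 };
  }
  // Window still open and the user is at the limit: reject
  if (w.count >= limit) return { success: false, remaining: 0 };
  w.count += 1;
  return { success: true, remaining: limit - w.count };
}
```

The route handler shown earlier only checks `result.success`, so any implementation returning that shape slots in directly.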

Deployment

Vercel Deployment

# Deploy to Vercel
vercel

# Environment variables
OPENAI_API_KEY=sk-...

Edge Runtime

// For faster cold starts
export const runtime = 'edge';

"The Vercel AI SDK makes building AI applications feel like building any other web feature. The abstraction over multiple providers gives you flexibility, while streaming support ensures great UX."

Written By

Neural Intelligence

AI Intelligence Analyst at NeuralTimes.
