Streaming AI chat to production takes one framework and three env vars. Learn the deploy path that actually ships.
Next.js plus the AI SDK plus Vercel gives you a streaming chatbot on the internet in 15 minutes. The trick is getting the streaming route and env vars right the first time.
// app/api/chat/route.ts
import { streamText, convertToModelMessages, type UIMessage } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

// Route segment config: allow this function to stream for up to 30 seconds.
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-opus-4-1"), // any current Anthropic model ID works here
    system: "You are a helpful assistant.",
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}

export const maxDuration = 30 tells Vercel this route may stream for up to 30 seconds; it is a route segment config, not an environment variable. toUIMessageStreamResponse wraps the result in the streaming format the useChat hook expects.
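Before wiring up the UI, you can smoke-test the route directly. Below is a minimal sketch, assuming the dev server is running on http://localhost:3000; the file name smoke-test.ts is hypothetical. It POSTs a single UIMessage and prints the raw stream chunks as they arrive:

// smoke-test.ts (hypothetical; run with: npx tsx smoke-test.ts)
// POSTs one user message to the chat route and prints the raw
// UI-message stream chunks as they arrive.
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [
      { id: "1", role: "user", parts: [{ type: "text", text: "Hi there" }] },
    ],
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
for (;;) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value, { stream: true })); // chunks arrive incrementally
}

If you see chunks printing one at a time rather than a single blob at the end, streaming is working.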
"use client";
import { useChat } from "@ai-sdk/react";
export default function Chat() {
const { messages, sendMessage, status } = useChat();
const [input, setInput] = useState("");
return (
<div>
{messages.map((m) => (
<div key={m.id}><b>{m.role}:</b> {m.parts.map((p) => p.type === "text" ? p.text : "").join("")}</div>
))}
<form onSubmit={(e) => { e.preventDefault(); sendMessage({ text: input }); setInput(""); }}>
<input value={input} onChange={(e) => setInput(e.target.value)} disabled={status !== "ready"} />
</form>
</div>
);
}useChat manages the message list and streaming state. Pair with AI Elements components for a polished UI.# deploy
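status can also drive richer feedback. The hook additionally returns stop and error, so you can offer a cancel button and surface failures instead of failing silently. A sketch of that variant, assuming the same route as above:

// app/page.tsx (sketch): same component, extended with stop() and error.
"use client";
import { useState } from "react";
import { useChat } from "@ai-sdk/react";

export default function Chat() {
  const { messages, sendMessage, status, stop, error } = useChat();
  const [input, setInput] = useState("");
  const busy = status === "submitted" || status === "streaming";

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <b>{m.role}:</b> {m.parts.map((p) => (p.type === "text" ? p.text : "")).join("")}
        </div>
      ))}
      {/* Let users cancel a long generation mid-stream. */}
      {busy && <button type="button" onClick={() => stop()}>Stop</button>}
      {/* Surface transport or provider errors to the user. */}
      {error && <p role="alert">Something went wrong. Try again.</p>}
      <form onSubmit={(e) => { e.preventDefault(); if (input.trim()) { sendMessage({ text: input }); setInput(""); } }}>
        <input value={input} onChange={(e) => setInput(e.target.value)} disabled={status !== "ready"} />
      </form>
    </div>
  );
}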
# deploy
npm i -g vercel
vercel link
vercel env add ANTHROPIC_API_KEY production
vercel --prod

Link the project, add the secret, and deploy. Preview URLs come free on every push.

The big idea: AI SDK on the server, useChat on the client, Vercel in the middle. Three files and three env vars put a real AI app online.
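One deployment gotcha: Next.js inlines any NEXT_PUBLIC_-prefixed variable into the client bundle, so an API key must never carry that prefix. Keep it server-side as plain ANTHROPIC_API_KEY, which the Anthropic provider reads automatically. A sketch of the explicit equivalent, shown only to make the default behavior visible:

// Server-side only. The default `anthropic` provider already reads
// ANTHROPIC_API_KEY from process.env; this explicit form does the same.
import { createAnthropic } from "@ai-sdk/anthropic";

const anthropic = createAnthropic({
  // Correct: no NEXT_PUBLIC_ prefix, so the key never reaches the browser.
  apiKey: process.env.ANTHROPIC_API_KEY,
});

If a key ever ends up in a NEXT_PUBLIC_ variable, treat it as leaked: rotate it, then redeploy.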
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-progx-deploy-vercel-ai-creators
A developer is preparing to deploy their AI chatbot to Vercel. They notice their API key variable starts with NEXT_PUBLIC_. What should they do before deploying?
What does Vercel's Fluid Compute feature primarily improve for AI applications?
In the AI SDK architecture, which function is responsible for handling the server-side streaming logic?
A developer wants to deploy an AI chatbot to Vercel. How many environment variables are typically required to get streaming working?
What is the purpose of the useChat hook in a Next.js AI application?
What does the maxDuration route export control in a Vercel AI deployment?
Which three components work together to create a streaming AI chatbot, according to the deployment path described?
In the big idea described, where does the AI SDK processing primarily occur?
What happens if you deploy an AI app with a NEXT_PUBLIC_ prefixed API key?
What advantage does Fluid Compute provide over classic serverless functions?
What are the 'three files' mentioned in the lesson that are needed to put an AI app online?
Which client-side component from the AI SDK manages the chat interface and streaming display?
What is the primary role of Vercel in the three-way architecture described?
What does it mean for a function to be 'warm' in the context of serverless computing?
When deploying to Vercel, where should the code that calls the AI API (using streamText) reside?