Suppose you want a chatbot that works with PDFs: extract text, search across documents, summarize sections. You can build it two ways: by calling an LLM API directly and wiring tools yourself, or by exposing those tools through the Model Context Protocol (MCP). Same user experience, different architecture. This article uses a PDF example to walk through both routes and explain what MCP adds.
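Whichever route you pick, the tool surface is the same. A minimal sketch of those three tools, assuming page text has already been extracted into an in-memory dict (a real chatbot would pull it from the PDF with a library such as pypdf; all function names here are hypothetical):

```python
def extract_text(pages: dict[int, str], page: int) -> str:
    """Return the text of one page (pre-extracted into a dict for this sketch)."""
    return pages.get(page, "")

def search(pages: dict[int, str], query: str) -> list[int]:
    """Return page numbers whose text contains the query, case-insensitively."""
    q = query.lower()
    return [n for n, text in pages.items() if q in text.lower()]

def summarize(pages: dict[int, str], page: int, max_chars: int = 200) -> str:
    """Crude stand-in for an LLM-written summary: truncate the page text."""
    return extract_text(pages, page)[:max_chars]
```

For example, `search({1: "MCP overview", 2: "PDF parsing"}, "mcp")` returns `[1]`. The direct-API route registers these as function-calling tools; the MCP route serves them from an MCP server. The functions themselves don't change.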
The Goal
User asks in natural language → chatbot reads/searches PDFs → returns an answer.