MCP Tools vs Resources vs Prompts Explained


When to Use Each MCP Primitive and How They Work in Production Systems

Exo Tech
March 25, 2026 · 6 min read

TL;DR

  • What you will learn: The three MCP primitives (tools, resources, prompts), what each does, and a decision framework for choosing the right one for each server capability.

  • Why it matters: Using the wrong primitive creates security gaps, performance issues, and unpredictable AI behavior. Getting it right is the difference between a demo and a production server.

  • Exo Edge: We build production MCP servers with clean primitive separation. This framework comes from real implementation experience across blockchain, DevOps, and internal tooling.


Three Primitives, One Protocol, Zero Confusion

Model Context Protocol defines three primitives that MCP servers expose to AI clients: tools, resources, and prompts. If you are building an MCP server, the first design decision is which primitives to use for each capability. Choosing wrong leads to awkward integrations where the AI calls tools when it should read resources, or users have to manually trigger actions that should be automatic.

This post explains what MCP tools are used for, how they differ from resources and prompts, and provides a decision framework for choosing the right primitive for each capability in your server.


Tools: Model-Controlled Actions

MCP tools are functions that the AI model can invoke autonomously during a conversation. When a user asks "deploy the latest build to staging," the model evaluates available tools, selects the appropriate one, constructs the arguments, and calls it. The key characteristic: the model decides when to use a tool.

What MCP tools are used for in practice:

  • Actions with side effects. Creating records, sending messages, triggering deployments, executing transactions. Anything that changes state in an external system.

  • Computations that require external processing. Running database queries, calling APIs, performing calculations that the model cannot do natively.

  • Dynamic data retrieval where the query depends on conversation context. Searching a knowledge base based on the user's question, fetching user-specific data based on parameters the model extracts from the conversation.

Each tool requires three elements: a name (unique identifier), a description (natural language explanation the model reads to decide relevance), and an inputSchema (JSON Schema defining the expected arguments). The description is the most important element. A well-written description means the model calls your tool at the right time with the right arguments. A vague description means missed invocations or incorrect usage.

Example: A deployment tool might have the description "Deploys a specific git commit to the specified environment (staging or production). Requires a commit SHA and environment name. Returns the deployment URL and status." This tells the model exactly when to use it, what to pass, and what to expect back.
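The three elements can be sketched as a plain dictionary. This is a minimal, hypothetical definition of the deployment tool described above: the field names (`name`, `description`, `inputSchema`) follow the MCP tool shape, but the `deploy` tool itself and its parameter names are illustrative, not a real server's API.

```python
# Hypothetical MCP tool definition for the deployment example.
# The inputSchema is standard JSON Schema; the model reads the
# description to decide when to invoke the tool.
deploy_tool = {
    "name": "deploy",
    "description": (
        "Deploys a specific git commit to the specified environment "
        "(staging or production). Requires a commit SHA and environment "
        "name. Returns the deployment URL and status."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "commit_sha": {
                "type": "string",
                "description": "Git commit SHA to deploy",
            },
            "environment": {
                "type": "string",
                "enum": ["staging", "production"],
                "description": "Target environment",
            },
        },
        "required": ["commit_sha", "environment"],
    },
}
```

Note how the `enum` constraint does double duty: it validates arguments and tells the model the only two legal values, so it never has to guess environment names.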


Resources: Application-Controlled Data

Resources are read-only data that the host application provides to the AI model as context. Unlike tools, the model does not decide when to fetch a resource. The host application (or the user through the host's UI) decides which resources to include in the conversation.

Resources use a URI-based addressing scheme. Each resource has a unique URI (like db://users/schema) that identifies it. The client reads a resource by sending a resources/read request with the URI, and the server returns the content with its MIME type.
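The read exchange is a plain JSON-RPC 2.0 round trip. Here is a minimal sketch of the request and response shapes described above; the `db://users/schema` URI and the schema content are illustrative, while the envelope fields follow JSON-RPC as MCP uses it.

```python
import json

# Client -> server: ask for the resource by URI.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "db://users/schema"},
}

# Server -> client: content plus its MIME type, keyed to the same id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "contents": [
            {
                "uri": "db://users/schema",
                "mimeType": "application/json",
                "text": json.dumps(
                    {"table": "users", "columns": ["id", "email", "created_at"]}
                ),
            }
        ]
    },
}
```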

When to use resources instead of tools:

  • Static or semi-static context. Configuration files, database schemas, API documentation, project structure. Data that provides background knowledge rather than answering a specific question.

  • Data that should always be available. If the model needs certain context for every interaction (like the current user's permissions or the project's coding standards), expose it as a resource that the host loads at session start.

  • Read-only access to sensitive systems. Resources guarantee no side effects. If you want the model to see database contents but never modify them, resources enforce that boundary at the protocol level.

Resources come in two forms. Static resources have fixed URIs registered at server startup. Dynamic resources use URI templates (like logs://{date}/{service}) that the client fills in. Dynamic resources let a single server expose a large or changing dataset without registering every possible URI upfront.
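The template mechanism can be illustrated with a few lines of matching logic. This is only a sketch of the idea behind templates like logs://{date}/{service}: it converts `{name}` placeholders into named regex groups and assumes the rest of the template is literal text. A real server would use its SDK's template handling rather than hand-rolled regexes.

```python
import re

def match_template(template, uri):
    """Extract {name} parameters from a URI, or return None on mismatch.

    Illustrative only: assumes the template contains no regex
    metacharacters apart from the placeholders themselves.
    """
    # "{date}" becomes a named group matching one path segment.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(pattern, uri)
    return m.groupdict() if m else None
```

A single registered template can then serve an unbounded set of URIs: `match_template("logs://{date}/{service}", "logs://2026-03-25/api")` yields the `date` and `service` parameters the server needs to fetch the right log slice.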

Resources also support subscriptions. If the client subscribes to a resource, the server sends notifications when the resource content changes. This is useful for live data: monitoring dashboards, log streams, or real-time metrics where the model should always see the latest version.


Prompts: User-Controlled Interaction Templates

Prompts are the least understood MCP primitive, but they solve a real problem: encoding domain expertise into reusable interaction patterns. A prompt is a template that expands into one or more messages for the AI model to process.

Unlike tools (model-controlled) and resources (application-controlled), prompts are user-controlled. The user explicitly selects a prompt from a menu or triggers it through the host application's UI. The host then fetches the prompt from the server, fills in any arguments, and includes the resulting messages in the conversation.

When to use prompts:

  • Complex workflows that need specific instructions. A code review prompt might include instructions for what to check, the format for findings, and the severity levels to use.

  • Multi-step analysis patterns. A security audit prompt might expand into a system message with audit criteria, a user message with the target code, and instructions for structuring the output.

  • Consistent formatting requirements. If you need the AI to always produce output in a specific format (JSON, markdown tables, structured reports), a prompt can encode those requirements.

Prompts can include embedded resource references. When the prompt expands, the host resolves these references and includes the resource content in the messages. This means a single prompt can combine instructions with live data from the server.
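A prompt expansion can be sketched as a function that returns the message list a prompts/get response would carry. The "code_review" prompt below is hypothetical; the message shapes (`role`, `content`, `type: "resource"`) follow the MCP prompt result structure, and the second message shows an embedded resource that the host resolves into the conversation alongside the instructions.

```python
def render_code_review_prompt(file_uri, file_text):
    """Hypothetical server-side expansion of a 'code_review' prompt."""
    return {
        "description": "Review the given file for correctness and style",
        "messages": [
            {
                # Instructions encoding the reusable interaction pattern:
                # what to check and how to format findings.
                "role": "user",
                "content": {
                    "type": "text",
                    "text": (
                        "Review this code. Report findings as a markdown "
                        "table with columns: line, severity, issue."
                    ),
                },
            },
            {
                # Embedded resource reference: the target code travels
                # with the prompt, so instructions and live data arrive
                # together.
                "role": "user",
                "content": {
                    "type": "resource",
                    "resource": {
                        "uri": file_uri,
                        "mimeType": "text/x-python",
                        "text": file_text,
                    },
                },
            },
        ],
    }
```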


The Decision Framework: Tools, Resources, or Prompts?

Use this framework when designing your MCP server. For each capability you want to expose, ask these three questions:

Question 1: Does it have side effects? If yes, it must be a tool. Resources are read-only by definition. If the capability creates, modifies, or deletes anything in an external system, it is a tool.

Question 2: Who should control when it is invoked? If the AI model should decide based on the conversation, it is a tool. If the application should provide it as background context, it is a resource. If the user should explicitly choose it, it is a prompt.

Question 3: Is the data static or dynamic? Static or slowly-changing data (configuration, schemas, documentation) fits naturally as a resource. Data that depends on the conversation (query results, search results) fits better as a tool return value.
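The three questions above can be compressed into a tiny helper. This is purely illustrative (real designs involve judgment calls), but the precedence matches the framework: side effects decide first, then the controller, then the data's shape.

```python
def choose_primitive(has_side_effects, controller, data_is_static):
    """Apply the three-question framework.

    controller: "model", "application", or "user".
    """
    if has_side_effects:
        return "tool"      # Q1: any state change forces a tool
    if controller == "model":
        return "tool"      # Q2: model-invoked capabilities are tools
    if controller == "user":
        return "prompt"    # Q2: user-selected workflows are prompts
    # Q3: application-provided context -- static data fits a resource;
    # conversation-dependent data still belongs behind a tool.
    return "resource" if data_is_static else "tool"
```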


Common Misuse Patterns

The most common mistake is exposing everything as tools. Teams build "get_config" tools, "read_documentation" tools, and "fetch_schema" tools that have no side effects and return static data. These should be resources. Making them tools means the model has to decide to call them, adding latency and consuming tool-calling budget.

The second most common mistake is skipping prompts entirely. Teams embed complex instructions in tool descriptions or system prompts instead of defining reusable prompt templates. This leads to inconsistent behavior across different hosts and makes it hard to update interaction patterns without redeploying the server.

A third pattern to avoid: using resources for data that requires parameters the model should determine. If the data retrieval depends on the user's question (like searching a knowledge base), it needs to be a tool, not a resource.


Real-World Example: A Database MCP Server

Consider building an MCP server for a PostgreSQL database. Here is how you would split capabilities across the three primitives:

Resources: The database schema (tables, columns, types, relationships). This is semi-static data that the model needs as context for every query. Expose it as a resource like db://schema that the host loads at session start. Also expose connection status and database version as resources.

Tools: Query execution (run a SELECT statement and return results), record creation (INSERT), record updates (UPDATE), and record deletion (DELETE). These have side effects (or return conversation-dependent data) and the model needs to decide when and how to call them based on the user's request.

Prompts: A "data analysis" prompt that includes the schema resource, instructions for how to structure analytical queries, and formatting requirements for the output. A "migration review" prompt that includes the current schema and instructions for evaluating proposed changes.

This separation gives you clear security boundaries (resources are read-only, tools require explicit model invocation), clean architecture (each primitive has a defined role), and better model performance (the model sees the schema as context rather than having to fetch it as a tool call).
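The whole split fits in one registry. The names and URIs below are illustrative, and a real server would register each capability through an MCP SDK rather than a plain dictionary, but the shape makes the boundaries concrete: everything under "resources" is read-only by construction, and every state change lives under "tools".

```python
# Illustrative capability map for the PostgreSQL server described above.
pg_server_capabilities = {
    "resources": [  # read-only context, loaded by the host
        {"uri": "db://schema", "mimeType": "application/json"},
        {"uri": "db://status", "mimeType": "application/json"},
    ],
    "tools": [  # model-invoked actions and queries
        {"name": "run_query", "description": "Execute a SELECT and return rows"},
        {"name": "insert_record", "description": "INSERT a row into a table"},
        {"name": "update_record", "description": "UPDATE rows matching a filter"},
        {"name": "delete_record", "description": "DELETE rows matching a filter"},
    ],
    "prompts": [  # user-selected workflows
        {"name": "data_analysis",
         "description": "Structure analytical queries over db://schema"},
        {"name": "migration_review",
         "description": "Evaluate a proposed schema change"},
    ],
}
```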


Conclusion + Action

The three MCP primitives exist for a reason. Tools handle actions and dynamic data retrieval. Resources provide static context. Prompts encode reusable interaction patterns. Using the right primitive for each capability means better security, predictable AI behavior, and servers that scale.

Start by auditing your current MCP server (or your planned one) against the decision framework. For each capability, ask: does it have side effects, who controls invocation, and is the data static or dynamic? If the answers point to a different primitive than what you have, refactor now before the integration hardens.


Exo designs and builds production MCP servers with clean primitive separation for technical teams. Whether you need tools for blockchain state, resources for protocol data, or prompts for complex analysis workflows, we architect MCP integrations that scale. Ready to build? Reach out at founders@exotechnologies.xyz