In recent months, the Model Context Protocol (MCP) has gained a lot of traction as a powerful foundation for building AI assistants. While many developers are familiar with its core request-response flow, there's one feature that I believe remains underappreciated: the ability of MCP servers to send notifications to clients.
Let’s quickly recap the typical flow used by most MCP-based assistants:
- A user sends a prompt to the assistant.
 
- The assistant attaches a list of available tools and forwards the prompt to the LLM.
 
- The LLM generates a response, possibly requesting the use of certain tools for additional context.
 
- The assistant invokes those tools and gathers their responses.
 
- These tool responses are sent back to the LLM.
 
- The LLM returns a final answer, which the assistant presents to the user.
 
This user-initiated flow is incredibly effective—and it’s what powers many AI assistants today.
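To ground that flow in code, here is a minimal client-side sketch using the official Python MCP SDK. The `call_llm()` helper is hypothetical, standing in for whichever LLM API the assistant uses, and the server command plus the shape of the reply object (`tool_calls`, `text`) are illustrative assumptions rather than anything the SDK provides:

```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def answer(prompt: str) -> str:
    # Spawn the MCP server over stdio (the command here is illustrative).
    server = StdioServerParameters(command="python", args=["my_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # The assistant attaches the available tools and forwards the
            # prompt to the LLM (call_llm is a hypothetical wrapper).
            tools = (await session.list_tools()).tools
            reply = await call_llm(prompt, tools)

            # If the LLM requested tools, invoke them via MCP and send the
            # results back until it stops asking for more.
            while reply.tool_calls:
                results = [
                    await session.call_tool(call.name, arguments=call.arguments)
                    for call in reply.tool_calls
                ]
                reply = await call_llm(prompt, tools, tool_results=results)

            # The LLM's final answer is what the assistant shows the user.
            return reply.text
```

Notice that every step here is driven by the user's original prompt: nothing happens on the wire until the user asks a question.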
However, MCP also supports a less obvious but equally powerful capability: tool-initiated communication. That is, tools can trigger actions that cause the MCP server to send real-time notifications to the client, even when the user hasn’t sent a new prompt.
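As a taste of what this can look like on the server side, here is a sketch using the Python MCP SDK's `FastMCP` helper. The tool name, step count, and sleep are illustrative; `ctx.info()` and `ctx.report_progress()` are the SDK's logging and progress helpers, which push notifications over the open connection while the tool is still running (verify the exact method names and signatures against the SDK version you are using):

```python
import anyio

from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("notifier-demo")


@mcp.tool()
async def long_job(steps: int, ctx: Context) -> str:
    """Do some slow work, pushing updates to the client as it runs."""
    for i in range(steps):
        await anyio.sleep(1)  # stand-in for real work
        # Each call below is delivered to the client as a JSON-RPC
        # notification, without the user having sent a new prompt.
        await ctx.info(f"step {i + 1}/{steps} done")
        await ctx.report_progress(i + 1, steps)
    return "all steps complete"


if __name__ == "__main__":
    mcp.run()
```

A client that listens for the protocol's logging and progress notifications (`notifications/message` and `notifications/progress`) can surface these updates in its UI immediately, long before the tool's final result reaches the LLM.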