Over the past couple of months, I’ve been experimenting with the Model Context Protocol (MCP) — building AI agents and tools around it. While the experience has been promising, I’ve noticed a few areas where MCP could be expanded or improved.
These aren’t critical issues, but adding them would make MCP more complete and developer-friendly.
Here’s my current wishlist:
1. **A Standard MCP Server Interface**
2. **Bidirectional Notifications**
3. **Built-in or Native Transport Layer**
Let’s walk through each of these in more detail.
## 1. A Standard MCP Server Interface
Several MCP servers have already been developed to address specific needs — such as AI memory or Retrieval-Augmented Generation (RAG). While many of these servers offer similar functionality, they are not interoperable. This becomes especially limiting when an AI assistant needs to call an MCP server directly — not just through standard tool selection by the LLM, but through forced tool invocation.
Imagine if all MCP servers that provide AI memory adhered to a standard interface — a predefined set of tools with consistent behavior. In that case, we could swap one memory server for another without changing how agents interact with it. This introduces the idea of a **server "interface"**, similar to interfaces in object-oriented programming (OOP): if a class implements an interface, it guarantees the presence of certain methods. Likewise, if an MCP server implements a given interface, it would be required to expose a defined set of tools with expected behavior.
For example:
* A **`Memory` interface** might define tools like `remember` and `recall`.
* A **`RAG` interface** could specify a single tool such as `knowledge_search`.
* A **`Local File System` interface** might include tools like `save_file`, `list_files`, `read_file`, `create_folder`, etc.
If an AI orchestrator or assistant knows that any compliant MCP server supports these tools, we gain flexibility, modularity, and better maintainability.
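To make the analogy concrete, here is a minimal sketch of how such a contract could be expressed in Go. Nothing like this exists in any MCP SDK today; the method names simply mirror the hypothetical `Memory` tools above:

```go
// Hypothetical sketch: expressing an MCP "interface" as a Go interface.
// A compliant memory server would be required to expose tools named
// "remember" and "recall" with these semantics.
package mcpiface

import "context"

// Memory is the contract any compliant memory server would satisfy.
type Memory interface {
	// Remember corresponds to a tool named "remember":
	// store a fact about the given user.
	Remember(ctx context.Context, userID, fact string) error

	// Recall corresponds to a tool named "recall":
	// retrieve stored facts relevant to the query.
	Recall(ctx context.Context, userID, query string) (string, error)
}
```

Any server advertising exactly these tool names, with the same behavior, could then be swapped in without touching the agent logic.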
I’ve explored these ideas in more detail in previous blog posts:
* [Implementing AI Chat Memory with MCP](/blog/post/implementing-ai-chat-memory-with-mcp/) — demonstrating how MCP can be used for managing AI memory.
* [Adding Support for Retrieval-Augmented Generation (RAG) to AI Orchestrator](/blog/post/rag_for_cleverchatty/) — showing how to plug in RAG capabilities via MCP.
Currently, MCP doesn’t formally define a concept of "interfaces," but introducing this abstraction would greatly improve interoperability across servers. It would enable seamless replacement of components without requiring changes in the agent logic — a big win for modular AI system design.
However, some progress is already underway. When I shared these ideas in the MCP Discussions forum, I received responses from others working on similar concepts. One notable initiative is the [**MCP Interfaces RFC**](https://docs.google.com/document/d/18ueGSeqd2sQEGcaR4spzf1NIxFtmcRFOR3F8dnhyYBc/edit?pli=1&tab=t.0#heading=h.w6nylr5u9xte), which proposes a formal structure for defining and standardizing interfaces within the MCP ecosystem.
This kind of collaborative effort is promising. A shared standard for interfaces would not only promote compatibility between servers but also accelerate the development of reusable tools, agents, and orchestrators. It’s encouraging to see the community moving in this direction.

## 2. Bidirectional Notifications
When I imagine MCP's place in the AI ecosystem, I see it as a nervous system connecting the LLM (the brain) with various tools (the organs). In this analogy, the MCP server acts as a communication hub, relaying messages between the LLM and tools. However, the current implementation is primarily unidirectional: the LLM can invoke tools, but tools cannot notify the LLM of events or changes.
This one-way communication model limits the potential of MCP. For example, if a tool detects an important event (like a new message in a chat), it cannot directly notify the LLM to take action. Instead, the LLM must periodically poll the tool for updates, which is inefficient and can lead to missed opportunities.
Technically, MCP SDKs already support bidirectional communication. The MCP specification defines notifications, and the SDKs can handle them. However, current MCP server implementations do not expose this functionality, which means the infrastructure is in place but is not being used effectively. In practice, notifications are limited to a handful of standard ones, such as the "tools list changed" notification.
The specification also defines methods like "subscribe" and "unsubscribe", but it is not clear how they should be used in practice. The biggest problem, though, is that it is unclear how the LLM should handle these notifications. LLMs are trained to call tools and process their responses; they are not designed to handle a notification that arrives on its own. The LLM treats it as a prompt from the user, because there is no prior context in which it requested a tool invocation.
I have shared my experience with this in the [An Underrated Feature of MCP Servers: Client Notifications](/blog/post/an-underrated-feature-of-mcp-servers-client-notifications/) blog post.
So, I would like to see a more standardized approach to bidirectional notifications in MCP. MCP SDKs should support this functionality reliably (without the communication problems I saw in the Go SDK), and LLMs must be able to handle these notifications properly, treating them as events that require action rather than as additional user prompts.
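As a rough sketch of the direction I have in mind: the client could wrap each incoming notification in an explicit event envelope before handing it to the LLM, so the model can tell an event apart from a user prompt. The `Notification` type below mirrors the JSON-RPC 2.0 shape MCP uses; the envelope format and the wiring are my assumptions, not part of any SDK:

```go
// Hypothetical sketch: wrapping an MCP notification in an event
// envelope so the LLM sees "an event happened", not a user prompt.
package main

import (
	"encoding/json"
	"fmt"
)

// Notification mirrors the JSON-RPC 2.0 shape MCP notifications use,
// e.g. {"jsonrpc":"2.0","method":"notifications/message","params":{...}}.
type Notification struct {
	Method string          `json:"method"`
	Params json.RawMessage `json:"params,omitempty"`
}

// toLLMEvent produces the text that would be injected into the
// conversation as an event turn. The envelope format is an assumption.
func toLLMEvent(n Notification) string {
	return fmt.Sprintf("[mcp-event] method=%s params=%s (act if needed)",
		n.Method, string(n.Params))
}

func main() {
	raw := []byte(`{"jsonrpc":"2.0","method":"notifications/message",` +
		`"params":{"level":"info","data":"new chat message"}}`)
	var n Notification
	if err := json.Unmarshal(raw, &n); err != nil {
		panic(err)
	}
	fmt.Println(toLLMEvent(n))
}
```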

## 3. Built-in or Native Transport Layer
The MCP protocol currently supports three transport methods: **STDIO**, **SSE (Server-Sent Events)**, and **HTTP Streaming**. These options are sufficient for distributed systems, but in practice, I’ve found that it would be incredibly useful to have a **built-in or native transport layer** — one that allows compiling the MCP server **directly into the same binary** as the AI agent.
This isn’t a requirement of the MCP specification, but rather a **convenience feature** that could be implemented at the SDK level. It would simplify deployment, reduce external dependencies, and streamline performance — especially for agents that always run with a tightly coupled server.
### Why It’s Useful
For example, I’ve built an AI agent using the **Go MCP SDK**, which communicates with a local MCP server over STDIO. To run, the agent has to manage that MCP server as a separate process. This adds operational complexity and consumes additional system resources, even though the server is implemented in Go and is always launched alongside the agent.
Wouldn’t it be better if I could just **embed** the server inside the same binary?
To support this, we’d need a **“native” transport mode**, where the MCP client communicates with the server **via direct function calls**, not over a network or subprocess pipe.
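As a rough illustration, here is a minimal sketch of such a native transport in Go: the client reaches the server through an ordinary method call instead of serializing requests over a pipe or socket. The types are simplified stand-ins, not the real MCP message types from any SDK:

```go
// Hypothetical sketch: an in-process ("native") MCP transport where the
// client and server live in the same binary and talk via direct calls.
package main

import (
	"context"
	"fmt"
)

// CallToolRequest and CallToolResult stand in for the real MCP message
// types; they are simplified for illustration.
type CallToolRequest struct {
	Name string
	Args map[string]any
}

type CallToolResult struct {
	Text string
}

// Server is any in-process MCP server implementation.
type Server interface {
	CallTool(ctx context.Context, req CallToolRequest) (CallToolResult, error)
}

// NativeTransport satisfies the client's transport needs with a plain
// method call: no subprocess, no socket, no serialization.
type NativeTransport struct {
	server Server
}

func (t *NativeTransport) CallTool(ctx context.Context, req CallToolRequest) (CallToolResult, error) {
	return t.server.CallTool(ctx, req) // direct function call
}

// fileSystemServer is a toy built-in server exposing a "save_file" tool.
type fileSystemServer struct{}

func (fileSystemServer) CallTool(ctx context.Context, req CallToolRequest) (CallToolResult, error) {
	if req.Name != "save_file" {
		return CallToolResult{}, fmt.Errorf("unknown tool %q", req.Name)
	}
	return CallToolResult{Text: "saved " + fmt.Sprint(req.Args["path"])}, nil
}

func main() {
	t := &NativeTransport{server: fileSystemServer{}}
	res, err := t.CallTool(context.Background(),
		CallToolRequest{Name: "save_file", Args: map[string]any{"path": "notes.txt"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(res.Text) // saved notes.txt
}
```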
In a configuration file, this might look like:
```json
{
  "mcpServers": {
    "FileSystem": {
      "transport": "native",
      "args": [ ... ]
    }
  }
}
```
It could probably even work without a `transport` field: the SDK could treat native mode as the default whenever no transport is specified.
This "native" transport would work especially well alongside the **MCP Interfaces** concept. I could package some **default native servers** — implementing standard interfaces like `Memory`, `RAG`, or `FileSystem` — directly into my agent binary. And if needed, I could **override** those native components by pointing to external servers that implement the same interface:
```json
{
  "mcpServers": {
    "FileSystem": {
      "args": [ ... ]
    },
    "AnotherFileSystem": {
      "transport": "stdio",
      "command": "another_filesystem_server",
      "args": ["--config", "another_filesystem_config.json"],
      "interface": "FileSystem"
    }
  }
}
```
In this setup, the external `AnotherFileSystem` MCP server takes precedence (because it implements the `FileSystem` interface), and the native `FileSystem` module is ignored.
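The resolution rule itself is not specified anywhere; the following is a sketch of how an SDK might implement it, assuming the simplified config model above:

```go
// Hypothetical sketch: choosing between a built-in native module and
// an external server that declares the same interface.
package config

// ServerEntry is a simplified view of one "mcpServers" config entry.
type ServerEntry struct {
	Name      string
	Transport string // empty means native/built-in (per the default above)
	Interface string // interface this server declares, if any
}

// resolve returns the server providing the requested interface,
// preferring an explicit external implementation over a native default.
func resolve(entries []ServerEntry, iface string) *ServerEntry {
	var native *ServerEntry
	for i := range entries {
		e := &entries[i]
		switch {
		case e.Interface == iface && e.Transport != "":
			return e // external override wins
		case e.Name == iface && e.Transport == "":
			native = e // native fallback
		}
	}
	return native
}
```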
### Proposal
To support this model, the MCP specification could formally define a **“native” transport type**, and SDKs, especially those for statically compiled languages like Go, could offer tools to implement it. This would make it much easier to build **self-contained AI agents** with integrated, pluggable MCP modules, deployable as a single binary.
This small addition would significantly improve developer ergonomics and enable a broader range of use cases, from embedded agents to edge devices — all without sacrificing the modularity and interoperability that make MCP powerful.

## Conclusion
MCP is already a powerful and flexible protocol for connecting LLMs with tools, but there's still room for growth — especially as developers push the boundaries of what AI agents can do.
The features I’ve discussed — **standard interfaces**, **bidirectional notifications**, and **native transport layers** — are not just wishlist items. They represent practical improvements that would make MCP-based systems **more modular, reactive, and easier to deploy**. Some early work has already begun, especially around defining interfaces, but broader adoption and clearer specifications are needed.
As the ecosystem matures, I hope to see these ideas integrated into future versions of the MCP specification and SDKs. Until then, I’ll continue experimenting, prototyping, and sharing what I learn — and I encourage others working with MCP to do the same.
If you're building agents, servers, or tools using MCP, I’d love to hear your thoughts. Let’s continue shaping this protocol together.