Building a Custom MCP Server to Query Firebase from Cursor
My experience extending MCP for Firebase so I could ask product analytics questions in plain English and have the AI build the database queries for me
Lately, I've been obsessed with bridging the gap between AI agents and real-world data. For my poker training app, Live Poker Theory, I often just wanted to quickly answer product questions, like "Are users studying tournaments or cash games more?" (These are two different sets of flashcards that users might study).
Now, I could go the traditional route: invest hours in setting up a full analytics platform like Mixpanel or Posthog, defining custom events, and building dashboards. Those are incredible tools, but they take time to configure and learn to use, and for a quick, ad-hoc question, it feels like overkill. My data lived in Firebase, my AI agent in Cursor, and the constant friction was that there wasn't an easy, intelligent way for them to talk.
I've already shown how MCP can connect AI tools to Notion for project planning, but what if we could apply the same principle to the product's actual backend data?
Managing a TODO list is cool, but what’s even cooler is the AI doing the TODO list. To do that, the AI needs access to the real data.
This post details exactly how I built a custom MCP server to enable Cursor and Claude Desktop to query my Firebase database directly. My goal was to put MCP to a tangible test, letting me analyze product usage with natural language, all within the comfort of my editor.
The Setup
Quick recap: MCP lets you expose tools that agents, like Claude or Cursor, can call to obtain external knowledge like the weather or your TODO list.
LLMs are trained once and remain static and unchanged, so any “new” knowledge they have must be provided by the user or a tool, and MCP provides a standard way to create these tools.
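To make that concrete, here's a minimal sketch of what an MCP tool looks like with the official TypeScript SDK's `server.tool()` shortcut. The stubbed weather lookup is purely illustrative, standing in for whatever real data source you'd expose:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "weather-demo", version: "0.1.0" });

// One tool the agent can call whenever it needs knowledge
// that isn't baked into its weights.
server.tool(
  "get_weather",
  "Get the current weather for a city",
  { city: z.string().describe("City name") },
  async ({ city }) => ({
    content: [{ type: "text", text: `Weather for ${city}: sunny, 72F (stubbed)` }],
  })
);

// stdio transport: clients like Cursor or Claude Desktop launch
// the server as a child process and talk to it over stdin/stdout.
await server.connect(new StdioServerTransport());
```

The agent sees the tool's name, description, and schema, decides when to call it, and folds the result back into its answer.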
My goal was to use an MCP server so that a tool like Cursor could talk to Firebase, and specifically to Firestore, the document database within the Firebase suite of products.
When I started this work, no official Firebase MCP server existed, but an open source repo by Gannon Hall gave me a starting point. It's an excellent small reference if you're building an MCP server for any database. Since then, Firebase has released their own official MCP server.
But there was a big problem: neither Firebase's official server nor Gannon's unofficial server supported Firestore's `count()` method, the exact method I needed.
The existing MCP servers do support listing and retrieving documents, and in theory you could simply get all the documents and count them, but with tens of thousands of sessions in my database, this would be slow and expensive, since Firebase charges you based on the number of times you read a document.
Fortunately, the firebase-admin API supports a way to count all documents that match a query without retrieving them all, but that API call just wasn’t added to either of the Firebase MCP servers.
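For context, here's roughly what that aggregation looks like in firebase-admin. The `sessions` collection name is an assumption for illustration:

```typescript
import { initializeApp, applicationDefault } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

initializeApp({ credential: applicationDefault() });
const db = getFirestore();

// count() runs server-side: Firestore returns a single aggregate
// result instead of streaming back every matching document,
// so you avoid paying for tens of thousands of document reads.
const snap = await db
  .collection("sessions") // assumed collection name
  .count()
  .get();

console.log(`Total sessions: ${snap.data().count}`);
```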
Of course, this problem was a great opportunity for me to explore extending an MCP server for a real use case and write about it in this newsletter!
To Vibe Code Or Not To Vibe Code?
To do this, I forked Gannon's repo and made the changes. I'll try to get these merged upstream, but in the meantime, you can check out my working fork with `firestore_count_documents` support on my GitHub.
While adding the new tool, I did what most MCP tutorials suggest: I asked the AI (via Cursor) to write the code for me.
And to its credit, it worked. The assistant quickly scaffolded a functioning `count_documents` tool using Firestore's admin SDK.
But this is also where “vibe coding” started to break down.
I asked it to add a `filters` parameter to the count query. It did, but it also added `orderBy` and `pagination` fields. That makes no sense for a count: you're getting back a single number. When I pointed this out, the AI agreed and removed them. But it was only my own experience reviewing the code that caught the problem.
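For illustration, here's a hypothetical input schema for the count tool after that cleanup. This isn't the exact schema from my fork, but it shows the shape: a collection plus optional filters, and nothing to order or paginate:

```typescript
// Hypothetical tool definition: filters only. orderBy and
// pagination are gone because a count returns one number,
// so there is nothing to order or page through.
const countDocumentsTool = {
  name: "firestore_count_documents",
  description: "Count documents in a collection, optionally filtered",
  inputSchema: {
    type: "object",
    properties: {
      collection: { type: "string", description: "Collection ID" },
      filters: {
        type: "array",
        description: "Optional field filters, ANDed together",
        items: {
          type: "object",
          properties: {
            field: { type: "string" },
            operator: { type: "string", enum: ["==", "<", "<=", ">", ">="] },
            value: { description: "Value to compare against" },
          },
          required: ["field", "operator", "value"],
        },
      },
    },
    required: ["collection"],
  },
};
```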
To get it working, I had to dig into how MCP servers handle tool registration and request handling under the hood. I also leaned heavily on the MCP Inspector, a debugging tool that helped me simulate tool calls and verify responses without needing to go through Claude or Cursor directly. If you're building your own MCP integration — especially with a backend like Firebase — this next section covers the key pieces you'll want to get right.
If you're curious about just the changes I made to Gannon's repo, I've uploaded the two steps (listing what the tool does, and actually doing it) in this GitHub Gist.
Challenges I Encountered and Lessons I Learned
The process was mostly smooth, but I ran into a few challenges, and learned a few things:
Restart fatigue: I often forgot to restart Claude Desktop, reload MCP servers in Cursor, or rebuild the local server after making changes. Small things, but they frequently tripped me up: I'd conclude my changes weren't working when in fact they simply weren't being loaded.
Environment handling in MCP Inspector: It's a great tool, but make sure you learn how to provide environment variables on the command line with the `-e` flag; otherwise it's another thing that's easy to forget, and it's tedious to paste them in every time. In my case, I needed to pass the credential that authenticates to Firebase as an environment variable.
Firebase data modeling trickiness: My sessions don't explicitly say "tournament" or "cash." Instead, I inferred it from the `chart` field. Firestore doesn't support any sort of substring-match filter. However, it does support string comparison, and fortunately all my tournament charts begin with `MTT`, so I used a string range (`"MTT" <= x < "MTU"`) to filter them; the first sketch after this list shows the idea.
Tool registration differences: Gannon's implementation uses `server.setRequestHandler(ListToolsRequestSchema)` and `CallToolRequestSchema`. This gives more control than the `server.tool()` shortcut highlighted in the weather tutorial, the first tutorial on the official MCP website, and is helpful if you want to customize routing or shape tools more directly; the second sketch after this list shows the pattern.
Smithery integration: The repo includes a `smithery.yaml` file for publishing to Smithery AI, a community MCP registry. It also includes HTTP transport, which is forward-compatible even if you're using stdio today.
Port check issue: The server refused to start if port 3000 was in use, even when running in stdio mode, where that port is irrelevant. Not a dealbreaker, but it was confusing until I understood the cause.
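Here's the prefix-range trick from the data-modeling item, sketched with firebase-admin. The `sessions` collection name is an assumption; the `chart` field and the `MTT`/`MTU` bounds are from my setup:

```typescript
import { getFirestore } from "firebase-admin/firestore";

const db = getFirestore();

// Firestore has no substring filter, but string comparisons are
// lexicographic, so the half-open range ["MTT", "MTU") matches
// exactly the strings that start with "MTT".
const tournamentCount = (
  await db
    .collection("sessions") // assumed collection name
    .where("chart", ">=", "MTT")
    .where("chart", "<", "MTU")
    .count()
    .get()
).data().count;

console.log(`Tournament sessions: ${tournamentCount}`);
```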
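And here's a condensed, simplified sketch of that two-step registration pattern using the SDK's lower-level `Server` class; see the gist linked above for the actual implementation:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "firebase-mcp", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Step 1: advertise the tool so clients can discover it.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "firestore_count_documents",
      description: "Count documents matching a query",
      inputSchema: {
        type: "object",
        properties: { collection: { type: "string" } },
        required: ["collection"],
      },
    },
  ],
}));

// Step 2: route incoming calls to the right implementation.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "firestore_count_documents") {
    // ...run the count here and return the number as text content.
    return { content: [{ type: "text", text: "42" }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});
```

The two handlers map directly onto the two steps in the gist: one declares what the tool does, the other actually does it.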
Product Insights I Obtained
About 17,000 sessions, or roughly 40 percent, were tournament-focused. That surprised me. I had assumed tournaments would dominate, especially since the demo flow starts in the tournament trainer and I didn’t exclude demo sessions from the count.
For those unfamiliar with poker strategy: tournaments tend to play short-stacked, which makes preflop decisions more structured and math-driven. Cash games, on the other hand, are often deep-stacked and reward postflop creativity. That usually makes preflop study less essential in cash compared to tournaments.
But the data is clear. If 60 percent of study sessions are cash game spots, I'd be ignoring reality if I continued thinking of tournaments as the main use case.
Takeaways
MCP + Cursor lets me query my real product data in my editor, by translating natural language questions into database queries.
You can vibe code most of this, but you should still review the logic closely; even state-of-the-art models still make major coding mistakes.
MCP tooling is still in the early stages, and while remote registries like Smithery AI exist and server authors are adding HTTP transports, for now, we mostly live in a local MCP server world. For that, MCP Inspector is a critical and useful tool.
Conclusion
Thanks for reading. If you have any thoughts or opinions on Firebase + MCP, please get in touch by replying to this email, leaving a comment on Substack, or shooting me an email linked on my personal site.
For the next edition of AI Engineering Report, I'm getting equally excited about bolt.new's $1 million (!) June hackathon, AI agent architectures, and OpenAI's recently released image generation API. Thanks for reading and stay tuned!