Introducing Tome: A Magical LLM Client

We’re excited to announce the Technical Preview of Tome, a local LLM client with seamless MCP integration. We quietly dropped it on GitHub a few weeks ago and got some great feedback. Here’s more about what Tome is, why we built it, and where we’re headed.

Why Tome?

It might feel like we’ve reached peak AI - models are smarter and faster, MCP servers are multiplying like tribbles, and all of San Francisco is covered in AI billboards. Yet most interfaces are still chat boxes. It’s as if we built the first wheel out of stone and said “okay cool, great work everyone, let’s just yabba-dabba-doo in our Flintstones cars for the rest of time”.

So our first release is... yet another chat client. Wait - hear us out!

We’re big fans of building in the open, of open source, and of making a speculative bet on the future with a community that’s fired up about our shared vision. Tome was a response to the friction we saw: there was no easy way to experiment with MCP servers and local models without juggling uv, npm, and config.jsons. We wanted to install an app, spin up a few MCP servers, and start tinkering. So we decided to ship early and often, starting with chat and quickly growing beyond the current SmarterChild-era level of innovation.

What’s in the Technical Preview?

  • Fully local chat interface
  • Plug-and-play MCP support for quick experimentation
    • Batteries included: we install and manage uv/npm for you
  • Instant integration with Ollama to connect to your favorite models (we’re partial to Qwen3) - see the quick check after this list
  • Integration with Smithery so you can browse and one-click install thousands of MCP servers
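
If you want to double-check that Ollama is reachable before pointing Tome at it, a few lines of Python will do. This is a minimal sketch assuming a default Ollama install on localhost:11434; the /api/tags endpoint (which lists your locally pulled models) belongs to Ollama, not Tome.

    import json
    import urllib.request

    # Ollama's local API listens on http://localhost:11434 by default.
    OLLAMA_URL = "http://localhost:11434"

    # /api/tags returns the models you've already pulled (e.g. via `ollama pull qwen3`).
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        models = json.load(resp)["models"]

    for model in models:
        print(model["name"])  # e.g. "qwen3:latest"

If that prints your models, Tome can see them too.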

What’s Next?

From here, we’re expanding support across operating systems, building solid debugging tools, and adding new primitives beyond chat. As models get smarter and hardware gets faster, we’re exploring a future where AI isn’t just reactive - it’s ambient. Background tasks, automation, invisible copilots - all running locally and under your control. We want to live in a world where LLMs work invisibly in the background, while we’re busy or asleep, making us feel genuinely empowered by the tech around us.

The Approach

We believe the next generation of AI experiences should be:

  • Local First - for privacy, performance, and control
  • Open Source - for transparency and community
  • Composable - build what you want, how you want

Want to run everything 100% locally? You can. Want to plug in a remote LLM or a remote MCP server? You can do that too. It’s totally up to you.

The Vision

The ceiling is that we build a canvas for the next generation of software. The floor is that we can keep asking Qwen3 to write us songs about dark matter and string theory in the style of Sum 41. Either way, we think that’s a win.

Tome is open source and ready to play with today. Download the latest release, connect your favorite models and tools, and follow along as we build weird and wonderful things together.

Find us on: GitHub | Discord | BlueSky | Twitter

P.S. We’re 99.9% there on Windows support - stay tuned!