mitchwainer 7 hours ago

Grafbase just launched Nexus, an open-source AI Router that unifies MCP servers and LLMs through a single endpoint. Designed for enterprise-grade governance, control, and observability, Nexus helps teams manage AI complexity, enforce policies, and monitor performance across their entire stack. It works with any MCP server or LLM provider out of the box and is aimed at developers who want to integrate AI with the same rigor as production APIs.

  • CptanPanic 7 hours ago

    Sounds like litellm, which I use. I wonder how it compares?

    • vid 6 hours ago

      There is also https://github.com/maximhq/bifrost which apparently overcomes some performance issues of litellm and is easy to get going.

      • tomhoule 5 hours ago

        Yeah, they definitely belong in the same space. Nexus is an LLM gateway, but early on the focus has been on MCP: aggregation, authentication, and a smart approach to tool selection. There's a paper, and a lot of anecdotal evidence, pointing to LLMs not coping well when the set of available tools gets too large: https://arxiv.org/html/2411.09613v1

        So Nexus takes a tool-search-based approach to solving that, among other cool things.
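
        To make the idea concrete (a toy sketch of search-based tool selection in general, not Nexus's actual implementation; the tool names are made up and word overlap stands in for real semantic/embedding search): instead of handing the model every tool from every aggregated server, the router scores tool descriptions against the task and only surfaces the top matches.

            # Toy sketch: word overlap stands in for real semantic search,
            # and the tool names/descriptions are invented for illustration.
            def score(query: str, description: str) -> int:
                q = set(query.lower().split())
                d = set(description.lower().split())
                return len(q & d)

            tools = {
                "github_create_issue": "create a new issue in a github repository",
                "github_list_prs": "list open pull requests in a github repository",
                "slack_post_message": "post a message to a slack channel",
                "postgres_run_query": "run a sql query against a postgres database",
            }

            def select_tools(task: str, top_k: int = 2) -> list[str]:
                ranked = sorted(tools, key=lambda name: score(task, tools[name]), reverse=True)
                return ranked[:top_k]

            # Only the relevant subset gets exposed to the model, not all four tools.
            print(select_tools("create an issue in the github repo about the flaky test"))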

        Disclaimer: I don't work on Nexus directly, but I do work at Grafbase.

    • fbjork 6 hours ago

      Founder of Grafbase here.

      Here are a few key differentiators vs LiteLLM today:

      - Nexus does MCP server aggregation and LLM routing; LiteLLM only does LLM routing

      - The Nexus router is a standalone binary that runs with minimal TOML configuration and, optionally, Redis; LiteLLM is a whole package with a dashboard, database, etc.

      - Nexus is written in Rust; LiteLLM is written in Python
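
      For the LLM routing half, the usual "single endpoint" pattern (common to routers of this kind; not a documented Nexus default, and the URL, key, and model name below are placeholders) is to point an existing OpenAI-compatible client at the router and let it fan requests out to providers:

          # Generic single-endpoint routing sketch; base_url, api_key and the
          # model name are placeholders, not documented Nexus defaults.
          from openai import OpenAI

          client = OpenAI(
              base_url="http://localhost:8000/v1",  # the router, not a provider
              api_key="router-key",
          )

          resp = client.chat.completions.create(
              model="anthropic/claude-sonnet",  # the router maps this to a provider
              messages=[{"role": "user", "content": "Summarize our deploy runbook."}],
          )
          print(resp.choices[0].message.content)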

      That said, LiteLLM is an impressive project. We're just getting started with Nexus, so stay tuned for a steady barrage of feature launches in the coming months :)

      • SparkyMcUnicorn 5 hours ago

        What's the difference between "MCP Server Aggregation" and the litellm_proxy endpoint described here?

        https://docs.litellm.ai/docs/mcp

        • tomhoule 4 hours ago

          The main difference is that while you can get Nexus to list all tools, by default the LLM accesses tools through semantic search: Nexus returns only the tools relevant to what the LLM is trying to accomplish. Also, Nexus speaks MCP to the LLM; it doesn't translate the way litellm_proxy seems to (I wasn't familiar with it previously).
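
          Roughly, the two interaction styles look like this on the wire (MCP is JSON-RPC 2.0, and tools/list and tools/call are standard MCP methods, but the search tool's name and arguments here are assumptions for illustration, not Nexus's documented API):

              # Style 1: list everything. The response carries every tool from
              # every aggregated MCP server, which is what overwhelms the model.
              list_all = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}

              # Style 2: search first. The model describes its intent and gets back
              # only the handful of relevant tool definitions, then calls one of them.
              search = {
                  "jsonrpc": "2.0",
                  "id": 2,
                  "method": "tools/call",
                  "params": {
                      "name": "search",  # assumed tool name
                      "arguments": {"query": "create a github issue about the flaky test"},
                  },
              }

              call = {
                  "jsonrpc": "2.0",
                  "id": 3,
                  "method": "tools/call",
                  "params": {"name": "github_create_issue", "arguments": {"title": "Flaky test"}},
              }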

makita34 5 hours ago

Seems quite similar to the commercial nexos.ai platform, which also focuses on routing, governance, and observability for AI workloads, but as a proprietary solution rather than open source.

  • fbjork 4 hours ago

    From what I can tell they don’t offer a self-hosted router?

mbrumlow 6 hours ago

I thought it was a phone for developers :/

  • fbjork 6 hours ago

    That phone was discontinued :)

owenthejumper 5 hours ago

Another proxy?

  • fbjork 3 hours ago

    MCP aggregation is one of the big differentiators