MCP tools with dependent types

(vlaaad.github.io)

73 points | by vlaaad 240 days ago

6 comments

  • dvse
    240 days ago
    This is already supported via listChanged. The problem is that >90% of clients currently don’t implement this - including Anthropic’s, https://modelcontextprotocol.io/clients
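For reference, the notification being discussed is a plain JSON-RPC message; in the current MCP spec it is sent from server to client after the tool list changes, and looks like this:

```json
{"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}
```

A client that implements the capability is expected to re-fetch the tool list (`tools/list`) on receiving it; the complaint above is that most clients simply ignore the notification.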
  • vlaaad
    240 days ago
    I was considering making an MCP SEP (specification enhancement proposal) — https://modelcontextprotocol.io/community/sep-guidelines, though I'm curious if other MCP tinkerers feel the issue exists, should be solved like that, etc. What do you think?
  • jonfw
    240 days ago
    I have a blog post here that has an example of dynamically changing the tool list- https://jonwoodlief.com/rest3-mcp.html.

    In this situation, I would have a tool called "request ability to edit GLTF". This would trigger an addition to the tool list specifically for your desired GLTF. The server would send the "tool list changed" notification and now the LLM would have access.

    If you want to do it without the tool-list-changed notification, I'd have two tools: "get schema for GLTF" and "edit GLTF with schema". If you note that the get-schema tool is a dependency of the edit tool, the LLM could probably plumb that together on its own fairly well.

    You could probably also support this workflow using sampling.
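The two-tool pattern described above can be sketched without any SDK; a minimal illustration in plain Python, where the asset names, node list, and schema contents are all invented for the example (a real MCP server would derive the schema from the GLTF file itself):

```python
# Hypothetical per-asset edit schemas, keyed by asset name.
SCHEMAS = {
    "scene.gltf": {
        "type": "object",
        "required": ["node", "translation"],
        "properties": {
            "node": {"type": "string", "enum": ["root", "camera", "mesh0"]},
            "translation": {"type": "array"},
        },
    }
}

def get_schema(asset: str) -> dict:
    """Tool 1: return the JSON Schema describing valid edits for one asset."""
    return SCHEMAS[asset]

def edit_gltf(asset: str, edit: dict) -> str:
    """Tool 2: apply an edit, rejecting anything the schema disallows."""
    schema = SCHEMAS[asset]
    for key in schema["required"]:
        if key not in edit:
            raise ValueError(f"missing required key: {key}")
    allowed = schema["properties"]["node"]["enum"]
    if edit["node"] not in allowed:
        raise ValueError(f"unknown node: {edit['node']}")
    return f"edited {edit['node']} in {asset}"
```

The point of the pattern is that the error in `edit_gltf` surfaces immediately, so even without protocol support the LLM can call `get_schema` first, or recover after one failed call.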

    • spullara
      240 days ago
      do any of the clients support this? I have some dynamic MCPs, and it doesn't seem like claude.ai supports it, for example.
  • LudwigNagasena
    240 days ago
    > there is no way to tell the AI agent “for this argument, look up a JSON schema using this other tool”

    There is a description field, it seems sufficient for most cases. You can also dynamically change your tools using `listChanged` capability.

    • vlaaad
      240 days ago
      Sure, but the need for accuracy will only increase; there is a difference between suggesting that an LLM put a schema in its context before calling the tool vs. forcing the LLM to produce structured output conforming to a schema returned dynamically from a tool.

      We already have 100% reliable structured outputs if we are making chatbots with LLM integrations directly; I don't want to lose this.

      • WithinReason
        240 days ago
        And LLMs will get more accurate. What happens when the LLM uses the wrong parameters? If it's an immediate error then it will just try again, no need for protocol changes, just better LLMs.
        • vlaaad
          240 days ago
          The difference between 99% reliability and 100% reliability is huge in this case.
          • WithinReason
            240 days ago
            I misunderstood the problem then, I thought it would take only a few seconds for the LLM to issue the call, see the error, fix the call.
            • jtbayly
              240 days ago
              Last time I used Gemini CLI it still couldn’t consistently edit a file. That was just a few weeks ago. In fact, it would go into a loop attempting the same edit, burning through many thousands of tokens and calls in the process, re-reading the file, attempting the same edit, rinse, repeat until I stopped it.

              I didn’t find it entertaining.

            • wahnfrieden
              240 days ago
              Big waste of context
  • matt-smith
    240 days ago
    The Arazzo specification[0] (from OpenAPI contributors) aims to solve the dependent-arguments issue by introducing the concept of "runtime expressions"[1] within a series of independent tool calls that compose a workflow.

    [0] - https://www.openapis.org/arazzo-specification
    [1] - https://spec.openapis.org/arazzo/v1.0.1.html#runtime-express...
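Illustratively, an Arazzo workflow chains two calls with a runtime expression; the step and operation names below are hypothetical, mapped onto the get-schema/edit example from earlier in the thread:

```yaml
workflows:
  - workflowId: editGltf
    steps:
      - stepId: getSchema
        operationId: getGltfSchema        # hypothetical operation
        outputs:
          schema: $response.body
      - stepId: applyEdit
        operationId: editGltf             # hypothetical operation
        parameters:
          - name: schema
            in: query
            value: $steps.getSchema.outputs.schema   # runtime expression
```

The `$steps.<stepId>.outputs.<name>` expression is resolved at run time, so the second call's argument is explicitly declared to depend on the first call's result.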

  • nmilo
    240 days ago
    I don't think this is a protocol issue; the LLMs simply weren't RLHFed to do that
    • vlaaad
      240 days ago
      Not true: structured outputs enforce output formats with 100% reliability. E.g., https://platform.openai.com/docs/guides/structured-outputs says "Structured Outputs is a feature that ensures the model will always generate responses that adhere to your supplied JSON Schema, so you don't need to worry about the model omitting a required key, or hallucinating an invalid enum value"
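For context, this is the request-side shape the OpenAI docs describe: a JSON Schema attached to the request with `strict: true`, which constrains decoding so the response always validates. A sketch of the payload (the GLTF-edit schema itself is a made-up example, not from the docs):

```python
# The "response_format" object passed to the chat completions API when using
# structured outputs; the nested schema here is a hypothetical GLTF edit.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "gltf_edit",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "node": {"type": "string", "enum": ["root", "camera"]},
                "translation": {"type": "array", "items": {"type": "number"}},
            },
            "required": ["node", "translation"],
            "additionalProperties": False,
        },
    },
}
```

The thread's point is that tool calls get the same guarantee only for schemas known up front; a schema returned by another tool at run time cannot be enforced this way under the current protocol.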