Claude Code may be burning your limits with invisible tokens

(efficienist.com)

49 points | by jenic_ 1 day ago

6 comments

  • marginalia_nu
    11 hours ago
    This is methodologically flawed, as bytes only weakly correlate with tokens.

    Unless you're sending identical requests, you can't expect the same token count for any given number of bytes, nor that a slightly longer (but different) message will use more tokens than a slightly shorter one, or vice versa.
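
    A toy tokenizer makes the point concrete. This is an illustrative sketch, not Anthropic's actual tokenizer: greedy longest-match over a made-up vocabulary, showing that a 20-byte string can tokenize to far fewer tokens than an 11-byte one.

    ```python
    # Toy demonstration that byte length only weakly predicts token count.
    # Real BPE tokenizers (not modeled here) merge frequent substrings, so
    # common long words compress into few tokens while rare character runs
    # split into many.

    def toy_tokenize(text, vocab):
        """Greedy longest-match tokenizer over a tiny hypothetical vocab."""
        tokens = []
        i = 0
        while i < len(text):
            for length in range(len(text) - i, 0, -1):
                piece = text[i:i + length]
                if piece in vocab or length == 1:
                    tokens.append(piece)  # fall back to single chars
                    i += length
                    break
        return tokens

    VOCAB = {"internationalization", "the ", "of "}

    a = "internationalization"  # 20 bytes, one vocab hit -> 1 token
    b = "zq xv jk pw"           # 11 bytes, no hits -> 11 one-char tokens

    print(len(a), len(toy_tokenize(a, VOCAB)))  # 20 bytes -> 1 token
    print(len(b), len(toy_tokenize(b, VOCAB)))  # 11 bytes -> 11 tokens
    ```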

    • Bolwin
      2 hours ago
      > The numbers came from the same project and the same prompt across versions.

      I'm pretty sure the tester checked. If the request format is the same (which it is, since it goes through Anthropic's stable public API) and the prompt/messages are the same, then bytes will correlate pretty well with tokens.

      • marginalia_nu
        2 hours ago
        The prompt may be the same, but the project context would surely have changed. The user prompt itself is unlikely to be ~200KB.
  • tencentshill
    4 hours ago
    On the free plan, I hit the limit instantly by uploading one 45 KB PDF and one prompt. Even from a free plan, I expect a bit more. Oh well, local models can be pushed to do what I need.
  • a_c
    1 day ago
    I had the same suspicion, so I made this to examine where my tokens went.

    Claude Code caches a big chunk of context (all messages in the current session). While a lot of data goes over the network, in ccaudit itself 98% of the context is from cache.

    Granted, to view the actual system prompt used by Claude, one can only inspect the network requests. Otherwise, the best guess is the token use in the first exchange with Claude.

    https://github.com/kmcheung12/ccaudit
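
    A rough sketch of the arithmetic behind that 98% figure, assuming the usage fields the Anthropic Messages API reports for prompt caching (the sample numbers below are invented for illustration, not ccaudit's actual output):

    ```python
    # Hedged sketch: how a tool like ccaudit might compute the cached share
    # of input context from a response's usage block. Field names follow the
    # Anthropic Messages API; the numbers are hypothetical.

    def cached_share(usage):
        """Fraction of input-context tokens served from the prompt cache."""
        total_input = (usage["input_tokens"]
                       + usage.get("cache_creation_input_tokens", 0)
                       + usage.get("cache_read_input_tokens", 0))
        return usage.get("cache_read_input_tokens", 0) / total_input

    sample = {  # hypothetical usage block from one response
        "input_tokens": 1200,
        "cache_creation_input_tokens": 800,
        "cache_read_input_tokens": 98000,
        "output_tokens": 450,
    }

    print(f"{cached_share(sample):.0%} of context read from cache")
    # -> 98% of context read from cache
    ```

    Note that cached tokens still traverse the billing meter at a reduced rate, which is why lots of bytes on the wire need not mean proportionally many fresh tokens.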

  • F7F7F7
    1 day ago
    What is the system prompt for $1000 Alex (RIP)?
  • simianwords
    17 hours ago
    I don’t buy it. The same problem was reported in Claude.ai at the same time, which suggests the same underlying root cause.