MCP Server Is Eating Your Context Window. There's a Simpler Way

Article URL: https://www.apideck.com/blog/mcp-server-eating-context-window-cli-alternative
Comments URL: https://news.ycombinator.com/item?id=47400261


Here's a scenario that'll feel familiar if you've wired up MCP servers for anything beyond a demo.

You connect GitHub, Slack, and Sentry.

Three services, maybe 40 tools total.

Before your agent has read a single user message, 55,000 tokens of tool definitions are sitting in the context window.

That's over a quarter of Claude's 200k limit.

Gone.

It gets worse.

Each MCP tool costs 550–1,400 tokens for its name, description, JSON schema, field descriptions, enums, and system instructions.

Connect a real API surface, say a SaaS platform with 50+ endpoints, and you're looking at 50,000+ tokens just to describe what the agent could do, with almost nothing left for what it should do.
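As a back-of-envelope check, the arithmetic behind those figures works out like this (the per-tool range and context size are the article's numbers, not measurements):

```python
# Rough cost of loading MCP tool schemas upfront.
# Per-tool token range and context window are the article's figures.
PER_TOOL_TOKENS = (550, 1_400)   # typical cost of one MCP tool definition
CONTEXT_WINDOW = 200_000         # Claude's context limit

def schema_cost(num_tools: int, per_tool: int) -> int:
    """Total tokens consumed by tool definitions before any conversation."""
    return num_tools * per_tool

low, high = (schema_cost(40, t) for t in PER_TOOL_TOKENS)
print(f"40 tools: {low:,}-{high:,} tokens upfront")
print(f"high end uses {high / CONTEXT_WINDOW:.0%} of the window")
```

Forty tools lands in the 22,000–56,000 token range, which is where the article's "over a quarter of Claude's 200k limit" comes from.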

One team reported three MCP servers consuming 143,000 of 200,000 tokens.

That's 72% of the context window burned on tool definitions.

The agent had 57,000 tokens left for the actual conversation, retrieved documents, reasoning, and response.

Good luck building anything useful in that space.

This isn't a theoretical concern.

David Zhang (@dzhng), building Duet, described ripping out their MCP integrations entirely, even after getting OAuth and dynamic client registration working.

The tradeoff was impossible. He called it a "trilemma," and that feels about right.

And the numbers hold up under controlled testing.

A recent benchmark by Scalekit ran 75 head-to-head comparisons (same model, Claude Sonnet 4, same tasks, same prompts) and found MCP costing 4 to 32× more tokens than CLI for identical operations.

Their simplest task, checking a repo's language, consumed 1,365 tokens via CLI and 44,026 via MCP.

The overhead is almost entirely schema: 43 tool definitions injected into every conversation, of which the agent uses one or two.
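Working the ratio out from the two published numbers:

```python
# Token cost of Scalekit's simplest benchmark task (checking a repo's
# language), using their published figures.
CLI_TOKENS = 1_365
MCP_TOKENS = 44_026

overhead = MCP_TOKENS / CLI_TOKENS
print(f"MCP used {overhead:.1f}x the tokens of the CLI for the same task")
```

That ratio is the top of the benchmark's reported 4–32× range.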

The industry is converging on three responses to context bloat.

Each has a sweet spot.

The first response is to keep MCP but fight the bloat.

Teams compress schemas, use tool search to load definitions on demand, or build middleware that slices OpenAPI specs into smaller chunks.

This works for small, well-defined interactions like looking up an issue, creating a ticket, or fetching a document.

MCP's structured tool calls and typed schemas are genuinely useful when you have a tight set of operations that agents use frequently.

But it adds infrastructure.

You need a tool registry, search logic, caching, and routing.

You're building a service to manage your services.

And you're still paying per-tool token costs every time the agent decides it needs a new capability.

Duet's answer was to treat the agent like a developer with a persistent workspace.

When the agent needs a new integration, it reads the API docs, writes code against the SDK, runs it, and saves the script for reuse.

This is powerful for long-lived workspace agents that maintain state across sessions and need complex workflows (loops, conditionals, polling, batch operations).

Things that are awkward to express as individual tool calls become natural in code.

The downside: your agent is now writing and executing arbitrary code against production APIs.

The safety surface is enormous.

You need sandboxing, review mechanisms, and a lot of trust in your agent's judgment.

The third approach is the one we took.

Instead of loading schemas into the context window or letting the agent write integration code, you give it a CLI.

A well-designed CLI is a progressive disclosure system by nature.

When a human developer needs to use a tool they haven't touched before, they don't read the entire API reference.

They run tool --help, find the subcommand they need, run tool subcommand --help, and get the specific flags for that operation.

They pay attention costs proportional to what they actually need.

Agents can do exactly the same thing.

And the token economics are dramatically different.

Here's what the Apideck CLI agent prompt looks like.

The entire thing an AI agent needs in its system prompt comes to roughly 80 tokens.

Compare that to the alternatives: the agent starts with 80 tokens of guidance and discovers capabilities on demand.
Each step costs 50–200 tokens, loaded only when the agent decides it needs that information.

An agent handling an accounting query might consume 400 tokens total across three --help calls.

The same surface through MCP would cost 10,000+ tokens loaded upfront whether the agent uses them or not.
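To make that accounting concrete, here is a small simulation of the progressive-discovery budget. The help tree and its per-call costs are hypothetical (chosen within the article's 50–200 token range); only the 80-token prompt and the 10,000-token MCP figure come from the article.

```python
# Simulated token budget: progressive --help discovery vs. upfront MCP
# schemas. The help tree below is a hypothetical sketch, not the real
# Apideck CLI; per-call costs sit in the article's 50-200 token range.
AGENT_PROMPT = 80        # upfront guidance in the system prompt
MCP_UPFRONT = 10_000     # article's figure for the same surface via MCP

help_calls = {
    "apideck --help": 150,                      # top-level command groups
    "apideck accounting --help": 130,           # resources in one group
    "apideck accounting invoices --help": 120,  # flags for one operation
}

progressive_total = AGENT_PROMPT + sum(help_calls.values())
print(f"progressive: {progressive_total} tokens vs MCP upfront: {MCP_UPFRONT}")
```

Three help calls plus the prompt stay under 500 tokens, against 10,000+ loaded before the agent has done anything.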

This mirrors how Claude Agent Skills work.

Metadata first, full details only when selected, reference material only when needed.

The CLI is doing the same thing through a different mechanism.

Scalekit's benchmark independently validated this pattern.

They found that even a minimal ~800-token "skills file" (a document of CLI tips and common workflows) reduced tool calls by a third and latency by a third compared to a bare CLI.

Our approach takes it further: the ~80-token agent prompt provides the same progressive discovery at a tenth of the cost.

The principle is the same.

A small, upfront hint about how to navigate the tool is worth more than thousands of tokens of exhaustive schema.

There's a dimension of the MCP problem that doesn't get enough attention: availability.

Scalekit's benchmark recorded a 28% failure rate on MCP calls to GitHub's Copilot server.

Out of 25 runs, 7 failed with TCP-level connection timeouts.

The remote server simply didn't respond in time.

Not a protocol error, not a bad tool call.

The connection never completed.

CLI agents don't have this failure mode.

The binary runs locally.

There's no remote server to time out, no connection pool to exhaust, no intermediary to go down.

When your agent runs apideck accounting invoices list, it makes a direct HTTPS call to the Apideck API.

One hop, not two.

This matters at scale.

At 10,000 operations per month, a 28% failure rate means roughly 2,800 retries, each burning additional tokens and latency.

Scalekit estimated the monthly cost difference at $3.20 for CLI versus $55.20 for direct MCP, a 17× cost multiplier, with the reliability tax on top.
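Scaled out, the failure-rate math looks like this (operation count, failure rate, and dollar figures are Scalekit's estimates):

```python
# Monthly retry and cost arithmetic at scale, using Scalekit's figures.
OPS_PER_MONTH = 10_000
MCP_FAILURE_RATE = 0.28
CLI_COST, MCP_COST = 3.20, 55.20   # estimated monthly cost in USD

retries = int(OPS_PER_MONTH * MCP_FAILURE_RATE)
multiplier = MCP_COST / CLI_COST
print(f"~{retries:,} retries/month; {multiplier:.1f}x cost multiplier")
```

And every one of those retries burns extra tokens and latency on top of the raw dollar gap.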

Remote MCP servers will improve.

Connection pooling, better infrastructure, and gateway layers will close the gap.

But "the binary is on your machine" is a reliability guarantee that no amount of infrastructure engineering on the server side can match.

Telling an agent "never delete production data" in a system prompt is like putting a sticky note on the nuclear launch button.

It might work.

Probably.

Until a creative prompt injection peels the note off.

Security research on AI agents in CI/CD has shown how prompt injection can manipulate agents with high-privilege tokens into leaking secrets or modifying infrastructure.

The pattern is always the same: untrusted input gets injected into a prompt, the agent has broad tool access, and bad things happen.

The Apideck CLI takes a structural approach.

Permission classification is baked into the binary based on HTTP method.
No prompt can override this.

A DELETE operation is blocked unless the caller explicitly passes --force.

A POST requires --yes or interactive confirmation. GET operations run freely because they can't modify state.
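As a sketch of that structural rule (illustrative Python; the real enforcement lives inside the Go binary):

```python
# Illustrative method-based permission gate, mirroring the rules the
# article describes. The real Apideck CLI enforces this in its binary,
# so no prompt text can override it.
def allowed(method: str, force: bool = False, yes: bool = False) -> bool:
    method = method.upper()
    if method == "GET":
        return True        # reads can't modify state, so they run freely
    if method == "DELETE":
        return force       # destructive ops need an explicit --force
    return yes             # writes (POST/PUT/PATCH) need --yes or a confirm

assert allowed("GET")
assert not allowed("DELETE")          # blocked without --force
assert allowed("DELETE", force=True)
assert allowed("POST", yes=True)
```

The point is that the policy is a code path, not an instruction competing for attention in the context window.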

The agent frameworks reinforce this.

Claude Code, Cursor, and GitHub Copilot all have permission systems that gate shell command execution.

So you get two layers of structural safety: the agent framework asks "should I run this command?" and the CLI itself enforces "is this operation allowed?"
You can also customize the policy per operation.
This is the same principle behind Duda blocking destructive MCP actions, but enforced structurally in the binary, not through prompt instructions that compete with everything else in the context window.

Every serious agent framework ships with "run shell command" as a primitive.

Claude Code has Bash.

Cursor has terminal access.

GitHub Copilot SDK exposes shell execution.

Gemini CLI runs commands natively.

MCP requires dedicated client support, connection plumbing, and server lifecycle management.

A CLI requires a binary on the PATH.

This matters more than it sounds.

When you're building an agent that needs to interact with APIs, the integration path for a CLI is a binary on the PATH. The integration path for MCP is dedicated client support, connection plumbing, and server lifecycle management.
The CLI approach also means your agent integration isn't locked to any specific framework.

The same apideck binary works from Claude Code, Cursor, a custom Python agent, a bash script, or a CI/CD pipeline.

The Apideck CLI is a single static binary that parses our OpenAPI spec at startup and generates its entire command tree dynamically.

OpenAPI-native, no code generation. The binary embeds the latest Apideck Unified API spec.

On startup, it parses the spec with libopenapi and builds commands for every API group, resource, and operation.

When the API adds new endpoints, apideck sync pulls the latest spec.

No SDK regeneration, no version bumps.
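A toy version of that spec-to-commands step might look like this. The structure and verb mapping are hypothetical; the real binary parses the full spec with libopenapi in Go.

```python
# Toy sketch of deriving a CLI command tree from an OpenAPI-style spec
# dict. Path handling and verb names are hypothetical, not the real
# Apideck implementation.
def build_commands(spec: dict) -> list[str]:
    verb_for = {"get": "list", "post": "create", "delete": "delete"}
    commands = []
    for path, methods in spec["paths"].items():
        # "/accounting/invoices/{id}" -> "accounting invoices"
        parts = [p for p in path.strip("/").split("/") if not p.startswith("{")]
        for method in methods:
            commands.append(f"apideck {' '.join(parts)} {verb_for.get(method, method)}")
    return commands

spec = {"paths": {"/accounting/invoices": {"get": {}, "post": {}}}}
print(build_commands(spec))
# ['apideck accounting invoices list', 'apideck accounting invoices create']
```

Because the commands are derived rather than generated, a fresh spec means a fresh command tree with no codegen step in between.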

Smart output defaults. When running in a terminal, output defaults to a formatted table with colors.

When piped or called from a non-TTY (which is how agents call it), output defaults to JSON.

Agents get machine-parseable output without needing to remember --output json.

Auth is invisible. Credentials are resolved from environment variables (APIDECK_API_KEY, APIDECK_APP_ID, APIDECK_CONSUMER_ID) or a config file, and injected into every request automatically.

The agent never handles tokens, never sees auth headers, never needs to manage sessions.

Connector targeting. The --service-id flag lets agents target specific integrations. apideck accounting invoices list --service-id quickbooks hits QuickBooks.

Swap to --service-id xero and the same command hits Xero.

Same interface, different backend.

That's the unified API doing its job.

CLIs aren't universally better.

Here's where the other approaches still win.

MCP is better for tightly scoped, high-frequency tools. If your agent calls the same 5–10 tools hundreds of times per session, the upfront schema cost amortizes well.

A customer support agent that only ever looks up tickets, updates status, and sends replies doesn't need progressive disclosure.

It needs those tools ready immediately.

Code execution is better for complex, stateful workflows. If your agent needs to poll an API every 30 seconds, aggregate results across paginated endpoints, or orchestrate multi-step transactions with rollback logic, writing code is more natural than chaining CLI calls.

Duet's approach makes sense for agents that are essentially autonomous developers.

MCP is better when your agent acts on behalf of other people's users. This is the dimension most CLI-vs-MCP comparisons gloss over, and it's worth being direct about.

When your agent automates your own workflow, ambient credentials are fine.

You are the user, and the only person at risk is you.

But if you're building a B2B product where agents act on behalf of your customers' employees, across organizations those customers control, the identity problem becomes three-layered: which agent is calling, which user authorized it, and which tenant's data boundary applies.

Per-user OAuth with scoped access, consent flows, and structured audit trails are real requirements at that boundary, and they're requirements that raw CLI auth (gh auth login, environment variables) wasn't designed to solve.

MCP's authorization model, whatever its efficiency cost, addresses this natively.

That said, the gap is narrower than it looks for unified API architectures.

Apideck already centralizes auth through Vault: credentials are managed per-consumer, per-connection, and scoped by service.

The --service-id flag targets a specific integration within a specific consumer's vault.

The structural permission system enforces read/write/delete boundaries in the binary.

What's missing is the per-user OAuth consent flow and tenant-scoped audit trail, real gaps, but ones that sit at the platform layer, not the agent interface layer.

A CLI can be the interface while a backend handles delegated authorization.

These aren't mutually exclusive.

It's also worth noting that MCP's auth story is less settled than it appears.

As Speakeasy's MCP OAuth guide makes clear, user-facing OAuth exchange is not actually required by the MCP spec.

Passing access tokens or API keys directly is completely valid.

The real complexity kicks in when MCP clients need to handle OAuth flows dynamically, which requires Dynamic Client Registration (DCR), a capability most API providers don't support today.

Companies like Stripe and Asana have started adding DCR to accommodate MCP, but it remains a high-friction integration.

The auth advantage MCP has over CLI is real in theory, but in practice, the ecosystem is still catching up to the spec.

CLIs are weaker at streaming and bi-directional communication. A CLI call is request-response.

If you need server-sent events, WebSocket streams, or long-lived connections, you'll want an SDK or MCP server that can hold a connection open.

Distribution has friction. MCP servers can theoretically live behind a URL.

CLIs need a binary per platform, updates, and PATH management.

For the Apideck CLI, we ship a static Go binary that runs everywhere without dependencies, but it's still a binary you need to install.

The honest framing: MCP, code execution, and CLIs are complementary tools.

The mistake is treating MCP as the universal answer when, for many integration patterns, a CLI does the job with two orders of magnitude less context overhead.

If you're building developer tools in 2026, AI agents are becoming a primary consumer of your API surface.

Not the only consumer (human developers still matter), but a rapidly growing one.

A few things are worth considering:
Your OpenAPI spec is too big for a context window. If you have 50+ endpoints, converting your spec to MCP tools will burn the budget of most agent interactions.

Think about what a minimal entry point looks like.

Progressive disclosure isn't just a UX pattern anymore. It's a token optimization strategy.

Give agents a way to discover capabilities incrementally instead of dumping everything upfront.

Structural safety is non-negotiable. Prompt-based guardrails are the security equivalent of honor system parking.

Build permission models into your tools, not your prompts.

Classify operations by risk level and enforce that classification in code.

Ship machine-friendly output formats. JSON by default in non-interactive contexts.

Stable exit codes.

Deterministic output.

These are documented principles for agentic CLI design, and they matter because your next power user might not have hands.

