Connect any LLM agent to our analysis. Natively.
We are building an MCP server so Claude, Cursor, Continue, Cline, Goose, and any MCP-aware client can read our published equity analysis as native tool calls and resources — recommendations, scenarios, sensitivity matrices, kill-scenario risk registers. No prompt engineering, no prose scraping. The REST API ships today; the MCP server is next on the roadmap.
Why MCP, not just an API?
A REST API is a contract for code. MCP is a contract for agents. The Model Context Protocol is the open standard for letting LLM agents discover tools and pull structured context at runtime. Instead of you writing fetch wrappers and prompting your model to use them, the agent introspects the server, sees a typed catalogue of what it can do, and calls it the same way it would call a native function.
For us specifically, that means an agent will be able to ask “what does StockMarketAgent currently think about MSFT, and what would have to be true for the bear case?” — and get back an answer grounded in the actual report we published this month, with citations back to the source. Not a paraphrase from the model’s training data.
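For illustration, here is a minimal sketch of that discovery step from the client side, using the official TypeScript MCP SDK (@modelcontextprotocol/sdk). The @stockmarketagent/mcp package name comes from the config snippets further down this page and the tool names from the planned catalogue below; none of it is callable until the server ships, and exact shapes may change before launch.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the planned server locally and complete the MCP handshake.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@stockmarketagent/mcp@latest"],
  env: { SMA_KEY: process.env.SMA_KEY ?? "" },
});
const client = new Client({ name: "example-agent", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// The handshake exposes a typed catalogue of tools: names, descriptions, JSON schemas.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
// Expected, per the planned catalogue: ["list_reports", "get_report", ...]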
What this is not
This is not a way to make the agent generate stock advice. The MCP server will return analysis we have already produced and signed off on, exactly as it appears on the website — only structured so the agent can quote it accurately.
- Client connects to the StockMarketAgent MCP endpoint
- Server advertises tools + resources via the MCP handshake
- User asks the agent a question about a covered ticker
- Agent picks the right tool — typically get_report or get_llm_bundle
- Server returns structured JSON; agent answers, citing report sections
- Citations link back to stockmarketagent.ai/stocks/{ticker}
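In code, the last three steps reduce to a single call. This sketch reuses the client from the discovery example above; get_report is the planned tool name, and the exact result fields are not final.

// Ask for the published MSFT report as a native tool call.
const result = (await client.callTool({
  name: "get_report",
  arguments: { ticker: "MSFT" },
})) as { content: Array<{ type: string; text?: string }> };

// MCP tool results arrive as typed content blocks; the text block carries the JSON payload.
const reportJson = result.content.find((b) => b.type === "text")?.text ?? "{}";
const report = JSON.parse(reportJson); // structured analysis, not prose to scrape

// Citations resolve back to the published page for the ticker.
const citation = "https://stockmarketagent.ai/stocks/MSFT";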
Planned tools
Read-only. Scoped to published reports. Ten typed tools across the full corpus.
- list_reports(): Return the coverage universe with current rating and fair-value mid for each ticker. Args: sector?, archetype?, limit?
- get_report(): Full latest monthly report for a single ticker. Same shape as the REST /reports/{ticker} endpoint. Args: ticker
- get_recommendation(): Just the headline action, confidence, and one-sentence summary — minimal context cost. Args: ticker
- get_scenarios(): Bull/base/bear scenarios with probabilities, target prices, and returns. Args: ticker
- get_sensitivity(): 5×5 sensitivity matrix across cost-of-equity and terminal-growth assumptions. Args: ticker
- get_risks(): The kill-scenario risk register: what would have to be true for the bear case. Args: ticker
- compare_tickers(): Side-by-side metrics for up to 6 tickers — same payload as the Compare tool. Args: tickers[]
- get_changes(): Rating-change feed: what flipped recently across the universe and why. Args: since?, ticker?
- search_universe(): Free-text search across covered tickers, sectors, and archetypes. Args: query
- get_llm_bundle(): The transferrable bundle — one compact JSON shaped for direct injection into any LLM context. Args: ticker
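To make the last tool concrete, here is a sketch of what "direct injection" could look like, again assuming the client from the earlier sketches; the bundle's exact fields are not final.

// Fetch the compact, injection-ready bundle for one ticker.
const bundle = (await client.callTool({
  name: "get_llm_bundle",
  arguments: { ticker: "GOOGL" },
})) as { content: Array<{ type: string; text?: string }> };
const bundleText = bundle.content.find((b) => b.type === "text")?.text ?? "{}";

// Prepend it to any LLM prompt so follow-up answers stay grounded in the published report.
const prompt = [
  "Answer questions about GOOGL using only the published analysis below, citing its sections.",
  bundleText,
].join("\n\n");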
Auth & API key tiers
One key, two transports. The MCP server reads the same X-API-Key credential as the REST API: mint a key from /preferences → API, set it as SMA_KEY in your environment, and your MCP-aware client picks it up via the config snippets below. There is no separate MCP credential — the same key, scoped to your subscription tier, gates both surfaces.
Quotas are per-key, per-month. Responses carry X-RateLimit-Remaining and X-RateLimit-Reset so the client can pace itself. Exceeding the quota returns a 429 with a Retry-After hint and an upgrade_url in the body.
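A sketch of pacing against those headers from a plain REST client today. The /reports/{ticker} path comes from the tool list above; the api.stockmarketagent.ai host is an illustrative placeholder, so check the developer reference for the real base URL.

// Hypothetical base URL for illustration only; see the developer reference for the real one.
const BASE = "https://api.stockmarketagent.ai";

async function getReport(ticker: string): Promise<unknown> {
  const res = await fetch(`${BASE}/reports/${ticker}`, {
    headers: { "X-API-Key": process.env.SMA_KEY ?? "" },
  });

  // Quotas are per-key, per-month; these headers let the client pace itself.
  console.log("remaining:", res.headers.get("X-RateLimit-Remaining"));
  console.log("resets at:", res.headers.get("X-RateLimit-Reset"));

  if (res.status === 429) {
    // Over quota: honor Retry-After, or follow the upgrade_url returned in the body.
    const retryAfter = res.headers.get("Retry-After");
    const body = await res.json();
    throw new Error(`Rate limited; Retry-After=${retryAfter}, upgrade at ${body.upgrade_url}`);
  }
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}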
Scope by tier
Free unlocks list_reports, get_recommendation, and search_universe. Growth adds the full structured payload — get_report, get_scenarios, get_sensitivity, get_risks, compare_tickers, get_changes, and get_llm_bundle. Enterprise removes the monthly cap and enables SSE streaming + webhook redrive. See pricing for the full matrix.
Connect at launch
Drop these snippets into the client config you already use. The endpoint will resolve once the server ships. The MCP server runs at mcp.stockmarketagent.ai over the standard MCP http + sse transports. Every snippet below reads SMA_KEY from your environment so the credential never lands in version control.
Claude Desktop
{
"mcpServers": {
"stockmarketagent": {
"command": "npx",
"args": ["-y", "@stockmarketagent/mcp@latest"],
"env": {
"SMA_KEY": "${SMA_KEY}"
}
}
}
}

Cursor
{
"mcpServers": {
"stockmarketagent": {
"url": "https://mcp.stockmarketagent.ai/sse",
"headers": {
"X-API-Key": "${SMA_KEY}"
}
}
}
}

Continue
{
"experimental": {
"modelContextProtocolServers": [
{
"transport": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@stockmarketagent/mcp@latest"]
},
"env": { "SMA_KEY": "${SMA_KEY}" }
}
]
}
}

Cline / Goose
# Cline (VS Code) and Goose both read the same
# environment-driven config. Add a server with:
#
# command: npx
# args: -y @stockmarketagent/mcp@latest
# env: SMA_KEY=${SMA_KEY}
#
# Or point at the hosted SSE endpoint:
#
# url: https://mcp.stockmarketagent.ai/sse
# header: X-API-Key: ${SMA_KEY}

Example prompts
What the agent can answer once the server is wired up. Each prompt calls one tool — none requires you to hand the model a URL.
- “What does StockMarketAgent currently think about MSFT, and what would have to be true for the bear case?” (get_report + get_risks · PM stress-testing a thesis)
- “Compare NVDA, AMD, and AVGO on fair-value mid, six-factor score, and rating. Cite the published reports.” (compare_tickers · Sector analyst building a screen)
- “Show me the bull/base/bear scenarios for TSLA with their probability weights and target prices.” (get_scenarios · Risk officer modeling tail outcomes)
- “Run a sensitivity matrix for AAPL across cost-of-equity and terminal-growth assumptions.” (get_sensitivity · Quant cross-checking own DCF)
- “Which covered tickers had a rating change in the last 30 days, and why?” (get_changes · Allocator watching for alpha drift)
- “Build me an LLM context bundle for GOOGL — I want to ask Claude follow-ups grounded in the latest report.” (get_llm_bundle · Hands-on researcher)
Get the launch notice
One email when the MCP server is generally available. No drip. We will email you exactly once — when the server is live, the install steps are documented, and the tools above are callable from Claude Desktop, Cursor, Continue, Cline, Goose, or any MCP-aware client.
In the meantime, the same data is available today via our REST API. If you are wiring an LLM workflow now, the developer reference has cURL, Python, and Node samples for the report endpoint, which is the same payload the MCP get_report tool will return.
Notify me at launch
We will not share your email or use it for anything other than the MCP launch announcement.