Review
Mastra: The Framework That Grew Into a Platform
Mastra is a strong fit for TypeScript teams building AI agents and workflows, but the platform now spans enough surfaces to feel like infrastructure.
Last updated April 2026 · Pricing and features verified against official documentation
Mastra began as the kind of project TypeScript developers tend to wish already existed: a serious framework for building agents without leaving the stack they already use.
By April 2026, though, Mastra had become more than a framework. The company launched Studio, Server, and Memory Gateway around the open-source runtime, turning it into a platform for building, deploying, observing, and extending agents.
If your team already ships in TypeScript and wants workflows, evals, observability, deployment, and managed memory in one ecosystem, Mastra makes a real case for itself. The framework is free and useful on its own; the cloud surfaces are what make it feel production-ready.
The tradeoff is complexity. Mastra is now broad enough that the harder question is no longer whether it can do the job, but whether you want to buy into the whole stack. For teams that do, it is one of the more credible options in the category. For everyone else, it is easy to overbuy.
What the Product Actually Is Now
Mastra is best understood as three related products wrapped around an open-source TypeScript framework. The framework handles agents, workflows, RAG, tools, MCP, and server adapters. Studio handles observability and evals. Server handles API deployment. Memory Gateway extends managed memory to Mastra and other frameworks.
The product has moved quickly. Mastra 1.0 landed in January 2026 with server adapters and AI SDK v6 support, February added datasets and experiments, and April brought the cloud platform with local and hosted Studio, Server, and Memory Gateway. That cadence matters because Mastra no longer looks like a side project. It looks like infrastructure under active construction.
Strengths
TypeScript is the point, not a compromise.
Mastra feels built for teams whose product code already lives in TypeScript. The New Stack framed the product correctly: agents are closer to web application work than model training, and Mastra leans into that reality instead of asking developers to route around it. If you want typed tool calls, workflows, memory, and MCP support without leaving the JavaScript ecosystem, this is one of the cleanest paths available.
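To make the "typed tool calls" point concrete, here is a minimal sketch of the pattern in plain TypeScript. The shapes are illustrative only, not Mastra's actual API; Mastra's own tool helper layers schema validation on top of this same idea.

```typescript
// Conceptual sketch of a typed tool: input and output types are checked
// by the compiler, so a runtime wiring this tool into an agent cannot
// pass a malformed payload without a type error. Names are illustrative,
// not Mastra's API.

type Tool<In, Out> = {
  id: string;
  description: string;
  execute: (input: In) => Out;
};

type WeatherInput = { city: string };
type WeatherOutput = { city: string; tempC: number };

const getWeather: Tool<WeatherInput, WeatherOutput> = {
  id: "get-weather",
  description: "Return a (stubbed) temperature for a city",
  execute: ({ city }) => ({ city, tempC: 21 }), // stubbed data for the sketch
};

// The call site is typed end to end: `report.tempC` is a number,
// and passing `{ town: "Berlin" }` would fail to compile.
const report = getWeather.execute({ city: "Berlin" });
```

The payoff is that tool contracts live in the same type system as the rest of the application, rather than in out-of-band JSON schemas.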
Studio closes the feedback loop.
Studio combines metrics, logs, traces, datasets, and experiments in one place, with side-by-side comparison and rollback for agent changes. That is the difference between watching agent behavior and actually improving it. The April launch also matters because it moved the observability workflow into a shareable cloud surface without removing the self-hosted option.
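The comparison workflow Studio supports can be approximated in a few lines. This is a generic sketch of the idea, not Studio's API; the scorer and agent variants are hypothetical.

```typescript
// Generic sketch of an eval comparison: score two agent variants on the
// same dataset and compare. Illustrative only; Studio's datasets and
// experiments implement a richer version of this loop.

type Example = { input: string; expected: string };
type AgentFn = (input: string) => string;

// Exact-match scorer: fraction of examples where output equals expectation.
function score(agent: AgentFn, dataset: Example[]): number {
  const hits = dataset.filter((ex) => agent(ex.input) === ex.expected).length;
  return hits / dataset.length;
}

const dataset: Example[] = [
  { input: "2+2", expected: "4" },
  { input: "capital of France", expected: "Paris" },
];

// Two hypothetical agent variants under comparison.
const variantA: AgentFn = (q) => (q === "2+2" ? "4" : "Paris");
const variantB: AgentFn = (q) => (q === "2+2" ? "4" : "Lyon");

const scoreA = score(variantA, dataset); // 1.0
const scoreB = score(variantB, dataset); // 0.5
// Keep the higher-scoring variant, or roll back the regressing change.
```

Side-by-side scores over a fixed dataset are what turn "the agent seems worse" into a rollback decision you can defend.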
Deployment is more pragmatic than most agent tools.
Server adapters let Mastra run inside Express, Hono, Fastify, or Koa instead of forcing a separate service boundary. That keeps the architecture closer to the app you already have, which is what most production teams want. You still need to think like an operator, but at least the framework is not adding friction where it should be reducing it.
Memory Gateway is a real second product.
Managed memory is useful even if you do not standardize on Mastra for everything. Supporting any framework widens the audience, and the bring-your-own-key and retention options give the product a more serious enterprise shape than a toy memory layer would. It also creates a cleaner adoption path for teams that want to test memory as a standalone capability first.
Weaknesses
The product surface is broader than the job usually is.
Framework, Studio, Server, and Memory Gateway solve adjacent problems, but they are still separate products with separate pricing and different operational assumptions. That is powerful if you need the whole stack. It is annoying if you only wanted one of the pieces.
The meter can creep up quickly.
The free tiers are genuinely usable, but the platform starts charging for the things production teams actually care about: CPU time, egress, observability volume, persistent servers, and add-on memory usage. That is reasonable infrastructure pricing, yet it is not simple SaaS pricing, and it can surprise teams that treat Mastra like a library.
The ecosystem is young enough to demand tolerance for change.
Mastra is already at 1.0, but the release pace is still fast and the changelogs are still full of structural changes, codemods, and new primitives. That is normal for a young framework, but it means teams with large existing systems should budget for migration work rather than assuming the surface will stay still.
Pricing
The open-source framework itself is free under Apache 2.0. The paid bill is for hosted platform services: Studio and Server on one side, Memory Gateway on the other.
Studio and Server start at $0 with unlimited users and deployments, 100,000 observability events, 24 hours of CPU uptime, and 10GB of egress. The Teams tier costs $250 per team per month and adds multi-team support, custom SSO, SOC 2 documentation, 250 hours of CPU time, and 100GB of egress. Enterprise is custom-priced and adds RBAC, SLA coverage, dedicated support, and custom CPU and egress limits.
Memory Gateway follows the same $0 and $250 structure, but the meter is different: 100K memory tokens on the free tier or 1M on Teams, $10 per million add-on tokens, 250MB or 1GB of retrieval storage, and 15 days or 6 months of stale-thread retention. Teams also supports bring-your-own-key. The gateway prices model inference at market rate plus 5.5 percent, a reminder that this is a hosted service, not just storage.
The pricing page also makes the infrastructure tax explicit. Add-on CPU and egress are billed separately, and persistent server uptime is $100 per project. That is fair if you plan to run real workloads. It is less appealing if you wanted a cheap experiment.
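As a back-of-the-envelope check on what a Teams-tier Memory Gateway bill looks like, here is a sketch using the rates quoted above: a $250 base, 1M included memory tokens, $10 per million add-on tokens, a 5.5 percent inference markup, and $100 per project for persistent servers. The function and its inputs are hypothetical, and real invoices may meter or round differently.

```typescript
// Rough monthly estimate for a Teams-tier bill using the published rates.
// Hypothetical helper for illustration; actual billing granularity and
// proration are not specified here.

const TEAMS_BASE_USD = 250;
const INCLUDED_MEMORY_TOKENS = 1_000_000;
const ADDON_USD_PER_MILLION_TOKENS = 10;
const INFERENCE_MARKUP = 0.055; // market rate plus 5.5%
const PERSISTENT_SERVER_USD_PER_PROJECT = 100;

function estimateMonthlyBill(opts: {
  memoryTokens: number;       // total memory tokens used this month
  inferenceMarketUsd: number; // market-rate model spend routed via the gateway
  persistentProjects: number; // projects with always-on servers
}): number {
  const overage = Math.max(0, opts.memoryTokens - INCLUDED_MEMORY_TOKENS);
  const addonTokens = (overage / 1_000_000) * ADDON_USD_PER_MILLION_TOKENS;
  const inference = opts.inferenceMarketUsd * (1 + INFERENCE_MARKUP);
  const servers = opts.persistentProjects * PERSISTENT_SERVER_USD_PER_PROJECT;
  return TEAMS_BASE_USD + addonTokens + inference + servers;
}

// Example: 2.5M memory tokens, $200 of market-rate inference, one
// always-on project → 250 + 15 + 211 + 100 = $576.
const bill = estimateMonthlyBill({
  memoryTokens: 2_500_000,
  inferenceMarketUsd: 200,
  persistentProjects: 1,
});
```

Even modest usage lands well above the $250 sticker price once the meters kick in, which is the "meter can creep up" point in concrete terms.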
Privacy
Mastra’s privacy policy is a generic service policy, not a detailed product DPA. It was last published on April 25, 2025. It says the company collects personal information and log data, uses third-party service providers, and uses that information to provide and improve the service.
I did not find a public statement saying customer content is used to train models. The more immediate issue is that the public privacy policy does not spell out the operational details buyers usually want for a platform like this. If your data is sensitive, the real questions are where it lives, which deployment model you choose, and what the commercial terms say.
Who It’s Best For
- TypeScript teams building AI agents, workflows, or RAG systems inside an existing web stack.
- Teams that want observability, evals, deployment, and memory from one vendor rather than four.
- Builders who are comfortable treating Mastra as infrastructure and paying infrastructure-style prices.
- Companies that need self-hosted or on-prem options without abandoning TypeScript.
Who Should Look Elsewhere
- Teams that only want observability or evals should compare Braintrust or Portkey first.
- Python-first teams that want multi-agent orchestration will probably be happier with CrewAI.
- Buyers who care more about compute and deployment than agent primitives should look at Modal.
- Teams that want a narrow, stable library instead of a broader platform should start with something simpler.
Bottom Line
Mastra is one of the stronger choices for teams that want to build agents in TypeScript and keep observability and deployment close to the app. Its real advantage is not any one feature; it is that the pieces fit together without asking you to switch languages or assemble the stack yourself.
The price of that convenience is breadth. Mastra now behaves like infrastructure, and that is a purchase worth making only if you plan to use more than one slice of the platform. If you want the stack, it earns the seat. If you only want one piece, buy something narrower.