Pricing database
Every cost number Margint shows is computed the same way:
cost_microdollars = (input_tokens * input_cost_per_million
                     + output_tokens * output_cost_per_million) / 1,000,000
where each *_cost_per_million is expressed in microdollars per million tokens.
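As a sketch, the formula above translates directly to code (the function name is ours, not part of the SDK; prices are assumed to be microdollars per million tokens, so $3/MTok is written as 3_000_000):

```typescript
// Sketch of Margint's cost formula. Assumes *_cost_per_million is expressed
// in microdollars per million tokens (e.g. $3/MTok = 3_000_000).
function costMicrodollars(
  inputTokens: number,
  outputTokens: number,
  inputCostPerMillion: number,
  outputCostPerMillion: number,
): number {
  return Math.round(
    (inputTokens * inputCostPerMillion +
      outputTokens * outputCostPerMillion) / 1_000_000,
  );
}

// $3/MTok input and $15/MTok output, for 1_000 input + 500 output tokens:
// (1000 * 3_000_000 + 500 * 15_000_000) / 1_000_000 = 10_500 microdollars
```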
The interesting part is where *_cost_per_million comes from.
Source of truth: LiteLLM
We mirror LiteLLM's canonical pricing registry — the same MIT-licensed file used by 400+ open-source projects and proxy gateways. It covers 400+ models across OpenAI, Anthropic, Google, Mistral, Cohere, Bedrock, Azure, Groq, Fireworks, Perplexity, together.ai, and more.
LiteLLM updates it within hours of provider price announcements. We pull it on a schedule and publish a snapshot at /docs/concepts/pricing-database with each release.
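LiteLLM's registry prices models per single token, while the formula above works in per-million figures, so mirroring involves a unit conversion along these lines (the entry shape and numbers below are illustrative, not a live snapshot):

```typescript
// Illustrative conversion from a LiteLLM-style registry entry (priced in USD
// per single token) to the microdollars-per-million figures used above.
interface LiteLLMEntry {
  input_cost_per_token: number;  // USD per single input token
  output_cost_per_token: number; // USD per single output token
}

function toMicrodollarsPerMillion(entry: LiteLLMEntry) {
  // USD/token * 1e6 = USD per million tokens; * 1e6 again = microdollars
  // per million tokens. Net factor: 1e12.
  return {
    inputCostPerMillion: Math.round(entry.input_cost_per_token * 1e12),
    outputCostPerMillion: Math.round(entry.output_cost_per_token * 1e12),
  };
}

// Example: a model priced at $3/MTok in and $15/MTok out.
const entry: LiteLLMEntry = {
  input_cost_per_token: 0.000003,
  output_cost_per_token: 0.000015,
};
const perMillion = toMicrodollarsPerMillion(entry);
// → { inputCostPerMillion: 3_000_000, outputCostPerMillion: 15_000_000 }
```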
Local calculation
Both SDKs bundle a snapshot of the pricing database at install time. When you call track() without costMicrodollars, cost is calculated locally — no network hop. This means:
- Your ingestion endpoint never blocks on a pricing lookup.
- Offline / airgapped deployments still get accurate cost numbers.
- SDK-only "local mode" works without a Margint account (handy for development).
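The local path can be pictured as a plain lookup against the bundled snapshot, something like the sketch below (model names, figures, and the function name are illustrative, not the SDK's actual internals):

```typescript
// Hypothetical sketch of the SDK's local pricing path: a snapshot bundled at
// install time maps model names to microdollar-per-million prices, so cost is
// computed with zero network calls. All names and figures are made up.
const BUNDLED_SNAPSHOT: Record<string, { in: number; out: number }> = {
  "example-model-small": { in: 150_000, out: 600_000 },      // $0.15 / $0.60 per MTok
  "example-model-large": { in: 3_000_000, out: 15_000_000 }, // $3 / $15 per MTok
};

function localCostMicrodollars(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number | undefined {
  const p = BUNDLED_SNAPSHOT[model];
  if (!p) return undefined; // caller decides how to handle unknown models
  return Math.round((inputTokens * p.in + outputTokens * p.out) / 1_000_000);
}
```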
Keeping pricing fresh
- SDKs ship with pricing at build time. Reinstall to pick up new models or price changes.
- The server uses the latest database on every ingest. If you pass an unknown model, we record token counts with cost_microdollars = 0 and flag it in the dashboard.
- Override locally by passing costMicrodollars directly on track().
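The precedence rules above can be sketched as follows (the event shape, snapshot contents, and resolveCost name are our illustration, not the SDK's actual internals):

```typescript
// Hypothetical sketch of the precedence rules: an explicit costMicrodollars
// wins; otherwise the bundled snapshot is consulted; unknown models fall back
// to 0 (and would be flagged in the dashboard). All figures are made up.
interface TrackEvent {
  model: string;
  inputTokens: number;
  outputTokens: number;
  costMicrodollars?: number; // optional caller-supplied override
}

const SNAPSHOT: Record<string, { in: number; out: number }> = {
  "example-model": { in: 3_000_000, out: 15_000_000 }, // microdollars per MTok
};

function resolveCost(e: TrackEvent): number {
  if (e.costMicrodollars !== undefined) return e.costMicrodollars; // override wins
  const p = SNAPSHOT[e.model];
  if (!p) return 0; // unknown model: tokens still recorded, cost shown as 0
  return Math.round((e.inputTokens * p.in + e.outputTokens * p.out) / 1_000_000);
}
```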
Requesting a model
We're happy to add models. Email hi@margint.dev with the provider name and a link to their pricing page.
Most new frontier models are priced by LiteLLM within 48 hours of release — usually we don't have to do anything.