Projects tagged "AI" (10 items)

infermux

Route inference across LLM providers. Track cost per request.

89 · Go
Tags: Go, AI, Inference, Mist-stack, Observability

matchspec

Eval framework. Define what's correct, test against it, get results.

21 · Go
Tags: Go, AI, Evaluation, Mist-stack

evaldriven.org

Ship evals before you ship features.

7 · Markdown
Tags: AI, Evaluation, Methodology

supermodeltools/typescript-sdk

TypeScript SDK for Supermodel. Generate useful graphs of your codebase.

5 · TypeScript
Tags: TypeScript, AI

supermodeltools/openapi-spec

OpenAPI spec for the Supermodel public API. Use it as a reference or to generate your own clients.

4 · YAML
Tags: AI

supermodeltools/mcp

Supermodel MCP server. Generate code graphs in Cursor, Codex, or Claude Code.

4 · TypeScript
Tags: TypeScript, AI, MCP

supermodeltools/mcpbr

Benchmark runner for Model Context Protocol servers. Paired comparison experiments on SWE-bench.

4 · Python
Tags: Python, AI, Evaluation, MCP, Methodology

supermodeltools/dead-code-hunter

GitHub Action to find unreachable functions using Supermodel call graphs.

2 · TypeScript
Tags: TypeScript, AI

supermodeltools/arch-docs

GitHub Action to generate architecture documentation for any repository using Supermodel.

2 · JavaScript
Tags: AI

mist-go

Shared core for the MIST stack. Zero external deps.

1 · Go
Tags: Go, AI, Mist-stack

All tags

AI (10), AWS (1), Cloud Computing (1), Compiler (1), Evaluation (3), Go (6), Inference (1), MCP (2), Methodology (2), Mist-stack (5), Observability (3), Python (2), TypeScript (4)