MCP Server
Scored via MCP protocol probing: initialize handshake, tools/list conformance, and ping + tool invocation performance.

Hatchable

Build, deploy, and host full-stack web apps from any MCP client. DB, auth, storage, cron included.

79/100
Operational Score
Score Breakdown
Availability 30/30
Conformance 30/30
Performance 19/40
Key Metrics
Uptime 30d
100.0%
P95 Latency
175.3ms
Conformance
Pass
Trend
Stable
What's Being Tested
Availability
HTTP health check to the service endpoint
Responded with HTTP 200 in 175ms
Conformance
MCP initialize handshake + tools/list
Valid MCP server info returned, tools/list responded
Performance
MCP ping + zero-arg tool invocation benchmarking
P95 latency: 175ms, task completion: 0%
Skills
create_project

Create a new Hatchable project. This generates a URL slug, creates a dedicated PostgreSQL database, and returns the project ID and URLs. Call this first before writing files or creating tables.

## Project structure

```
public/             static files, served at their file path
api/                backend functions — each file is one endpoint
  hello.js          → /api/hello
  users/list.js     → /api/users/list
  users/[id].js     → /api/users/:id (req.params.id — one segment)
  docs/[...path].js → /api/docs/*path (req.params.path — string[], catches multi-segment)
  _lib/             shared code, not routed
migrations/*.sql    SQL files, run in filename order on every deploy
seed.sql            optional — runs on first deploy / fork, once per project
hatchable.toml      optional overrides (cron, auth, project name)
package.json        dependencies (no build scripts yet — build locally, commit public/)
```

### Routing precedence

Most-specific wins. For a request to `/api/users/42`:

1. `api/users/42.js` (static) — beats
2. `api/users/[id].js` (single-param, `params.id = "42"`) — beats
3. `api/users/[...rest].js` (catch-all, `params.rest = ["42"]`)

Catch-all params arrive as `string[]`, never slash-joined. Use `req.params.path` as an array: `const [first, ...rest] = req.params.path;`

### Static file resolution (public/)

A request to `/foo/bar/baz` tries, in order:

1. `public/foo/bar/baz` (exact file)
2. `public/foo/bar/baz.html`
3. `public/foo/bar/baz/index.html`
4. Ancestor `index.html` fallback — walks up: `public/foo/bar/index.html` → `public/foo/index.html` → `public/index.html`

Step 4 means each folder with an `index.html` acts as its own mini-site. You can ship an `/admin/*` React SPA alongside a static marketing page at `/` — unmatched paths under `/admin/` fall back to `public/admin/index.html`, not the root one.
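The lookup order above can be modeled as a pure function. This is a sketch of the candidate list only, assuming the four-step order described here, not the platform's actual resolver:

```javascript
// Build the ordered list of public/ paths the platform would try for a request.
// Steps 1-3 are the exact / .html / index.html candidates; step 4 walks ancestors.
function staticCandidates(reqPath) {
  const clean = reqPath.replace(/^\/+|\/+$/g, ""); // "/foo/bar/baz" -> "foo/bar/baz"
  const out = [
    `public/${clean}`,            // 1. exact file
    `public/${clean}.html`,       // 2. .html sibling
    `public/${clean}/index.html`, // 3. directory index
  ];
  // 4. ancestor index.html fallback, walking up to public/index.html
  const parts = clean.split("/");
  for (let i = parts.length - 1; i >= 0; i--) {
    const dir = parts.slice(0, i).join("/");
    out.push(dir ? `public/${dir}/index.html` : "public/index.html");
  }
  return out;
}
```

For a path under `/admin/`, the first ancestor hit is the deepest `index.html` on the way up to `public/admin/index.html`, which is why the SPA-under-`/admin/` pattern works without touching the root page.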
## Handler contract

Every file under api/ exports a default async function:

```js
// api/users/list.js
import { db, auth } from "hatchable";

export default async function (req, res) {
  const user = await auth.getUser(req);
  if (!user) return res.status(401).json({ error: "Not logged in" });
  const { rows } = await db.query(
    "SELECT id, name FROM users WHERE org_id = $1",
    [user.id]
  );
  res.json(rows);
}

// Optional: restrict methods
export const methods = ["GET"];

// Optional: register this endpoint as a recurring scheduled task.
// Minimum interval is hourly. See also: scheduler.at() in the SDK
// for imperative / one-shot / per-firing-payload scheduling.
// export const schedule = "0 */6 * * *";
```

### req (Express-shaped)

- method, url, path, headers, cookies, params, query
- body — parsed by Content-Type: JSON → object, urlencoded → object, multipart/form-data → object of non-file fields
- files — present for multipart uploads: [{ field, filename, contentType, buffer }]

### res (Express-shaped)

- res.json(data), res.status(code) (chainable), res.send(text|buffer)
- res.redirect(url), res.cookie(name, value, opts), res.setHeader(name, value)

## SDK — import from "hatchable"

```
db.query(sql, params) → { rows, rowCount }
db.transaction([{sql, params}, ...]) → { results: [{rows, rowCount}] }
auth.getUser(req) → { id, email, name } | null
email.send({ to, subject, html })
storage.put(key, buffer, contentType) → url
storage.get(key) → { buffer, contentType }
storage.del(key)
scheduler.at(when, route, opts?) → { id, next_fire_at }
scheduler.cancel(taskId) → boolean
```

That's the entire SDK. Everything else uses standard Node: `fetch` for external HTTP, `process.env.KEY` for secrets (set with set_env), crypto/etc from `node:*`.

### Scheduling

Two ways to schedule a function — pick based on whether the "when" is known at deploy time or at runtime.
**Declared** (static, lives in source, reconciled on deploy):

```js
// api/nightly-report.js
export const schedule = "0 9 * * *"; // 5-field cron, minimum hourly
export default async function (req, res) { /* ... */ }
```

**Armed** (dynamic, from user code, preserved across deploys):

```js
import { scheduler } from "hatchable";

// recurring — first arg is a 5-field cron string
await scheduler.at("0 * * * *", "/api/ping");

// one-shot at a specific moment, with per-firing payload
await scheduler.at("2026-05-01T07:00:00Z", "/api/book", {
  payload: { missionId: 42 },
});

// idempotent named arm — repeated calls update the same task
await scheduler.at("0 9 * * *", "/api/digest", { name: "daily-digest" });

// cancel by id
await scheduler.cancel(taskId);
```

Each firing invokes `route` with `req.headers['x-hatchable-trigger'] === 'cron'` and `req.body === payload`. Use one-shot + payload instead of writing your own "pending jobs" table with a polling cron — that's the pattern the primitive replaces.

## Database

Postgres. Write schema in migrations/*.sql. Files run in filename order, tracked in __hatchable_migrations so each runs once.

Always use RETURNING to get inserted ids in the same round trip:

```sql
INSERT INTO users (email) VALUES ($1) RETURNING id
```

Never call lastval() or LAST_INSERT_ID() — each db.query is a fresh connection, so session-local state doesn't carry across calls.

## Available Node.js APIs and packages

Functions run in Node.js 20. The full hatchable SDK is always available. In addition, these packages are pre-installed and ready to import: sharp, puppeteer-core (with Chromium at /usr/bin/chromium), csv-parse, csv-stringify, xlsx, bcrypt, jsonwebtoken, uuid, date-fns, lodash, marked, sanitize-html, cheerio, xml2js, archiver, qrcode, stripe, openai.

Standard Node.js APIs are available: fs, child_process, net, http, Buffer, stream, path, os, crypto, etc. External HTTP via global fetch(). Secrets via process.env (set with the set_env tool).
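"Filename order" is plain lexicographic sorting, which is why zero-padded numeric prefixes matter once you pass nine migrations. A sketch, not the platform's migration runner:

```javascript
// Lexicographic order: "010" sorts after "002" only because of the zero padding.
// Unpadded names ("10_indexes.sql") would sort before "2_user_profile.sql".
const migrationFiles = [
  "migrations/010_indexes.sql",
  "migrations/001_init.sql",
  "migrations/002_user_profile.sql",
];
const runOrder = [...migrationFiles].sort();
```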
## Visibility

Three tiers — each one a step up in who the software is for:

- **personal** — free. You and anyone you invite. Login-gated via Hatchable accounts. Build anything including auth — test the full flow with your invitees before going live.
- **public** — $12/mo. On the open web. Custom domains. No branding. No app-level auth (use Hatchable identity only).
- **app** — $39/mo. On the open web + your app has its own users. Email/password signup, OAuth, password reset. If your project has [auth] enabled, this is the only live tier — you can't go Public with auth, you go straight to App.

## Calling the API from public/

At deploy time, Hatchable injects a tiny bootstrap into every HTML file:

```js
window.__HATCHABLE__ = { slug: "my-app", api: "/api" };
```

Use it as the base URL:

```js
const API = window.__HATCHABLE__.api;
fetch(API + "/users/list").then(r => r.json()).then(render);
```

## Auth (optional)

Enable auth in hatchable.toml to get a complete signup/login/session system with one config block. The platform auto-mounts /api/auth/* — do not write files under api/auth/ when auth is enabled.
```toml
[auth]
enabled = true
providers = ["email"] # or ["email", "google", "hatchable"]
```

Auto-mounted endpoints:

- POST /api/auth/sign-up/email — create account with email + password
- POST /api/auth/sign-in/email — log in
- POST /api/auth/sign-out — clear session
- GET /api/auth/get-session — current session + user
- POST /api/auth/forget-password — send password-reset email
- POST /api/auth/reset-password — complete password reset
- GET /api/auth/sign-in/social/:provider — OAuth flow (google, github)
- GET /api/auth/hatchable/sso — one-click Hatchable SSO (when enabled)

Users live in these tables inside your project's own database: users, sessions, accounts, verifications.

You can extend the users table with your own columns:

```sql
-- migrations/002_user_profile.sql
ALTER TABLE users ADD COLUMN phone text;
ALTER TABLE users ADD COLUMN tier text DEFAULT 'free';
```

You CANNOT drop or rename users/sessions/accounts/verifications or create your own tables with those names — the deploy will fail with a clear error.

In your API functions, auth.getUser works the same whether auth is enabled or not:

```js
import { auth, db } from "hatchable";

export default async function (req, res) {
  const user = await auth.getUser(req); // NOTE: await when auth is enabled
  if (!user) return res.status(401).json({ error: "Not logged in" });
  const { rows } = await db.query(
    "SELECT * FROM bookings WHERE user_id = $1",
    [user.id]
  );
  res.json(rows);
}
```

OAuth providers need credentials set via `hatchable secret set`: GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, GITHUB_CLIENT_ID, GITHUB_CLIENT_SECRET.

## Deploy

After writing files, call the `deploy` tool. It runs migrations, seeds (first deploy only), copies public/ to the CDN, registers api/ routes, and — if [auth] enabled — provisions the auth tables in your database.

get_project

Get project details including slug, visibility, status, deployed functions, and the database schema (tables, columns, types).

list_projects

List all projects you own or collaborate on, with their visibility, tier, role, and current version.

deploy

Deploy the project. Runs migrations/*.sql (tracked so each runs once), runs seed.sql on first deploy, copies public/ files to the CDN, and registers api/ files as live endpoints. Increments the project version. Call this after writing all your files.

To verify your functions work after deploying, use `run_function` — it calls the function directly through your authenticated session and works for all project visibilities. The `url` field is the public URL for end users — personal projects require visitors to sign up before they can view the site.

write_file

Write or overwrite a project file. Paths are relative to the project root. Valid locations:

- public/** — static files (HTML, CSS, JS, images, etc.)
- api/**.js — backend functions (each file is one endpoint)
- api/_lib/** — shared helpers imported by api/ files, not routed
- migrations/*.sql — database migrations, run in filename order
- seed.sql — optional seed data, runs once on fresh installs
- hatchable.toml — optional config overrides
- package.json — dependencies (no build script yet)

Files are stored but not live until you call `deploy`.

write_files

Write multiple project files in a single call. Same rules as write_file but batched — faster for scaffolding a new project or updating several files at once. Each entry in the files array has a path and content. All files are written atomically — if any path is invalid, none are written.
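A sketch of the batched payload. The `path`/`content` field names come from the description above; the file contents themselves are illustrative:

```javascript
// Scaffold a minimal project in one write_files call.
const files = [
  { path: "public/index.html", content: "<h1>Hello</h1>" },
  {
    path: "api/hello.js",
    content: "export default async (req, res) => res.json({ ok: true });",
  },
  {
    path: "migrations/001_init.sql",
    content: "CREATE TABLE notes (id serial PRIMARY KEY, body text);",
  },
];
// write_files({ project_id, files })  // then call deploy
```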

read_file

Read the content of a project file. Pass offset/limit to read a range of lines — useful for large files where the whole file would blow the context window. When either is set, the response includes cat -n style line-numbered content so subsequent patch_file calls can reference exact line numbers.

grep

Regex content search across a project's files. Postgres-backed, scoped to one project, with glob filtering. Three output modes:

- files_with_matches (default) — list paths containing a match
- content — matching lines with optional context and line numbers
- count — per-file match counts + total

Default head_limit is 250 to prevent context blowups on broad patterns. Use glob to narrow by path (e.g. 'api/**/*.js', 'public/**/*.html'). Regex uses Postgres syntax (~ / ~*). Invalid or catastrophic patterns error out via a 2s statement timeout — simplify the pattern if that happens.

list_files

List all files in a project with their paths, sizes, and hashes.

patch_file

Apply a targeted edit to an existing project file without rewriting the entire file. Finds the first occurrence of `old_string` and replaces it with `new_string`. Use this instead of write_file when modifying large files (e.g. HTML) — you only send the changed portion, not the whole file. The old_string must match exactly (including whitespace). If it's not found, the tool returns an error. To insert at a specific position, use a nearby string as old_string and include it in new_string with your addition.
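The replace semantics can be modeled locally. This is a sketch of first-occurrence replacement with a hard error on a miss, mirroring the behavior described above (not the server's code):

```javascript
// Replace the FIRST occurrence of oldString; a mismatched oldString fails
// loudly instead of silently leaving the file untouched.
function applyPatch(content, oldString, newString) {
  const i = content.indexOf(oldString);
  if (i === -1) throw new Error("old_string not found");
  return content.slice(0, i) + newString + content.slice(i + oldString.length);
}
```

To insert rather than replace, keep the anchor text inside new_string: `applyPatch(html, "</body>", "<script src=\"app.js\"></script>\n</body>")`.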

delete_file

Delete a project file. Takes effect after the next deploy.

execute_sql

Run SQL against the project's dedicated PostgreSQL database. Supports: CREATE TABLE, ALTER TABLE, DROP TABLE, INSERT, SELECT, UPDATE, DELETE. Use parameterized queries for safety: pass values in the `params` array with $1, $2, etc. placeholders. Return format:

- SELECT: { rows: [...], count: N } — DECIMAL columns return as strings (e.g. "45.00")
- INSERT/UPDATE/DELETE: { changes: N }
- DDL: { changes: 0 }

get_schema

Return the database schema for the project's PostgreSQL database: tables, columns (with types), and indexes.

set_env

Set environment variables for a project. Available in functions via process.env.KEY. Keys containing SECRET, PASSWORD, TOKEN, API_KEY, or PRIVATE are automatically marked as secrets.
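A hypothetical reconstruction of that heuristic. The marker list is from the description; the matching logic (case-insensitive substring) is an assumption:

```javascript
// Keys containing any of these markers are treated as secrets.
const SECRET_MARKERS = ["SECRET", "PASSWORD", "TOKEN", "API_KEY", "PRIVATE"];

const isSecretKey = (key) =>
  SECRET_MARKERS.some((m) => key.toUpperCase().includes(m));
```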

set_visibility

Change a project's visibility.

- personal: you + invitees, login-gated, free
- public: on the open web, requires Public plan ($12/mo). No app-level auth.
- app: on the open web + user signups, requires App plan ($39/mo). Required if [auth] is enabled.

run_function

Execute a deployed function and return the real response. Use this to test your API endpoints. Returns: { status, headers, body, logs, error, duration_ms }

Example: run_function({ project_id: 1, path: "/api/users", method: "GET" })
Example: run_function({ project_id: 1, path: "/api/users", method: "POST", body: { name: "Alice" } })

IMPORTANT: Always run_function on your API endpoints after writing them. Inspect the response body field names and types. Then write your frontend to match those exact names.

view_logs

View function execution logs with rich filtering. Each entry includes status_code, duration_ms, log_output (captured console.log), error (if any), and a derived `level` field (error/warning/info).

Filter by any combination of function_name, route, method, status_code (exact or 4xx/5xx wildcards), level, time range (since/until — ISO or relative like '1h'/'30m'/'7d'), full-text query across log_output and error, or specific request_id. Use this to debug production issues: e.g. `level='error'` + `since='1h'` finds everything that blew up in the last hour.
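The relative shorthand presumably maps a count plus a unit to a cutoff before "now". A sketch under that assumption (not the server's parser):

```javascript
// '30m' / '1h' / '7d' -> milliseconds before now (null if not shorthand).
function parseRelative(s) {
  const m = /^(\d+)([mhd])$/.exec(s);
  if (!m) return null;
  const unitMs = { m: 60_000, h: 3_600_000, d: 86_400_000 }[m[2]];
  return Number(m[1]) * unitMs;
}

// since = new Date(Date.now() - parseRelative("1h"))
```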

list_deployments

List deployments for a project in reverse-chronological order. Each entry includes version, status, deployed_at, description, and summary counts (files, functions). Use this to understand recent deploy history, identify a known-good version for rollback, or debug a regression by comparing two versions.

list_functions

List every deployed API function for a project: route, method, runtime tier, type ('scheduled' if the function has at least one active scheduled task, else 'api'), and 24-hour invocation and error counts. This is the 'what routes did I ship' introspection tool. Call it after a fork, after picking up an unfamiliar project, or to verify a deploy registered the endpoints you expected. Much cheaper than reading every api/ file with read_file. For scheduling details (cron, fire_at, payload, run history) use list_cron_jobs.

get_deployment

Detail view of one deployment by version number — returns the full file manifest (paths, hashes, sizes) and function list captured when that version shipped. Use it with list_deployments to audit or compare what changed between versions.

list_cron_jobs

List every scheduled task for a project. A task points at a function and carries a cron expression (recurring) or a one-shot fire_at, plus an optional payload delivered as the request body. Tasks are either 'declared' (written into source via export const schedule or hatchable.toml, reconciled on deploy) or 'armed' (inserted by the SDK scheduler.at() call, preserved across deploys). Response includes next_fire_at, last_fired_at, attempts, last_error, and 7-day run/error counts from FunctionLog. Diagnostic: if next_fire_at keeps moving forward but last_fired_at never advances, the scheduler isn't running.

list_env

List environment variable keys for a project. Only key names and an is_secret flag are returned — values are never exposed through this tool. Use process.env.KEY inside a deployed function to read the actual value.

delete_env

Delete one or more environment variables by key. Pass `key` for a single delete or `keys` for a batch. Missing keys are reported in `skipped`, not errored, so retries are idempotent. Takes effect on the next deploy.
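The skipped-not-errored contract can be modeled locally. A sketch of the semantics only, not the server code:

```javascript
// Delete the keys that exist; report the rest in `skipped` instead of failing,
// which is what makes retrying the same batch idempotent.
function deleteEnvKeys(env, keys) {
  const deleted = [];
  const skipped = [];
  for (const k of keys) {
    if (k in env) {
      delete env[k];
      deleted.push(k);
    } else {
      skipped.push(k);
    }
  }
  return { deleted, skipped };
}
```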

update_project

Update project metadata: name, tagline, description, category. Only the fields you pass are touched. For visibility changes use set_visibility; slug and tier are immutable.

import_file_from_url

Fetch a remote URL and save the response body as a project file — server-side, so the bytes never pass through your context window. Useful for seed data, vendor libs, and asset migration. Capped at 10 MB and 10s timeout. Private/loopback addresses are rejected. Path must live under public/, api/, or migrations/, or be one of seed.sql / hatchable.toml / package.json.

search_documentation

Search Hatchable's own documentation for platform behavior — routing, the SDK surface, deploy semantics, auth config, runtime limits. Call this instead of guessing when you're unsure how a Hatchable feature works. Ranks results by term frequency across headed sections. Returns source file, section heading, and a snippet around the hit.

dry_run_deploy

Run every deploy-time validator against the project's current files without actually deploying. Returns `errors` (hard gates) and `warnings` (soft lints), plus a `would_deploy` summary of what would ship.

Errors catch: package.json build scripts, reserved table names in migrations, auth route collisions, usage cap breaches. Warnings catch known runtime footguns that type-check but silently misbehave — most notably `auth.getUser()` / `auth.getSession()` / `db.query()` calls without `await` (returning a Promise is truthy, so `if (!user)` guards pass and downstream `user.id` is undefined). Safer than calling deploy blindly and finding out mid-flight.
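The missing-`await` footgun is ordinary Promise truthiness, reproducible in plain Node. Here `fakeGetUser` is a stand-in for the SDK call, not the real `auth.getUser`:

```javascript
// A pending Promise is truthy, so a missing `await` lets the guard pass
// and every downstream property read comes back undefined.
async function fakeGetUser() { return null; } // stand-in: "not logged in"

async function handler() {
  const user = fakeGetUser();  // BUG: missing await, so user is a Promise
  if (!user) return "401";     // never taken: Promises are truthy
  return typeof user.id;       // "undefined", i.e. silent misbehavior downstream
}
```

With `await` restored, the guard fires and the handler returns the 401 path as intended.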

upload_file

Multipart file upload for content that exceeds a single model response's output token cap (big SPA bundles, large seed data, inline vendor libs). Flow:

1. First call with chunk_index=0 and NO upload_id — the response returns an upload_id.
2. Subsequent calls pass that upload_id with chunk_index=1, 2, 3….
3. The last call sets final=true to atomically concatenate and commit as one ProjectFile.

Chunks are staged in Redis with a 10-minute TTL. chunk_index overwrites (safe to retry). Max chunk size: 64 KB. Max assembled file: 20 MB.
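Client-side, staying under the chunk cap might look like this sketch (the 64 KB size comes from the description; the helper itself is illustrative):

```javascript
const CHUNK_SIZE = 64 * 1024; // upload_file's max chunk size

// Split a Buffer into <= 64 KB pieces, ready to send as chunk_index 0, 1, 2...
function toChunks(buf) {
  const chunks = [];
  for (let i = 0; i < buf.length; i += CHUNK_SIZE) {
    chunks.push(buf.subarray(i, i + CHUNK_SIZE));
  }
  return chunks;
}
```

Send chunk 0 without an upload_id, thread the returned upload_id through the remaining calls, and set final=true on the last one.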

list_pending_uploads

Show multipart uploads currently staged for this project that haven't yet been committed. Use this to recover from a disconnect — find the upload_id and resume from the next chunk_index. Uploads expire 10 minutes after the last chunk was added.

run_code

Execute arbitrary JS in the project's isolate runtime with the same bindings a deployed function gets: `db`, `auth`, `email`, `storage` from "hatchable", plus process.env and global fetch. The return value of the snippet becomes the `result` field. Use this as a REPL: probe the database, verify a computation, test an API shape before committing it to a file. Nothing is persisted — the snippet runs once and disappears. Caps: 5s default timeout (max 30s), 256 KB max source length.

Example:

```js
run_code({ project_id, code: `
  const { db } = await import("hatchable");
  const { rows } = await db.query("SELECT count(*) FROM users");
  return rows[0];
`})
```

fork_project

Fork a public project into your account. Copies all code and database schema (no data). The fork starts as a personal project you can modify freely. This is the recommended way to start from an existing app: fork it, then modify the code.

search_projects

Search the public Hatchable project directory — other people's projects that you can view or fork. Use this to find existing apps to fork-and-modify as a starting point. Note: this searches the public *marketplace*. To search inside your own project's files, use the `grep` tool instead.

setup_account

Associate an email and handle with your account. Step 1: Call with just email — sends a 6-digit verification code. Step 2: Call with email + code + handle — verifies and completes setup. This lets you log in to the console and sets your permanent @handle.

Tools
33 tools verified via live probe
verified 1d ago
Server: hatchable
Version: 2.0.0
Protocol: 2025-03-26
Recent Probe Results
| Timestamp | Status | Latency | Conformance |
| --- | --- | --- | --- |
| Apr 28, 2026 | success | 175.3ms | Pass |
| Apr 28, 2026 | success | 172.4ms | Pass |
| Apr 27, 2026 | success | 92.4ms | Pass |
Source Registries
mcp-registry
First Seen
Apr 23, 2026
Last Seen
Apr 27, 2026
Last Probed
Apr 28, 2026