# MCP Server
The Clawnify CLI can run as a Model Context Protocol server, exposing its deploy tools directly to AI coding agents like Claude Code, Cursor, and anything else that speaks MCP. Instead of asking the agent to shell out to `clawnify deploy`, the agent calls structured tools and gets structured responses.
## Why you want this
- **Structured tool calls**: `deploy`, `list_apps`, `get_app`, and `delete_app` all take typed parameters and return typed JSON. The agent knows exactly what it's doing.
- **Shared auth**: the MCP server reads `~/.clawnify/auth.json`, the same file the CLI uses. One login, both surfaces.
- **No shell parsing**: the agent doesn't have to read free-text CLI output; every deploy returns a clean object with app ID, URL, and status.
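That last point is the practical win: a structured result is just JSON the agent can index into, not terminal text to scrape. A minimal sketch, using the field names from the example session below (the `app_id` value is a made-up placeholder):

```python
import json

# A deploy tool result as the agent receives it. "app_123" is a
# hypothetical placeholder; real IDs come from the server.
raw = '{"app_id": "app_123", "slug": "my-app", "url": "https://my-app.apps.clawnify.com", "status": "building"}'

result = json.loads(raw)

# No regexes over CLI output: the fields are just keys.
print(result["url"], result["status"])
```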
## Wire it up in Claude Code
```shell
claude mcp add clawnify -- npx clawnify --mcp
```
That’s it. Claude Code now has four new tools available in every session.
Run `clawnify login` first if you haven't already; the MCP server errors out if there's no auth file to read.
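Other clients that read an MCP config file can register the same command. As a sketch, assuming Cursor's `mcpServers` config shape (check your client's docs for the exact file and format), the entry is just the command split into `command` and `args`:

```json
{
  "mcpServers": {
    "clawnify": {
      "command": "npx",
      "args": ["clawnify", "--mcp"]
    }
  }
}
```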
## Available tools
| Tool | Params | Description |
|---|---|---|
| `deploy` | `directory?`, `name?`, `repo?` | Deploy an app. Supports local-directory, name-only (uses cwd), and `owner/repo` GitHub modes. |
| `list_apps` | (none) | List every app in the active org. |
| `get_app` | `app_id` | Get full app status and URL. |
| `delete_app` | `app_id` | Delete an app and clean up all its resources. |
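Under the hood, each of these is an MCP `tools/call` request, which is a plain JSON-RPC 2.0 message with the tool name and its typed arguments in `params`. A minimal sketch of what a `deploy` call looks like on the wire (the request id is arbitrary):

```python
import json

def tool_call(request_id, name, arguments):
    """Build an MCP tools/call request. MCP messages are JSON-RPC 2.0
    objects; the tool name and its arguments go in params."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The agent's deploy call with a local directory, as the server sees it:
request = tool_call(1, "deploy", {"directory": "."})
print(json.dumps(request, indent=2))
```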
## Example agent session
```text
user: deploy this project
assistant (tool call): deploy({ directory: "." })
tool result: { app_id: "...", slug: "my-app", url: "https://my-app.apps.clawnify.com", status: "building" }
assistant (tool call): get_app({ app_id: "..." })
tool result: { status: "ready", url: "https://my-app.apps.clawnify.com" }
assistant: Deployed to https://my-app.apps.clawnify.com. Anything else?
```
## Running it manually
For debugging or if you’re integrating with a non-Claude MCP client:
```shell
npx clawnify --mcp
```
The server communicates over stdio, the standard MCP transport, so any MCP client can talk to it.