Quick Start
Start your 3-day free trial
Get 3 agent slots free for 3 days. Build your library of reusable components, deploy agents, and see real results before you pay.
1. Build Your Library
Start by creating reusable building blocks in your Library:
- Tools — Capabilities like shell commands, HTTP APIs, MCP servers
- Actions — Custom scripts in Python, Bash, or Node.js
- Message Schemas — Structured communication formats
- Agent Templates — Reusable agent configurations
2. Create a Project
Projects are workspaces that contain your agents. Each project has its own LLM configuration and can connect to external repositories.
3. Add Agents
Create agents from templates or from scratch. Assign tools, actions, and schemas from your library. Each agent gets its own system prompt and runtime configuration.
4. Deploy and Test
Deploy your agent to Cloud Run. Send tasks through the chat interface or create jobs via the API. Monitor execution in real time.
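For example, a job can be created from a script. This is an illustrative sketch only: the endpoint path, payload shape, and response fields below are assumptions, not the documented API.

# Illustrative sketch: endpoint path, payload, and response fields are assumptions.
import os
import httpx

API_BASE_URL = os.environ["API_BASE_URL"]        # e.g. https://your-instance.example.com
API_SERVICE_KEY = os.environ["API_SERVICE_KEY"]  # key used to authenticate API calls

def create_job(agent_id: str, task: str) -> dict:
    """Create a job for an agent and return the created job record."""
    response = httpx.post(
        f"{API_BASE_URL}/api/agents/{agent_id}/jobs",   # assumed endpoint
        headers={"Authorization": f"Bearer {API_SERVICE_KEY}"},
        json={"task": task},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

job = create_job("agent-uuid-1", "Summarize the open pull requests")
print(job)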
Core Concepts
Library
Your personal collection of reusable building blocks: tools, actions, message schemas, and agent templates.
Projects
Workspaces containing agents, with their own LLM config, repository connections, and project-level overrides.
Agents
AI workers with specific roles, tools, and knowledge. Built from templates or configured from scratch.
Marketplace
Public library of tools, actions, schemas, and templates shared by the community.
Configuration Layering
Agent configuration is assembled from multiple sources. The Agent Packager combines project settings, template configuration, and agent-specific overrides into the final package.
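As a rough illustration, the layering behaves like a merge where later layers win. The sketch below uses a plain shallow merge; the Agent Packager's actual merge rules (for example, how nested keys are combined) are not documented here.

# Rough illustration of configuration layering: later layers override earlier ones.
# This is a plain shallow merge, not the Agent Packager's actual algorithm.
def merge_layers(*layers: dict) -> dict:
    """Merge dicts left to right; keys in later layers win."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

template_config = {"llm": {"provider": "openai", "model": "gpt-4o"},
                   "runtime": {"preset": "nimble", "memory_mb": 512}}
project_settings = {"llm": {"provider": "openai", "model": "gpt-4o-mini"}}
agent_overrides = {"runtime": {"preset": "nimble", "memory_mb": 1024}}

final_package = merge_layers(template_config, project_settings, agent_overrides)
print(final_package["llm"]["model"])          # gpt-4o-mini (from project settings)
print(final_package["runtime"]["memory_mb"])  # 1024 (from agent overrides)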
Library
The Library is your personal collection of reusable building blocks. Everything you create here can be used across all your projects and agents.
Library Structure
Library items can be private (only you can use them) or public (published to the Marketplace for others to install).
Agent Templates
Agent Templates are reusable agent configurations. They define the agent's role, system prompt, and reference the tools, actions, and schemas from your library.
{
"name": "Graphic Designer",
"role": "Visual Designer",
"description": "Creates visual assets including logos and graphics",
"category": "design",
"icon": "🖼️",
"system_prompt": "You are a Visual Designer agent...",
"guidelines": [
"Follow brand color palette",
"Use vector formats for logos and icons",
"Maintain visual consistency across assets"
],
"qa_pairs": [
{
"question": "What are the social media image sizes?",
"answer": "Twitter: 1200x675px. Instagram: 1080x1080px.",
"category": null
}
],
"tool_ids": [],
"action_ids": [],
"schema_ids": [
{ "id": "schema-uuid", "source": "library" }
],
"publish_to_marketplace": false
}
Fields
- name: Template name (e.g., "Senior Backend Engineer")
- role: The role this agent plays
- icon: Emoji icon for the template
- system_prompt: Core instructions and context for the agent
- guidelines: Operating rules the agent should follow
- qa_pairs: Knowledge base Q&A to teach specific responses
- tool_ids: References to tools from your library
- action_ids: References to actions from your library
- schema_ids: References to message schemas
Note: Dependencies from selected tools and actions are automatically inherited. You don't need to specify them separately.
Tools
Tools give agents capabilities to interact with external systems. Define installation commands and required environment variables.
Tool Configuration
{
"name": "supabase-cli",
"description": "Supabase CLI for managing databases and migrations",
"installation_command": "npm install -g supabase",
"setup_command": "supabase login --token ${SUPABASE_ACCESS_TOKEN}",
"required_env_vars": ["SUPABASE_ACCESS_TOKEN"],
"publish_to_marketplace": true
}
Fields
- name: Unique identifier (e.g., "supabase-cli")
- description: What the tool does and how to use it
- installation_command: Command to install (npm, pip, apt-get, curl)
- setup_command: Post-install config. Use ${VAR_NAME} for env vars
- required_env_vars: Array of environment variable names needed
- publish_to_marketplace: Make available to other users
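The ${VAR_NAME} substitution in setup_command can be pictured as below. This is only a sketch of the idea; the platform's actual expansion logic is not shown in this doc.

# Sketch of expanding ${VAR_NAME} placeholders in setup_command from the
# environment before the command runs. Not the platform's implementation.
import os
from string import Template

tool = {
    "setup_command": "supabase login --token ${SUPABASE_ACCESS_TOKEN}",
    "required_env_vars": ["SUPABASE_ACCESS_TOKEN"],
}

missing = [name for name in tool["required_env_vars"] if name not in os.environ]
if missing:
    raise RuntimeError(f"Missing required environment variables: {missing}")

# string.Template understands the ${VAR} syntax used in setup_command.
command = Template(tool["setup_command"]).substitute(os.environ)
print(command)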
Actions
Actions are custom scripts that agents can execute. Write your own code in Python, Shell, or JavaScript.
Example (Python action):
{
"name": "sync_agent_config",
"description": "Sync configuration across agents in a project",
"language": "python",
"code": "<your Python code here>",
"tool_dependencies": [],
"action_dependencies": ["httpx"],
"required_env_vars": ["API_SERVICE_KEY", "API_BASE_URL"],
"publish_to_marketplace": true
}
Fields
- name: snake_case identifier (e.g., "sync_agent_config")
- description: What the action does
- language: python, sh (shell), or js (JavaScript)
- code: The actual script code
- tool_dependencies: Tool IDs this action requires
- action_dependencies: Package dependencies (pip packages, npm packages)
- required_env_vars: Environment variable names needed
- publish_to_marketplace: Make available to other users
Dependencies are automatically installed when the agent container is built.
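A rough sketch of what that build-time installation could look like for the action above; the actual container build process is not documented here, and the installer mapping is an assumption.

# Sketch of installing an action's dependencies at container build time.
# The language-to-installer mapping is an assumption for illustration.
import subprocess

action = {
    "language": "python",
    "action_dependencies": ["httpx"],
}

installers = {"python": ["pip", "install"], "js": ["npm", "install", "-g"]}
installer = installers[action["language"]]

for package in action["action_dependencies"]:
    # Runs once while the agent image is being built, not at job time.
    subprocess.run([*installer, package], check=True)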
Message Schemas
Message Schemas define the structure of messages agents send. They provide type safety and validation for inter-agent communication.
{
"display_name": "Deployment Request",
"description": "Request to deploy code changes to an environment",
"schema_fields": [
{
"key": "branch",
"description": "Git branch name",
"type": "string",
"required": true
},
{
"key": "environment",
"description": "Target environment (staging, production)",
"type": "string",
"required": true
},
{
"key": "services",
"description": "List of services to deploy",
"type": "array",
"required": true
},
{
"key": "commit_sha",
"description": "Git commit SHA to deploy",
"type": "string",
"required": true
},
{
"key": "approval_required",
"description": "Whether human approval is required",
"type": "boolean",
"required": false
}
],
"publish_to_marketplace": true
}
Fields
- schema_name: Unique identifier (lowercase, snake_case)
- display_name: Human-friendly name
- schema_fields: Array of field definitions with key, type, description, required
Field types: string, number, boolean, object, array
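A minimal sketch of validating a message payload against schema_fields; the platform's actual validation rules may differ.

# Minimal validation sketch: checks required keys and basic types.
TYPE_CHECKS = {
    "string": str,
    "number": (int, float),
    "boolean": bool,
    "object": dict,
    "array": list,
}

def validate_message(payload: dict, schema_fields: list[dict]) -> list[str]:
    """Return a list of validation errors (empty if the payload is valid)."""
    errors = []
    for field in schema_fields:
        key, expected = field["key"], TYPE_CHECKS[field["type"]]
        if key not in payload:
            if field["required"]:
                errors.append(f"missing required field: {key}")
            continue
        if not isinstance(payload[key], expected):
            errors.append(f"field {key} should be of type {field['type']}")
    return errors

deployment_request = {"branch": "main", "environment": "staging",
                      "services": ["api"], "commit_sha": "abc123"}
print(validate_message(deployment_request, [
    {"key": "branch", "type": "string", "required": True},
    {"key": "services", "type": "array", "required": True},
]))  # [] means the payload is valid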
Marketplace
The Marketplace is a public library of tools, actions, schemas, and templates shared by the community. Install items to your library with one click.
Publishing
Set is_public: true on any library item to publish it to the Marketplace.
Versioning
Installed items track marketplace_version and user_version separately.
When you edit an installed marketplace item, your changes are tracked in user_version. You can restore to the original marketplace version at any time.
Projects & Agents Overview
Projects are workspaces that contain your agents. Agents are AI workers built from templates in your library. Together, they form your AI team.
Projects
Workspaces containing agents, LLM config, repository connections, and project-level overrides.
Agents
AI workers with specific roles, tools, and knowledge. Built from templates or configured from scratch.
Projects
Projects are workspaces that contain agents. They provide:
- LLM configuration (provider, model, API keys)
- Repository connections (GitHub, GitLab)
- Project-level tool/action/schema overrides
- Shared Q&A knowledge base
- Message routing between agents
Project-Level Overrides
Projects can override tenant-level (library) tools and actions. This allows you to customize behavior for specific use cases while sharing the base configuration.
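Illustration only: conceptually, a project-level override replaces fields of the shared library item, as in the sketch below. The override fields shown are assumptions, not the platform's schema.

# Conceptual sketch: a project override wins over the library (tenant-level) tool.
library_tool = {
    "name": "supabase-cli",
    "setup_command": "supabase login --token ${SUPABASE_ACCESS_TOKEN}",
    "required_env_vars": ["SUPABASE_ACCESS_TOKEN"],
}

project_override = {
    # Point the tool at a project-specific token variable (hypothetical name).
    "setup_command": "supabase login --token ${STAGING_SUPABASE_TOKEN}",
    "required_env_vars": ["STAGING_SUPABASE_TOKEN"],
}

effective_tool = {**library_tool, **project_override}
print(effective_tool["setup_command"])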
Agents
Agents are AI workers with specific roles. Create them from templates or configure from scratch.
Agent Configuration
{
"name": "Mona",
"description": "Coordinates work across agents",
"template_id": "template-uuid",
"avatar": { "face": {...}, "eyes": {...}, ... },
"status": "active",
"knowledge_base": {
"system_prompt": "You are a Technical PM agent...",
"guidelines": ["Break large features into tasks", ...],
"qa_pairs": [{ "question": "...", "answer": "..." }]
},
"tools": ["tool-uuid-1", "tool-uuid-2"],
"actions": ["action-uuid-1"],
"message_schemas": ["schema-uuid-1", "schema-uuid-2"],
"llm": {
"provider": "openai",
"model": "gpt-4o"
},
"runtime": {
"preset": "nimble",
"cpu_limit": 0.5,
"memory_mb": 512
}
}
Agent configuration is assembled from: template → project settings → agent-specific overrides. The Agent Packager bundles everything into a deployable package.
Message Bus
The Message Bus is the central communication backbone. All messages between agents and users flow through it, and it manages the agent jobs queue.
Messages
Structured messages sent between agents and users using defined message schemas. Each message is validated against its schema.
Agent Jobs Queue
When a message targets an agent, a job is created. Jobs are queued if the agent is busy and processed in order.
Job States
Jobs track execution from creation through completion, including timing, output, and any errors encountered.
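As an illustration, a job record might carry fields like the ones below. The column names are assumptions; the pending/running/completed/failed states follow the Agent Runner section.

# Illustrative job record; field names are assumptions, states follow the docs.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class JobStatus(str, Enum):
    PENDING = "pending"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"

@dataclass
class AgentJob:
    id: str
    agent_id: str
    payload: dict
    status: JobStatus = JobStatus.PENDING
    created_at: datetime = field(default_factory=datetime.utcnow)
    started_at: datetime | None = None
    completed_at: datetime | None = None
    output: str | None = None
    error: str | None = None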
Message Router
The Message Router maps message schemas to agents and users. When a message using a schema is sent, it automatically routes to all configured recipients.
{
"message_schema_id": "schema-uuid",
"agent_ids": [
"agent-uuid-1",
"agent-uuid-2"
],
"recipient_user_ids": [
"user-uuid-1"
]
}
How It Works
- Define a Message Schema in your library (e.g., "code_review_request")
- Create a Message Route that maps that schema to target agents/users
- When any agent or user sends a message using that schema, the router kicks in
- Jobs are created for each target agent, notifications for each target user
Tip: You can route the same schema to multiple agents. All configured agents will receive the message and process it independently.
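The fan-out described above, sketched in code. Function names here are placeholders, not the platform's API.

# Sketch of routing: one job per target agent, one notification per target user.
routes = {
    "schema-uuid": {
        "agent_ids": ["agent-uuid-1", "agent-uuid-2"],
        "recipient_user_ids": ["user-uuid-1"],
    }
}

def create_job(agent_id: str, payload: dict) -> None:
    print(f"job created for {agent_id}: {payload}")

def notify_user(user_id: str, payload: dict) -> None:
    print(f"notification for {user_id}: {payload}")

def route_message(message_schema_id: str, payload: dict) -> None:
    route = routes.get(message_schema_id)
    if route is None:
        return  # no route configured for this schema
    for agent_id in route["agent_ids"]:
        create_job(agent_id, payload)
    for user_id in route["recipient_user_ids"]:
        notify_user(user_id, payload)

route_message("schema-uuid", {"branch": "main", "environment": "staging"})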
Architecture
Agent Packager
Bundles agent configuration (tools, actions, schemas, knowledge) into a deployable package by combining project, template, and agent-level settings.
Agent Brain
The AI execution engine. Processes tasks via LLM, executes tools/actions, sends messages, tracks token usage.
Agent Runner
Polls for pending jobs, checks agent availability, dispatches to Cloud Run, handles queuing and completion.
Message Router
Routes messages between agents based on configured routes. Creates jobs when messages match schema mappings.
Agent Runner
Each agent container runs its own job processor that polls for and executes work. The runner lives inside the container alongside the LLM.
How It Works
- Container starts and begins polling the agent_jobs table
- Filters for jobs assigned to this specific agent
- Picks up pending jobs and marks them as running
- Processes job using the LLM (Agent Brain) directly in the container
- Updates job status to completed/failed when done
- Continues polling for next job
Note: Each container manages its own job queue. The container polls the database directly—there's no external service pushing work to it.
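The polling loop above, sketched with in-memory stand-ins for the agent_jobs table and the Agent Brain; the real database access layer is not documented here.

# Sketch of the in-container polling loop; storage and LLM calls are stand-ins.
import time

pending_jobs = [{"id": "job-1", "agent_id": "agent-uuid-1", "task": "summarize repo"}]

def fetch_pending_job(agent_id):
    """Stand-in for querying agent_jobs for this agent's next pending job."""
    for job in pending_jobs:
        if job["agent_id"] == agent_id:
            pending_jobs.remove(job)
            return job
    return None

def run_with_brain(job):
    """Stand-in for the Agent Brain processing the task via the LLM."""
    return f"done: {job['task']}"

def update_job(job_id, **fields):
    print(f"update {job_id}: {fields}")

def poll_loop(agent_id, max_idle_polls=1):
    idle = 0
    while idle <= max_idle_polls:          # a real runner would loop forever
        job = fetch_pending_job(agent_id)
        if job is None:
            idle += 1
            time.sleep(1)                  # nothing pending; poll again shortly
            continue
        update_job(job["id"], status="running")
        try:
            output = run_with_brain(job)
            update_job(job["id"], status="completed", output=output)
        except Exception as exc:
            update_job(job["id"], status="failed", error=str(exc))

poll_loop("agent-uuid-1")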
Agent Chat
Users can communicate directly with agents through the chat interface. This provides a real-time, conversational way to interact with your AI team.
Direct Messages
Send messages directly to any agent. The message creates a job that the agent processes, and you see the response in real time.
Conversations
Chat history is preserved in conversations. Continue where you left off or start fresh conversations for different tasks.
Tip: Use message schemas in chat to send structured data to agents. The router will also deliver messages to other configured recipients.
Cloud Run Deployment
Each deployed agent runs as a Cloud Run service with its packaged configuration.
Container Contents
- Agent package JSON with full configuration
- Agent Brain execution engine
- Tool executor runtime
- LLM client for configured provider
- Installed dependencies from tools/actions