Your autonomous AI software engineer
Forge takes a GitHub issue, spins up an isolated Docker sandbox, autonomously writes and tests code, and produces a verified git diff ready to merge.
What Forge does
A complete autonomous engineering pipeline, from issue to branch.
Docker Sandbox
Every run is fully isolated — no leftover state, no host pollution.
Any OpenAI-compatible Model
Works with OpenAI, Gemini, Anthropic, Ollama, or any compatible endpoint.
Autonomous Agent Loop
Thinks, acts, observes, repeats until the task is done or the step limit is reached.
Full Trajectory Recording
Every step, command, and model response saved to a .traj file.
ElizaOS Integration
Drop-in action plugin for ElizaOS AI agents.
Auto-fix on Label
Label any issue 'forge' — Forge picks it up, fixes it, and pushes branch forge/issue-{N} automatically. No commands to run.
How it works
Eight autonomous steps from issue to pull request, no human in the loop.
1. Fetch the problem statement (GitHub issue, text, or file)
2. Start an isolated Docker sandbox
3. Clone the repository
4. Enter the agent loop — think, act, observe
5. Execute bash commands autonomously
6. Run submit to capture the git diff
7. Save the full trajectory to a .traj file
8. Push branch forge/issue-{N} — review and merge when ready
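Reviewing the result is an ordinary git workflow. A minimal sketch, assuming the issue number is 42 and `origin` points at your repository:

```shell
# Fetch the branch Forge pushed for issue #42 (number is illustrative)
git fetch origin forge/issue-42

# Inspect the verified diff relative to your main branch
git diff main...origin/forge/issue-42

# Merge when you are satisfied with the change
git merge origin/forge/issue-42
```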
Get started
You pick the issue. Forge does the work.
No Rust · No cloning · No compiling · Just Docker + .env
Install Docker
That's the only prerequisite. No Rust, no compiling, no cloning.
```shell
docker --version
```

Create a .env file
Pick your model provider, paste your API key, and optionally add a GitHub token for automatic PR creation.
```shell
# .env — create this file anywhere and run from that directory

# Google Gemini (recommended)
FORGE_MODEL=models/gemini-2.0-flash-001
FORGE_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai
FORGE_API_KEY=your-gemini-api-key

# OpenAI
# FORGE_MODEL=gpt-4o
# FORGE_BASE_URL=https://api.openai.com/v1
# FORGE_API_KEY=sk-...

# GitHub token — enables automatic pull request creation after each fix
# Create one at: github.com/settings/tokens (needs repo scope)
GITHUB_TOKEN=ghp_...

# Find your Docker group GID: getent group docker | cut -d: -f3
DOCKER_GID=132
```

Run against a GitHub issue
Pass the repo and issue number. Forge pulls the image, clones the repo inside a sandbox, works autonomously, and opens a PR when done.
```shell
docker compose run --rm \
  -e FORGE_REPO=owner/repo \
  -e FORGE_ISSUE=42 \
  akachiokey/forge:latest
```

Enable always-on watch mode
Start once, fix forever. Add your repo and token, start the watcher, then just label issues on GitHub — Forge does the rest.
```shell
# Add to .env:
FORGE_WATCH_REPO=owner/repo
FORGE_WATCH_LABEL=forge
FORGE_WATCH_INTERVAL=60
GITHUB_TOKEN=ghp_...
```

```shell
# Start in the background:
docker compose up watch -d
```

Any OpenAI-compatible model
Forge is model-agnostic. Three lines in your .env is all it takes.
```shell
FORGE_MODEL=gpt-4o
FORGE_BASE_URL=https://api.openai.com/v1
FORGE_API_KEY=sk-...
```

Copy the relevant block into your .env file. These three variables are the only ones required to start.
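Local models work the same way. As an illustration, here is a sketch of a .env for a local Ollama server (the model name is hypothetical — use any model you have pulled; Ollama's OpenAI-compatible endpoint ignores the key, but the variable must still be set):

```shell
# .env — local Ollama (model name is illustrative)
FORGE_MODEL=llama3.1
FORGE_BASE_URL=http://localhost:11434/v1
FORGE_API_KEY=ollama
```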
All environment variables
| Variable | Required | Description |
|---|---|---|
| FORGE_MODEL | Yes | Model identifier passed to the API |
| FORGE_BASE_URL | Yes | Base URL of an OpenAI-compatible completions endpoint |
| FORGE_API_KEY | Yes | API key for the model endpoint |
| FORGE_REPO | No | GitHub repo for one-shot mode (owner/repo) |
| FORGE_ISSUE | No | Issue number for one-shot mode |
| FORGE_WATCH_REPO | No | GitHub repo to monitor in watch mode |
| FORGE_WATCH_LABEL | No | Label to watch for (default: "forge") |
| FORGE_WATCH_INTERVAL | No | Seconds between polls (default: 60) |
| GITHUB_TOKEN | No | GitHub PAT — raises rate limit; required for private repos |
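A quick sanity check before a run can save a failed container start. A small sketch that flags any required variable missing from the .env in the current directory (variable names taken from the table above):

```shell
# Report any required Forge variable missing from ./.env
for var in FORGE_MODEL FORGE_BASE_URL FORGE_API_KEY; do
  grep -q "^${var}=" .env 2>/dev/null || echo "missing: ${var}"
done
```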