I wanted to type /ai add dark mode in a GitHub issue and get a PR with the implementation. No copy-pasting between ChatGPT and my editor.
My first attempt used Cloudflare Workers AI. I work at Cloudflare, it’s free, and it seemed like a natural fit. I built a private GitHub App that listened for /ai commands, fetched the repo, sent files to Llama 3.1, and committed the generated code.
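The command-detection step of a bot like this can be sketched as follows — a hypothetical, simplified version, not my actual code. It assumes a GitHub `issue_comment` webhook payload; `parseAiCommand` is an illustrative name, and a real handler would also verify the webhook signature before doing anything.

```typescript
// Shape of the fields we need from GitHub's "issue_comment" webhook payload.
interface IssueCommentEvent {
  action: string;
  comment: { body: string };
  issue: { number: number };
}

// Returns the instruction following "/ai", or null if the comment
// isn't a command for the bot (wrong action, or no /ai prefix).
function parseAiCommand(event: IssueCommentEvent): string | null {
  if (event.action !== "created") return null;
  const match = event.comment.body.match(/^\/ai\s+(.+)/s);
  return match ? match[1].trim() : null;
}
```

Everything downstream — fetching the repo, prompting the model, committing — hangs off that one check.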
It kind of worked. The bot created PRs. But I kept hitting walls.
Workers support up to 5 minutes of CPU time, but the AI Gateway times out after 2 minutes. Llama 3.1 70B often needs longer than that. I switched to Cloudflare Workflows for better orchestration, but the AI inference call itself still timed out.
The models also have limited context windows, so I had to truncate files aggressively. Most of the codebase never reached the model. And the smaller models that ran fast enough made mistakes: deleting existing code, returning malformed JSON. I added validation layers and retry logic. Hacks on hacks.
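The validation layer looked roughly like this — a hedged sketch, assuming the model was prompted to return a JSON array of file edits. The `FileEdit` shape and function names are illustrative, not Workers AI's actual schema.

```typescript
// The shape we asked the model to return: one entry per file to write.
interface FileEdit {
  path: string;
  content: string;
}

// Parse the model's raw output and reject anything that isn't the
// JSON shape we asked for.
function validateEdits(raw: string): FileEdit[] | null {
  try {
    const parsed = JSON.parse(raw);
    if (!Array.isArray(parsed)) return null;
    for (const edit of parsed) {
      if (typeof edit.path !== "string" || typeof edit.content !== "string") {
        return null;
      }
    }
    return parsed as FileEdit[];
  } catch {
    return null; // malformed JSON: caller should retry
  }
}

// Retry the model call until it produces valid output or we give up.
async function generateEdits(
  callModel: () => Promise<string>,
  maxAttempts = 3,
): Promise<FileEdit[]> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const edits = validateEdits(await callModel());
    if (edits !== null) return edits;
  }
  throw new Error(`model returned invalid output ${maxAttempts} times`);
}
```

Each retry burns more of the already-tight CPU and gateway time budget, which is part of why the hacks compounded.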
Fun to build. Learned a lot about Workflows, GitHub App auth, prompt engineering. But I was spending more time fixing the bot than it saved me.
Then I found OpenCode. It’s open source, runs in GitHub Actions, and does exactly what I wanted. You comment /oc fix this and it analyzes your codebase, writes code, opens a PR. No timeout issues because Actions can run for hours. You bring your own model.
Here’s the setup:
Install the OpenCode GitHub App, then add this workflow:
```yaml
name: opencode

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

jobs:
  opencode:
    if: |
      contains(github.event.comment.body, '/oc') ||
      contains(github.event.comment.body, '/opencode')
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: write
      pull-requests: write
      issues: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 1
      - uses: anomalyco/opencode/github@latest
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          model: openrouter/google/gemini-2.0-flash-exp:free
          use_github_token: true
```
Get a free API key from OpenRouter and add it as OPENROUTER_API_KEY in your repo’s secrets. That’s it.
Gemini 2.0 Flash through OpenRouter is free. Llama 3.3 70B (openrouter/meta-llama/llama-3.3-70b-instruct:free) works too. Fair warning: the free models are buggy. Timeouts, empty responses, inconsistent results. Fine for experimenting, but I wouldn’t trust them on repos I care about. If you’re serious about this workflow, pay for Claude or GPT-4.
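Swapping models only means changing the `model` input on the OpenCode step; for example, with the Llama variant (the rest of the workflow stays the same):

```yaml
      - uses: anomalyco/opencode/github@latest
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          model: openrouter/meta-llama/llama-3.3-70b-instruct:free
          use_github_token: true
```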
If you have Workers Builds connected to your repo, the PRs that OpenCode creates will automatically build and deploy previews. Nice for seeing changes before merging.
Cloudflare folks are aware of the desire for AI-assisted coding in the dashboard. I’m excited about where the dev platform is heading and confident we’ll get there. OpenCode fills the gap for now.