Most autonomous content demos are fake. They show a model taking a prompt and emitting a draft, but they skip the part that actually matters in a working repository: intake structure, validation, repo rules, PR flow, and failure handling.

For this blog, I wanted a GitHub-native pipeline where an idea could start as a structured issue, get normalized into a deterministic brief, be assigned to GitHub Copilot, and come back as a draft PR that still respected the repo.

As of March 24, 2026, GitHub’s Copilot documentation makes that possible, but only if you design around the platform’s real execution model:

  • the hosted coding agent is PR-first, not direct-to-default-branch;
  • assignment requires a real user or GitHub App token;
  • repository-level MCP on GitHub.com gives the agent tools, not MCP resources or prompts;
  • and once work has started, the PR is the real iteration surface.

That is the model I implemented in this repo.

The Constraint That Changes the Whole Design

The most important GitHub fact is simple: the hosted coding agent creates a branch and PR. It does not turn your repository into an unsupervised content publisher.

That sounds like a limitation. It is actually the right boundary.

For a blog repository, PR-first execution gives you:

  • a review surface
  • CI hooks
  • draft visibility
  • source inspection
  • an obvious place to iterate

The current GitHub docs on the coding agent also make the operational implications pretty clear:

  • the agent pushes to copilot/* branches instead of the default branch;
  • PR workflows created by the agent require explicit approval to run;
  • and issue-to-PR execution is the intended shape of the workflow.

So the goal was never “autonomous publish.” The goal was autonomous intake to draft PR.

What Was Missing Before

The repo already had a solid blog factory:

  • repo instructions
  • path-specific instructions
  • custom agents
  • a skill
  • deterministic scaffold and validation commands
  • Exa MCP for research
  • CI and hooks

What it did not have was an intake layer for GitHub-hosted Copilot.

That missing layer had to do four things:

  1. capture a scoped, structured brief
  2. validate the brief before starting work
  3. assign the issue to Copilot with repo-aware instructions
  4. keep the rest of the workflow PR-first and deterministic

The Repo Surfaces That Now Implement Intake

The new GitHub-native intake pipeline is built from these files:

  • Structured request: /.github/ISSUE_TEMPLATE/01-post-intake.yml (the canonical intake form)
  • PR review shape: /.github/PULL_REQUEST_TEMPLATE.md (forces summary, validation, and source notes)
  • Intake parsing: /scripts/blog/post_intake.py (parses and renders the intake schema)
  • Copilot assignment: /scripts/blog/assign_copilot_issue.py (assigns the issue to Copilot through GitHub GraphQL)
  • Issue workflow: /.github/workflows/blog-intake.yml (validates, comments, labels, assigns)
  • Readiness validation: /scripts/blog/check_copilot_setup.py and /.github/workflows/validate-copilot-assignment.yml (verify that hosted assignment is actually configured before relying on automation)
  • Copilot behavior: /.github/agents/dark-factory-autonomous-writer.agent.md (turns the issue into a researched draft PR)
  • Operating guide: /docs/github-autonomous-intake.md (explains settings, flow, and failure modes)

This is the key point: the pipeline is not one YAML file. It is an intake contract plus deterministic repo plumbing.

The Intake Schema

The issue form is deliberately narrow. It asks for:

  • working title
  • core thesis
  • target audience
  • desired deliverables
  • primary sources
  • claims that must be verified
  • tone and style guardrails
  • timing constraints
  • supporting context
  • execution mode
  • definition of done
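To make "normalized" concrete: those fields map naturally onto a small dataclass with an explicit required-field check. This is an illustrative sketch, not the repo's actual schema; the `IntakeBrief` name and field names are assumptions.

```python
# Hypothetical sketch of the normalized brief a parser like post_intake.py
# might emit. Field names and required-field choices are assumptions.
from dataclasses import dataclass, field, asdict

@dataclass
class IntakeBrief:
    working_title: str
    core_thesis: str
    target_audience: str
    deliverables: list[str] = field(default_factory=list)
    primary_sources: list[str] = field(default_factory=list)
    claims_to_verify: list[str] = field(default_factory=list)
    tone_guardrails: str = ""
    timing: str = ""
    context: str = ""
    execution_mode: str = "autonomous"  # or "manual"
    definition_of_done: str = ""

    def missing_required(self) -> list[str]:
        # A brief is only assignable when the core fields are present.
        required = {
            "working_title": self.working_title,
            "core_thesis": self.core_thesis,
            "target_audience": self.target_audience,
        }
        return [name for name, value in required.items() if not value.strip()]

brief = IntakeBrief(working_title="PR-first intake", core_thesis="",
                    target_audience="repo maintainers")
print(brief.missing_required())           # ['core_thesis']
print(asdict(brief)["execution_mode"])    # autonomous
```

The point of the explicit `missing_required` check is that an incomplete brief blocks assignment instead of producing a vague draft.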

It also emits a stable schema marker:

<!-- blog-intake:v1 -->

That marker matters. The workflow does not try to interpret every issue in the repository as a blog production task. It only acts on issues that match the intake schema.

That is a much better approach than trying to infer intent from arbitrary issue text.
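The guard itself is tiny. A minimal sketch, using the marker string above (the function name is mine, not the repo's actual code):

```python
# Only issues carrying the schema marker are treated as blog production tasks.
SCHEMA_MARKER = "<!-- blog-intake:v1 -->"

def is_blog_intake(issue_body: str) -> bool:
    """Cheap, deterministic gate: no intent inference, just a marker check."""
    return SCHEMA_MARKER in (issue_body or "")

print(is_blog_intake("### Working title\nPR-first intake\n<!-- blog-intake:v1 -->"))  # True
print(is_blog_intake("Random bug report"))                                            # False
```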

The Pipeline

Here is the end-to-end flow now running in the repo:

flowchart LR
  A["Structured issue form"] --> B["blog-intake.yml"]
  B --> C["post_intake.py parse"]
  C --> D["Status comment + labels"]
  D --> E["assign_copilot_issue.py"]
  E --> F["dark-factory-autonomous-writer"]
  F --> G["Draft PR"]
  G --> H["make quality"]
  H --> I["PR comments / review"]

And here is what each stage does.

1. Issue creation

The intake begins with the issue form, not with a free-form prompt.

That is the first real improvement. The workflow gets structured fields instead of a vague paragraph.

2. Parsing and validation

The intake workflow writes the issue body to disk and passes it through scripts/blog/post_intake.py.

That script does two jobs:

  • parse issue-form markdown into normalized JSON
  • render deterministic assignment instructions for Copilot

It also validates the intake before the workflow tries to assign anything.

If required fields are missing, the workflow does not fail silently. It updates a single status comment and labels the issue as blocked.
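GitHub renders each issue-form field as a "### Label" heading followed by the submitted value (or "_No response_" for skipped optional fields), so the parse step can be a straightforward heading split. A hedged sketch of that shape, not the repo's actual post_intake.py:

```python
# Assumes GitHub's issue-form rendering: "### Label" headings with the
# submitted value below each one. Key-normalization choices are mine.
import json
import re

def parse_issue_form(body: str) -> dict[str, str]:
    fields: dict[str, str] = {}
    # Split on level-3 headings; anything before the first heading is ignored.
    for match in re.finditer(r"^### (.+?)\n(.*?)(?=^### |\Z)", body, re.M | re.S):
        label, value = match.group(1).strip(), match.group(2).strip()
        key = label.lower().replace(" ", "_")
        fields[key] = "" if value == "_No response_" else value
    return fields

body = "### Working title\n\nPR-first intake\n\n### Core thesis\n\n_No response_\n"
parsed = parse_issue_form(body)
print(json.dumps(parsed))
```

Normalizing "_No response_" to an empty string is what lets a single required-field check decide whether the intake is valid or blocked.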

3. Status comment and labels

The workflow updates one issue comment in place rather than spamming new comments on every edit.

It manages these labels:

  • blog-intake
  • copilot-ready
  • copilot-assigned
  • copilot-blocked

That turns the issue list itself into a lightweight operational dashboard.

4. Copilot assignment

If the intake is valid and the execution mode is autonomous, the workflow calls scripts/blog/assign_copilot_issue.py.

That helper:

  • queries suggestedActors to find the Copilot coding agent
  • loads issue-specific custom instructions generated from the intake
  • calls the GraphQL assignment mutation with:
    • repository
    • base branch
    • custom agent
    • optional model
    • issue-specific instructions

The important detail is that this uses a real GitHub user or App token from COPILOT_ASSIGN_TOKEN, not the default GITHUB_TOKEN.

That is not cosmetic. The default GITHUB_TOKEN cannot make that assignment, so it is the difference between a workflow that actually hands work to Copilot and one that only looks automated in YAML.

The repo now also includes a safe validation path for that setting: the Validate Copilot Assignment workflow, backed by scripts/blog/check_copilot_setup.py.

That workflow does not mutate issues. It checks that the token authenticates, that the repository resolves correctly, and that Copilot is visible as an assignable actor.

5. Draft PR execution

The custom agent profile dark-factory-autonomous-writer tells Copilot how to behave once the issue is assigned:

  • treat the issue as the canonical brief
  • consume provided sources first
  • expand with Exa only when needed
  • verify non-trivial claims
  • create the requested bundle and sidecars
  • run make quality
  • open a draft PR with validation and source notes

That is where the earlier repo work pays off. The issue can be short because the repo already carries the content rules.

Why the Assignment Helper Exists

It would have been easy to jam the assignment mutation directly into the workflow YAML.

That would have been a mistake.

Moving the logic into assign_copilot_issue.py gives three advantages:

  1. it keeps the workflow readable
  2. it makes local testing possible
  3. it lets the Linear bridge reuse the same assignment path instead of creating a second one

That is the same design principle I used earlier for post scaffolding and validation: keep the repo logic in scripts, not in shell one-liners hidden in YAML.

Why GITHUB_TOKEN Matters More Than It Looks

One of the less obvious GitHub Actions constraints is that events caused by GITHUB_TOKEN generally do not trigger new workflow runs, except for workflow_dispatch and repository_dispatch.

That changes the architecture in two places.

First: the GitHub-native issue flow

This part is straightforward because the issue is created by a human in the repository UI. The issues event fires normally, so blog-intake.yml can parse and assign the issue.
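The trigger side of such a workflow might look roughly like this; the guard expression matches the schema marker, and the step details and script invocation are assumptions, not the repo's actual blog-intake.yml:

```yaml
# Hypothetical shape of the intake workflow trigger; step details assumed.
on:
  issues:
    types: [opened, edited]

jobs:
  intake:
    # Cheap filter: only issues carrying the schema marker get the full parse.
    if: contains(github.event.issue.body, 'blog-intake:v1')
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: actions/checkout@v4
      - name: Parse and validate the intake
        env:
          ISSUE_BODY: ${{ github.event.issue.body }}
        run: |
          printf '%s' "$ISSUE_BODY" > /tmp/issue-body.md
          python scripts/blog/post_intake.py /tmp/issue-body.md
```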

Second: the Linear bridge

The Linear bridge creates a GitHub issue from a repository_dispatch event. Because that issue is created inside Actions, I do not rely on the issues workflow to pick it up afterward.

Instead, the bridge workflow does the whole job itself:

  1. render a valid GitHub intake issue
  2. create the mirrored issue
  3. parse it with the same parser
  4. assign it with the same helper
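Step 1 is the interesting one: the bridge has to emit a body that the same parser accepts, marker included. An illustrative sketch, with the payload keys and field labels invented for the example rather than taken from the bridge's real schema:

```python
# Hypothetical rendering of a Linear repository_dispatch payload into a body
# that satisfies the GitHub-native intake contract. Keys/labels are assumed.
SCHEMA_MARKER = "<!-- blog-intake:v1 -->"

def render_intake_body(linear: dict) -> str:
    """Emit the same '### Label' sections the issue form produces, plus the marker."""
    sections = [
        ("Working title", linear.get("title", "")),
        ("Core thesis", linear.get("thesis", "")),
        ("Primary sources", "\n".join(linear.get("sources", []))),
        ("Execution mode", linear.get("mode", "autonomous")),
    ]
    parts = [f"### {label}\n\n{value or '_No response_'}" for label, value in sections]
    parts.append(SCHEMA_MARKER)
    return "\n\n".join(parts)

body = render_intake_body({"title": "PR-first intake",
                           "sources": ["https://docs.github.com"]})
print(SCHEMA_MARKER in body)  # True
```

Because the rendered body round-trips through the same parser as a human-filled form, Linear-originated issues get the same validation and the same blocked/ready labels.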

That is not accidental duplication. It is a direct response to GitHub’s event model.

The PR Is the Real Work Surface

Another GitHub detail that matters here: once Copilot starts work, the PR becomes the real operational surface.

That is why the repo now includes /.github/PULL_REQUEST_TEMPLATE.md. The template forces:

  • summary
  • source intake reference
  • validation commands
  • source list
  • notes

I also updated the repo instructions and the agent guidance so follow-up happens in PR comments, not by piling new instructions into the issue.

That keeps the discussion tied to the actual work product.

Why This Is Better Than “One Big Prompt”

The bad version of autonomous content automation looks like this:

  1. write a clever prompt
  2. hand it to the model
  3. hope it respects the repo

The GitHub-native intake model is stronger because it turns the task into explicit repo surfaces:

  • issue form for input
  • parser for normalization
  • script for assignment
  • custom agent for execution
  • PR template for review
  • quality gate for deterministic checks

Each surface does one job.

That is why the system is more reliable than a saved chat prompt, even if the model underneath is the same.

The Failure Modes That Still Matter

This setup is much stronger than the old flow, but it is not magic.

There are still clear failure modes.

1. Missing COPILOT_ASSIGN_TOKEN

If the secret is not configured, the issue validates but autonomous assignment is blocked.

That is intentional. The workflow should explain why it stopped instead of pretending it finished.

The practical fix now lives in the repo itself:

  1. add COPILOT_ASSIGN_TOKEN
  2. run Validate Copilot Assignment from the Actions tab
  3. confirm the readiness report is green before trusting autonomous intake

2. No hosted MCP setup

The repo has local MCP config in .vscode/mcp.json, but GitHub.com’s hosted coding agent does not inherit that automatically.

If you want hosted Copilot to use Exa, you still need repository-level MCP settings on GitHub.com.

3. Weak intake

If the issue form is vague, the PR will still be weak.

Autonomy does not remove the need for a good brief. It only makes the workflow repeatable.

4. Treating issue comments as the active control plane

That is the wrong place once Copilot has started. The PR is the place to iterate.

5. Fully autonomous publishing

You can build it, but I would not trust it by default for a public technical blog. PR-first autonomy is the right default.

The Linear Bridge Is an Extension, Not a Fork

I also added /.github/workflows/linear-bridge.yml and /docs/linear-bridge.md.

The important design choice there is that Linear does not get a second content workflow.

It simply feeds the same intake contract through repository_dispatch.

That keeps the system coherent:

  • GitHub-native intake is the source of truth
  • Linear is just another entrypoint

That is the only sane way to support multiple source systems without growing two inconsistent automation paths.

Result

The repo now has a real GitHub-native autonomous intake pipeline:

  • structured issue in
  • deterministic validation
  • hosted-assignment readiness check
  • explicit status and labels
  • scripted issue assignment to Copilot
  • custom-agent execution
  • draft PR out

That is a much better place to build autonomous content operations from than “ask Copilot to write a post.”

The repo is no longer just Copilot-aware. It is intake-aware.

Key Takeaways

  • GitHub-hosted Copilot works best for content automation when you treat it as a PR-producing agent, not a direct publisher.
  • Issue forms are the missing intake layer for autonomous repo workflows.
  • The reliable pattern is schema -> parser -> assignment helper -> custom agent -> draft PR -> deterministic checks.
  • Setup needs its own verification path; a configured secret is not the same thing as a proven assignment surface.
  • The Linear path should feed the GitHub-native intake contract instead of inventing a second workflow.

Sources