
The Ticket Is the New Prompt

7 April 2026 · 5 min read

TL;DR

  • Chat is ephemeral and context-free. Structured work items (issues, tickets, bugs) carry history, intent, and acceptance criteria that make agents dramatically more effective.
  • Linear's zero-bugs policy works because issues are precise enough to be machine-actionable. The agent doesn't need to guess what "done" looks like.
  • The PM skill that matters isn't writing specs — agents can do that. It's knowing which problems deserve a ticket at all.

Everyone building with AI agents has settled on the same default interface: chat. You describe what you want, the agent does it, you review the result. Maybe you iterate a few times. The interaction looks like a conversation.

That default is fine for one-off tasks. It's the wrong architecture for professional software development, where work is ongoing, context is deep, and the difference between a good outcome and an expensive mess is usually how precisely the problem was defined before anyone wrote a line of code.

Linear is building around a different insight. The primary interface between humans and agents, at least for software work, isn't a chat window. It's the issue.

What a ticket actually contains

A chat prompt carries what you type in the moment. That's it. It has no history, no related work, no indication of what's been tried before, no link to the customer who first reported the problem, no context about why previous solutions were rejected.

A well-maintained issue is different. It has a description of the problem written by someone who understood it. Comments from engineers who investigated it. Links to related issues that failed for similar reasons. Labels that indicate priority and category. An acceptance criteria section that defines what "fixed" actually means. In a mature codebase, an issue might have months of accumulated context attached to it.

That's a far better instruction for an agent than anything you'd type in a chat box.

Linear's zero-bugs policy makes this concrete. Every bug gets a one-week SLA. Coding agents do the first pass on fixes. The agent reviews the issue, writes the fix, and flags the responsible engineer for review. The engineer can then examine the diff, request changes, and iterate — all inside Linear.

This workflow only functions because the issues are structured enough to be machine-actionable. Vague bug reports ("it sometimes crashes") produce garbage agent output. Precise issues with reproduction steps, context, and expected behaviour produce useful output. The agent is only as good as the ticket it's working from.
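To make the contrast concrete, here is what those two bug reports might look like as issue text. Every detail below is invented for illustration, but the shape is the point: reproduction steps, prior context, and acceptance criteria that define "done" without a human in the loop.

```markdown
<!-- Vague: an agent can only guess at what "done" means -->
**Bug:** Export sometimes crashes.

<!-- Precise: machine-actionable -->
**Bug:** CSV export crashes on datasets over 10k rows

**Reproduction steps**
1. Open a project with more than 10,000 rows
2. Click Export → CSV
3. App crashes with an out-of-memory error

**Context:** a streaming rewrite was attempted previously and
reverted (see linked issue and comments for why).

**Acceptance criteria**
- Export completes for datasets up to 100k rows
- Peak memory stays under 512 MB during export
- Regression test covers the 10k-row case
```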

I saw this pattern clearly when building AI features into OpenChair and OpenTradie. The agents that produced useful output weren't the ones with the best prompts. They were the ones working off structured records with accumulated context. Vague instructions produced plausible-looking garbage. Precise ones produced work we could ship.

From conversation to commitment

There's a second workflow worth paying attention to. Kirill described how his team uses Slack: discussions happen there, decisions get made, and then someone asks the Linear agent to "create issues out of this conversation." The conversation becomes structured work items. Automatically.

This is the right flow direction. Thinking and discussion belong in unstructured spaces: Slack, whiteboards, meetings. But the moment you've decided something deserves to be worked on, it needs to become a structured artefact. An issue is a commitment and an instruction combined. It signals intent to the team and provides context to the agent.
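The conversation-to-issue step can be sketched in code. Linear exposes a public GraphQL API whose `issueCreate` mutation takes a title, a markdown description, and a team ID; the sketch below builds that request. The summarisation of the Slack thread into a title and description is deliberately left as a placeholder (any LLM call could fill it in), and the team ID and API key are assumptions you'd supply.

```python
# Sketch: turning a decided conversation into a structured Linear issue.
# Uses Linear's public GraphQL API (issueCreate mutation). The thread
# summarisation step is a placeholder, not a real implementation.
import json
import urllib.request

ISSUE_CREATE = """
mutation IssueCreate($input: IssueCreateInput!) {
  issueCreate(input: $input) {
    success
    issue { id identifier url }
  }
}
"""

def build_issue_payload(title: str, description: str, team_id: str) -> dict:
    """Build the GraphQL request body for creating one issue.

    The description is markdown: this is where repro steps, prior
    context, and acceptance criteria from the conversation belong.
    """
    return {
        "query": ISSUE_CREATE,
        "variables": {
            "input": {
                "title": title,
                "description": description,
                "teamId": team_id,
            }
        },
    }

def create_issue(api_key: str, payload: dict) -> dict:
    """POST the payload to Linear's GraphQL endpoint (network call)."""
    req = urllib.request.Request(
        "https://api.linear.app/graphql",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The interesting design choice is that the payload builder is separate from the network call: the agent's job is producing a title, description, and acceptance criteria precise enough to survive as a standalone artefact; posting it is trivial.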

The old PM workflow was: write a spec, then break it into tickets. The new workflow is: have a conversation, pull structured issues from it, let agents execute against those issues. The spec and the ticket collapse into the same thing.

The quality of thinking that goes into naming and scoping the ticket still matters. Maybe more than it used to.

Execution is cheap. Problem definition is not.

Kirill made a point that cuts through a lot of the AI productivity discourse: "I don't want the problem-finding to be fast. You should take the time to find the right problem and the right approach. Then, once you decide that, you can go faster on it."

This is where the ticket metaphor gets philosophically interesting. Execution is cheap now. An agent can write a fix in minutes that would have taken an engineer half a day. That changes what's expensive. What's expensive is identifying the right problems to work on, defining them precisely enough to be machine-actionable, and making the judgment call about which ones deserve resources.

Those are PM skills. They always were. The difference is that previously, a PM who wrote vague tickets just created rework and frustration for engineers. Now, a PM who writes vague tickets produces bad agent output at scale. The signal gets amplified.

Teams that take ticket quality seriously will get substantially better agent output than teams that treat issues as an administrative afterthought.

Build around the work management layer, not the chat layer

If you're building software for teams that will increasingly use agents to execute work, the question of how humans instruct those agents is a product decision you can't ignore. This connects directly to which platforms will own the agent layer: the ones where structured work originates, not the ones with the best chat UI.

Chat interfaces are seductive because they're familiar and they demo well. But they don't accumulate context. They don't create a record of intent. They don't connect to the history of why work exists in the first place.

Structured work items do all of that. The companies that understand this will build their agent integrations around the work management layer, not around the chat layer. Linear is the most visible example right now, but this logic applies across any domain where work is ongoing and context matters.

The prompt isn't where the intelligence lives. The ticket is.


Related: Stop Building AI Agents. Start Building SOPs Wrapped in Code. and SaaS Isn't Dead. Hollow SaaS Is.


Logan Lincoln

Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms. Written from experience building agentic AI at OpenChair.