01. The setup
Engineering ships faster. Testing landed on you.
Pull request after pull request. The AI-assisted engineer is moving faster than the team you used to rely on to test the work. So now you’re the one clicking through the build, writing test cases, filing bugs, retesting fixes — in between everything else a PM is supposed to do.
For a lot of teams, dedicated QA was already thin. AI made the bottleneck obvious. Engineering velocity went up. The testing burden didn’t disappear — it slid sideways, onto the PM. If you don’t change how you do it, QA will quietly consume the part of your week that was supposed to go to strategy, customers, and roadmap.
The fix isn’t to do QA faster by hand. It’s to build the same kind of AI-assisted workflow you’ve built for requirements — just pointed at testing.
02. The foundation
Quality didn’t get less important. Ownership moved.
PMs already own the spec, the customer judgment, and the definition of done. As AI compresses the engineering cycle, the part of QA that depends on product judgment — what should this actually do, what would a real customer try, what are the edge cases that matter — sits closer to the PM than to anyone else on the team.
That’s the part you can’t outsource. The rest — writing the cases, running them, opening tickets when things break — is exactly the kind of structured, repeatable work AI is good at. The PM keeps the judgment. AI takes the mechanical work.
QA didn’t disappear. It got redistributed onto the role closest to the customer.
03. Level 1
Use AI to generate the test plan.
Start where the leverage is highest and the risk is lowest: generating the test plan from the spec. Hand the LLM the requirement, the JTBD context, and the acceptance criteria. Get back a structured set of test cases — happy path, edge cases, failure modes — before you start clicking through anything.
“Given this requirement and JTBD context, generate a test plan. List the happy path, edge cases, and failure modes a real customer would actually hit.”
You’ll catch missing acceptance criteria before engineering ships, not after. Even if you stop here, this one shift — AI-drafted plans, PM-edited — pulls hours out of your week and tightens the spec at the same time.
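If you want that step as a repeatable script rather than a chat session, here is a minimal sketch, assuming a Node/TypeScript setup and the official openai package; the model name and file names are placeholders you’d swap for your own.

```ts
// generate-test-plan.ts — draft a test plan from the spec with an LLM.
// Assumes: Node 18+, the official `openai` npm package, and OPENAI_API_KEY set.
// "spec.md", "test-plan.md", and the model name are placeholders.
import { readFileSync, writeFileSync } from "node:fs";
import OpenAI from "openai";

const client = new OpenAI();

async function main() {
  // The spec file holds the requirement, JTBD context, and acceptance criteria.
  const spec = readFileSync("spec.md", "utf8");

  const completion = await client.chat.completions.create({
    model: "gpt-4o", // placeholder model name
    messages: [
      {
        role: "system",
        content:
          "You are a QA lead. Return a structured test plan as markdown with three sections: Happy path, Edge cases, Failure modes.",
      },
      {
        role: "user",
        content: `Given this requirement and JTBD context, generate a test plan. List the happy path, edge cases, and failure modes a real customer would actually hit.\n\n${spec}`,
      },
    ],
  });

  // Write the draft out so the PM can edit it before anything runs against the build.
  writeFileSync("test-plan.md", completion.choices[0].message.content ?? "");
}

main();
```

The output is a draft, not a verdict: you edit test-plan.md before anyone — human or agent — runs it.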
04. Level 2
Automate the run with browser automation.
The next layer is execution. Tools like Playwright let an AI agent drive a real browser through the cases the LLM just wrote. Instead of you clicking through the build, the agent runs the plan and reports what passed and what didn’t.
Be honest about the ceiling: AI testing still misses bugs. Visual regressions, weird timing issues, the thing only a real customer would notice — those slip through. Treat automation as the floor, not the roof. The PM still spot-checks the parts that matter most. The agent handles the repetitive coverage that was eating your calendar.
Automation isn’t about removing the PM from QA. It’s about removing the parts that don’t need a PM.
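Here is what one case from the plan looks like once it’s expressed as a Playwright test. The flow, URL, field labels, and expected copy are placeholders for a hypothetical signup page, not anyone’s real product.

```ts
// signup.spec.ts — two cases from the AI-drafted plan, expressed as Playwright tests.
// The URL, labels, and expected copy are placeholders for a hypothetical signup flow.
import { test, expect } from "@playwright/test";

test("happy path: new customer can sign up", async ({ page }) => {
  await page.goto("https://staging.example.com/signup");
  await page.getByLabel("Email").fill("pm-qa@example.com");
  await page.getByLabel("Password").fill("correct horse battery staple");
  await page.getByRole("button", { name: "Create account" }).click();
  await expect(page.getByText("Welcome")).toBeVisible();
});

test("failure mode: invalid email shows an inline error", async ({ page }) => {
  await page.goto("https://staging.example.com/signup");
  await page.getByLabel("Email").fill("not-an-email");
  await page.getByRole("button", { name: "Create account" }).click();
  await expect(page.getByText("Enter a valid email")).toBeVisible();
});
```

Run with `npx playwright test` on a schedule or on every merge, and the agent covers the clicking you used to do by hand.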
05. Level 3
Failures become tickets, automatically.
When a case fails, the workflow shouldn’t hand you a log file. It should hand engineering a clean ticket. Wire the agent into your tracker via MCP and let it create the bug directly — right project, right severity, repro steps, screenshots, the failing test name. The same way the requirements skill creates feature tickets, this one creates bug tickets.
Now the loop closes without you. A change ships, the agent runs the plan, anything that breaks lands in the tracker before you’ve opened your laptop. Your job becomes triage and judgment, not transcription.
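One way to wire that up is a custom Playwright reporter. The reporter hooks below are Playwright’s real API; fileBugTicket is a hypothetical helper standing in for whatever MCP tool or tracker REST call you connect it to.

```ts
// bug-reporter.ts — turn failing Playwright tests into tracker tickets.
// Register in playwright.config.ts: reporter: [["list"], ["./bug-reporter.ts"]]
// fileBugTicket() is a hypothetical stand-in for your MCP/tracker integration.
import type { Reporter, TestCase, TestResult } from "@playwright/test/reporter";

async function fileBugTicket(ticket: { title: string; severity: string; body: string }): Promise<void> {
  // Placeholder: call an MCP tool or POST to your tracker's API here.
  console.log("Would file bug:", ticket.title);
}

class BugReporter implements Reporter {
  private failures: { test: TestCase; result: TestResult }[] = [];

  onTestEnd(test: TestCase, result: TestResult) {
    // Collect failures as they happen; file them once the run is over.
    if (result.status === "failed") this.failures.push({ test, result });
  }

  async onEnd() {
    for (const { test, result } of this.failures) {
      await fileBugTicket({
        title: `[auto-QA] ${test.title} failed`,
        severity: "triage", // placeholder; map from test annotations if you tag them
        body: [
          `Failing test: ${test.titlePath().join(" > ")}`,
          `Error: ${result.error?.message ?? "unknown"}`,
          `Repro: npx playwright test ${test.location.file}:${test.location.line}`,
        ].join("\n"),
      });
    }
  }
}

export default BugReporter;
```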
06. Startup mode
Don’t just file the bug — fix it.
In a startup environment where the PM has more access and more accountability, you don’t stop at the ticket. The same agent that found the bug opens a PR with a candidate fix. Engineering reviews and merges. The loop runs end-to-end — plan, run, file, fix — with the PM in the judgment seat instead of the labor seat.
This isn’t for every team. It needs trust, sandboxes, and a real review process on the engineering side. But it’s the ceiling, and it’s where the leverage really compounds. The same workflow that surfaces the bug helps ship the fix.
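A sketch of that last step, assuming the agent has already committed a candidate fix to a branch in a sandboxed checkout and that the GitHub CLI is installed and authenticated; the branch name, title, and body are placeholders. Opening it as a draft keeps engineering review mandatory.

```ts
// open-fix-pr.ts — the last step of the loop: push the agent's candidate fix as a draft PR.
// Assumes: a sandboxed checkout with the fix on a branch, and `gh` installed and authenticated.
import { execSync } from "node:child_process";

const branch = "auto-fix/signup-validation"; // placeholder branch created by the agent
const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

run(`git push -u origin ${branch}`);
run(
  `gh pr create --draft --base main --head ${branch} ` +
    `--title "Candidate fix: signup email validation" ` +
    `--body "Opened automatically from a failing QA run. Needs engineering review before merge."`
);
```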
Old loop vs. new loop.
The old loop
spec → build → PM clicks through → writes bug → engineer fixes → PM retests → ship
The new loop
spec → AI plan → agent run → auto-filed ticket → (auto-PR) → ship
Same job. The PM stays in the parts that need judgment. AI absorbs the mechanical middle — which is the part that was eating your calendar.
07. Takeaways
Stop running QA by hand. Build the workflow that runs it for you.
QA didn’t disappear — it landed on you. The PMs who get the most leverage from AI right now treat testing the same way they treat requirements: a structured workflow, judgment at the edges, AI doing the mechanical middle.
01. As engineering speeds up with AI, QA slides onto the PM. Don’t do it by hand.
02. Level 1: have AI draft the test plan from the spec. Edit it. Tighten the spec.
03. Level 2: run the plan with browser automation. Treat it as the floor, not the roof.
04. Level 3: failures become bug tickets in your tracker, automatically.
05. Startup mode: the same agent opens the fix PR. PM in judgment, not labor.
06. The goal isn’t to eliminate testing. It’s to protect strategy time.
Plan → run → file → fix. The PM keeps the judgment.