AI QA & Testing Scanner
TestGen
Stop confusing test setup with actual QA coverage.
TestGen is not another thin testing template. Lite scans the repo, detects the stack, and gets the harness in place. Full ranks the highest-value targets, generates the tests that matter, runs them, and writes the audit and findings reports that tell you where the real bugs are.
Scan the repo and see exactly what is worth testing before burning time on low-value files.
Repair or install the test harness with the smallest stack that actually fits the project.
Generate the highest-signal tests first for utilities, validators, hooks, server actions, route handlers, and logic-heavy components.
Run the suite and get structured audit plus findings reports instead of vague test-generation output.

Argument Layer
Your AI-Built App Has Zero Tests. Here's How to Fix That in One Command.
TestGen scans your vibe-coded repo, finds what is worth testing, generates real tests, runs them, and tells you what is actually broken instead of leaving you in debug whack-a-mole.
Why it exists
What breaks without it
**setup != coverage:** A `vitest.config.ts` file looks reassuring, but it does not tell you what logic is actually protected.
**counting files != testing:** Generic test generators spread shallow assertions everywhere instead of ranking what matters most.
**templates != workflow:** Without boundary mapping and mocks for App Router, auth, database, or billing, the suite still breaks on contact.
**logic gap:** The repo often has high-risk actions, handlers, and components, but no reliable test suite around the critical path.
**no findings layer:** Passing or failing tests alone do not tell you whether the problem is the product, the mocks, or the harness.
**wrong target order:** Teams burn time on low-signal UI coverage while auth, mutations, and route handlers remain exposed.
Offer
What you actually get
**repo audit:** Scan the framework, runner, and testable targets first instead of guessing where coverage should start.
**harness repair:** Install or fix the smallest compatible Vitest, RTL, Playwright, and CI foundation for the repo in front of you.
**ranked targets:** Prioritize actions, handlers, hooks, validators, and logic-heavy components by actual testing value.
**generated tests:** Produce useful tests for server actions, route handlers, hooks, utilities, and behavior-heavy components.
**mock adapters:** Reuse adapter patterns for App Router, Supabase, NextAuth, Prisma, Stripe, React Query, and Zustand.
**findings reports:** Ship with `TEST-AUDIT.md` and `TEST_FINDINGS.md` so you know what passed, what failed, and what the likely bug actually is.
How it works
Drop one file. Keep coding normally.
Scan and classify the repo
TestGen Lite detects the framework, stack, current runner, and what is actually testable before changing anything.
Repair the harness first
It installs or fixes the smallest compatible testing stack and templates instead of dumping a one-size-fits-all setup.
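As a rough illustration of what "smallest compatible stack" tends to mean in practice, a Next.js + RTL repo often needs no more than a config like this. The file name and options below are a generic Vitest sketch, not the pack's shipped template:

```typescript
// vitest.config.ts: minimal harness sketch (generic Vitest options,
// illustrative only, not the pack's shipped template)
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    environment: 'jsdom',              // DOM APIs for React Testing Library
    globals: true,                     // describe/it/expect without per-file imports
    setupFiles: ['./vitest.setup.ts'], // jest-dom matchers, shared mocks
    include: ['src/**/*.test.{ts,tsx}'],
  },
})
```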
Generate the highest-value tests
Full TestGen prioritizes utilities, validators, hooks, server actions, route handlers, and logic-heavy components by return on effort.
Run and write findings
The runner executes the suite and writes `TEST-AUDIT.md` plus `TEST_FINDINGS.md` so you know what passed, what failed, and why.
Mock Preview
The product, framed like something you can actually deploy
This is not sold as vague expertise. It is packaged as a concrete operating layer with rules, examples, and repeatable usage inside real AI workflows.
Founders shipping AI-generated apps who need deterministic QA before launch.
Developers tired of losing hours to Vitest setup, mock wiring, and target selection.
Teams reviewing App Router mutations, handlers, and auth-sensitive logic that cannot rely on happy-path demos.
```
python3 full-testgen/scripts/run_full_testgen.py /path/to/my-app

[✓] Wrote TEST-AUDIT.md
[✓] Ranked Top 5 test targets by score
[✓] Generated tests for actions, routes, utils, and form behavior
[✓] Applied App Router + Supabase mock patterns
[✓] Executed vitest --run
[✓] Wrote TEST_FINDINGS.md

Result: 12/12 passing on validated fixture.
Regression in revalidatePath() caught immediately.
```

Inside the pack
- Free Lite scanner for framework, stack, and runner detection plus testable-target scanning.
- Priority scoring that labels targets by business risk, branch density, blast radius, and friction.
- A full `TEST-AUDIT.md` pass with top-five ranked targets and recommended commands.
- Generation patterns for pure functions, Zod validators, hooks, server actions, route handlers, and behavior-heavy components.
- Optional Playwright patterns and config for auth, CRUD, and payment flows when E2E is justified.
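As a rough illustration of how ranking by those factors can work, here is a hypothetical scoring function. The factor names follow the bullets above, but the weights and numbers are invented for this example and are not the pack's actual algorithm:

```typescript
// Hypothetical sketch of priority scoring: weights and example scores
// are illustrative, not TestGen's shipped scoring model.
type Target = {
  file: string
  businessRisk: number   // 0-3: auth, billing, and mutations score high
  branchDensity: number  // 0-3: if/else arms and error paths
  blastRadius: number    // 0-3: how many callers depend on it
  friction: number       // 0-3: mocking and setup cost (subtracted)
}

function score(t: Target): number {
  return t.businessRisk * 3 + t.branchDensity * 2 + t.blastRadius * 2 - t.friction
}

const targets: Target[] = [
  { file: 'src/app/actions/create-post.ts', businessRisk: 3, branchDensity: 2, blastRadius: 2, friction: 1 },
  { file: 'src/components/Footer.tsx', businessRisk: 0, branchDensity: 0, blastRadius: 1, friction: 0 },
]

// Highest-value targets first; low-signal UI falls to the bottom.
const ranked = [...targets].sort((a, b) => score(b) - score(a))
console.log(ranked.map((t) => `${t.file}: ${score(t)}`))
```

The point of a model like this is that a server action with branching auth logic outranks a presentational component even before any test is written.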
What's inside
Everything in the pack
```
testgen-private/
├── test-setup/
│   ├── SKILL.md                      // setup-only scanner and harness repair
│   ├── scripts/scan_repo.py
│   ├── references/setup-checklist.md
│   └── assets/templates/             // vitest, playwright, CI, reports
├── full-testgen/
│   ├── SKILL.md                      // full generation workflow
│   ├── scripts/run_full_testgen.py
│   ├── scripts/write_findings.py
│   ├── references/stack-adapters.md
│   └── assets/templates/             // tests, mocks, reports, CI
├── demo-generated-tests/
│   ├── TEST-AUDIT.md
│   └── TEST_FINDINGS.md
└── real-next-validation/
    ├── TEST-AUDIT.md
    └── TEST_FINDINGS.md
```

Before / After
What changes once the product is installed
CI/CD
A real test workflow instead of local-only QA
Without the pack, tests stay tribal and manual. TestGen ships a working GitHub Actions workflow template so verification stops depending on memory.
Without TestGen
`.github/workflows/tests.yml` not present

```
$ npm run test -- --run
  create-post.test.ts (3)
  posts-route.test.ts (4)

  Test Files  2 passed (2)
       Tests  7 passed (7)
```

- no pull_request gate
- no push gate
- no shared CI baseline

With TestGen
```yaml
name: Tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run test -- --run
  e2e:
    if: ${{ false }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npm run test:e2e
```

Validated fixture: 12 tests passing, 82.71% statements and lines after generation.

Generation — E2E
Critical flows get exercised, not just smoke-clicked
The full pack includes E2E patterns for login, navigation, and write flows when end-to-end coverage is actually justified.
Without TestGen
```typescript
import { expect, test } from '@playwright/test'

test('dashboard renders', async ({ page }) => {
  await page.goto('/dashboard')
  await expect(page.locator('body')).toBeVisible()
})
```

With TestGen
```typescript
import { expect, test } from '@playwright/test'

test('user can create a post from the dashboard', async ({ page }) => {
  await page.goto('/login')
  await page.getByLabel('Email').fill('qa@example.com')
  await page.getByLabel('Password').fill('password123')
  await page.getByRole('button', { name: 'Log in' }).click()
  await expect(page).toHaveURL(/dashboard/)

  await page.getByRole('link', { name: 'New post' }).click()
  await page.getByLabel('Title').fill('Playwright post')
  await page.getByRole('button', { name: 'Create post' }).click()
  await expect(page.getByText('Playwright post')).toBeVisible()
})
```

Adapters
Mocks are reusable and stack-aware instead of ad hoc
You do not have to reinvent Supabase, auth, or data mocks on every repo. The pack includes adapter patterns the generator can reuse consistently.
Without TestGen
```typescript
import { vi } from 'vitest'

vi.mock('@/lib/supabase/server', () => ({
  createClient: async () => ({
    auth: {
      getUser: async () => ({ data: { user: null }, error: null }),
    },
    from: () => ({
      select: vi.fn(),
    }),
  }),
}))
```

With TestGen
```typescript
import { vi } from 'vitest'

export const mockGetUser = vi.fn()
export const mockFrom = vi.fn()
export const mockSelect = vi.fn()
export const mockInsert = vi.fn()
export const mockUpdate = vi.fn()
export const mockDelete = vi.fn()

mockFrom.mockImplementation(() => ({
  select: mockSelect,
  insert: mockInsert,
  update: mockUpdate,
  delete: mockDelete,
}))

export function mockAuthUser(user = { id: 'u1', email: 'qa@example.com' }) {
  mockGetUser.mockResolvedValue({
    data: { user },
    error: null,
  })
}

export function mockNoSession() {
  mockGetUser.mockResolvedValue({
    data: { user: null },
    error: null,
  })
}

vi.mock('@/lib/supabase/server', () => ({
  createClient: vi.fn().mockResolvedValue({
    auth: {
      getUser: mockGetUser,
    },
    from: mockFrom,
  }),
}))
```

Diagnosis
You get a findings layer, not just a pass/fail blur
The difference is not only generated tests. The runner writes a structured diagnosis report so you can separate product bugs from infra or mock gaps.
Without TestGen
```
FAIL tests/create-post.test.ts > createPost > creates the post and revalidates
AssertionError: expected "spy" to be called with arguments: [ '/posts' ]

Received:
  1st spy call: [ '/dashboard' ]

tests/create-post.test.ts:80:28
  78   expect(result.ok).toBe(true)
  79   expect(insertSpy).toHaveBeenCalled()
  80   expect(revalidatePath).toHaveBeenCalledWith('/posts')
```

You know something broke, but you still have to diagnose whether it's product logic, mocks, or the harness.

With TestGen
Test Findings Summary

- tests added: 12
- files changed: source, tests, and Next.js project metadata
- passing: 12
- failing: 0 in final state
- coverage before: not configured
- coverage after: 71.87% statements and lines

Probable Product Bugs

| Severity | File Or Flow | Failing Test | Observed Behavior | Expected Behavior | Recommended Action |
|---|---|---|---|---|---|
| medium | `src/app/actions/create-post.ts` | `createPost > creates the post and revalidates` | deliberate regression changed `revalidatePath` from `/posts` to `/dashboard`; generated test failed immediately | revalidate the posts listing after a successful create | fixed during validation and confirmed with a green rerun |

Infra Or Mock Gaps

| Type | File Or Flow | Problem | Action Needed |
|---|---|---|---|
| none | - | - | - |

Tests Adjusted Or Rejected

| File Or Flow | Reason | Decision |
|---|---|---|
| `src/app/page.tsx` and `src/app/posts/page.tsx` | low-value in this small repo without browser flows | defer to later E2E or smoke coverage |

Coverage Notes

- `src/app/actions/create-post.ts`: 100%
- `src/app/api/posts/route.ts`: 100%
- `src/components/CreatePostForm.tsx`: 93.54%
- low global coverage is driven by untouched pages and boundary stubs, not by the tested critical flow

Offer Compare
Free vs Full
Lite gives you visibility and setup. Full gives you the actual audit, generation, execution, and findings loop.
| Layer | TestGen Lite (Free) | TestGen Full |
|---|---|---|
| Framework & Stack Detection | Included | Included |
| Scan of testable targets | Included | Included |
| Vitest setup scaffolding | Included | Included |
| Priority scoring | Included | Included |
| Top 5 ranked targets | Not included | Included |
| Boundary mapping | Not included | Included |
| Server action tests | Not included | Included |
| API route tests | Not included | Included |
| Component behavior tests | Not included | Included |
| Zod / hook / utility tests | Not included | Included |
| Playwright patterns | Not included | Included |
| App Router + Supabase adapters | Not included | Included |
| Prisma / Stripe / NextAuth adapters | Not included | Included |
| One-shot runner | Not included | Included |
| TEST-AUDIT.md | Not included | Included |
| TEST_FINDINGS.md | Not included | Included |
| GitHub Actions test workflow | Not included | Included |
Compatibility
Where it fits
TestGen is a skill layer, a scanner, and a runnable script set. It is not locked to one editor.
| Tool | Format | Status |
|---|---|---|
| Codex | SKILL.md + scripts | Full support |
| Claude Code | SKILL.md + scripts | Full support |
| Cursor | workflow reference + scripts | Portable |
| ChatGPT coding sessions | repo guidance + scripts | Portable |
| Manual CLI | Python scripts | Native runner |
FAQ
Questions before checkout
Does Lite generate the actual tests?
No. Lite scans the repo, identifies what is testable, recommends the smallest compatible stack, and gets the harness in place. Full is the product that generates and executes the tests.
What does Full generate, concretely?
It covers utilities, validators, hooks, server actions, API route handlers, logic-heavy components, and optional Playwright flows when E2E is actually justified.
Which stack adapters are included?
App Router globals, Supabase, NextAuth, Prisma, Stripe, React Query, and Zustand are all covered with reusable mock patterns and selection rules.
Is this already validated on real repos?
Yes. The private beta includes a generated demo fixture and a real Next.js App Router validation repo. Both hit 12 out of 12 passing tests in the final validated state, and the real fixture proved the deliberate regression detection flow.
Can I use TestGen Full with Claude Code, Cursor, or Codex?
Yes. The skills are delivered for Codex-style workflows, but the scanner, templates, and scripts are tool-agnostic and work across Claude Code, Cursor, ChatGPT coding sessions, and manual CLI use.
What is the honest limit of the current release?
It is ship-ready for private beta, client delivery with operator oversight, and internal use on Next.js plus Vitest plus RTL projects. The included large-stack patterns should still be forward-tested on real client repos before broad public claims.
Final CTA
$39
one-time · digital delivery
Stop confusing test setup with actual QA coverage.
If the free material already made the problem obvious, this is the faster path to a production-ready implementation. Buy the full pack, or start with the linked free asset first.