Validated on real Next.js App Router repos

AI QA & Testing Scanner

TestGen

Stop confusing test setup with actual QA coverage.

TestGen is not another thin testing template. Lite scans the repo, detects the stack, and gets the harness in place. Full ranks the highest-value targets, generates the tests that matter, runs them, and writes the audit and findings reports that tell you where the real bugs are.

LOGIC_KERNEL_06 · Cursor AI Native · Gumroad
Audit + findings · Target scoring · Mock adapters
Codex · Claude Code · Cursor · ChatGPT · Manual CLI

Scan the repo and see exactly what is worth testing before burning time on low-value files.

Repair or install the test harness with the smallest stack that actually fits the project.

Generate the highest-signal tests first for utilities, validators, hooks, server actions, route handlers, and logic-heavy components.

Run the suite and get structured audit plus findings reports instead of vague test-generation output.

12/12 tests passing on the validated demo and real Next.js fixtures
82.71% coverage reached on the generated demo suite after one pass
7 adapter families covered by reusable mock patterns
Read Article

Argument Layer

Your AI-Built App Has Zero Tests. Here's How to Fix That in One Command.

TestGen scans your vibe-coded repo, finds what is worth testing, generates real tests, runs them, and tells you what is actually broken instead of leaving you in debug whack-a-mole.

Why it exists

What breaks without it

setup != coverage

A `vitest.config.ts` file looks reassuring, but it does not tell you what logic is actually protected.

counting files != testing

Generic test generators spread shallow assertions everywhere instead of ranking what matters most first.

templates != workflow

Without boundary mapping and mocks for App Router, auth, database, or billing, the suite still breaks on contact.

logic gap

The repo often has high-risk actions, handlers, and components, but no reliable test suite around the critical path.

no findings layer

Passing or failing tests alone do not tell you whether the problem is the product, the mocks, or the harness.

wrong target order

Teams burn time on low-signal UI coverage while auth, mutations, and route handlers remain exposed.

Offer

What you actually get

repo audit

Scan the framework, runner, and testable targets first instead of guessing where coverage should start.

harness repair

Install or fix the smallest compatible Vitest, RTL, Playwright, and CI foundation for the repo in front of you.

ranked targets

Prioritize actions, handlers, hooks, validators, and logic-heavy components by actual testing value.

generated tests

Produce useful tests for server actions, route handlers, hooks, utilities, and behavior-heavy components.
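To make "useful tests" concrete, here is the flavor of pure-function target the generator prioritizes first — a hypothetical slugify utility (the function and its name are invented for illustration, not part of the pack):

```typescript
// Hypothetical utility: the kind of small, branchy pure function
// that ranks as a high-value test target.
export function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics to one dash
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}

// A generated test asserts behavior, not implementation details:
//   slugify("  Hello, World!  ")  →  "hello-world"
//   slugify("---")                →  ""
```

Edge cases like empty output and punctuation runs are exactly the branches that shallow template tests skip.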

mock adapters

Reuse adapter patterns for App Router, Supabase, NextAuth, Prisma, Stripe, React Query, and Zustand.

findings reports

Ship with `TEST-AUDIT.md` and `TEST_FINDINGS.md` so you know what passed, what failed, and what the likely bug actually is.

Get TestGen now

How it works

Drop one file. Keep coding normally.

1

Scan and classify the repo

TestGen Lite detects the framework, stack, current runner, and what is actually testable before changing anything.
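As a rough sketch of the classification step (the field names and heuristics here are illustrative only — the actual `scan_repo.py` logic may differ):

```typescript
// Illustrative sketch: classify a repo from its package.json dependencies.
// Not TestGen's real detection code.
interface RepoScan {
  framework: string;
  runner: string;
}

export function classifyRepo(pkg: {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}): RepoScan {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const framework = deps["next"] ? "next" : deps["react"] ? "react" : "unknown";
  const runner = deps["vitest"] ? "vitest" : deps["jest"] ? "jest" : "none";
  return { framework, runner };
}
```

The point of scanning first is that every later decision (harness, templates, mocks) branches on this result instead of assuming one stack.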

2

Repair the harness first

It installs or fixes the smallest compatible testing stack and templates instead of dumping a one-size-fits-all setup.
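A minimal harness of the sort step 2 aims for might look like the sketch below (illustrative only — the template TestGen actually writes depends on the detected stack):

```typescript
// vitest.config.ts — minimal React Testing Library setup (illustrative sketch)
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    environment: "jsdom",              // DOM environment for component tests
    globals: true,
    setupFiles: ["./vitest.setup.ts"], // e.g. @testing-library/jest-dom matchers
    coverage: {
      provider: "v8",
      reporter: ["text", "lcov"],
    },
  },
});
```

"Smallest compatible" means nothing beyond this until a target actually requires it — Playwright, for instance, only enters when E2E coverage is justified.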

3

Generate the highest-value tests

Full TestGen prioritizes utilities, validators, hooks, server actions, route handlers, and logic-heavy components by return on effort.

4

Run and write findings

The runner executes the suite and writes `TEST-AUDIT.md` plus `TEST_FINDINGS.md` so you know what passed, what failed, and why.
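The findings split — probable product bug versus infra or mock gap — can be sketched as a simple triage heuristic (entirely illustrative; the real `write_findings.py` classification is more involved):

```typescript
// Illustrative triage: route a failing test into a findings bucket.
// The signal list is invented for this sketch.
type Finding = "probable-product-bug" | "infra-or-mock-gap";

export function triageFailure(message: string): Finding {
  // Errors about missing modules or unwired mocks point at the harness,
  // not at the product under test.
  const infraSignals = [/cannot find module/i, /mock.* not (defined|wired)/i];
  return infraSignals.some((re) => re.test(message))
    ? "infra-or-mock-gap"
    : "probable-product-bug";
}
```

Separating the two buckets is what turns a red suite from "something broke" into a concrete next action.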

Mock Preview

The product, framed like something you can actually deploy

This is not sold as vague expertise. It is packaged as a concrete operating layer with rules, examples, and repeatable usage inside real AI workflows.

Founders shipping AI-generated apps who need deterministic QA before launch.

Developers tired of losing hours to Vitest setup, mock wiring, and target selection.

Teams reviewing App Router mutations, handlers, and auth-sensitive logic that cannot rely on happy-path demos.

LOGIC_KERNEL_06
sales mockup
```
$ python3 full-testgen/scripts/run_full_testgen.py /path/to/my-app
[✓] Wrote TEST-AUDIT.md
[✓] Ranked Top 5 test targets by score
[✓] Generated tests for actions, routes, utils, and form behavior
[✓] Applied App Router + Supabase mock patterns
[✓] Executed vitest --run
[✓] Wrote TEST_FINDINGS.md

Result: 12/12 passing on validated fixture.
Regression in revalidatePath() caught immediately.
```

Inside the pack

  • Free Lite scanner for framework detection, stack detection, runner detection, and testable-target scanning.
  • Priority scoring that labels targets by business risk, branch density, blast radius, and friction.
  • A full `TEST-AUDIT.md` pass with top-five ranked targets and recommended commands.
  • Generation patterns for pure functions, Zod validators, hooks, server actions, route handlers, and behavior-heavy components.
  • Optional Playwright patterns and config for auth, CRUD, and payment flows when E2E is justified.
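The four scoring factors above could be sketched as a weighted sum (weights, scales, and field names are invented for illustration — this is not the pack's actual formula):

```typescript
// Illustrative priority score combining the four factors from the list above.
export interface Target {
  businessRisk: number;  // 0-3: auth and billing paths score high
  branchDensity: number; // 0-3: conditionals and error paths per function
  blastRadius: number;   // 0-3: how many callers break if this does
  friction: number;      // 0-3: mocking and setup cost to test it
}

export function priorityScore(t: Target): number {
  // Value raises the score; testing friction discounts it.
  return t.businessRisk * 3 + t.blastRadius * 2 + t.branchDensity - t.friction;
}
```

Under any scoring like this, an auth-touching server action outranks a presentational wrapper, which is exactly the ordering generic generators get wrong.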

What's inside

Everything in the pack

```
testgen-private/
├── test-setup/
│   ├── SKILL.md                      // setup-only scanner and harness repair
│   ├── scripts/scan_repo.py
│   ├── references/setup-checklist.md
│   └── assets/templates/             // vitest, playwright, CI, reports
├── full-testgen/
│   ├── SKILL.md                      // full generation workflow
│   ├── scripts/run_full_testgen.py
│   ├── scripts/write_findings.py
│   ├── references/stack-adapters.md
│   └── assets/templates/             // tests, mocks, reports, CI
├── demo-generated-tests/
│   ├── TEST-AUDIT.md
│   └── TEST_FINDINGS.md
└── real-next-validation/
    ├── TEST-AUDIT.md
    └── TEST_FINDINGS.md
```

Before / After

What changes once the product is installed

CI/CD

A real test workflow instead of local-only QA

Without the pack, tests stay tribal and manual. TestGen ships a working GitHub Actions workflow template so verification stops depending on memory.

Without TestGen

```
.github/workflows/tests.yml not present

$ npm run test -- --run

  create-post.test.ts (3)
  posts-route.test.ts (4)

Test Files  2 passed (2)
     Tests  7 passed (7)

no pull_request gate
no push gate
no shared CI baseline
```

With TestGen

```yaml
name: Tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run test -- --run
  e2e:
    if: ${{ false }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npm run test:e2e
```

Validated fixture: 12 tests passing, 82.71% statements and lines after generation.

Generation — E2E

Critical flows get exercised, not just smoke-clicked

The full pack includes E2E patterns for login, navigation, and write flows when end-to-end coverage is actually justified.

Without TestGen

```typescript
import { expect, test } from '@playwright/test'

test('dashboard renders', async ({ page }) => {
  await page.goto('/dashboard')
  await expect(page.locator('body')).toBeVisible()
})
```

With TestGen

```typescript
import { expect, test } from '@playwright/test'

test('user can create a post from the dashboard', async ({ page }) => {
  await page.goto('/login')

  await page.getByLabel('Email').fill('qa@example.com')
  await page.getByLabel('Password').fill('password123')
  await page.getByRole('button', { name: 'Log in' }).click()

  await expect(page).toHaveURL(/dashboard/)
  await page.getByRole('link', { name: 'New post' }).click()
  await page.getByLabel('Title').fill('Playwright post')
  await page.getByRole('button', { name: 'Create post' }).click()

  await expect(page.getByText('Playwright post')).toBeVisible()
})
```

Adapters

Mocks are reusable and stack-aware instead of ad hoc

You do not have to reinvent Supabase, auth, or data mocks on every repo. The pack includes adapter patterns the generator can reuse consistently.

Without TestGen

```typescript
import { vi } from 'vitest'

vi.mock('@/lib/supabase/server', () => ({
  createClient: async () => ({
    auth: {
      getUser: async () => ({ data: { user: null }, error: null }),
    },
    from: () => ({
      select: vi.fn(),
    }),
  }),
}))
```

With TestGen

```typescript
import { vi } from 'vitest'

export const mockGetUser = vi.fn()
export const mockFrom = vi.fn()
export const mockSelect = vi.fn()
export const mockInsert = vi.fn()
export const mockUpdate = vi.fn()
export const mockDelete = vi.fn()

mockFrom.mockImplementation(() => ({
  select: mockSelect,
  insert: mockInsert,
  update: mockUpdate,
  delete: mockDelete,
}))

export function mockAuthUser(user = { id: 'u1', email: 'qa@example.com' }) {
  mockGetUser.mockResolvedValue({
    data: { user },
    error: null,
  })
}

export function mockNoSession() {
  mockGetUser.mockResolvedValue({
    data: { user: null },
    error: null,
  })
}

vi.mock('@/lib/supabase/server', () => ({
  createClient: vi.fn().mockResolvedValue({
    auth: {
      getUser: mockGetUser,
    },
    from: mockFrom,
  }),
}))
```

Diagnosis

You get a findings layer, not just a pass/fail blur

The difference is not only generated tests. The runner writes a structured diagnosis report so you can separate product bugs from infra or mock gaps.

Without TestGen

```
FAIL  tests/create-post.test.ts > createPost > creates the post and revalidates
AssertionError: expected "spy" to be called with arguments: [ '/posts' ]

Received:
  1st spy call:
    [ '/dashboard' ]

tests/create-post.test.ts:80:28
  78     expect(result.ok).toBe(true)
  79     expect(insertSpy).toHaveBeenCalled()
  80     expect(revalidatePath).toHaveBeenCalledWith('/posts')
```

You know something broke. You still have to diagnose whether it's product logic, mocks, or the harness.

With TestGen

```markdown
# Test Findings

## Summary
- tests added: 12
- files changed: source, tests, and Next.js project metadata
- passing: 12
- failing: 0 in final state
- coverage before: not configured
- coverage after: 71.87% statements and lines

## Probable Product Bugs

| Severity | File Or Flow | Failing Test | Observed Behavior | Expected Behavior | Recommended Action |
| --- | --- | --- | --- | --- | --- |
| medium | `src/app/actions/create-post.ts` | `createPost > creates the post and revalidates` | deliberate regression changed `revalidatePath` from `/posts` to `/dashboard`; generated test failed immediately | revalidate the posts listing after a successful create | fixed during validation and confirmed with a green rerun |

## Infra Or Mock Gaps

| Type | File Or Flow | Problem | Action Needed |
| --- | --- | --- | --- |
| none | - | - | - |

## Tests Adjusted Or Rejected

| File Or Flow | Reason | Decision |
| --- | --- | --- |
| `src/app/page.tsx` and `src/app/posts/page.tsx` | low-value in this small repo without browser flows | defer to later E2E or smoke coverage |

## Coverage Notes
- `src/app/actions/create-post.ts`: 100%
- `src/app/api/posts/route.ts`: 100%
- `src/components/CreatePostForm.tsx`: 93.54%
- low global coverage is driven by untouched pages and boundary stubs, not by the tested critical flow
```
Implement these rules today

Offer Compare

Free vs Full

Lite gives you visibility and setup. Full gives you the actual audit, generation, execution, and findings loop.

| Layer | TestGen Lite (Free) | TestGen Full |
| --- | --- | --- |
| Framework & stack detection | Included | Included |
| Scan of testable targets | Included | Included |
| Vitest setup scaffolding | Included | Included |
| Priority scoring | Included | Included |
| Top 5 ranked targets | Not included | Included |
| Boundary mapping | Not included | Included |
| Server action tests | Not included | Included |
| API route tests | Not included | Included |
| Component behavior tests | Not included | Included |
| Zod / hook / utility tests | Not included | Included |
| Playwright patterns | Not included | Included |
| App Router + Supabase adapters | Not included | Included |
| Prisma / Stripe / NextAuth adapters | Not included | Included |
| One-shot runner | Not included | Included |
| TEST-AUDIT.md | Not included | Included |
| TEST_FINDINGS.md | Not included | Included |
| GitHub Actions test workflow | Not included | Included |

Compatibility

Where it fits

TestGen is a skill layer, a scanner, and a runnable script set. It is not locked to one editor.

| Tool | Format | Status |
| --- | --- | --- |
| Codex | SKILL.md + scripts | Full support |
| Claude Code | SKILL.md + scripts | Full support |
| Cursor | workflow reference + scripts | Portable |
| ChatGPT coding sessions | repo guidance + scripts | Portable |
| Manual CLI | Python scripts | Native runner |

FAQ

Questions before checkout

Does Lite generate the actual tests?

No. Lite scans the repo, identifies what is testable, recommends the smallest compatible stack, and gets the harness in place. Full is the product that generates and executes the tests.

What does Full generate, concretely?

It covers utilities, validators, hooks, server actions, API route handlers, logic-heavy components, and optional Playwright flows when E2E is actually justified.

Which stack adapters are included?

App Router globals, Supabase, NextAuth, Prisma, Stripe, React Query, and Zustand are all covered with reusable mock patterns and selection rules.

Is this already validated on real repos?

Yes. The private beta includes a generated demo fixture and a real Next.js App Router validation repo. Both hit 12 out of 12 passing tests in the final validated state, and the real fixture proved the deliberate regression detection flow.

Can I use TestGen Full with Claude Code, Cursor, or Codex?

Yes. The skills are delivered for Codex-style workflows, but the scanner, templates, and scripts are tool-agnostic and work across Claude Code, Cursor, ChatGPT coding sessions, and manual CLI use.

What is the honest limit of the current release?

It is ship-ready for private beta, client delivery with operator oversight, and internal use on Next.js plus Vitest plus RTL projects. The included large-stack patterns should still be forward-tested on real client repos before broad public claims.

Final CTA

$39

one-time · digital delivery

Stop confusing test setup with actual QA coverage.

If the free material already made the problem obvious, this is the faster path to a production-ready implementation. Buy the full pack, or start with the linked free asset first.

Lite available free on GitHub · Audit plus findings reports included · Validated on demo and real Next.js fixtures