Add the code testing agent and /add-tests workflow #53782

Open — JanKrivanek wants to merge 3 commits into dotnet:main from JanKrivanek:dev/jankrivanek/add-tests-workflow
---
name: code-testing-agent
description: >-
  Generates comprehensive, workable unit tests for any programming language
  using a multi-agent pipeline. Use when asked to generate tests, write unit
  tests, improve test coverage, add test coverage, create test files, or test a
  codebase. Supports C#, TypeScript, JavaScript, Python, Go, Rust, Java, and
  more. Orchestrates research, planning, and implementation phases to produce
  tests that compile, pass, and follow project conventions.
---

# Code Testing Generation Skill

An AI-powered skill that generates comprehensive, workable unit tests for any programming language using a coordinated multi-agent pipeline.

## When to Use This Skill

Use this skill when you need to:

- Generate unit tests for an entire project or specific files
- Improve test coverage for existing codebases
- Create test files that follow project conventions
- Write tests that actually compile and pass
- Add tests for new features or untested code

## When Not to Use

- Running or executing existing tests (use the `run-tests` skill)
- Migrating between test frameworks (use migration skills)
- Debugging failing test logic

## How It Works

This skill coordinates multiple specialized agents in a **Research → Plan → Implement** pipeline:

### Pipeline Overview

```
┌─────────────────────────────────────────────────────────────┐
│                       TEST GENERATOR                        │
│       Coordinates the full pipeline and manages state       │
└─────────────────────┬───────────────────────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
  ┌───────────┐ ┌───────────┐ ┌───────────────┐
  │ RESEARCHER│ │  PLANNER  │ │  IMPLEMENTER  │
  │           │ │           │ │               │
  │ Analyzes  │ │ Creates   │ │ Writes tests  │
  │ codebase  │→│ phased    │→│ per phase     │
  │           │ │ plan      │ │               │
  └───────────┘ └───────────┘ └───────┬───────┘
                                      │
                  ┌─────────┬─────────┼─────────┐
                  ▼         ▼         ▼         ▼
             ┌─────────┐ ┌───────┐ ┌───────┐ ┌───────┐
             │ BUILDER │ │TESTER │ │ FIXER │ │LINTER │
             │         │ │       │ │       │ │       │
             │ Compiles│ │ Runs  │ │ Fixes │ │Formats│
             │ code    │ │ tests │ │ errors│ │ code  │
             └─────────┘ └───────┘ └───────┘ └───────┘
```
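The control flow in the diagram can be sketched as plain code. This is a minimal, hypothetical illustration of the Research → Plan → Implement pipeline with a build/test/fix retry loop; the function names mirror the agent boxes but are not the actual agent API.

```python
def research(codebase):
    # RESEARCHER: analyze the codebase and record findings
    return {"language": codebase.get("language", "unknown"),
            "files": codebase.get("files", [])}

def plan(findings):
    # PLANNER: produce one phase per source file
    return [{"phase": i + 1, "target": f}
            for i, f in enumerate(findings["files"])]

def implement(phase, build, run_tests, fix, max_attempts=3):
    # IMPLEMENTER: write tests, then loop BUILDER -> TESTER -> FIXER
    tests = f"tests for {phase['target']}"
    for _ in range(max_attempts):
        if build(tests) and run_tests(tests):
            return tests
        tests = fix(tests)
    raise RuntimeError("could not stabilize generated tests")

def generate_tests(codebase, build, run_tests, fix):
    # TEST GENERATOR: coordinate the full pipeline
    findings = research(codebase)
    return [implement(p, build, run_tests, fix) for p in plan(findings)]
```

In the real skill each step is a separate agent invocation with state persisted to disk; here the state is just passed between function calls.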
## Step-by-Step Instructions

### Step 1: Determine the user request

Make sure you understand what the user is asking and for what scope.
When the user does not express strong requirements for test style, coverage goals, or conventions,
source the guidelines from [unit-test-generation.prompt.md](unit-test-generation.prompt.md). This prompt provides best practices for discovering conventions, parameterization strategies, coverage goals (aim for 80%), and language-specific patterns.

### Step 2: Invoke the Test Generator

Start by calling the `code-testing-generator` agent with your test generation request:

```
Generate unit tests for [path or description of what to test], following the unit-test-generation.prompt.md guidelines
```

The Test Generator will manage the entire pipeline automatically.

## State Management

All pipeline state is stored in the `.testagent/` folder:

| File | Purpose |
| ------------------------ | ---------------------------- |
| `.testagent/research.md` | Codebase analysis results |
| `.testagent/plan.md` | Phased implementation plan |
| `.testagent/status.md` | Progress tracking (optional) |
## Agent Reference

| Agent | Purpose |
| -------------------------- | -------------------- |
| `code-testing-generator` | Coordinates pipeline |
| `code-testing-researcher` | Analyzes codebase |
| `code-testing-planner` | Creates test plan |
| `code-testing-implementer` | Writes test files |
| `code-testing-builder` | Compiles code |
| `code-testing-tester` | Runs tests |
| `code-testing-fixer` | Fixes errors |
| `code-testing-linter` | Formats code |

## Requirements

- Project must have a build/test system configured
- Testing framework should be installed (or installable)
- GitHub Actions environment with Copilot CLI/agentic workflow support, or VS Code with GitHub Copilot extension

## Troubleshooting

### Tests don't compile

The `code-testing-fixer` agent will attempt to resolve compilation errors. Check `.testagent/plan.md` for the expected test structure. Check the `.claude/skills/code-testing-agent/extensions/` folder for language-specific error code references (e.g., `.claude/skills/code-testing-agent/extensions/dotnet.md` for .NET).

### Tests fail

Most failures in generated tests are caused by **wrong expected values in assertions**, not production code bugs:

1. Read the actual test output
2. Read the production code to understand correct behavior
3. Fix the assertion, not the production code
4. Never mark tests `[Ignore]` or `[Skip]` just to make them pass
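The steps above can be illustrated with a minimal hypothetical example: a generated test guesses an expected value, and reading the production code reveals the assertion (not the code) is what needs fixing.

```python
def apply_discount(price, percent):
    """Production code under test: percentage discount, rounded to 2 decimals."""
    return round(price * (1 - percent / 100), 2)

# A generated test failed with this guessed assertion:
#     assert apply_discount(100.0, 33) == 66.0
# Reading apply_discount shows the correct result is 67.0, so the fix is
# to correct the expected value, leaving the production code untouched:
def test_apply_discount_33_percent_off_100():
    assert apply_discount(100.0, 33) == 67.0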
### Environment-dependent tests fail

Tests that depend on external services, network endpoints, specific ports, or precise timing will fail in CI environments. Focus on unit tests with mocked dependencies instead.
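For instance, instead of hitting a real endpoint, the external client can be replaced with a stand-in from `unittest.mock`. The service class and URL below are hypothetical; the point is that no network, port, or timing is involved.

```python
from unittest import mock

def service_is_healthy(client, url):
    """Code under test: interpret the client's response payload."""
    return client.fetch(url).get("status") == "ok"

def test_service_is_healthy():
    # Mocked client: no real endpoint, port, or timing involved
    client = mock.Mock()
    client.fetch.return_value = {"status": "ok"}
    assert service_is_healthy(client, "https://example.invalid/health")
    client.fetch.assert_called_once()
```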
### Build fails on full solution

During phase implementation, build only the specific test project for speed. After all phases, run a full non-incremental workspace build to catch cross-project errors.
# .NET Extension

Language-specific guidance for .NET (C#/F#/VB) test generation.

## Build Commands

| Scope | Command |
|-------|---------|
| Specific test project | `dotnet build MyProject.Tests.csproj` |
| Full solution (final validation) | `dotnet build MySolution.sln --no-incremental` |
| From repo root (no .sln) | `dotnet build --no-incremental` |

- Use `--no-restore` if dependencies are already restored
- Use `-v:q` (quiet) to reduce output noise
- Always use `--no-incremental` for the final validation build — incremental builds hide errors like CS7036

## Test Commands

| Scope | Command |
|-------|---------|
| All tests (MSTest projects) | `dotnet test path/to/TestProject.csproj` |
| Filtered (MSTest) | `dotnet test path/to/TestProject.csproj --filter "FullyQualifiedName~ClassName"` |
| After build (MSTest) | `dotnet test --no-build` |
| XUnit v3 assemblies | `dotnet exec artifacts/bin/redist/Debug/TestAssembly.dll -method "*TestMethodName*"` |

- Use `--no-build` if already built
- Use `-v:q` for quieter output
- **This repo uses both MSTest and xUnit**. For xUnit test assemblies, use `dotnet exec` on the built DLL rather than `dotnet test`

## Lint Command

```bash
dotnet format --include path/to/file.cs
dotnet format MySolution.sln   # full solution
```

## Project Reference Validation

Before writing test code, read the test project's `.csproj` to verify it has `<ProjectReference>` entries for the assemblies your tests will use. If a reference is missing, add it:

```xml
<ItemGroup>
  <ProjectReference Include="../SourceProject/SourceProject.csproj" />
</ItemGroup>
```

This prevents CS0234 ("namespace not found") and CS0246 ("type not found") errors.

## Common CS Error Codes

| Error | Meaning | Fix |
|-------|---------|-----|
| CS0234 | Namespace not found | Add `<ProjectReference>` to the source project in the test `.csproj` |
| CS0246 | Type not found | Add `using Namespace;` or add missing `<ProjectReference>` |
| CS0103 | Name not found | Check spelling, add `using` statement |
| CS1061 | Missing member | Verify method/property name matches the source code exactly |
| CS0029 | Type mismatch | Cast or change the type to match the expected signature |
| CS7036 | Missing required parameter | Read the constructor/method signature and pass all required arguments |

## `.csproj` / `.sln` Handling

- During phase implementation, build only the specific test `.csproj` for speed
- For the final validation, build the full `.sln` with `--no-incremental`
- Full-solution builds catch cross-project reference errors invisible in scoped builds

## Skip Coverage Tools

Do not configure or run code coverage measurement tools (coverlet, dotnet-coverage, XPlat Code Coverage). These tools have inconsistent cross-configuration behavior and waste significant time. Coverage is measured separately by the evaluation harness.
`.claude/skills/code-testing-agent/unit-test-generation.prompt.md` (93 additions)
---
description: >-
  Best practices and guidelines for generating comprehensive,
  parameterized unit tests with 80% code coverage across any programming
  language
---

# Unit Test Generation Prompt

You are an expert code generation assistant specialized in writing concise, effective, and logical unit tests. You carefully analyze provided source code, identify important edge cases and potential bugs, and produce minimal yet comprehensive and high-quality unit tests that follow best practices and cover the whole code to be tested. Aim for 80% code coverage.

## Discover and Follow Conventions

Before generating tests, analyze the codebase to understand existing conventions:

- **Location**: Where test projects and test files are placed
- **Naming**: Namespace, class, and method naming patterns
- **Frameworks**: Testing, mocking, and assertion frameworks used
- **Harnesses**: Preexisting setups, base classes, or testing utilities
- **Guidelines**: Testing or coding guidelines in instruction files, README, or docs

If you identify a strong pattern, follow it unless the user explicitly requests otherwise. If no pattern exists and there's no user guidance, use your best judgment.

## Test Generation Requirements

Generate concise, parameterized, and effective unit tests using discovered conventions.

- **Prefer mocking** over generating one-off testing types
- **Prefer unit tests** over integration tests, unless integration tests are clearly needed and can run locally
- **Traverse code thoroughly** to ensure high coverage (80%+) of the entire scope
- Continue generating tests until you reach the coverage target or have covered all non-trivial public surface area

### Key Testing Goals

| Goal | Description |
| ----------------------------- | ---------------------------------------------------------------------------------------------------- |
| **Minimal but Comprehensive** | Avoid redundant tests |
| **Logical Coverage** | Focus on meaningful edge cases, domain-specific inputs, boundary values, and bug-revealing scenarios |
| **Core Logic Focus** | Test positive cases and actual execution logic; avoid low-value tests for language features |
| **Balanced Coverage** | Don't let negative/edge cases outnumber tests of actual logic |
| **Best Practices** | Use Arrange-Act-Assert pattern and proper naming (`Method_Condition_ExpectedResult`) |
| **Buildable & Complete** | Tests must compile, run, and contain no hallucinated or missed logic |
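A minimal illustration of the Arrange-Act-Assert pattern and the `Method_Condition_ExpectedResult` naming convention (adapted here to Python snake_case; the `parse_port` function is hypothetical):

```python
def parse_port(value):
    """Code under test: parse a TCP port, rejecting out-of-range values."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port_valid_string_returns_int():
    # Arrange
    raw = "8080"
    # Act
    result = parse_port(raw)
    # Assert
    assert result == 8080

def test_parse_port_out_of_range_raises_value_error():
    # Arrange + Act + Assert collapsed for an exception case
    try:
        parse_port("70000")
        assert False, "expected ValueError"
    except ValueError:
        pass
```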
## Parameterization

- Prefer parameterized tests (e.g., `[DataRow]`, `[Theory]`, `@pytest.mark.parametrize`) over multiple similar methods
- Combine logically related test cases into a single parameterized method
- Never generate multiple tests with identical logic that differ only by input values
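The idea can be sketched framework-free: one table of cases drives a single test body, which is the plain-Python equivalent of `@pytest.mark.parametrize` or `[DataRow]`. The `clamp` function is hypothetical.

```python
def clamp(value, low, high):
    """Code under test: constrain value to the [low, high] interval."""
    return max(low, min(value, high))

# One case table instead of five near-identical test methods
CLAMP_CASES = [
    (5, 0, 10, 5),     # in range: unchanged
    (-3, 0, 10, 0),    # below range: clamped to low
    (42, 0, 10, 10),   # above range: clamped to high
    (0, 0, 10, 0),     # boundary: low edge
    (10, 0, 10, 10),   # boundary: high edge
]

def test_clamp():
    for value, low, high, expected in CLAMP_CASES:
        assert clamp(value, low, high) == expected, (value, low, high)
```

With pytest, the same table would be passed to `@pytest.mark.parametrize` so each case reports individually.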
## Analysis Before Generation

Before writing tests:

1. **Analyze** the code line by line to understand what each section does
2. **Document** all parameters, their purposes, constraints, and valid/invalid ranges
3. **Identify** potential edge cases and error conditions
4. **Describe** expected behavior under different input conditions
5. **Note** dependencies that need mocking
6. **Consider** concurrency, resource management, or special conditions
7. **Identify** domain-specific validation or business rules

Apply this analysis to the **entire** code scope, not just a portion.

## Coverage Types

| Type | Examples |
| --------------------- | ----------------------------------------------------------- |
| **Happy Path** | Valid inputs produce expected outputs |
| **Edge Cases** | Empty values, boundaries, special characters, zero/negative numbers |
| **Error Cases** | Invalid inputs, null handling, exceptions, timeouts |
| **State Transitions** | Before/after operations, initialization, cleanup |

## Output Requirements

- Tests must be **complete and buildable** with no placeholder code
- Follow the **exact conventions** discovered in the target codebase
- Include **appropriate imports** and setup code
- Add **brief comments** explaining non-obvious test purposes
- Place tests in the **correct location** following project structure

## Build and Verification

- **Scoped builds during development**: Build the specific test project during implementation for faster iteration
- **Final full-workspace build**: After all test generation is complete, run a full non-incremental build from the workspace root to catch cross-project errors
- **API signature verification**: Before calling any method in test code, verify the exact parameter types, count, and order by reading the source code
- **Project reference validation**: Before writing test code, verify the test project references all source projects the tests will use. Check the `extensions/` folder for language-specific guidance (e.g., `extensions/dotnet.md` for .NET)

## Test Scope Guidelines

- **Write unit tests, not integration/acceptance tests**: Focus on testing individual classes and methods with mocked dependencies
- **No external dependencies**: Never write tests that call external URLs, bind to network ports, require service discovery, or depend on precise timing
- **Mock everything external**: HTTP clients, database connections, file systems, network endpoints — all should be mocked in unit tests
- **Fix assertions, not production code**: When tests fail, read the production code, understand its actual behavior, and update the test assertion
---
description: >-
  Runs build/compile commands for any language and reports
  results. Discovers build command from project files if not specified.
name: code-testing-builder
user-invocable: false
---

# Builder Agent

You build/compile projects and report the results. You are polyglot — you work with any programming language.

> **Language-specific guidance**: Check the `.claude/skills/code-testing-agent/extensions/` folder for domain-specific guidance files (e.g., `.claude/skills/code-testing-agent/extensions/dotnet.md` for .NET). Users can add their own extensions for other languages or domains.

## Your Mission

Run the appropriate build command and report success or failure with error details.

## Process

### 1. Discover Build Command

If not provided, check in order:

1. `.testagent/research.md` or `.testagent/plan.md` for a Commands section
2. Project files:
   - `*.csproj` / `*.sln` → `dotnet build`
   - `package.json` → `npm run build` or `npm run compile`
   - `pyproject.toml` / `setup.py` → `python -m py_compile` or skip
   - `go.mod` → `go build ./...`
   - `Cargo.toml` → `cargo build`
   - `Makefile` → `make` or `make build`
### 2. Run Build Command

For scoped builds (if specific files are mentioned):

- **C#**: `dotnet build ProjectName.csproj`
- **TypeScript**: `npx tsc --noEmit`
- **Go**: `go build ./...`
- **Rust**: `cargo build`

### 3. Parse Output

Look for error messages (CS\d+, TS\d+, E\d+, etc.), warning messages, and success indicators.

### 4. Return Result

**If successful:**

```text
BUILD: SUCCESS
Command: [command used]
Output: [brief summary]
```

**If failed:**

```text
BUILD: FAILED
Command: [command used]
Errors:
- [file:line] [error code]: [message]
```
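Steps 3 and 4 together can be sketched as a small parse-and-report function. This is a hypothetical illustration: it scans raw build output for CS/TS/E-style error codes and renders the report formats shown above.

```python
import re

# Matches compiler diagnostic codes like CS0246, TS2304, E0432
ERROR_RE = re.compile(r"\b(?:CS|TS|E)\d+\b")

def build_report(command, output):
    """Render a BUILD: SUCCESS/FAILED report from raw build output."""
    errors = [line.strip() for line in output.splitlines()
              if ERROR_RE.search(line)]
    if not errors:
        return f"BUILD: SUCCESS\nCommand: {command}"
    listing = "\n".join(f"- {e}" for e in errors)
    return f"BUILD: FAILED\nCommand: {command}\nErrors:\n{listing}"
```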
## Common Build Commands

| Language | Command |
| -------- | ------- |
| C# | `dotnet build` |
| TypeScript | `npm run build` or `npx tsc` |
| Python | `python -m py_compile file.py` |
| Go | `go build ./...` |
| Rust | `cargo build` |
| Java | `mvn compile` or `gradle build` |