Developer · Coding · Risk: high
Test-First Bug Fix Implementer
The prompt directs the model to fix a specified bug using a test-first approach: read the source files and tests, write a failing test reproducing the bug, implement a minimal fix, run the full suite until all tests pass, check related code paths for the same issue, and summarize every change.
- Policy sensitive
- Human review
- External action: high
PROMPT
I have a bug: ${bug}. Take a test-first approach: 1) Read the relevant source files and existing tests. 2) Write a failing test that reproduces the exact bug. 3) Run the test suite to confirm it fails. 4) Implement the minimal fix. 5) Re-run the full test suite. 6) If any test fails, analyze the failure, adjust the code, and re-run—repeat until ALL tests pass. 7) Then grep the codebase for related code paths that might have the same issue and add tests for those too. 8) Summarize every change made and why. Do not ask me questions—make reasonable assumptions and document them.
INPUTS
- bug (REQUIRED): Description of the bug to reproduce and fix. Example: Null pointer exception occurs when processing empty input in function X
REQUIRED CONTEXT
- bug description
TOOLS REQUIRED
- code_execution
- file_search
ROLES & RULES
- Read the relevant source files and existing tests.
- Write a failing test that reproduces the exact bug.
- Run the test suite to confirm it fails.
- Implement the minimal fix.
- Re-run the full test suite.
- If any test fails, analyze the failure, adjust the code, and re-run—repeat until ALL tests pass.
- Then grep the codebase for related code paths that might have the same issue and add tests for those too.
- Summarize every change made and why.
- Do not ask me questions—make reasonable assumptions and document them.
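The rules above can be sketched as a minimal test-first cycle. This is an illustrative example only, assuming a Python codebase with pytest; the function and test names are hypothetical stand-ins for the example bug ("exception when processing empty input"), not part of any real codebase.

```python
# Step 1-2: a buggy function and a failing test that reproduces the bug.
def process(items):
    """Buggy version: assumes items is non-empty."""
    return items[0].upper()  # raises IndexError on empty input

# Step 4: the minimal fix, handling only the empty case.
def process_fixed(items):
    """Fixed version: handle the empty case explicitly."""
    if not items:
        return None
    return items[0].upper()

def test_process_empty_input():
    # This test fails against `process` (step 3) and passes after the fix.
    assert process_fixed([]) is None

def test_process_normal_input():
    # Regression guard (step 5): existing behavior is unchanged.
    assert process_fixed(["ab"]) == "AB"
```

Running the suite with `pytest` before and after the fix would show the failing test turn green, matching steps 3 through 5 of the workflow.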
EXPECTED OUTPUT
- Format
- structured_report
- Constraints
- Summarize every change made and why
- Document reasonable assumptions
- Include new tests and code changes
SUCCESS CRITERIA
- Reproduce the exact bug with a failing test
- Implement the minimal fix
- Ensure all tests pass after fix
- Add tests for related code paths
- Summarize every change made and why
FAILURE MODES
- Assumes access to source files, tests, and a runnable test suite that may not be provided
- May make incorrect assumptions because clarifying questions are disallowed
- Could fail to identify all related code paths
CAVEATS
- Dependencies
- Bug description via ${bug}
- Relevant source files
- Existing tests
- Runnable test suite
- Full codebase for grepping
- Missing context
- Codebase source files and existing tests
- Programming language and test framework
- Method to run test suite and grep codebase
QUALITY
- OVERALL
- 0.90
- CLARITY
- 0.90
- SPECIFICITY
- 0.90
- REUSABILITY
- 0.90
- COMPLETENESS
- 0.85
IMPROVEMENT SUGGESTIONS
- Add placeholders for source code, tests, and codebase root: e.g., 'Codebase: ${codebase}'
- Specify expected output format: e.g., 'Output failing test, fix as code diffs, and summary in markdown.'
- Clarify simulation of 'running tests' if no real execution: 'Simulate test runs based on code analysis.'
USAGE
Copy the prompt above and paste it into your AI of choice — Claude, ChatGPT, Gemini, or anywhere else you're working. Replace any placeholder sections with your own context, then ask for the output.
MORE FOR DEVELOPER
- Context7 Library Documentation Expert
- Structured Python Production Code Generator
- Angular Standalone Directive Generator
- Pytest Unit Test Suite Generator
- Unity Architecture Specialist
- Web Typography CSS Generator
- VSCode CodeTour File Expert
- Senior Python Code Reviewer
- Structured Cross-Language Code Translator
- Multi-DB SQL Query Optimizer and Builder