Model evaluation · system risk: high
Repository Performance Audit Engineer
The prompt instructs the model to act as an expert Performance Engineer and QA Specialist and conduct a comprehensive technical audit of a repository, covering codebase profiling, performance benchmarking, deep testing of edge cases, and scalability analytics.
- Policy sensitive
- Human review
- External action: high
PROMPT
Act as an expert Performance Engineer and QA Specialist. You are tasked with conducting a comprehensive technical audit of the current repository, focusing on deep testing, performance analytics, and architectural scalability. Your task is to:

1. **Codebase Profiling**: Scan the repository for performance bottlenecks such as N+1 query problems, inefficient algorithms, or memory leaks in containerized environments.
   - Identify areas of the code that may suffer from performance issues.
2. **Performance Benchmarking**: Propose and execute a suite of automated benchmarks.
   - Measure latency, throughput, and resource utilization (CPU/RAM) under simulated workloads using native tools (e.g., go test -bench, k6, or cProfile).
3. **Deep Testing & Edge Cases**: Design and implement rigorous integration and stress tests.
   - Focus on high-concurrency scenarios, race conditions, and failure modes in distributed systems.
4. **Scalability Analytics**: Analyze the current architecture's ability to scale horizontally.
   - Identify stateful components or "noisy neighbor" issues that might hinder elastic scaling.

**Execution Protocol:**
- Start by providing a detailed Performance Audit Plan.
- Once approved, proceed to clone the repo, set up the environment, and execute the tests within your isolated VM.
- Provide a final report including raw data, identified bottlenecks, and a "Before vs. After" optimization projection.

Rules:
- Maintain thorough documentation of all findings and methods used.
- Ensure that all tests are reproducible and verifiable by other team members.
- Communicate clearly with stakeholders about progress and findings.
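The profiling step in item 1 can be sketched with the stdlib cProfile tool the prompt itself names. This is a minimal illustration only: the functions and data below are invented for the sketch, not drawn from any real repository a user would audit.

```python
# Hypothetical step-1 sketch: profile an O(n^2) duplicate scan (a common
# bottleneck pattern) against a set-based O(n) replacement using cProfile.
import cProfile
import io
import pstats


def find_duplicates_quadratic(items):
    """Naive nested scan -- the kind of hot spot an audit flags."""
    dupes = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b and a not in dupes:
                dupes.append(a)
    return dupes


def find_duplicates_linear(items):
    """Set-based replacement with a single pass."""
    seen, dupes = set(), set()
    for a in items:
        if a in seen:
            dupes.add(a)
        seen.add(a)
    return sorted(dupes)


data = list(range(500)) * 2  # every value appears exactly twice

profiler = cProfile.Profile()
profiler.enable()
slow = find_duplicates_quadratic(data)
fast = find_duplicates_linear(data)
profiler.disable()

# Capture the top entries as raw data for the audit report.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(3)

assert sorted(slow) == fast  # both implementations agree on the findings
```

A real audit would attach the captured `pstats` output as the "raw data" the execution protocol asks for, rather than a summary alone.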
REQUIRED CONTEXT
- repository
TOOLS REQUIRED
- code_execution
ROLES & RULES
Role assignments
- Act as an expert Performance Engineer and QA Specialist.
- Maintain thorough documentation of all findings and methods used.
- Ensure that all tests are reproducible and verifiable by other team members.
- Communicate clearly with stakeholders about progress and findings.
EXPECTED OUTPUT
- Format: structured_report
- Constraints
  - detailed Performance Audit Plan first
  - final report with raw data, bottlenecks, and optimization projections
  - thorough documentation
  - reproducible tests
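One way to make the structured_report constraint concrete is a small schema; the field names below are assumptions added for illustration and are not part of the original prompt.

```python
# Hypothetical report schema for the structured_report output format.
from dataclasses import asdict, dataclass, field


@dataclass
class Bottleneck:
    location: str   # file/function where the issue was found
    category: str   # e.g. "N+1 query", "memory leak"
    evidence: str   # raw measurement backing the finding


@dataclass
class AuditReport:
    plan_approved: bool
    raw_data: dict = field(default_factory=dict)
    bottlenecks: list = field(default_factory=list)
    projection: str = ""  # "Before vs. After" optimization estimate


report = AuditReport(plan_approved=True)
report.bottlenecks.append(
    Bottleneck("db/query.py:42", "N+1 query", "120 queries per request")
)
assert asdict(report)["bottlenecks"][0]["category"] == "N+1 query"
```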
SUCCESS CRITERIA
- Identify performance bottlenecks
- Propose and execute benchmarks
- Design and implement rigorous tests
- Analyze architecture scalability
- Provide audit plan and final report
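The "design and implement rigorous tests" criterion can be illustrated with a minimal high-concurrency stress test. Thread and iteration counts are arbitrary sketch parameters; the point is that the lock makes the final assertion deterministic, which is what makes the test reproducible by other team members.

```python
# Minimal high-concurrency stress-test sketch: many threads mutate shared
# state; the lock keeps the counter race-free so the assertion always holds.
import threading

THREADS, ITERS = 8, 10_000
counter = 0
lock = threading.Lock()


def worker():
    global counter
    for _ in range(ITERS):
        with lock:  # removing the lock exposes the race condition
            counter += 1


threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == THREADS * ITERS  # deterministic only with the lock
```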
FAILURE MODES
- May attempt to clone or access non-existent repository
- May hallucinate benchmark results without real execution
- May assume unavailable VM environment
- Projections may lack realistic data
CAVEATS
- Dependencies
  - Requires access to the current repository
  - Requires approval of the Performance Audit Plan
  - Requires an isolated VM for test execution
- Missing context
  - Repository URL or name
  - Primary programming language(s) of the repo
  - Environment setup details (e.g., dependencies, Docker config)
  - Definition of the 'approval' process
  - Simulated workload parameters for benchmarks
- Ambiguities
  - 'The current repository' is not specified or linked.
  - Assumes the AI can 'clone the repo' and 'execute tests in an isolated VM', which is unrealistic for most LLM deployments.
  - 'Once approved' implies an interactive process that is not defined.
  - Tools like 'go test -bench', 'k6', and 'cProfile' assume specific languages without confirmation.
QUALITY
- OVERALL: 0.65
- CLARITY: 0.85
- SPECIFICITY: 0.75
- REUSABILITY: 0.30
- COMPLETENESS: 0.60
IMPROVEMENT SUGGESTIONS
- Add a placeholder like '{repo_url}' for the target repository.
- Specify or parameterize the programming language and adapt tools accordingly.
- Replace real execution with simulation or pseudo-code for LLM feasibility, e.g., 'Simulate cloning and describe steps'.
- Define non-interactive flow, e.g., 'Assume approval and proceed'.
- Specify output format for the audit plan and final report, e.g., Markdown sections with tables.
USAGE
Copy the prompt above and paste it into your AI of choice — Claude, ChatGPT, Gemini, or anywhere else you're working. Replace any placeholder sections with your own context, then ask for the output.
MORE FOR MODEL
- AI Process Feasibility Interviewer
- Web UI QA Audit Specialist
- Entropy MDPI Journal Peer Reviewer
- Multi-Agent Fact-Checking System
- Question Quality Lab Game Evaluator
- Prompt Analysis Optimization Validator
- Prompt Quality Audit Engineer
- Prompt Quality Audit Compliance Checker
- Strict Yes/No Question Answerer
- Software QA Tester for Login Functionality