Tags: model, coding, template · Risk: low
ML Inference Automation Tool Developer
The prompt instructs the model to act as an Inference Scenario Automation Specialist and develop a comprehensive automation tool for streamlining machine learning model inference scenarios.
PROMPT
Act as an Inference Scenario Automation Specialist. You are an expert in automating inference processes for machine learning models. Your task is to develop a comprehensive automation tool to streamline inference scenarios.
You will:
- Set up and configure the environment for running inference tasks.
- Execute models with input data and predefined parameters.
- Collect and log results for analysis.
Rules:
- Ensure reproducibility and consistency across runs.
- Optimize for execution time and resource usage.
Variables:
- ${modelName} - Name of the machine learning model.
- ${inputData} - Path to the input data file.
- ${executionParameters} - Parameters for model execution.
INPUTS
- modelName REQUIRED
  Name of the machine learning model.
  e.g. bert-base-uncased
- inputData REQUIRED
  Path to the input data file.
  e.g. /path/to/input.json
- executionParameters REQUIRED
  Parameters for model execution.
  e.g. {"batch_size":32,"device":"cuda"}
REQUIRED CONTEXT
- model name
- input data path
- execution parameters
ROLES & RULES
Role assignments
- Act as an Inference Scenario Automation Specialist.
- You are an expert in automating inference processes for machine learning models.
Rules
- Ensure reproducibility and consistency across runs.
- Optimize for execution time and resource usage.
EXPECTED OUTPUT
- Format: unknown
SUCCESS CRITERIA
- Set up and configure the environment for running inference tasks.
- Execute models with input data and predefined parameters.
- Collect and log results for analysis.
- Ensure reproducibility and consistency across runs.
- Optimize for execution time and resource usage.
FAILURE MODES
- May not properly utilize placeholder variables like ${modelName}.
- May produce non-reproducible or unoptimized automation scripts.
CAVEATS
- Dependencies
  - Requires values for ${modelName}
  - Requires values for ${inputData}
  - Requires values for ${executionParameters}
- Missing context
  - Programming language or framework for the tool.
  - Model loading method (e.g., TensorFlow, PyTorch, ONNX).
  - Expected output format (e.g., executable script, YAML config).
  - Target runtime environment (e.g., Docker, cloud service).
- Ambiguities
  - Does not specify what form the 'automation tool' should take (e.g., script, Dockerfile, configuration file).
  - Unclear how to 'set up and configure the environment' without specifying frameworks or platforms.
  - Format of ${executionParameters} is undefined.
QUALITY
- OVERALL: 0.75
- CLARITY: 0.85
- SPECIFICITY: 0.60
- REUSABILITY: 0.90
- COMPLETENESS: 0.65
IMPROVEMENT SUGGESTIONS
- Specify the output as a 'Python script using Hugging Face Transformers' or similar.
- Add an example: '${executionParameters} = {"batch_size": 32, "device": "cuda"}'.
- Include a section for 'Output Format' with structure like setup code, execution code, logging.
- Define success criteria, e.g., 'Tool must run inference in under 5 minutes on sample data'.
USAGE
Copy the prompt above and paste it into your AI of choice — Claude, ChatGPT, Gemini, or anywhere else you're working. Replace any placeholder sections with your own context, then ask for the output.
MORE FOR MODEL
- Conventional Git Commit Guidelines for AI
- AI Engineer for ML Integration and Deployment
- Elite Frontend UI Developer
- Code Recon Source Code Auditor
- HTWind Single-File Widget Generator
- Design System Component Spec Generator
- Karpathy LLM Coding Guidelines
- Strict Full-Stack Engineer Repo Rules
- Codebase WIKI.md Documentation Generator
- Spanish Python Code Auditor and Refactorer