Model analysis template (risk: low)
Multi-LLM Prompt Improver in Arabic
Directs the AI to act as an expert prompt engineer, analyze and improve a specified prompt for better accuracy, and create four tailored versions for ChatGPT, Claude, Gemini, and Chinese LLMs.
PROMPT
Act as a certified and expert AI prompt engineer. Analyze and improve the following prompt to get more accurate and best results and answers. Write 4 versions for ChatGPT, Claude, Gemini, and for Chinese LLMs (e.g. MiniMax, GLM, DeepSeek, Qwen). <prompt> ... </prompt> Write the output in Standard Arabic.
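If you drive this template from code rather than a chat window, the `<prompt>` placeholder can be filled programmatically. Below is a minimal sketch; `TEMPLATE` mirrors the prompt text above, and `build_prompt` is a hypothetical helper, not part of the template itself.

```python
# Sketch: filling the <prompt> placeholder in the template above.
# TEMPLATE mirrors the catalog's prompt text; build_prompt is a
# hypothetical helper name chosen for this example.

TEMPLATE = (
    "Act as a certified and expert AI prompt engineer. "
    "Analyze and improve the following prompt to get more accurate "
    "and best results and answers. Write 4 versions for ChatGPT, "
    "Claude, Gemini, and for Chinese LLMs (e.g. MiniMax, GLM, "
    "DeepSeek, Qwen).\n"
    "<prompt>\n{user_prompt}\n</prompt>\n"
    "Write the output in Standard Arabic."
)

def build_prompt(user_prompt: str) -> str:
    """Insert the user's prompt between the <prompt> tags."""
    return TEMPLATE.format(user_prompt=user_prompt.strip())
```

The assembled string can then be sent to any of the target models' APIs.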
INPUTS
- prompt (REQUIRED): The original prompt text to analyze and improve
REQUIRED CONTEXT
- original prompt enclosed in <prompt> tags
ROLES & RULES
Role assignments
- Act as a certified and expert AI prompt engineer
- Analyze and improve the following prompt to get more accurate and best results and answers.
- Write 4 versions for ChatGPT, Claude, Gemini, and for Chinese LLMs (e.g. MiniMax, GLM, DeepSeek, Qwen).
- Write the output in Standard Arabic.
EXPECTED OUTPUT
- Format: markdown
- Constraints:
  - 4 versions: one each for ChatGPT, Claude, and Gemini, plus one for Chinese LLMs (e.g. MiniMax, GLM, DeepSeek, Qwen)
  - entire output in Standard Arabic
SUCCESS CRITERIA
- Analyze and improve the prompt for accuracy.
- Write 4 tailored versions for specified LLMs.
- Output entirely in Standard Arabic.
FAILURE MODES
- Placeholder '...' in <prompt> may cause generic analysis.
- Lacks explicit structure for versions, leading to inconsistent output.
- No defined metrics for 'more accurate and best results'.
- May produce non-Standard Arabic output.
CAVEATS
- Dependencies
  - Content of the <prompt>...</prompt> section.
- Missing context
  - Criteria for 'more accurate and best results' (e.g., metrics like accuracy, coherence).
  - Expected output structure (e.g., analysis section, then labeled versions).
  - Details on LLM differences (e.g., token limits, preferred prompting styles).
- Ambiguities
  - Unclear what 'analyze and improve' specifically entails (e.g., structure review, scoring, or just rewriting).
  - Unspecified how to tailor versions to each LLM (e.g., model-specific quirks or formats).
  - Placeholder '...' in <prompt> assumes content will be inserted, but gives no guidance on handling empty cases.
  - Output in 'Standard Arabic': unclear whether the analysis or only the versions should be in Arabic.
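The empty-placeholder ambiguity noted above can be guarded against before the prompt is ever sent. A minimal sketch, assuming a programmatic workflow; `validate_prompt` is a hypothetical helper name:

```python
# Sketch: reject an empty or placeholder-only <prompt> slot before
# sending the template to a model. validate_prompt is a hypothetical
# helper, not part of the template.

def validate_prompt(user_prompt: str) -> str:
    """Raise if the prompt slot is empty or still the '...' placeholder."""
    cleaned = user_prompt.strip()
    if not cleaned or cleaned == "...":
        raise ValueError("The <prompt> section must contain the prompt to analyze.")
    return cleaned
```

This turns the "generic analysis" failure mode into an explicit error instead of a silently degraded response.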
QUALITY
- OVERALL: 0.75
- CLARITY: 0.80
- SPECIFICITY: 0.60
- REUSABILITY: 0.90
- COMPLETENESS: 0.70
IMPROVEMENT SUGGESTIONS
- Add explicit output format: '1. Analysis summary. 2. Improved version for ChatGPT. 3. For Claude, etc., all in Modern Standard Arabic.'
- Specify analysis components: 'Evaluate clarity, specificity; suggest fixes.'
- Include instructions: 'Tailor versions to LLM strengths, e.g., Claude for reasoning, Gemini for multimodality.'
- Replace 'Standard Arabic' with 'Modern Standard Arabic (MSA)' and confirm it applies to entire output.
USAGE
Copy the prompt above and paste it into your AI of choice — Claude, ChatGPT, Gemini, or anywhere else you're working. Replace any placeholder sections with your own context, then ask for the output.