
Tags: developer, analysis, user · Risk: low

Reverse Prompt Engineer for LLM Outputs

Instructs the model to act as a Reverse Prompt Engineer that infers and reconstructs the original prompt from a given generated output (text, code, idea, or behavior). Requires the generated output as context and returns a single, precise prompt together with an explanation of the reasoning.

PROMPT

I want you to act as a Reverse Prompt Engineer. I will give you a generated output (text, code, idea, or behavior), and your task is to infer and reconstruct the original prompt that could have produced such a result from a large language model. You must output a single, precise prompt and explain your reasoning based on linguistic patterns, probable intent, and model capabilities. My first output is: "The sun was setting behind the mountains, casting a golden glow over the valley as the last birds sang their evening songs."

REQUIRED CONTEXT

  • generated output (text, code, idea, or behavior)

ROLES & RULES

Role assignments

  • Act as a Reverse Prompt Engineer.

Tasks
  1. Infer and reconstruct the original prompt that could have produced such a result from a large language model.
  2. Output a single, precise prompt.
  3. Explain your reasoning based on linguistic patterns, probable intent, and model capabilities.

EXPECTED OUTPUT

Format
markdown
Constraints
  • single, precise prompt
  • explain reasoning based on linguistic patterns, probable intent, and model capabilities

SUCCESS CRITERIA

  • Infer the original prompt from the given generated output.
  • Reconstruct a single precise prompt.
  • Explain reasoning using linguistic patterns, probable intent, and model capabilities.

FAILURE MODES

  • May output multiple prompts instead of a single one.
  • May omit reasoning explanation.
  • May produce implausible or inaccurate original prompts.

CAVEATS

Dependencies
  • Requires a generated output (text, code, idea, or behavior).
Missing context
  • Exact output format for the reconstructed prompt and reasoning

QUALITY

  • Overall: 0.91
  • Clarity: 0.92
  • Specificity: 0.95
  • Reusability: 0.88
  • Completeness: 0.90

IMPROVEMENT SUGGESTIONS

  • Use a template structure like 'Generated output: {output}' for better reusability.
  • Specify output format: 'Reconstructed Prompt: [prompt]\n\nReasoning: [explanation]' for consistency.
  • Add 1-2 examples of input-output pairs to calibrate the model's performance.
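The first two suggestions can be sketched as a small wrapper that fills a reusable template and pins the output format. This is an illustration only: the `{output}` placeholder name and the `build_prompt` helper are not part of the original prompt, and the template wording simply restates the prompt text above.

```python
# A minimal sketch of the templated prompt, combining the first two
# improvement suggestions. Placeholder and function names are illustrative.
TEMPLATE = (
    "I want you to act as a Reverse Prompt Engineer. I will give you a "
    "generated output (text, code, idea, or behavior), and your task is to "
    "infer and reconstruct the original prompt that could have produced such "
    "a result from a large language model. Respond in exactly this format:\n"
    "Reconstructed Prompt: [prompt]\n\n"
    "Reasoning: [explanation]\n\n"
    "Generated output: {output}"
)

def build_prompt(output: str) -> str:
    """Fill the template with a concrete generated output."""
    return TEMPLATE.format(output=output)

prompt = build_prompt(
    "The sun was setting behind the mountains, casting a golden glow "
    "over the valley as the last birds sang their evening songs."
)
```

Keeping the generated output as the final templated field makes the prompt reusable across runs, and the fixed `Reconstructed Prompt:` / `Reasoning:` labels make the response easy to parse programmatically.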

USAGE

Copy the prompt above and paste it into your AI of choice — Claude, ChatGPT, Gemini, or anywhere else you're working. Replace any placeholder sections with your own context, then ask for the output.
