Category: developer · coding · risk: low
Dating Profile Audit Web App Builder
Instructs building a React web app, "First Impression", for dating profile optimization, featuring photo audits with scores and rankings, bio rewrites in three tones with swipe-rate labels, a ranked icebreaker generator, a live 0–100 profile score dashboard, and PDF export.
PROMPT
Build a web app called "First Impression" — a dating profile audit and optimization tool.

Core features:
- Photo audit: the user describes their photos (up to 6) — AI scores each on energy, approachability, social proof, and uniqueness, and returns a ranked order recommendation with one-line reasoning per photo.
- Bio rewriter: the user pastes their current bio, clicks "Optimize", and receives 3 rewritten versions in distinct tones (playful / authentic / direct). Each version includes a word count and a predicted "swipe right rate" label (Low / Medium / High).
- Icebreaker generator: the user describes a match's profile in a few sentences — AI generates 5 personalized openers ranked by predicted response rate, each with a one-line explanation of why it works.
- Profile score dashboard: a 0–100 composite score across bio quality, photo strength, and opener effectiveness — updates live.
- Export: a formatted PDF of all assets titled "My Profile Package".

Stack: React, [LLM API] for all AI calls, jsPDF for export. Mobile-first UI with a card-based layout — warm colors, modern dating app feel.
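The composite dashboard score described above could be computed as a weighted average of the three sub-scores. The weights and names below are illustrative assumptions — the prompt does not specify how the 0–100 composite is derived.

```typescript
// Hypothetical composite-score helper for the "First Impression" dashboard.
// Sub-scores and weights are assumptions; the prompt leaves the formula open.

interface SubScores {
  bioQuality: number;          // 0–100
  photoStrength: number;       // 0–100
  openerEffectiveness: number; // 0–100
}

// Assumed weighting: photos matter most, then bio, then openers.
const WEIGHTS = { bioQuality: 0.35, photoStrength: 0.45, openerEffectiveness: 0.2 };

function compositeScore(s: SubScores): number {
  const raw =
    s.bioQuality * WEIGHTS.bioQuality +
    s.photoStrength * WEIGHTS.photoStrength +
    s.openerEffectiveness * WEIGHTS.openerEffectiveness;
  // Clamp and round so the dashboard always shows an integer in 0–100.
  return Math.round(Math.min(100, Math.max(0, raw)));
}
```

Because the weights sum to 1, perfect sub-scores yield exactly 100; any app built from this prompt would need to pick (and document) its own weighting.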
EXPECTED OUTPUT
- Format: code
- Constraints:
  - React
  - [LLM API] for AI calls
  - jsPDF for export
  - mobile-first UI
  - card-based layout
  - warm colors
  - modern dating app feel
CAVEATS
- Missing context:
  - Specific LLM provider details (API keys, endpoints, models).
  - User authentication and session management.
  - Data persistence (localStorage or a backend?).
  - Detailed input validation and error handling.
  - UI wireframes or component structure.
- Ambiguities:
  - Unclear what '[LLM API]' specifically refers to (e.g., OpenAI, Anthropic).
  - No exact algorithms or criteria specified for scoring photos, bios, openers, or the composite profile score.
  - "Live updates" for the dashboard are not detailed (e.g., how inputs trigger recalculation).
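One way to resolve the "live updates" ambiguity is a small observable store that recomputes the composite whenever any input changes; in React this would typically be Context or a library like Zustand, but the framework-agnostic sketch below shows the idea. All names (`ProfileStore`, `set`, the equal-weight average) are illustrative assumptions, not part of the prompt.

```typescript
// Minimal observer-style store: every input change triggers a recomputation
// and notifies subscribers (e.g., the dashboard component).
// Equal weighting is an assumption; the prompt does not define the formula.

type Listener = (score: number) => void;

class ProfileStore {
  private scores = { bio: 0, photos: 0, openers: 0 };
  private listeners: Listener[] = [];

  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }

  set(key: "bio" | "photos" | "openers", value: number): void {
    this.scores[key] = value;
    // Recompute the composite on every write — this is the "live update".
    const composite = Math.round(
      (this.scores.bio + this.scores.photos + this.scores.openers) / 3
    );
    this.listeners.forEach(fn => fn(composite));
  }
}
```

A React component would subscribe in an effect and mirror the emitted score into local state, so the dashboard re-renders on every input change.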
QUALITY
- OVERALL: 0.65
- CLARITY: 0.85
- SPECIFICITY: 0.75
- REUSABILITY: 0.30
- COMPLETENESS: 0.65
IMPROVEMENT SUGGESTIONS
- Replace '[LLM API]' with a specific provider like 'OpenAI GPT-4 API' and include integration examples.
- Define explicit rubrics for all scores (e.g., photo energy: smile detection proxy via LLM prompts).
- Add state management specification (e.g., use React Context or Zustand for live dashboard).
- Include example inputs/outputs for each feature to clarify expectations.
- Specify deployment instructions or hosting (e.g., Vercel).
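The rubric suggestion could be made concrete by typing the photo-audit criteria explicitly and deriving the ranked order from them. The shape below is a sketch under assumed 0–10 scales and equal weighting — neither is specified in the prompt.

```typescript
// Illustrative photo-audit rubric: four criteria on an assumed 0–10 scale,
// ranked by unweighted total. Field names mirror the prompt's criteria.

interface PhotoAudit {
  description: string;     // the user's description of the photo
  energy: number;          // 0–10 (assumed scale)
  approachability: number; // 0–10
  socialProof: number;     // 0–10
  uniqueness: number;      // 0–10
}

function rankPhotos(photos: PhotoAudit[]): PhotoAudit[] {
  // Equal weighting is an assumption; a real rubric would justify its weights.
  const total = (p: PhotoAudit) =>
    p.energy + p.approachability + p.socialProof + p.uniqueness;
  // Sort a copy so the caller's array is untouched, highest total first.
  return [...photos].sort((a, b) => total(b) - total(a));
}
```

With a rubric like this, the LLM only has to fill in per-criterion numbers, and the ranking logic stays deterministic and testable on the client.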
USAGE
Copy the prompt above and paste it into your AI of choice — Claude, ChatGPT, Gemini, or anywhere else you're working. Replace any placeholder sections with your own context, then ask for the output.
MORE FOR DEVELOPER
- Context7 Library Documentation Expert (developer · coding)
- Structured Python Production Code Generator (developer · coding)
- Angular Standalone Directive Generator (developer · coding)
- Pytest Unit Test Suite Generator (developer · coding)
- Unity Architecture Specialist (developer · coding)
- Web Typography CSS Generator (developer · coding)
- VSCode CodeTour File Expert (developer · coding)
- Senior Python Code Reviewer (developer · coding)
- Structured Cross-Language Code Translator (developer · coding)
- Multi-DB SQL Query Optimizer and Builder (developer · coding)