
Risk: medium

DevRel Software Package Analyzer

The prompt instructs the model to act as a Developer Relations consultant that researches a provided software package and its documentation, performs quantitative analysis using data from StackOverflow, Hacker News, and GitHub, and compares the package against its competitors.

  • External action: medium

PROMPT

I want you to act as a Developer Relations consultant. I will provide you with a software package and its related documentation. Research the package and its available documentation, and if none can be found, reply "Unable to find docs". Your feedback needs to include quantitative analysis (using data from StackOverflow, Hacker News, and GitHub) of content like issues submitted, closed issues, number of stars on a repository, and overall StackOverflow activity. If there are areas that could be expanded on, include scenarios or contexts that should be added. Include specifics of the provided software package, such as number of downloads and related statistics over time. You should compare industry competitors and note the benefits or shortcomings relative to the package. Approach this with the professional mindset of a software engineer. Review technical blogs and websites (such as TechCrunch.com or Crunchbase.com), and if data isn't available, reply "No data available". My first request is "express https://expressjs.com"

INPUTS

package_request REQUIRED

String specifying the software package and its URL, like 'express https://expressjs.com'

e.g. express https://expressjs.com

REQUIRED CONTEXT

  • software package name and URL

OPTIONAL CONTEXT

  • related documentation

TOOLS REQUIRED

  • web_search

ROLES & RULES

Role assignments

  • Act as a Developer Relations consultant.
  1. Research the package and its available documentation, and if none can be found, reply "Unable to find docs".
  2. Include quantitative analysis using data from StackOverflow, Hacker News, and GitHub of issues submitted, closed issues, stars, and StackOverflow activity.
  3. If there are areas that could be expanded on, include scenarios or contexts that should be added.
  4. Include specifics of the provided software package, such as number of downloads and related statistics over time.
  5. Compare the package with industry competitors, noting benefits and shortcomings.
  6. Approach this with the professional mindset of a software engineer.
  7. Review technical blogs and websites (such as TechCrunch.com or Crunchbase.com), and if data isn't available, reply "No data available".
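The GitHub portion of rule 2 maps onto a single REST call. The sketch below is a minimal Python illustration, assuming the public, unauthenticated GitHub API; the `summarize` helper and the metric names it returns are illustrative choices, not part of the prompt.

```python
import json
import urllib.request

def summarize(data: dict) -> dict:
    # Keep only the repository metrics the prompt asks the analysis to report.
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
    }

def fetch_repo_metrics(owner: str, repo: str) -> dict:
    # Unauthenticated requests are rate-limited by GitHub (60/hour).
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:
        return summarize(json.load(resp))

# Example (requires network access):
# fetch_repo_metrics("expressjs", "express")
```

Closed-issue counts need a second query (the Search API with `is:issue is:closed`), which is one reason the prompt benefits from listing its metrics explicitly.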

EXPECTED OUTPUT

Format
structured_report
Constraints
  • include quantitative analysis from specified sources
  • reply 'Unable to find docs' if no documentation
  • reply 'No data available' if data missing
  • approach from software engineers' professional opinion

SUCCESS CRITERIA

  • Research package documentation and statistics from specified sources.
  • Provide quantitative analysis including issues, stars, downloads.
  • Suggest documentation expansions with scenarios.
  • Compare with industrial competitors highlighting benefits and shortcomings.
  • Offer professional software engineer opinion.
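For an npm package such as express, the download-related criteria can be grounded in npm's public downloads API. This is a hedged sketch: the endpoint shapes are npm-specific, and other registries (PyPI, crates.io) need different calls.

```python
import json
import urllib.request

NPM_API = "https://api.npmjs.org/downloads"

def point_url(package: str, period: str = "last-week") -> str:
    # e.g. https://api.npmjs.org/downloads/point/last-week/express
    return f"{NPM_API}/point/{period}/{package}"

def range_url(package: str, start: str, end: str) -> str:
    # Daily counts between two ISO dates, for "statistics over time".
    return f"{NPM_API}/range/{start}:{end}/{package}"

def weekly_downloads(package: str) -> int:
    # Requires network access; the response is {"downloads": N, ...}.
    with urllib.request.urlopen(point_url(package)) as resp:
        return json.load(resp)["downloads"]

# Example (requires network access):
# weekly_downloads("express")
```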

FAILURE MODES

  • May hallucinate or fabricate unavailable real-time data from external sources.
  • Could miss key competitors or incomplete comparisons.
  • Risk of generic feedback without specific data access.

CAVEATS

Dependencies
  • Software package name and URL provided in user request.
  • Related documentation if provided by user.
Missing context
  • Output format or structure for the feedback.
  • Criteria for selecting competitors.
  • Time frames for statistics like downloads or issues.
  • Handling of packages with multiple repositories.
Ambiguities
  • 'Overall StackOverflow activity' is vague and undefined.
  • 'Related statistics over time' does not specify which statistics or time periods.
  • 'Industrial competitors' lacks criteria for selection.
  • 'Areas that could be expanded on' unclear if referring to docs, package features, or something else.
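One way to resolve the "overall StackOverflow activity" ambiguity is to define it as the number of questions carrying the package's tag. A minimal sketch against the public Stack Exchange API; note that its responses are always gzip-compressed, so an actual fetch must decompress the body, and the URL construction below is the verifiable part.

```python
import urllib.parse

SE_API = "https://api.stackexchange.com/2.3/questions"

def activity_url(tag: str, site: str = "stackoverflow") -> str:
    # filter=total trims the response to {"total": N}, the count of
    # questions with this tag -- one concrete reading of "overall activity".
    params = urllib.parse.urlencode(
        {"tagged": tag, "site": site, "filter": "total"}
    )
    return f"{SE_API}?{params}"

# activity_url("express") points at the question count for the `express`
# tag; fetching it requires gzip-decoding the response body.
```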

QUALITY

OVERALL
0.75
CLARITY
0.75
SPECIFICITY
0.80
REUSABILITY
0.65
COMPLETENESS
0.70

IMPROVEMENT SUGGESTIONS

  • Add a structured output format with sections like 'Documentation Summary', 'Quantitative Metrics', 'Competitor Comparison', 'Recommendations'.
  • Replace the specific request with a placeholder like '{package_name} {package_url}' for reusability.
  • Explicitly list key metrics (e.g., stars, forks, open/closed issues, weekly downloads).
  • Clarify competitor selection: 'top 3 similar packages by popularity'.

USAGE

Copy the prompt above and paste it into your AI of choice — Claude, ChatGPT, Gemini, or anywhere else you're working. Replace any placeholder sections with your own context, then ask for the output.
