[prompt-analysis] Copilot PR Prompt Analysis - November 18, 2025 #4273
Closed
This discussion was automatically closed because it was created by an agentic workflow more than 1 week ago.
🤖 Copilot PR Prompt Pattern Analysis - November 18, 2025
Summary
Over the last 30 days, Copilot generated 1,000 pull requests in the githubnext/gh-aw repository. Of the completed PRs, 77.0% were successfully merged, indicating strong overall performance. However, analysis reveals significant differences in success rates across prompt categories and characteristics.
The most successful prompts share common traits: they are concise (20-100 words), include specific file or code references, and focus on testing, updates, or bug fixes. Prompts that are overly verbose (>300 words), lack specificity, or attempt complex refactoring show lower merge rates.
Full Analysis Report
Overview Statistics
Analysis Period: Last 30 days
Total PRs: 1,000
Completed PRs: 999
Merged: 769 (77.0%)
Closed (not merged): 230 (23.0%)
Open: 1
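The reported rates follow directly from the raw counts above; a quick sanity check in Python (counts copied from this report's overview statistics):

```python
# Sanity-check the merge/close percentages reported above.
# Counts are taken directly from the report's overview statistics.
merged = 769
closed_not_merged = 230
completed = merged + closed_not_merged  # 999 completed PRs

merge_rate = 100 * merged / completed
close_rate = 100 * closed_not_merged / completed

print(f"Merge rate: {merge_rate:.1f}%")  # 77.0%
print(f"Close rate: {close_rate:.1f}%")  # 23.0%
```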
Prompt Categories and Success Rates
Key Finding: Testing prompts have the highest success rate (81.0%), while refactoring and uncategorized prompts show the lowest rates (~70%).
Prompt Characteristics Analysis
Specificity Indicators
Key Finding: Prompts with specific file or code references (e.g., compiler.go, README.md) have significantly higher merge rates. Being specific pays off!
Prompt Length Analysis
Key Finding: Overly verbose prompts (>300 words) correlate with lower success rates. The most successful prompts in our sample were exactly 20 words.
✅ Successful Prompt Patterns
Common Characteristics in Merged PRs
Top Action Verbs in Successful PRs
Example Successful Prompts
1. PR #4247 - MERGED ✅ (20 words)
Why it succeeded: Concise, specific target (copilot-* workflows), clear action (update), specific change (use shared workflow)
2. PR #4239 - MERGED ✅ (20 words)
Why it succeeded: Specific file (Makefile), specific change (version number), includes evidence (log link)
3. PR #4129 - MERGED ✅ (20 words)
Why it succeeded: Technical specificity (workflow_run, zizmor), context provided (existing validation), clear goal
❌ Unsuccessful Prompt Patterns
Common Characteristics in Closed PRs
Example Unsuccessful Prompts
1. PR #2912 - CLOSED ❌ (630 words)
Why it failed: Extremely verbose (630 words), complex multi-part task, creates an entirely new workflow rather than making a focused change
2. PR #3188 - CLOSED ❌ (603 words)
Why it failed: Multiple objectives (generate notes + compare binaries + compare schema), overly complex, too many moving parts
3. PR #3994 - CLOSED ❌ (594 words)
Why it failed: Verbose boilerplate, actual task buried in template, unclear core objective
🎯 Key Insights
1. Specificity Drives Success
Prompts that reference specific files have a merge rate 9.7 percentage points higher than generic prompts. Including code references (functions, classes) adds another 6.8 points.
Recommendation: Always mention the exact file, function, or component you want modified.
2. Conciseness is Critical
The most successful prompts averaged 20 words. Prompts around 145 words still perform acceptably, but prompts over 300 words show significantly lower success rates.
Recommendation: Aim for 20-100 words. Be direct and eliminate unnecessary context.
3. Testing and Updates Win
Prompts focused on testing (81% success) and updates (79% success) dramatically outperform refactoring tasks (70% success).
Recommendation: Frame prompts as tests or updates rather than complex refactors when possible.
4. One Task, One PR
Prompts attempting multiple objectives (e.g., "create workflow + compare binaries + generate notes") have lower success rates.
Recommendation: Break complex tasks into multiple, focused PRs.
5. Action Verbs Matter
Start with clear action verbs: "Update", "Fix", "Add", "Create". Avoid vague verbs like "improve" or "enhance" without specifics.
Recommendation: Use concrete action verbs that clearly describe the change.
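The five insights above lend themselves to a simple pre-flight check before handing a prompt to Copilot. A minimal heuristic sketch (the function name check_prompt, the verb lists, and the regex are illustrative; the thresholds come from the findings above, not from any real tool):

```python
# Hypothetical linter applying this report's five insights to a draft prompt.
# Heuristics only; thresholds come from the report (20-100 words ideal,
# >300 words risky).
import re

ACTION_VERBS = {"update", "fix", "add", "create", "test"}
VAGUE_VERBS = {"improve", "enhance"}

def check_prompt(prompt: str) -> list[str]:
    warnings = []
    words = prompt.split()
    if len(words) > 300:
        warnings.append("over 300 words: strongly correlated with closed PRs")
    elif len(words) > 100:
        warnings.append("over 100 words: consider trimming to 20-100")
    first = words[0].lower().strip(":,.") if words else ""
    if first in VAGUE_VERBS:
        warnings.append(f"vague verb '{first}': name the concrete change")
    elif first not in ACTION_VERBS:
        warnings.append("consider starting with Update/Fix/Add/Create")
    # Rough check for a file or code reference (e.g. foo.go, validateInput())
    if not re.search(r"\w+\.\w{1,4}\b|\w+\(\)", prompt):
        warnings.append("no file or code reference: specificity pays off")
    if " and " in prompt.lower() and len(words) > 100:
        warnings.append("possible multi-objective prompt: one task per PR")
    return warnings

print(check_prompt("Update Makefile to bump the Go version to 1.22"))
```

An empty warning list means the draft matches the high-success pattern: a concrete action verb, a specific target, and a length well under the risk threshold.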
📋 Recommendations for Writing Effective Prompts
✅ DO:
Be specific - Mention exact files, functions, or components (e.g., "Update .github/workflows/build.yml to use Node 20")
Keep it concise - Aim for 20-100 words maximum (e.g., "Fix validateInput() function in validator.go")
Include references - Link to commits, issues, or documentation
Use action verbs - Start with "Update", "Fix", "Add", "Create"
Focus on testing/updates - These have the highest success rates
❌ AVOID:
Verbose prompts (over 300 words) with the actual task buried in boilerplate
Multiple objectives in a single prompt
Vague verbs like "improve" or "enhance" without specifics
Complex multi-part refactors that would be better split into several PRs
📈 Prompt Template (High Success Pattern)
Characteristics:
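The template body appears to have been lost in extraction; based on the characteristics reported throughout this analysis, a reconstructed sketch (not the original) would look like:

```text
<Action verb> <specific file, function, or component> to <specific change>.

Optional: one sentence of context, or a link to the relevant issue,
commit, or log that motivates the change.

Target length: 20-100 words. One objective per prompt.
```

Filled in, this pattern yields prompts like the merged examples above (PR #4247, #4239).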
🔍 Additional Analysis
Most Common Keywords in Successful Prompts
Success Rate by Prompt Element
💡 Quick Takeaways
Analysis based on 1,000 Copilot-generated PRs from the last 30 days in githubnext/gh-aw