Dashboard Overview
Understanding your GitScope dashboard and what each metric means
The GitScope dashboard transforms your GitHub repository data into actionable insights. This guide explains exactly what each metric means and how to interpret the data.
Overview Cards (Top Metrics)
These four key metrics provide an instant snapshot of your project's current state:
Total Issues
What it measures: The cumulative count of all issues across your connected repositories
Why it matters: Shows the overall scale of your project's activity and community engagement
How to interpret: Higher numbers indicate active projects, but watch the trend—rapidly growing backlogs may signal resource constraints
Issues This Week
What it measures: New issues created in the last 7 days
Why it matters: Indicates current project momentum and community activity levels
How to interpret: Consistent weekly numbers suggest steady engagement; sudden spikes may indicate viral growth or critical problems
Average Response Time
What it measures: Time from issue creation to first human response (excludes bot responses)
Why it matters: Key indicator of maintainer responsiveness and community health
How to interpret:
- < 24 hours: Excellent community engagement
- 1-3 days: Good for most projects
- > 1 week: May indicate maintainer burnout or understaffing
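To make the calculation concrete, here is a minimal Python sketch. The issue records, field names, and is_bot flags are invented for illustration and do not reflect GitScope's internal schema; the sketch assumes bot detection has already happened upstream.

```python
from datetime import datetime, timedelta

# Hypothetical issue records: a creation time plus (author, timestamp,
# is_bot) tuples for each comment. Shapes and names are illustrative only.
issues = [
    {
        "created_at": datetime(2024, 5, 1, 9, 0),
        "comments": [
            ("dependabot[bot]", datetime(2024, 5, 1, 9, 5), True),   # ignored
            ("maintainer_a", datetime(2024, 5, 1, 14, 0), False),
        ],
    },
    {
        "created_at": datetime(2024, 5, 2, 8, 0),
        "comments": [("maintainer_b", datetime(2024, 5, 4, 8, 0), False)],
    },
]

def first_human_response(issue):
    """Return the delay until the first non-bot comment, or None if none yet."""
    human_times = [ts for _, ts, is_bot in issue["comments"] if not is_bot]
    if not human_times:
        return None
    return min(human_times) - issue["created_at"]

# Issues with no human response yet are excluded from the average.
delays = [d for d in map(first_human_response, issues) if d is not None]
average = sum(delays, timedelta()) / len(delays)
print(f"Average response time: {average}")  # 1 day, 2:30:00 for this sample
```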
Active Contributors
What it measures: Unique users who have commented, opened issues, or contributed code in the last 30 days
Why it matters: Shows community vitality and project sustainability
How to interpret: Growing numbers indicate healthy community; declining contributors may signal project health issues
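Conceptually this is a de-duplicated head count over a rolling 30-day window. The sketch below shows the idea with invented activity events; the event shape is not GitScope's actual data model.

```python
from datetime import datetime, timedelta

# Hypothetical activity events; users and event types are made up.
events = [
    {"user": "alice", "type": "comment",      "at": datetime(2024, 5, 20)},
    {"user": "bob",   "type": "issue_opened", "at": datetime(2024, 5, 18)},
    {"user": "alice", "type": "commit",       "at": datetime(2024, 5, 25)},
    {"user": "carol", "type": "comment",      "at": datetime(2024, 2, 1)},  # outside window
]

now = datetime(2024, 5, 28)
window_start = now - timedelta(days=30)

# Each user counts once, no matter how many qualifying actions they took.
active = {e["user"] for e in events if e["at"] >= window_start}
print(len(active), sorted(active))  # 2 ['alice', 'bob']
```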
Issue Analytics Section
Category Distribution Chart
What it measures: Percentage breakdown of issue types using AI classification
Categories explained:
- Bug Reports: Issues describing software defects or unexpected behavior
- Feature Requests: Suggestions for new functionality or enhancements
- Documentation: Issues related to docs, tutorials, or explanations
- Security: Vulnerability reports or security-related concerns
- Support: User questions or help requests
How to interpret:
- High bug percentages may indicate quality issues
- Many feature requests suggest active user engagement
- Low documentation issues often mean good existing docs
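Once each issue has an AI-assigned category, the chart itself is a simple frequency breakdown. A minimal sketch, using invented labels:

```python
from collections import Counter

# Hypothetical classifier output: one category label per issue.
labels = ["bug", "bug", "feature", "docs", "support", "bug", "security", "feature"]

counts = Counter(labels)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:<10} {n / total:6.1%}")
# bug 37.5%, feature 25.0%, docs/support/security 12.5% each
```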
Priority Analysis
What it measures: AI-assigned priority levels based on issue content and context
Priority levels explained:
- Critical: Security vulnerabilities, data loss, or service outages
- High: Major functionality broken, user-blocking issues
- Medium: Minor bugs, standard feature requests
- Low: Cosmetic issues, nice-to-have features
How to interpret: Most issues should be Medium/Low priority; high Critical percentages indicate serious problems
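GitScope's actual prioritization is model-driven; as a stand-in, the sketch below uses a deliberately crude keyword heuristic just to show the shape of the title-to-priority mapping. The rules and keywords are invented.

```python
# A crude keyword heuristic standing in for the AI model; the real
# classification weighs far more context than issue titles.
RULES = [
    ("critical", ("vulnerability", "data loss", "outage")),
    ("high",     ("crash", "broken", "blocked")),
    ("low",      ("typo", "cosmetic", "nice to have")),
]

def classify_priority(title: str) -> str:
    text = title.lower()
    for level, keywords in RULES:
        if any(k in text for k in keywords):
            return level
    return "medium"  # default bucket for standard requests

print(classify_priority("App crash when uploading"))  # high
print(classify_priority("Add dark mode"))             # medium
```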
Trend Analysis Charts
What it measures: Time-series data showing patterns over weeks/months
Key trends:
- Issue Creation Rate: How many new issues appear over time
- Resolution Velocity: How quickly issues get closed
- Backlog Growth: Whether your issue backlog is growing or shrinking
How to interpret:
- Resolution rate should match or exceed creation rate
- Seasonal patterns are normal (holidays, academic calendars)
- Sudden trend changes warrant investigation
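A minimal sketch of the backlog-growth idea, bucketing invented open/close dates by ISO week; a rising running total means the backlog is growing.

```python
from collections import Counter
from datetime import date

# Hypothetical (opened, closed) dates per issue; None means still open.
issues = [
    (date(2024, 5, 6),  date(2024, 5, 8)),
    (date(2024, 5, 7),  None),
    (date(2024, 5, 14), date(2024, 5, 20)),
    (date(2024, 5, 15), None),
]

opened = Counter(d.isocalendar()[:2] for d, _ in issues)       # (year, week)
closed = Counter(c.isocalendar()[:2] for _, c in issues if c)

backlog = 0
for week in sorted(opened | closed):
    backlog += opened[week] - closed[week]
    print(week, "opened:", opened[week], "closed:", closed[week], "backlog:", backlog)
```

When the closed count keeps pace with the opened count, the running backlog stays flat, which is the healthy pattern described above.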
Team Productivity Section
Response Time by Team Member
What it measures: Average time each team member takes to respond to issues
Why it matters: Identifies workload distribution and potential burnout
How to interpret:
- Large variations may indicate uneven workload distribution
- Gradually increasing times may signal burnout
- Consistent fast responders are valuable team assets
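This is the per-person version of the response-time calculation shown earlier. A sketch, grouping invented (responder, delay) pairs:

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical (responder, first-response delay) pairs, e.g. the output of
# the first_human_response sketch above attributed to whoever responded.
responses = [
    ("alice", timedelta(hours=3)),
    ("alice", timedelta(hours=5)),
    ("bob",   timedelta(days=2)),
    ("bob",   timedelta(days=3)),
]

by_member = defaultdict(list)
for member, delay in responses:
    by_member[member].append(delay)

for member, delays in sorted(by_member.items()):
    average = sum(delays, timedelta()) / len(delays)
    print(f"{member}: {average}")  # alice: 4:00:00, bob: 2 days, 12:00:00
```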
Issue Resolution Rates
What it measures: How many issues each team member closes per week/month
Why it matters: Shows productivity and identifies top performers
How to interpret: Consider issue complexity—one person may handle harder issues with lower counts
Workload Balance
What it measures: Distribution of assigned issues across team members
Why it matters: Prevents burnout and ensures sustainable maintenance
How to interpret:
- Some unevenness is normal, but extreme imbalances are concerning
- Consider each person's availability and expertise
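One simple way to quantify "extreme imbalance" is to compare the busiest member's load to the team average, as in this sketch with invented assignments (the ratio is illustrative, not GitScope's formula):

```python
from collections import Counter

# Hypothetical open-issue assignees.
assignments = ["alice", "alice", "alice", "alice", "alice", "bob", "carol"]

counts = Counter(assignments)
mean_load = sum(counts.values()) / len(counts)
busiest, load = counts.most_common(1)[0]
print(f"{busiest} carries {load / mean_load:.1f}x the average load")  # 2.1x
```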
Community Health Section
Sentiment Analysis Score
What it measures: AI analysis of emotional tone in issue comments and discussions
Scale: -100 (very negative) to +100 (very positive)
How to interpret:
- 70-100: Excellent community atmosphere
- 30-69: Generally positive with some friction
- 0-29: Neutral to slightly negative
- Below 0: Serious community health problems requiring intervention
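A plausible way to produce a score on this scale is to average per-comment polarity values (commonly emitted by sentiment models in the range -1.0 to +1.0) and multiply by 100. The sketch below assumes exactly that; GitScope's actual model and weighting are internal to the product.

```python
# Hypothetical per-comment polarity scores in [-1.0, 1.0].
polarities = [0.8, 0.6, -0.2, 0.9, 0.4]

score = round(100 * sum(polarities) / len(polarities))
print(score)  # 50 -> "generally positive with some friction"
```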
Engagement Quality Metrics
What it measures: Depth and helpfulness of community interactions
Factors include:
- Discussion Length: Longer threads often indicate thorough problem-solving
- Follow-up Rate: Whether issues get resolved or abandoned
- Cross-referencing: How often issues reference other issues or documentation
How to interpret: High engagement quality suggests a collaborative, helpful community
Contributor Retention
What it measures: Percentage of new contributors who make a second contribution within 90 days
Why it matters: Measures how welcoming and accessible your project is
How to interpret:
- > 40%: Excellent onboarding and community
- 20-40%: Good for most projects
- < 20%: May indicate barriers to contribution
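A minimal sketch of the retention calculation, assuming you already know each new contributor's first and (if any) second contribution dates; the names and dates are invented.

```python
from datetime import date, timedelta

# Hypothetical (first, second) contribution dates; None = no second yet.
contributors = {
    "alice": (date(2024, 1, 10), date(2024, 2, 1)),  # retained (22 days)
    "bob":   (date(2024, 1, 15), None),              # not retained
    "carol": (date(2024, 2, 1),  date(2024, 7, 1)),  # outside 90 days
}

retained = sum(
    1
    for first, second in contributors.values()
    if second is not None and (second - first) <= timedelta(days=90)
)
print(f"{retained / len(contributors):.0%}")  # 33%
```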
Health Score Calculation
The overall Health Score (0-100) combines four weighted factors:
Issue Resolution Efficiency (30% weight)
- Time to close issues
- Percentage of issues resolved vs. abandoned
- Balance between new and resolved issues
Community Engagement (25% weight)
- Response rates to new contributors
- Discussion quality and depth
- Contributor retention rates
Code Quality Indicators (25% weight)
- Ratio of bug reports to feature requests
- Security issue frequency
- Documentation completeness
Security Posture (20% weight)
- Response time to security issues
- Percentage of security issues properly addressed
- Presence of security documentation
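Combining the weights is a straightforward weighted average. The sketch below assumes each factor has already been reduced to a 0-100 sub-score; how GitScope derives those sub-scores from raw data is internal, and the sample values are invented.

```python
# Weights from the breakdown above; they sum to 1.0, so the result
# stays on the same 0-100 scale as the sub-scores.
WEIGHTS = {
    "issue_resolution_efficiency": 0.30,
    "community_engagement":        0.25,
    "code_quality_indicators":     0.25,
    "security_posture":            0.20,
}

# Hypothetical sub-scores, each already normalized to 0-100.
sub_scores = {
    "issue_resolution_efficiency": 82,
    "community_engagement":        74,
    "code_quality_indicators":     68,
    "security_posture":            90,
}

health = sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)
print(round(health))  # 78
```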
Interpreting Alerts and Thresholds
When to Be Concerned
- Response time > 1 week: May indicate maintainer capacity issues
- Sentiment score < 0: Community health problems requiring attention
- Resolution rate < 50% of creation rate: Growing backlog that needs addressing
- Contributor retention < 15%: Onboarding or community issues
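If you script your own monitoring on top of exported metrics, the thresholds above translate directly into simple checks. A sketch, with an invented metrics snapshot:

```python
# Hypothetical snapshot of dashboard values.
metrics = {
    "avg_response_days": 9,
    "sentiment_score":   12,
    "resolution_rate":   0.35,  # issues resolved / issues created
    "retention_rate":    0.18,
}

alerts = []
if metrics["avg_response_days"] > 7:
    alerts.append("response time > 1 week")
if metrics["sentiment_score"] < 0:
    alerts.append("negative sentiment")
if metrics["resolution_rate"] < 0.5:
    alerts.append("resolution rate below 50% of creation rate")
if metrics["retention_rate"] < 0.15:
    alerts.append("contributor retention < 15%")

print(alerts)  # ['response time > 1 week', 'resolution rate below 50% of creation rate']
```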
Taking Action
- Declining metrics: Review recent changes, consider adding maintainers
- Sudden spikes: Check for viral content, security issues, or breaking changes
- Negative sentiment: Review recent controversial decisions or community interactions
Best Practices for Dashboard Use
- Weekly Reviews: Check trends weekly rather than reacting to daily fluctuations
- Context Matters: Consider external factors (holidays, major releases, market events)
- Combine Metrics: Don't rely on single metrics—look for patterns across multiple indicators
- Set Baselines: Track your project's normal ranges to identify meaningful changes
- Share Insights: Discuss dashboard findings with your team for collective understanding
Getting the Most Value
The dashboard is most valuable when you:
- Establish baseline metrics for your project
- Set up alerts for meaningful threshold changes
- Use data to guide maintenance prioritization
- Share insights with stakeholders to demonstrate project health
- Track the impact of community initiatives and policy changes