Writing performance reviews is tedious. You need to remember months of work, tie accomplishments to organizational goals, and present it all coherently. By review time, you’ve forgotten half the details from earlier in the period.

I use AI with MCP servers to generate my performance reviews and quarterly check-ins. The AI pulls data from task management, documentation, code contributions, and journal entries, then writes answers aligned with company strategic goals. What used to take days now takes hours—not by inventing accomplishments, but by gathering and organizing real data. Combined with daily journaling, the workflow produces comprehensive reviews in a fraction of the time.

Critical: Always manually review AI-generated content before submission. The AI gathers data efficiently, but you’re responsible for accuracy.

I’ve used this with both Claude Code and GitHub Copilot. Any AI tool that integrates with MCP servers should work.

The Foundation: Regular Documentation

This approach depends on consistent documentation throughout the review period. I use Obsidian and create dated notes (`YYYY-MM-DD.md`) in a `journal/` folder documenting:

  • Technical work completed
  • Challenges and solutions
  • Meetings and collaborations
  • Learning moments and critical incidents
  • Ideas and decisions

Journaling cadence: Daily entries work best—just 5-10 minutes per day.

For longer review periods: The AI generates summaries automatically. For quarterly check-ins, it processes three months of daily entries directly. For semi-annual or annual reviews, have the AI first summarize each quarter, then use those quarterly summaries as inputs for the longer review. No manual work required—the AI handles the summarization at whatever intervals you need.

These entries don’t need polish—they’re working notes. But they’re incredibly useful at review time. Instead of reconstructing months of work from memory, you have a detailed record.

Note: Any system works—Obsidian, Notion, markdown files, private GitHub repo. The key is consistency and AI-accessible format.
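
To make this concrete, here's the shape of a typical daily entry; the project details are hypothetical:

```markdown
# 2025-09-15

## Technical work
- Merged retry logic for the payment webhook handler; staging timeouts dropped noticeably.

## Challenges & solutions
- Hit a race condition in the queue consumer; fixed it by making message handling idempotent.

## Meetings & collaboration
- Paired with the platform team on the new deploy pipeline.

## Learnings & ideas
- Our load balancer retries non-idempotent requests by default; worth writing up as a decision record.
```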

MCP Servers for Data Collection

I configured my AI tool with Model Context Protocol (MCP) servers to gather data automatically:

Project Management (Jira, Confluence, Linear, Asana): Completed tickets, documentation, work descriptions

Code Hosting (GitHub, GitLab, Bitbucket): Pull requests, issues, code contributions

File System: Journal entries and local documentation
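
The exact configuration depends on your AI tool. As a sketch, a Claude Code-style `.mcp.json` might look like the following; the server packages and vault path are examples, so check the MCP documentation for current options:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/obsidian-vault"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

A project management server slots in the same way; many vendors now publish their own MCP servers.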

The Five-Step Process

Step 1: Gather Data

The AI queries tasks, documentation, code contributions, and journal entries. Minutes instead of hours.

Step 2: Cross-Reference and Organize

Links pull requests to tickets and groups work by project/initiative rather than by tool or repository.
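
For example, instead of a flat list of PRs per repository, the grouped output might look like this (repository names, PR numbers, and tickets are hypothetical):

```markdown
## Project: Checkout Redesign
- [shop-api#412](link) - New pricing endpoint (JIRA-1234)
- [shop-web#208](link) - Frontend integration for the pricing endpoint (JIRA-1234)
- [infra#95](link) - Load-test pipeline for the new checkout flow
```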

Step 3: Validation & Enrichment

Reviews journal entries again for cross-team collaborations, infrastructure work, critical incidents, knowledge sharing, and pending work.

Step 4: Strategic Alignment

Reads your organization’s strategic goals and cultural values (exported to markdown) and connects your work to those priorities.
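
The goals file doesn't need to be elaborate. A hypothetical `strategic-goals.md` could be as simple as:

```markdown
# Strategic Goals 2025

## Initiative: Platform Reliability
- Reduce P1 incidents by 30%; 99.95% availability for core services

## Initiative: Developer Velocity
- Cut median PR cycle time to under one day

## Company Values
- Ownership
- Craftsmanship
- Customer focus
```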

Step 5: Conciseness Pass

Combines related accomplishments, removes redundancy, keeps technical details that demonstrate depth.

The Prompt

Here’s the complete prompt template you can customize for your organization:

## Initial Request
I need your help writing my [quarterly check-in / annual performance review]. This review covers the period from [START_DATE] to [END_DATE].

## Data Collection Process
Follow this systematic approach to gather comprehensive context:

### Step 1: Gather Data from All Sources
1. **Project Management System**: Query completed tasks/tickets during the review period
2. **Documentation System**: Search documentation created/updated during the review period (design docs, blog posts, investigations, technical evaluations)
3. **Code Hosting**: Search for merged pull requests/merge requests during the review period
4. **Journal Entries**: Read ALL journal entries from the review period (located in `journal/` folder, named `YYYY-MM-DD.md` or `YYYY-MM.md` for monthly summaries)

### Step 1.5: Cross-Reference and Organize by Project
After gathering all data:
1. **Analyze contributions** to identify which tasks/tickets they're associated with
2. **Group work by project/initiative** instead of by system or repository
3. **Identify project themes** that tell a cohesive story about related work
4. **Create project groupings** that demonstrate how distributed efforts contributed to complete outcomes

### Step 2: Initial Document Creation
Create an initial draft covering all required sections based on the collected data.

### Step 3: Validation & Enrichment
After creating the initial draft:
1. Review ALL journal entries again to identify any missed accomplishments, learnings, or context
2. Look for cross-team collaborations, infrastructure improvements, critical incidents handled, knowledge sharing
3. Identify work that's still pending completion (frame appropriately)
4. Ensure all major themes from journals are reflected in the document

### Step 4: Strategic Alignment
Read the organizational strategic goals document from `strategic-goals.md` and explicitly align accomplishments with:
- **Strategic Initiatives**: Link specific work to organizational priorities
- **Company Values**: Call out where work embodies each value
- Use bold formatting to highlight connections: **"This advances [Strategic Initiative] by..."**

**Note**: Export your organization's strategic goals, OKRs, values, or annual objectives to a markdown file that the AI can access.

### Step 5: Conciseness Pass
Review the document for conciseness while preserving impact:
- Combine related accomplishments where appropriate
- Remove redundant phrases
- Keep specific technical details that demonstrate depth
- Maintain clear connection to business value

## Questions to Answer
Customize these sections based on your organization's review template:

### Key Accomplishments
What were your most significant contributions during this period?

**Requirements:**
- Focus on 2-4 major accomplishments
- Include specific details (technologies, approaches, challenges solved)
- Explicitly connect to organizational strategic goals
- Highlight business impact and outcomes
- Mention cross-team collaborations and knowledge sharing
- Call out relevant organizational values in bold

### Learning & Growth
What were your biggest learnings during this period, and how are you applying those insights?

**Requirements:**
- Include technical learnings and pragmatic decision-making
- Discuss challenges and how they informed future work
- Cover performance optimization learnings
- Mention professional development activities
- Show how insights were applied to subsequent work

### Impact & Metrics
What measurable impact did your work have?

**Requirements:**
- Quantify impact where possible (performance improvements, cost savings, user growth, etc.)
- Describe qualitative improvements (team efficiency, code quality, documentation)
- Connect metrics to organizational objectives
- Be honest about works in progress vs. completed outcomes

### [Additional sections as needed by your organization]

## Organizational Context

### Strategic Goals and Values
Read the strategic goals document from `strategic-goals.md` to identify current organizational priorities and explicitly connect work to them.

Reference these throughout the document, calling them out in bold when work clearly advances them.

## Output Format & Style

### Writing Guidelines
- **First person**: "I accomplished" never "[Your Name] accomplished"
- **Evidence-based**: Only include what's documented in your systems or journal entries
- **Paragraph format**: No bullet points in main narrative (bullets OK for references)
- **Blank lines**: Between all paragraphs and sections
- **Professional but authentic**: Technical but accessible
- **Balanced tone**: Confident without exaggeration
- **Bold for emphasis**: Highlight strategic connections and values

### Document Structure
Adapt this structure to match your organization's review format:
1. # Key Accomplishments (2-4 paragraphs)
2. # Learning & Growth (2-3 paragraphs)
3. # Impact & Metrics (1-2 paragraphs)
4. # [Additional sections as required]
5. # References (comprehensive lists)

### References Section
Create detailed subsections:
- ## Tasks/Tickets (with links and titles)
- ## Documentation (with links, titles, and brief descriptions)
- ## Code Contributions (organized by project/initiative, NOT by repository)
  - Group contributions by the project or initiative they support
  - Use task associations to determine project groupings
  - Include cross-repository work under the same project when related
  - Each project group should have a descriptive heading
  - Format: `[repo#PR](link) - Description`
- ## Journal Entries Referenced (organized by month, listing ALL entries from the period)

### File Management
- Save to: `reviews/[PERIOD].md`
- Always replace existing file completely (never append)
- Example: Q3 of 2025 → `2025-Q3.md` or Annual 2025 → `2025-Annual.md`

## Quality Checklist
Before finalizing, verify:
- [ ] All required sections are complete and well-developed
- [ ] Strategic goals and values are explicitly called out in bold
- [ ] All journal entries from the period were reviewed
- [ ] Technical depth is preserved while maintaining readability
- [ ] Cross-team collaborations are highlighted
- [ ] Critical incidents and infrastructure work are included
- [ ] Knowledge sharing activities are mentioned
- [ ] References section is comprehensive and well-organized
- [ ] Code contributions are organized by project/initiative (not by repository)
- [ ] Contributions grouped to tell a cohesive story
- [ ] Document is concise but preserves all key accomplishments
- [ ] Tone is professional, confident, and evidence-based
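
If your AI tool supports saved prompts, keep the template in your repo so you can rerun it each period. In Claude Code, for example, saving it as `.claude/commands/performance-review.md` should expose it as a `/performance-review` slash command; in other tools, pasting the template from a file works just as well.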

Key Takeaways

Daily journaling is essential: Without consistent notes, the AI has nothing. The AI handles summarization for longer periods.

MCP servers work with real data: Direct integration means every accomplishment is verifiable—no hallucinations.

The prompt is reusable: Once customized, the same approach works for quarterly, annual, promotion packets, and retrospectives.

Getting Started

  1. Start journaling: Daily entries, 5-10 minutes. Any system works—plain text or markdown files, Obsidian, Notion.

  2. Set up MCP servers: Configure your AI tool with servers for project management, code hosting, and file system.

  3. Export strategic goals: Convert your OKRs or objectives to markdown.

  4. Customize the prompt: Adapt to your organization’s review format.

  5. Run at review time: For semi-annual or annual reviews, generate quarterly summaries first, then use those for the full review.

Initial setup takes a few hours. After that, each review takes hours instead of days.

Adapting for Different Time Periods

Quarterly: Process 3 months of daily entries directly.

Semi-annual: Summarize each quarter first, then combine the two summaries.

Annual: Summarize each quarter, then combine the four summaries.

Promotion packets: Same approach for 12-18 month periods.
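
For the quarterly-summary step, a short prompt is enough; the wording and file names here are hypothetical:

```
Read all journal entries in journal/ from 2025-07-01 to 2025-09-30 and write a
one-page summary of accomplishments, challenges, learnings, and collaborations.
Save it as journal/2025-Q3-summary.md so the annual review can use it as input.
```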

Manual Review and Validation

Never submit without review. AI can misinterpret context. Always validate:

What to verify: Factual accuracy, attribution to teammates, strategic alignment, tone, completeness, context.

Red flags: Work you didn’t do, overstated impact, misattributed accomplishments, incorrect details, weak strategic connections, missing context.

The AI saves hours of data gathering and drafting, but your judgment ensures the final document is accurate and honest. Think of it as a thorough research assistant—it collects and organizes, but you’re the author responsible for final content.

Tools and Resources

AI Tools with MCP Support: I've tested this workflow with Claude Code and GitHub Copilot; any MCP-capable assistant should work.

MCP Servers: You'll want servers for project management (Jira, Confluence, Linear, Asana), code hosting (GitHub, GitLab, Bitbucket), and the local file system, as described above.

Note: MCP is rapidly evolving. Check the Model Context Protocol documentation for the latest servers and capabilities.

See you in the next post.