
🤖 AI-Powered Development Analysis

Transform your development workflow with intelligent analysis – leverage your local Ollama installation to generate smart changelogs, analyze project health, and gain deep insight into development progress, all with complete privacy.

Why AI Development Analysis?

🔒 100% Local - Uses your local Ollama, no data sent to external services
🤖 Smart Insights - AI-powered project health scoring and recommendations
📝 Intelligent Changelogs - Context-aware release notes that users actually read
⚡ Fast Analysis - Comprehensive project analysis in seconds

🎯 Overview

Libre WebUI includes intelligent analysis tools that leverage your local Ollama installation to provide deeper insights into development progress, code changes, and project evolution. This privacy-first system provides three main capabilities:

  1. 🤖 AI-Enhanced Changelog Generation - Smart release notes with contextual summaries
  2. 📊 Comprehensive Development Analysis - Project health and technical insights
  3. ⚡ Automated Release Intelligence - AI-powered release process enhancement
Key Benefits
  • Zero External Dependencies - Everything runs locally on your machine
  • Privacy Guaranteed - Your code and data never leave your system
  • Multiple AI Models - Choose the right model for speed vs. quality
  • Developer Focused - Built by developers, for developers

🚀 Quick Start

Prerequisites Checklist
  • ✅ Ollama running locally: ollama serve
  • ✅ A suitable model installed: ollama pull llama3.2:3b (recommended for balanced performance)
  • ✅ Git repository with commits: The system analyzes your git history
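The checklist above can be verified from the shell. This is a minimal sketch, not part of the project's scripts; it assumes Ollama's default port (override with `OLLAMA_BASE_URL`) and the recommended model name:

```shell
# Print one status line per prerequisite from the checklist above.
check_prereqs() {
  base="${OLLAMA_BASE_URL:-http://localhost:11434}"
  if curl -fsS "$base/api/version" >/dev/null 2>&1; then
    echo "ollama server: running"
  else
    echo "ollama server: not reachable (start it with: ollama serve)"
  fi
  if ollama list 2>/dev/null | grep -q "llama3.2:3b"; then
    echo "model llama3.2:3b: installed"
  else
    echo "model llama3.2:3b: missing (install with: ollama pull llama3.2:3b)"
  fi
  if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    echo "git repository: found"
  else
    echo "git repository: not found (run this from the project root)"
  fi
}
check_prereqs
```

If all three lines report success, the quick-start commands below should work as-is.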

๐Ÿƒโ€โ™‚๏ธ Get Started in 30 Secondsโ€‹

# 1. Install a balanced AI model (if not already done)
ollama pull llama3.2:3b

# 2. Generate AI-powered changelog for current changes
npm run changelog:ai

# 3. Get comprehensive development analysis
npm run analyze

🎯 Choose Your Analysis Type

# Generate AI-powered changelog for current changes
npm run changelog:ai

# Comprehensive development analysis
npm run analyze

# Quick metrics overview without AI
npm run analyze:quick

# AI analysis of development impact
npm run changelog:ai:impact
Pro Tip

Start with npm run changelog:ai to see AI-generated release notes, then use npm run analyze for deeper project insights!

📋 Available Commands

🎨 Changelog Generation

| Command | Description | Best For |
| --- | --- | --- |
| npm run changelog:ai | User-focused release notes with AI insights | 📝 Release notes |
| npm run changelog:ai:summary | Development overview and patterns | 📊 Development summaries |
| npm run changelog:ai:impact | Technical impact analysis | 🔍 Technical reviews |

๐Ÿ” Development Analysisโ€‹

| Command | Description | Best For |
| --- | --- | --- |
| npm run analyze | Full project analysis with AI insights | 🧠 Comprehensive insights |
| npm run analyze:quick | Fast metrics without AI processing | ⚡ Quick health checks |
| npm run analyze:metrics | Export raw metrics as JSON | 📈 Data integration |

🚀 Release Process

Automated AI Integration

The standard release process (npm run release) now automatically includes AI-powered summaries when Ollama is available!

โš™๏ธ Configurationโ€‹

๐ŸŒ Environment Variablesโ€‹

# Ollama server configuration
OLLAMA_BASE_URL=http://localhost:11434

# AI model selection (smaller = faster, larger = more detailed)
CHANGELOG_AI_MODEL=llama3.2:3b         # For quick changelog generation
ANALYSIS_AI_MODEL=llama3.1:latest      # For development analysis

# Timeouts
OLLAMA_TIMEOUT=30000                   # Standard operations (30s)
OLLAMA_LONG_OPERATION_TIMEOUT=900000   # Model loading (15min)

Choose the right model for your needs:

| Use Case | Recommended Model | Size | Speed | Quality | Best For |
| --- | --- | --- | --- | --- | --- |
| ⚡ Fast & Light | llama3.2:3b | ~2GB | ⚡ Fast | Good | Quick changelogs |
| ⚖️ Balanced | llama3.1:latest | ~4GB | 🚀 Medium | Very Good | Daily usage |
| 🎯 Best Quality | gemma3:27b | ~16GB | 🐌 Slow | Excellent | Detailed analysis |
Model Installation Guide
# Recommended installation order:

# 1. Essential: Fast and reliable for daily use
ollama pull llama3.2:3b

# 2. Advanced: Better quality for comprehensive analysis
ollama pull llama3.1:latest

# 3. Premium: Best quality for detailed reports (requires 16GB+ RAM)
ollama pull gemma3:27b
Model Selection Strategy
  • Start with llama3.2:3b - perfect for getting started and daily changelog generation
  • Upgrade to llama3.1:latest - when you need better analysis quality at reasonable speed
  • Use gemma3:27b - for the most detailed insights, but it requires significant RAM (16GB+)

🔄 Quick Model Switching

Switch Models on the Fly
# Use fast model for quick changelog
CHANGELOG_AI_MODEL=llama3.2:3b npm run changelog:ai

# Use balanced model for better quality
CHANGELOG_AI_MODEL=llama3.1:latest npm run changelog:ai

# Use best model for detailed analysis
CHANGELOG_AI_MODEL=gemma3:27b npm run changelog:ai
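If you switch models often, a small shell helper saves retyping the variable. This is a convenience sketch, not one of the project's npm scripts:

```shell
# changelog_with <model>: run the AI changelog with a one-off model override.
changelog_with() {
  model="${1:?usage: changelog_with <model-name>}"
  CHANGELOG_AI_MODEL="$model" npm run changelog:ai
}
# Example: changelog_with gemma3:27b
```

Because the variable is set only for that single command, your default model configuration stays untouched.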

📊 What Gets Analyzed

📝 Changelog Generation

  • 🔄 Conventional Commits: Automatic categorization (feat, fix, docs, etc.)
  • 💥 Change Impact: Breaking changes, new features, improvements
  • 👥 User Focus: Translates technical commits into user-friendly descriptions
  • 📈 Release Context: Considers project history and patterns

🔍 Development Analysis

  • 📊 Repository Metrics: Commit frequency, contributor activity, branch health
  • 🏗️ Codebase Health: Lines of code, file organization, language distribution
  • 🏛️ Architecture Assessment: Technology stack, dependencies, project structure
  • ⚡ Development Velocity: Productivity indicators, development patterns
  • 💡 Strategic Insights: Technical debt, improvement recommendations
Analysis Depth

The AI analyzes both quantitative metrics (commit counts, file changes) and qualitative patterns (development trends, architectural decisions) to provide actionable insights.

📄 Example Outputs

🤖 AI Changelog Generation

Example AI-Generated Release Notes
🤖 AI-Generated Release Summary
=====================================

This release focuses on performance optimization and user experience improvements.
Key highlights include streaming response enhancements that eliminate the token
display slowdown, new artifact code viewing capabilities with syntax highlighting,
and improved auto-scroll behavior during AI responses.

### ✨ New Features
- Artifact code view toggle with syntax highlighting
- Auto-scroll during streaming responses
- Theme-aware code block rendering

### 🔧 Technical Improvements
- Debounced store updates for streaming performance
- Optimized React rendering for large responses
- Enhanced release automation workflow

📈 Development Analysis

Example Development Analysis
🧠 AI Development Analysis
============================

Project Health Score: 8.5/10

The Libre WebUI project shows strong development momentum with consistent
commit activity (3.2 commits/day average) and a well-architected full-stack
TypeScript application. The codebase demonstrates mature patterns with
proper separation of concerns between frontend/backend.

Technical Strengths:
- Modern tech stack (React + TypeScript + Express)
- Comprehensive Ollama API integration
- Docker containerization with development/production configs
- Active release automation and changelog generation

Recommendations:
- Consider implementing automated testing coverage
- Monitor bundle size as artifact features expand
- Plan for horizontal scaling as user base grows
Output Quality

The AI analyzes commit patterns, code structure, and project context to generate meaningful insights rather than just listing changes.

🔧 Advanced Usage

🎨 Custom Analysis Prompts

You can modify the AI prompts in the script files to customize analysis focus:

Custom Analysis Example
// In ai-changelog-generator.js
const customPrompt = `
Analyze these commits for a WebUI project:
${commitText}
Focus on user experience and accessibility improvements.
`;

🔄 Integration with CI/CD

.github/workflows/ai-analysis.yml
# Example GitHub Action integration
- name: Generate AI Release Notes
  run: |
    ollama serve &
    sleep 10
    ollama pull llama3.2:3b
    npm run changelog:ai > release-notes.md

- name: Development Analysis
  run: |
    npm run analyze:metrics > metrics.json
    # Upload metrics for tracking

📊 Batch Analysis

Analyze Multiple Releases
# Analyze the last five release tags, then return to the original branch
current_branch=$(git rev-parse --abbrev-ref HEAD)
git tag -l | tail -5 | while read -r tag; do
  git checkout "$tag"
  echo "Analysis for $tag:" >> analysis.log
  npm run analyze:quick >> analysis.log
done
git checkout "$current_branch"
Advanced Usage

Combine multiple commands and custom prompts to create sophisticated analysis workflows tailored to your project needs.

🚨 Troubleshooting

❓ Common Issues

🔌 "Ollama not available"

# Check if Ollama is running
curl http://localhost:11434/api/version

# Start Ollama if needed
ollama serve

# Verify model is available
ollama list

โฐ "AI generation timeout"

# Use a faster model
export CHANGELOG_AI_MODEL=llama3.2:3b

# Or increase timeout
export OLLAMA_TIMEOUT=60000

🤖 "Model not found"

# Install the recommended models
ollama pull llama3.2:3b # Fast and reliable
ollama pull llama3.1:latest # Balanced performance

# Or use an available model
ollama list
export CHANGELOG_AI_MODEL=your-available-model

⚡ Performance Optimization

Performance Best Practices
  1. Use llama3.2:3b for frequent operations (fast changelog generation)
  2. Use llama3.1:latest for comprehensive analysis (balanced performance)
  3. Use gemma3:27b for detailed analysis (best quality but requires 16GB+ RAM)
  4. Keep Ollama warm by running a test query periodically
  5. Batch operations when possible to avoid model loading overhead
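The "keep Ollama warm" tip can be scripted with a minimal request to Ollama's /api/generate endpoint: an empty prompt simply loads the model, and the keep_alive field controls how long it stays resident. The model name and 30-minute window below are example values, not project defaults:

```shell
# Load the model (empty prompt) and keep it resident for 30 minutes.
keep_warm() {
  base="${OLLAMA_BASE_URL:-http://localhost:11434}"
  if curl -fsS "$base/api/generate" \
       -d '{"model": "llama3.2:3b", "prompt": "", "keep_alive": "30m"}' \
       >/dev/null 2>&1; then
    echo "model warmed"
  else
    echo "ollama not reachable"
  fi
}
keep_warm
```

Running this from cron (say, every 15 minutes during work hours) avoids the model-loading delay before a changelog run.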

🌟 Best Practices

👥 For Teams

Team Collaboration
  • Standardize models across the team for consistent output quality
  • Include AI summaries in pull request descriptions
  • Run analysis before major releases to identify potential issues
  • Track metrics over time to monitor project health trends
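"Track metrics over time" can be as simple as snapshotting the JSON export into dated files. The directory layout here is just a suggestion:

```shell
# Save today's raw metrics so project-health trends can be diffed later.
snapshot_metrics() {
  dir="${1:-metrics-history}"
  mkdir -p "$dir"
  file="$dir/$(date +%Y-%m-%d).json"
  if npm run analyze:metrics > "$file" 2>/dev/null; then
    echo "saved $file"
  else
    rm -f "$file"
    echo "metrics export failed (run from the project root?)"
  fi
}
snapshot_metrics
```

Committing the snapshot directory (or archiving it in CI) gives the team a lightweight history to compare before each release.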

🔧 For Maintainers

Maintainer Benefits
  • Regular analysis to catch technical debt early
  • AI-generated release notes save significant time
  • Metrics tracking helps with project planning and resource allocation
  • Strategic insights guide long-term technical decisions

๐Ÿค For Contributorsโ€‹

Contributor Guidelines
  • Review AI analysis before submitting major changes
  • Use impact analysis to understand the scope of your contributions
  • Check development patterns to align with project conventions

🔮 Future Enhancements

Planned improvements for the AI analysis system:

  • 🧪 Code Quality Assessment: Static analysis integration with AI insights
  • 🔍 Predictive Analysis: Forecast development trends and potential issues
  • 💬 Interactive Analysis: Chat-based exploration of project metrics
  • 📊 Custom Dashboards: Web-based visualization of development insights
  • 🔌 Integration APIs: Webhook support for external tools and services
Roadmap

These features are actively being developed. Check our release notes for the latest updates!


Ready to Get Started?

Run npm run analyze to get started with AI-powered development insights!

Your first analysis will provide a comprehensive overview of your project's health and development patterns.