Project Quality & Authenticity Review

Score: 6.5/10 Potential: 9/10 Gap: 2.5 points

Executive Summary

You have volume (15 projects) but questionable depth. The critical finding: 0 out of 15 projects have working demo URLs. This creates a credibility gap: viewers can’t evaluate functionality, only read descriptions. Additionally, the 8 projects marked “Active Development” with vague completion status raise questions about follow-through.

Bottom line: Show, don’t just tell. Deploy demos, clarify status, and add evidence of real-world impact.

The Critical Problem: Zero Live Demos

The Data

From _data/ai_projects.yml:

# Project 1
github_url: https://github.com/bjpl/describe_it
demo_url:

# Project 2
github_url: https://github.com/bjpl/subjunctive_practice
demo_url:

# Project 3
github_url: https://github.com/bjpl/conjugation_gui
demo_url:

# ... Pattern repeats for all 15 projects

Status breakdown:

  • 15 projects total
  • 15 have GitHub URLs
  • 0 have demo URLs

Status labels:

  • 8 “Active Development” (ambiguous)
  • 2 “Live” (but no demo_url provided?)
  • 4 “Completed” (but also no demos)
  • 1 “Active Development” (redundant label)

Why This Matters

What “no demo” signals to viewers:

  1. Projects may not work - Perhaps they’re broken, incomplete, or just ideas
  2. Hard to evaluate quality - Can’t actually test functionality
  3. Follow-through questions - Do you finish what you start?
  4. Less impressive - GitHub repo < working demo
  5. Barrier to engagement - Must clone + setup locally = high friction

Industry standard: Portfolio projects should have live demos whenever technically feasible.
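For reference, the fix is a one-line change per project: populate the empty demo_url field. A minimal sketch using the describe_it entry from above (the demo URL is a placeholder, not a real deployment address):

# Project 1, after deploying to Vercel
# (the demo URL below is a placeholder; substitute the actual address)
github_url: https://github.com/bjpl/describe_it
demo_url: https://describe-it.vercel.app  # placeholder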

Which Projects Should Have Demos

✅ Can and Should Have Demos (High Priority)

1. describe_it (Next.js app)

  • Tech: Next.js 14, Supabase, Vercel KV
  • Deploy to: Vercel (free tier, 2-click deployment)
  • Effort: 30-60 minutes
  • Impact: Immediate credibility boost

2. subjunctive-practice (Next.js app)

  • Tech: Next.js 14, React 18, OpenAI API
  • Deploy to: Vercel
  • Consideration: May need API key management
  • Effort: 1-2 hours (including env var setup)

3. Internet Infrastructure Map (3D WebGL)

  • Tech: Three.js, WebGL, Static files
  • Deploy to: GitHub Pages, Netlify, Vercel
  • Effort: 30 minutes (build + deploy)
  • Impact: High (visual wow factor)

4. Letratos (Jekyll site)

  • Tech: Jekyll static site
  • Deploy to: GitHub Pages, Netlify
  • Effort: 30 minutes
  • Why: Showcases your photography + creative work

5. Fancy Monkey (E-commerce)

  • Tech: GitHub Pages + Vercel serverless
  • Status: Already marked “Live” but no demo_url?
  • Effort: 10 minutes (just add the URL)
  • Why: Working e-commerce is impressive

6. Open Learn Colombia (Python + News aggregation)

  • Tech: Python, Jinja2, RSS parsing
  • Deploy to: Heroku, Railway, PythonAnywhere
  • Effort: 2-3 hours (slightly more complex)
  • Why: Shows backend + data aggregation skills

⚠️ Could Have Demos (Medium Priority)

7. learning-agentic-engineering (Web docs)

  • Tech: Markdown → HTML, Static
  • Deploy to: GitHub Pages
  • Effort: 1 hour

8. Algorithms & Data Structures CLI (Marked “Completed”)

  • Tech: Python CLI
  • Challenge: CLI doesn’t translate to web easily
  • Solution: Record demo video or convert to web interface
  • Effort: 4-8 hours (web conversion) OR 30 min (video)

❌ Can’t Easily Demo (Understandable)

9. Aves (TypeScript/Next.js)

  • Similar to #1 and #2, so it should have a demo
  • Unless it’s incomplete (then update the status instead)

10. GHD (CLI tool)

  • Personal utility, works locally
  • Alternative: Add comprehensive README with GIF demos

11. git_analysis (CLI/Jupyter)

  • Similar to #10
  • Alternative: Screenshots of output in README

12. conjugation_gui (Windows GUI)

  • Desktop app, can’t deploy to web
  • Alternative: Demo video + screenshots

13. learning_voice_agent (Voice interface)

  • May have privacy/API key concerns
  • Alternative: Demo video with explanation

14. agentic_learning (Research/Framework)

  • Described as “not so much built yet”
  • Honest status update more important than demo

15. corporate_intel (Python + PostgreSQL)

  • Backend-heavy, may have data privacy concerns
  • Alternative: Screenshots, sample outputs

The Demo Priority List

Do these FIRST (highest impact, lowest effort):

  1. Fancy Monkey - 10 minutes (just add the URL!)
  2. Internet Infrastructure Map - 30 minutes
  3. Letratos - 30 minutes
  4. describe_it - 1 hour
  5. subjunctive-practice - 2 hours

5 demos in ~5 hours of work = massive credibility boost
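For the static builds (the Internet Infrastructure Map, for example), GitHub Pages deployment can be automated with a small Actions workflow. The sketch below is a starting point, not the exact setup for those repos: it assumes a main branch, an npm run build script, and a dist/ output directory, all of which may differ in practice. Jekyll sites like Letratos can skip the Node steps entirely and use the default Pages build.

# .github/workflows/deploy-pages.yml (sketch; adjust branch, build command, and output path)
name: Deploy to GitHub Pages

on:
  push:
    branches: [main]

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build                  # assumed build script
      - uses: actions/configure-pages@v5
      - uses: actions/upload-pages-artifact@v3
        with:
          path: dist                        # assumed output directory
      - id: deployment
        uses: actions/deploy-pages@v4

Once the workflow runs, paste the resulting Pages URL into the project’s demo_url field.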

The “Active Development” Problem

What This Status Actually Means

Current usage: 8 projects marked “Active Development”

What viewers interpret:

  • Somewhere between 10% and 90% complete?
  • Abandoned but you’re being optimistic?
  • Working but not production-ready?
  • Prototype stage?

The problem: “Active Development” is too ambiguous.

Better Status Taxonomy

Replace “Active Development” with honest, specific statuses:

✅ Use These Instead

“MVP Complete”

  • Core functionality works
  • Not all features implemented
  • Usable, but rough edges
  • Example: “MVP Complete - Core image description works, Q&A in progress”

“Prototype”

  • Proof of concept works
  • Not production-ready
  • Demonstrates feasibility
  • Example: “Prototype - Validates content-based learning approach”

“In Production”

  • Live and being used
  • May still be iterating
  • Real users (even if just you)
  • Example: “In Production - Using with tutors, 3 active users”

“On Hold”

  • Not actively working on it
  • Learned what you needed
  • May or may not return
  • Example: “On Hold - Achieved learning goals, may resume”

“Seeking Testers”

  • Ready for others to try
  • Need feedback to improve
  • Working but wants validation
  • Example: “Seeking Testers - Core works, need Spanish learner feedback”

“Maintenance Mode”

  • Works, done adding features
  • Still fixing bugs if found
  • Considered “complete enough”
  • Example: “Maintenance Mode - Feature complete, occasional updates”

Based on your descriptions, here’s what I think the honest statuses are:

# describe_it
status: MVP Complete - Core features work, seeking user feedback

# subjunctive-practice
status: MVP Complete - Grammar practice functional, expanding scenarios

# conjugation_gui
status: Prototype - Windows-only, validates approach

# aves
status: In Progress - 40% complete, bird data integration ongoing

# internet-infrastructure-map
status: Complete - Visualization fully functional, adding minor enhancements

# letratos
status: Live - Portfolio site operational, adding content regularly

# fancy_monkey
status: Live - E-commerce operational with $0 hosting

# open-learn-co
status: Prototype - News aggregation works, UI needs polish

# ghd
status: Personal Tool - Works for my needs, not public-ready

# git_analysis
status: Experimental - Early exploration of patterns

# algorithms-data-structures
status: Complete - CLI course finished, no plans for updates

# learning_voice_agent
status: Early Prototype - Concept validation stage

# agentic_learning
status: Research Phase - Framework design, minimal implementation

# corporate_intel
status: Data Collection Phase - Backend works, frontend planned

# learning-agentic-engineering
status: Documentation - Content transformation complete

Why this is better:

  • Honest - Sets accurate expectations
  • Specific - Viewers know exactly what to expect
  • Professional - Shows self-awareness
  • Useful - Helps you track progress too

Evidence of Real-World Impact

What’s Missing: User Feedback & Metrics

For each project, ask:

  1. Has anyone else used it?
  2. What did they think?
  3. What metrics do you have?
  4. What problems did it solve?
  5. What would you do differently?

Current State: Minimal Evidence

Fancy Monkey description says: “It’s operational, profitable”

  • Great! Quantify it: “5 sales in 3 months, $200 revenue”
  • Add customer feedback if you have any
  • Show proof (screenshot of Stripe dashboard with sensitive data redacted)

describe_it mentions: “My tutors now use it with other students”

  • Powerful testimonial opportunity! Add:
    • “Used by 3 tutors with 12 students”
    • Quote: “This helps my students practice beyond basic descriptions” - Maria, ESL Tutor
    • Screenshot of tutor using it

Subjunctive practice says: “The app adapts to your level”

  • Show the data:
    • “Used for 50+ practice sessions”
    • “Average accuracy improved from 60% to 85%”
    • Graph of your own progress

Adding Evidence

For projects with users:

# Example: describe_it with evidence
description: >-
  [Your story here]

impact:
  users: 3 tutors, 12 students
  testimonials:
    - quote: "Finally, a tool that adapts to student level naturally"
      author: Maria S., Spanish Tutor
  metrics:
    practice_sessions: 156
    average_rating: 4.7/5
    vocabulary_learned: 500+ words

Display this impact data in your project cards so visitors see the numbers without opening the repo.

This transforms your projects from “things I built” to “problems I solved for real people.”

The Completion Spectrum

Not All Projects Need to Be “Done”

It’s okay to have:

  • Experiments - Things you tried and learned from
  • Prototypes - Proof-of-concepts that validated ideas
  • Personal tools - Things that work for you, not polished for others
  • Research - Exploration without code

What’s NOT okay:

  • Implying something is more complete than it is
  • Vague status that hides incompleteness
  • No differentiation between finished and unfinished

The Honest Portfolio Approach

Inspired by Dan Abramov’s blog:

He writes about things that didn’t work, bugs he created, concepts he struggled with. This makes him more credible, not less.

Apply this to your projects:

Example: agentic_learning

Current description:

“Not so much built in terms of tools yet, but quite a bit of research into autonomous learning systems”

Better (honest and specific):

status: Research Phase - Extensive reading, minimal code
description: >-
  I spent 3 months diving deep into agent architectures and self-directed
  learning frameworks—reading papers, testing LangChain/LangGraph, sketching
  system designs. I learned a ton about what makes agents work (and fail).

  The code isn't ready to show, but the research informed my other projects.
  I wrote a 15-page design document exploring different architectures, which
  helped me avoid common pitfalls in my Spanish learning tools.

  Status: Parking this for now. Achieved my learning goals. May build it when
  I have a compelling use case.

what_i_learned:
  - Agent memory is the hardest part, not the LLM calls
  - Most "agent frameworks" are just LLM wrappers with loops
  - Self-correction requires good evaluation metrics
  - Token costs add up fast in multi-agent systems

resources:
  design_doc: /docs/agentic-learning-design.pdf
  notes: /blog/2025/01/what-i-learned-about-ai-agents

Why this is better:

  • Honest about current state (research phase)
  • Shows value (informed other projects)
  • Demonstrates learning (what_i_learned section)
  • Provides resources (design doc, notes)
  • Clear future (parking for now)

Viewers think: “This person is thoughtful, honest, and learns from exploration”

Not: “This person starts projects and doesn’t finish them”

Project Quality Tiers

Create Visual Differentiation

Not all projects should be presented equally. Create tiers:

Tier 1: Featured Projects (3)

Criteria:

  • Working demo available
  • Real-world users (even if just 2-3)
  • Complete or MVP-level functional
  • Best represents your skills
  • Has compelling story

Candidates:

  1. Fancy Monkey - Live e-commerce with $0 hosting (technical + business)
  2. describe_it - Used by tutors, solves real problem (education + AI)
  3. Internet Infrastructure Map - Visual wow factor, complete (visualization + data)

Presentation:

  • Full-width cards with multiple screenshots
  • Detailed case study
  • Metrics prominently displayed
  • Testimonials if available

Tier 2: Active Projects (5-7)

Criteria:

  • MVP complete or in active development
  • Demonstrates skills you want to highlight
  • Has clear purpose and progress

Presentation:

  • Standard cards with good descriptions
  • Link to GitHub + progress indicators
  • Honest status badges

Tier 3: Experiments & Prototypes (Remaining)

Criteria:

  • Early stage or personal tools
  • Learning-focused rather than product-focused
  • Still worth showing for skill demonstration

Presentation:

  • Compact cards or list format
  • Clearly labeled as experiments/prototypes
  • Focus on “what I learned” not “what it does”
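One low-effort way to implement these tiers is a single field in _data/ai_projects.yml that the layout filters on. The tier key below is an assumption (it does not exist in the file today) and the values are only suggestions; the statuses reuse the honest labels proposed earlier.

# Hypothetical tier field in _data/ai_projects.yml

# fancy_monkey
tier: featured        # full-width card with metrics and testimonials
status: Live - E-commerce operational with $0 hosting

# describe_it
tier: featured
status: MVP Complete - Core features work, seeking user feedback

# git_analysis
tier: experiment      # compact card, "what I learned" focus
status: Experimental - Early exploration of patterns

In the Jekyll templates, Liquid’s where filter can then split the data into the three sections without duplicating any entries.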

Visual Hierarchy Example

╔══════════════════════════════════════════════════╗
║  FEATURED: Fancy Monkey                          ║
║  ┌──────────┐ E-commerce with $0 hosting         ║
║  │          │ • 5 sales, $200 revenue             ║
║  │  Image   │ • GitHub Pages + Vercel             ║
║  │ Gallery  │ • Real Stripe integration           ║
║  │          │                                     ║
║  └──────────┘ [View Site] [Case Study] [Code]    ║
╚══════════════════════════════════════════════════╝

┌─────────────────────┐ ┌─────────────────────┐
│ describe_it         │ │ Internet Infra Map  │
│ [Image]             │ │ [Image]             │
│                     │ │                     │
│ Spanish practice    │ │ 3D cable viz        │
│ Used by 3 tutors    │ │ WebGL + Three.js    │
│                     │ │                     │
│ [Demo] [Code]       │ │ [Demo] [Code]       │
└─────────────────────┘ └─────────────────────┘

EXPERIMENTS & LEARNING
├─ agentic_learning (Research phase - design doc available)
├─ git_analysis (Personal tool - pattern exploration)
├─ learning_voice_agent (Prototype - concept validation)
└─ corporate_intel (Data phase - backend works)

Action Plan: Project Quality

Week 1: Deploy 5 Demos (8-10 hours)

Day 1: Quick wins (2 hours)

  • Find Fancy Monkey URL, add to demo_url (10 min)
  • Deploy Internet Infrastructure Map to Vercel (30 min)
  • Deploy Letratos to Netlify (30 min)
  • Deploy learning-agentic-engineering to GitHub Pages (30 min)

Day 2: Next.js apps (4 hours)

  • Deploy describe_it to Vercel (2 hours including env setup)
  • Deploy subjunctive-practice to Vercel (2 hours)

Day 3: Documentation (2 hours)

  • Update README for each deployed project with demo link
  • Add deployment instructions
  • Screenshot each demo for project cards

Week 2: Status & Evidence (6-8 hours)

Day 4: Honest status updates (2 hours)

  • Review all 15 projects
  • Update status from “Active Development” to specific statuses
  • Add completion percentage where appropriate
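If you want a progress indicator on the cards, a numeric field could sit alongside the status. The completion key below is hypothetical, and the 40% figure comes from the aves status suggested earlier.

# aves
status: In Progress - 40% complete, bird data integration ongoing
completion: 40        # hypothetical field for a progress bar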

Day 5: Gather evidence (3 hours)

  • Contact tutors using describe_it for testimonials
  • Collect Fancy Monkey sales metrics (Stripe dashboard)
  • Screenshot your own practice data from Spanish tools
  • Document improvements in your Spanish level

Day 6: Add impact data (2 hours)

  • Create impact sections in ai_projects.yml
  • Add metrics, testimonials, user counts
  • Update project cards to display impact data

Month 1: Project Tiers & Case Studies (12-16 hours)

Week 3: Create tiers (4 hours)

  • Identify your top 3 featured projects
  • Redesign featured project cards (full-width)
  • Create “Experiments” section for prototypes
  • Update project layout with tiers

Week 4: Write case studies (12 hours, 4 each)

  • Choose top 3: Fancy Monkey, describe_it, Internet Map
  • For each, write:
    • Problem statement
    • Solution approach
    • Technical challenges
    • Iterations (what failed, what worked)
    • Impact & lessons learned
    • Screenshots/process images

Quality Checklist

For each project before adding to portfolio:

Minimum Requirements

  • Clear, honest status (not just “Active Development”)
  • Compelling description (problem → solution → impact)
  • GitHub repo is public and has README
  • At least 1 screenshot or demo video

Strong Projects Should Have

  • Working demo URL (if technically feasible)
  • Evidence of use (metrics, testimonials, or personal data)
  • Clear tech stack listed
  • “What I learned” section

Featured Projects Need

  • Live demo
  • Real users (even if just 2-3)
  • Quantified impact (users, metrics, improvements)
  • Full case study with process images
  • Testimonial or proof of value

Success Metrics

Before (Current State)

  • 0 live demos
  • 0 project testimonials
  • 0 quantified metrics
  • 8 ambiguous statuses ⚠️
  • 15 projects all presented equally 😕

After (Target State)

  • 5-7 live demos
  • 3-5 project testimonials
  • 10+ specific metrics
  • 15 clear, honest statuses
  • 3 tiers of projects (featured, active, experiments)

The Impact Test

Current state: Someone visits your portfolio

  • Sees project descriptions
  • Clicks GitHub links
  • Maybe clones a repo
  • Tries to run locally (probably fails)
  • Leaves with vague impression

Target state: Someone visits your portfolio

  • Sees featured projects with demos
  • Clicks demo link
  • Uses working app immediately
  • Reads testimonial from actual tutor
  • Thinks: “This person builds things that work and people use”

That’s the difference between 6.5/10 and 9/10.

Final Thought

You have 15 projects. That’s impressive.

But quantity without evidence of quality creates skepticism.

  • Priority #1: Deploy 5 demos this week.
  • Priority #2: Add honest statuses to all projects.
  • Priority #3: Gather evidence of impact (testimonials, metrics, proof).

Do these three things, and your portfolio instantly becomes significantly more credible.

The projects are good. Now prove it with working demos and real-world impact.

Start today. Deploy Fancy Monkey’s URL. That’s 10 minutes to your first demo.