AI Power User → Builder
Fast-track path from AI integration mastery to building production AI systems. Start where most developers aren't — then go where almost nobody is.
OpenClaw
Persistent AI agents, cloud deployment, and advanced agent patterns.
Enter →
AI Atlas
Interactive map of AI tools I've charted on my journey.
Explore →
Handbook
Human-AI collaboration principles. Quick reference for teams and individuals.
Enter →
Program roadmap
Module 0 Fast Track Setup
- Running inference locally
Easily set up & run an LLM on your own machine.
- Agent notifications
Get notified via hooks and ntfy when your agent finishes.
- Access & secrets
Configure API keys and secrets securely for model access.
- Safety baseline
Practical safety for AI infrastructure — firewalls, audits, and access control.
- Visit module page
Full overview, resources, and artifacts for this module.
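As a sketch of the access & secrets topic above: keys belong in environment variables, never in source. The variable names here are illustrative assumptions, not a required convention:

```python
import os

def get_api_key(name: str) -> str:
    """Read an API key from the environment; fail loudly if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Export it in your shell or load it from a "
            ".env file kept out of version control."
        )
    return value

# Usage (key set outside the repo, e.g. in ~/.profile):
#   key = get_api_key("OPENAI_API_KEY")
```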
Module 1 AI Power User
- Multi-tool AI workflows
Running 3+ AI tools in parallel, choosing the right tool for each task.
- Prompt mastery
Beyond basics — structured prompting for autonomous systems.
- Model selection & economics
When to use Opus vs Sonnet, quota management, and cost-per-output thinking.
- Voice & multimodal workflows
Voice-to-AI pipelines, TTS output, and audio as a primary instruction method.
- Human-AI collaboration handbook
A quick-reference guide to human-AI collaboration principles, from mindset to workflow.
- Visit module page
Full overview, resources, and artifacts for this module.
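The cost-per-output thinking in the model-selection topic above reduces to simple arithmetic over per-token prices. The prices below are placeholder assumptions for illustration, not published rates:

```python
# Placeholder prices in USD per 1M tokens -- assumptions, not real rates.
PRICES = {
    "big-model":   {"input": 15.00, "output": 75.00},
    "small-model": {"input": 3.00,  "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request in USD, given per-million-token prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

With these made-up numbers, a 2k-in / 1k-out request costs 5x more on the big model; that ratio is the kind of signal that drives model selection.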
Module 2 AI Integration & Orchestration
- Persistent AI agents
24/7 agents on cloud infrastructure — always-on AI assistance.
- Automation pipelines
Cron-driven AI workflows, multi-stage processing, and overnight analysis.
- AI-first lifestyle
Morning briefings, memory systems, and making AI work while you sleep.
- Content & distribution
AI-augmented content creation, cross-platform publishing, analytics.
- Building a knowledge-base MCP server
I built an MCP server that exposes 51 articles to any AI client. Two tools, zero config, no API keys. Here's how it works.
- Living institutional memory
How organizations can use AI to ensure no knowledge ever degrades and no lesson has to be relearned.
- Visit module page
Full overview, resources, and artifacts for this module.
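A cron-driven pipeline like the ones above is just staged functions run on a schedule. Here is a minimal sketch; `summarize` and `notify` are stubs standing in for a real model call and a real push notification:

```python
def fetch(sources: list[str]) -> list[str]:
    """Stage 1: gather raw items (the source names stand in for real content)."""
    return [f"content from {s}" for s in sources]

def summarize(items: list[str]) -> str:
    """Stage 2: stub for an LLM summarization call."""
    return f"Overnight digest of {len(items)} items."

def notify(message: str) -> str:
    """Stage 3: stub for a push notification (e.g. via ntfy)."""
    return f"sent: {message}"

def nightly_run(sources: list[str]) -> str:
    # Scheduled by cron, e.g.:  0 5 * * *  python nightly.py
    return notify(summarize(fetch(sources)))
```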
Module 3 Builder Foundations
- LLM interfaces
Client libraries, request/response patterns, streaming.
- Inputs & outputs
Structured prompts, JSON/tool outputs, function calling.
- Embeddings & vector basics
Understanding embeddings, similarity search, when you need vectors.
- Prompting & reasoning patterns
Advanced prompting, chain-of-thought (CoT), evaluation, micro-tools.
- Visit module page
Full overview, resources, and artifacts for this module.
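The similarity search in the embeddings topic above bottoms out in cosine similarity between vectors. A minimal pure-Python sketch:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors.
    1.0 means same direction; 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Real embeddings have hundreds or thousands of dimensions, but the math is identical; vector databases exist to run this comparison efficiently at scale.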
Module 4 RAG & Retrieval Systems
- Vector databases
ChromaDB, pgvector, Pinecone — setup and comparison.
- Document indexing & chunking
Processing your own content for semantic search.
- RAG pipeline design
End-to-end retrieval-augmented generation.
- Observability
LangSmith, Langfuse — tracing, evaluation, cost tracking.
- Visit module page
Full overview, resources, and artifacts for this module.
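Chunking, as in the indexing topic above, can start from fixed-size windows with overlap so that context spanning a boundary lands in two neighbouring chunks. A minimal character-based sketch (real pipelines often chunk by tokens or sentences instead):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so content
    near a boundary appears in two neighbouring chunks."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```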
Module 5 Agent Architecture
- Agent frameworks
LangGraph, CrewAI — multi-agent orchestration.
- MCP protocol
Model Context Protocol — building tool servers for AI.
- Control flow & state
State machines, reasoning chains, tool use patterns.
- Resilience & reliability
Failures, retries, graceful fallbacks, health monitoring.
- Visit module page
Full overview, resources, and artifacts for this module.
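The retries mentioned in the resilience topic above usually mean exponential backoff: wait a little, retry, wait longer, and only give up after a fixed number of attempts. A minimal sketch:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn, retrying on exception with exponential backoff.
    Re-raises after the final attempt so callers can apply a fallback."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...
```

Production agents layer jitter, error classification (retry timeouts, not bad requests), and health checks on top of this skeleton.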
Module 6 Production AI Systems
- Local model serving
Ollama, vLLM — cost optimization, privacy, fine-tuning experiments.
- Packaging & deployment
CLI tools, web UIs (Streamlit/FastAPI), Docker.
- Evaluation frameworks
Systematic prompt testing, A/B evaluation, quality scoring.
- Shipping AI products
From personal infrastructure to products others use.
- Visit module page
Full overview, resources, and artifacts for this module.
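Systematic prompt testing, as in the evaluation topic above, can start from something as simple as scoring each variant's output against expected keywords. The scoring rule here is an illustrative assumption, not a standard metric:

```python
def keyword_score(output: str, expected: list[str]) -> float:
    """Fraction of expected keywords present in the output (case-insensitive)."""
    text = output.lower()
    hits = sum(1 for kw in expected if kw.lower() in text)
    return hits / len(expected)

def run_eval(outputs: dict[str, str], expected: list[str]) -> dict[str, float]:
    """Score each prompt variant's output, for A/B comparison."""
    return {name: keyword_score(out, expected) for name, out in outputs.items()}
```

From here, graduating to LLM-as-judge scoring or a framework is a change of scorer, not of structure.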