From discovering Superpowers to SVPG with Rocket Skates
The App: Team management tool to replace the many Excel ‘people planning model’ spreadsheets I think all CTOs have.
Last year: Built with Cloudflare Workers / Hono / HTMX / Neon database. More about experimenting with the tech stack than solving the problem. Got “quite far, but not far enough for it to replace Excel.”
This year: Same app, but I put myself fully into the hands of Claude Code (Max plan) and rebuilt it using React / Tailwind for the FE.
The Mission: Push as far as I could in as short a time as possible. See what agentic AI can really do in the hands of an experienced engineer who knows what they want.
25-30 hours total. Built more complex product features than I’d ever built in that timeframe in my entire career.
“Even when I was 21 years old coding every hour I could in my first startup!”
Link: https://github.com/cliftonc/drizzle-cube
Things I could envisage (especially on the front end) but could never build in any reasonable timescale became possible.
Common reactions:
Google’s definition: “giving in to the vibes” and letting LLMs handle technical details for throwaway projects.
My experience: Completely different.
I never “gave in to the vibes.” Instead: methodical, systematic practice to explore what worked.
Key insight: These tools are powerful, but you don’t get the best from them by giving in to the vibes. You have to actively guide them.
AI tools are like humans - they like things well documented, consistent, discoverable.
The investment:
agents.md files across your codebase. If you don’t document architectural intent, how can anyone (AI or otherwise) discover it?
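For illustration, a minimal sketch of what one of these files might contain; the module, intent, and conventions below are hypothetical, not taken from the actual repo:

```markdown
# agents.md (query engine)

## Purpose
Translates cube definitions into SQL. All SQL generation lives in this
module; nothing outside it should build query strings.

## Architectural intent
- Cubes are always joined through the query planner, never directly in handlers.
- Pre-aggregation logic sits next to the planner, not in the API layer.

## Conventions
- TypeScript strict mode; avoid `any` in public interfaces.
- Every new feature ships with a test under ./tests.
```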
Claude Code defaults to: “Use Opus to plan, then Sonnet to execute.”
TL;DR: Use it.
Example prompt for major refactor:
“I have a major problem. The query engine has fan-out when joining cubes. Need to detect and pre-aggregate. Review implementation, write recommendations to docs folder for junior engineer to execute.”
Result: Detailed implementation plan in markdown.
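Roughly what such a plan looks like; the phases and tasks below are an illustrative sketch, not the actual output:

```markdown
# Plan: detect and pre-aggregate fan-out joins

## Phase 1: Detection
- Add a check in the query planner that flags joins which multiply rows.
- Unit tests covering the known fan-out cases.

## Phase 2: Pre-aggregation
- Pre-aggregate the many side of a flagged join before the join runs.

## Phase 3: Verification
- Run the existing test suite and compare results against the current engine.
```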
Critical step: After planning, /clear your session.
Seems counterintuitive, but with too much context the quality drops. The plan is written in phases - you’ll clear after every phase.
Why: Too much context → compaction events → lower quality output.
Switch to edit mode, simple prompt:
“There’s a plan in /docs/plan.md. Execute phase 1. Create branch first.”
Key: Finger hovering over the ESC key. Watch the CLI in real time.
When it diverges → ESC → “The file is in /src/server/query-planner.ts”
Short, sharp, direct interventions. You’re actively steering.
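Put together, one phase of the loop looks roughly like this (an illustrative transcript, not a verbatim session):

```text
/clear
> There's a plan in /docs/plan.md. Execute phase 1. Create branch first.
  ...Claude starts editing the wrong module...
[ESC]
> The file is in /src/server/query-planner.ts
  ...work continues, tests pass, phase 1 lands on the branch...
/clear
> Execute phase 2 of the plan in /docs/plan.md.
```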
This isn’t just about coding faster. It’s about rethinking how product teams work and how we can accelerate our adoption of SVPG.
The bottleneck shift: Engineering isn’t the constraint anymore. It’s discovery, alignment, validation.
With AI: Weeks of discovery → Hours/days to build.
Traditional: “How might we solve this?” (convergent thinking)
SVPG: “Which of these solutions best solves the customer problem?” (divergent execution)
Build multiple working solutions rapidly, then validate with real software, not prototypes.
This isn’t magic. You need strong foundations:
With proper guard rails, there need be no difference between AI-generated code (guided properly) and the output of an engineer, and the impact of one [engineer | designer | product manager] is dramatically multiplied.
Next: Exploring how this can work in the real world - an experiment with the Instruction Team.