Hi, I’m Timo. Welcome to my blog – I hope you get a few dry chuckles out of this.

  • (Personal note: This one is more of a practical SuperClaude guide based on my own experiences and understanding – do let me know if anything is bastardized… preferably not publicly. This is definitely not paid or sponsored, especially since SuperClaude is an open-source framework.)

    Having owned Clay in my current role for over a year and a half (focusing primarily on the AI side of things, not just the enrichment flows), I’ve become accustomed to typing the same “You are a [insert title] analyst at…” stage-setting prompts. Honestly, “accustomed” might be the wrong word… It’s more like an eye-roll-inducing amount of redundancy that is frustrating beyond belief. 

    This is not a criticism of Clay in any way – I’m personally a fan of it. But my point is that I used to spend a lot of time rewriting functionally the same prompt in different words for different LLMs. That improved over time as I started saving the prompts that I liked (i.e., those that produced desirable results and could be reused in the future). However, I couldn’t help but think “there’s got to be a better way to do this… right?” 

    Enter SuperClaude – an open-source framework built by folks in the broader Claude Code community. Think of it as a collection of powerful, reusable shortcuts and expert personas for your AI, saving you time and dramatically improving the quality of its output.

    It’s not perfect (few things are), but it’s certainly a massive timesaver both in reducing the prompt writing time and the number of debugging cycles.

    What is SuperClaude?

    I wrote the following in my last post, which I wanted to expand on:

    With the rise of “context engineering” as the successor to “prompt engineering,” people have increasingly looked for ways to optimize the inputs, memory, tools, and data that enter AI tools. For those unfamiliar, “prompt engineering” can be thought of as the optimization toward asking the perfect question to get the perfect response. On the other hand, “context engineering” is more like giving your AI all of the necessary context (yes, I’m defining a term with itself – forgive me), including product requirements, rules, instructions, limitations, etc., upfront so that it can do its job with minimal interpolations, errors, and hallucinations.

    To do this, the community has converged on simple text files (the markdown files mentioned in my last post) that act as recyclable cheat sheets. 

    Naturally, people (myself included) are also looking for ways to reduce redundant work. For non-technical builders like myself, the easiest way to get started isn’t to create a whole set of prompts, agents, workflows, and so on from scratch. Rather, the highest ROI on time is to leverage the GitHub repos and community that have already emerged around Claude Code.

    The one that I have experience with (although many others exist) is SuperClaude (16K stars on GitHub as of writing). Funnily enough, this was yet another instance of my Google app being weirdly tailored to whatever I’m interested in at the current time, since it first surfaced SuperClaude to me several months ago in the form of a Reddit thread introducing SuperClaude v2 (back when it had ~5K stars, if I remember correctly). 

    I’m guessing that for many experienced engineers, especially those fluent in AI, a prompt library might not be the most helpful thing. I personally found SuperClaude a tad intimidating at first, but it helped to think of it as a prompt library plus a system for building / managing your instructions for AI. If you’re a non-technical user looking to effectively use Claude Code ASAP, SuperClaude is functionally a level 99 starter kit (oxymoron, I know, but hopefully you get the metaphor). Technically, it’s incorrect to refer to SuperClaude as a prompt library since they describe themselves as a “meta-programming configuration framework,” but c’est la vie.

    The above is actually a double-shoutout to i) my eight years of learning French as a kid, and ii) my belief that Marketing is about external perception, not internal mantras/positioning (e.g., you can call it a paddle-shaped cutting board, but if everyone uses it exclusively for charcuterie spreads, it’s a serving tray — not a cutting board).


    Please use it at your own discretion if you’re just dabbling or are mindful of your usage. SuperClaude’s detailed prompts are longer, which means they use more tokens (think of tokens as the ‘words’ you’re charged for when using an AI), so it can affect your billing. On the other hand, I do believe the resulting output is significantly higher quality. The token cost is especially worth weighing if you’re already running into usage limits before trying SuperClaude or equivalent solutions (TechCrunch article on usage limits, for reference).

    Installing SuperClaude – necessary chore, but here’s the tl;dr:

    1. If you haven’t already, install an IDE – I use Cursor, but there are many options out there (e.g., VS Code, Windsurf, Eclipse, etc.)
    2. If you haven’t already, install Claude Code (Set up Claude Code, Anthropic)
      1. If you need help with installing and using Claude Code with Cursor, I would recommend this guide (which I shared in a previous post)
    3. Refer to SuperClaude README.md for straightforward installation instructions.
      1. If you need assistance installing pipx or pip (mentioned in the README), check out here for pipx and here for pip

    Just for convenience’s sake, I also set Cursor as my default application for opening markdown files (.md files) since the auto-formatting makes reading a tad easier than TextEdit, Notepad, or equivalent. My -9.50 prescription eyes thank me for it (while I thank my genes for this affliction).

    Why Should I Care About SuperClaude? How’s it Actually Helpful?

    Before elaborating further, I want to clarify that SuperClaude, despite its name, does not turn Claude Code into a bulletproof AI coding man of steel. You should still employ the same amount of diligence and care when working with it as you do with naked Claude Code – blind trust is not recommended. 

    I found the SuperClaude repo to be incredibly helpful for several reasons:

    1. Prompting time savings – Cutting the redundancies that stay the same across tasks, projects, and LLMs (e.g., sections like “You are a [X] analyst responsible for [Y]”).
    2. Diction / terminology error prevention – In case you’re like me and end up saying something that you believe to mean one thing, but really it carries a different connotation or meaning in the coding world (resulting in a confused AI and, likely, botched code).
    3. In-session context management – LLMs tend to perform worse as their context windows fill up, so you want to manage context effectively within each session to get the highest quality output.
    4. Session memory management – Speaking of sessions, if you start a new conversation (or your chosen IDE crashes in the middle of an implementation), there will be almost no context retained from the prior session. As of this writing, SuperClaude comes with Serena (a built-in MCP server for session memory), which addresses the exact issue of maintaining context across Claude Code sessions.
    5. Documentation tracking throughout development – If you’re vibe-coding, I imagine the last thing you want to do is create technical documentation (including commit messages, formal API docs, or even documentation that serves as context for Claude Code to reference in the future). Thankfully, SuperClaude can handle a lot of this in a systematic way that produces better commit messages and technical documentation than I’ll ever be able to.

    The Meat of SuperClaude (a Non-Technical Interpretation & Guide)

    Seeing SuperClaude for the first time can be a tad overwhelming. Thankfully, it’s not as scary as it looks (after all, it’s designed to empower/enable you), and there are only three things you need to get familiar with before you start extracting value from it (nod back to the finance days):

    1. Commands (Actions) – think, Gordon Ramsay telling someone to cook a steak
    2. Agents (Domain Experts) – think, the 30-year line cook in charge of steaks
    3. Flags (Action Modifiers) – think, someone saying “no, I don’t want the default medium rare – I’d prefer well-done because I want to be chewing each bite for 20+ seconds”

    If you want to understand the full capabilities in detail, I highly recommend reviewing the User Guide directory in the GitHub repo here. The following is a more layman’s interpretation, but hopefully it’s helpful as a quick and dirty starter guide.

    My final two cents is that when getting started, it’s valuable just to look through what exists in the SuperClaude markdown files themselves. You’ll quickly see and understand how SuperClaude is able to control the behavior of Claude Code. It’s essentially a free lesson in context engineering built on a generous foundation of examples.

    SuperClaude Commands (Actions Invoked with a / Symbol)

    Consider these an extension of the native slash commands that exist within Claude Code, prefixed with /sc: to denote that they come from SuperClaude rather than being native. Funnily enough, these used to lack the “sc:” prefix, so I’d often get confused as to whether I was using a SuperClaude command or a native one. To this day, I still have the SuperClaude Commands user guide pinned in my Arc Browser as a legacy reference tab.

    Similar to slash commands in Notion or Slack, these are used to trigger commonly used actions with a set of pre-configured prompts such as /sc:workflow for implementation planning (i.e., generating structured implementation plans from requirements) and /sc:analyze for code assessment (e.g., architecture audits, code reviews, etc.). These commands are especially helpful in reducing the number of prompt characters that you have to type while ensuring higher-quality output.

    There are a total of 21 commands that come pre-packaged with the SuperClaude repo (as of writing this), which feels like Christmas came early for someone looking to work smart, not (necessarily) hard. When you can shortcut a massive prompt (or using another LLM to generate the prompt, as people often do) down to a line or two of typing, it’s somewhat of a game-changer, no?

    For example, here is an overview of what’s included in the /sc:workflow command (which itself is a markdown file of 97 lines). These command markdown files serve as actual behavioral instructions to Claude Code when you invoke them:

    • Triggers – when you should be using /sc:workflow
    • Usage – which other flags you should combine and use with the command to maximize your value and better achieve the results you want from the command (more on that below)
    • Behavior Flow – now this is where the power starts to show, since you don’t have to type out each of the steps below in detail yourself. The following is structured as steps 1 through 5 in the markdown file:
      • Analyze – “Parse PRD and feature specifications to understand implementation requirements”
      • Plan – “Generate comprehensive workflow structure with dependency mapping and task orchestration”
      • Coordinate – “Activate multiple personas for domain expertise and implementation strategy”
      • Execute – “Create structured step-by-step workflows with automated task coordination”
      • Validate – “Apply quality gates and ensure workflow completeness across domains”
    • Key Behaviors – won’t dive too much into this, but it’s more general guidelines around how Claude Code should behave throughout the whole execution process (e.g., multi-persona orchestration, dependency tracking, etc.)
    • MCP Integration – which MCP servers (functionally tools) are available during the command’s execution; SuperClaude comes with a set of them, like Context7 for retrieving the latest API documentation so you don’t have to manually find documentation and paste links into Claude Code
    • Tool Coordination – what the command can use (e.g., read/write/edit, WebSearch, TodoWrite, etc.)
    • Key Patterns – functionally a set of standard workflows that commonly appear when executing the workflow (e.g., for a PRD Analysis workflow it might include: “Document parsing → requirement extraction → implementation strategy development”)
    • Examples – sample, and often optimal, combinations of commands and flags (again, more on that below, but they’re functionally action modifiers if you want the command to focus on something in particular)
    • Boundaries – a clear set of “will” and “will not” items that help rein in Claude Code; e.g., this command will not “execute actual implementation tasks beyond workflow planning and strategy”
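    To make that structure concrete, here’s a heavily abridged sketch of what a command file like this looks like (the section names match the list above, but the body text is my paraphrase – not the actual file contents):

```markdown
# /sc:workflow – Implementation Workflow Generator

## Triggers
- A PRD or feature spec needs to become a structured implementation plan

## Behavioral Flow
1. Analyze: parse the PRD / feature specifications
2. Plan: generate the workflow structure with dependency mapping
3. Coordinate: activate personas for domain expertise
4. Execute: create structured step-by-step workflows
5. Validate: apply quality gates across domains

## Boundaries
Will: generate implementation plans from requirements
Will Not: execute actual implementation tasks beyond planning
```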

    Honestly, the SuperClaude .md files were a godsend. When I first read the “Behavior Flow” section inside this command, it felt like I’d stumbled upon the developer’s standard playbook. It showed me how a professional (at least, a junior professional) would approach a problem, breaking it down into clear steps with guidance on approach and clear scoping of responsibilities/actions.

    As you can imagine, the above is a little more efficient than crafting the same-same-but-different prompts repeatedly, only to have Claude Code take an unexpected turn, like beginning to implement when you were just looking for an implementation plan. 

    Doubling down, I actually learned how to structure my own vibe-coding workflow by reading the various .md files SuperClaude contains, focusing on what seemed to be best practices. Instead of blindly trusting Claude Code (which I did in the beginning), I realized how important it was to validate completion after each step (last step of “Behavior Flow” above). I also learned / re-realized that without validation, Claude Code tended to claim successful implementations even when only a partial implementation was successful. 

    Referencing back to my last blog on the importance of pre-development documentation and setting up CLAUDE.md, the first thing I do after that setup is invoke /sc:workflow. Here’s what the start of all my projects looks like:

    1. Ensure all pre-documentation files are in the project’s docs/ subdirectory
    2. Ensure CLAUDE.md is created (i.e., you’ve run the /init command and pointed it toward the pre-development docs)
    3. Invoke something along the lines of the below:
    /sc:workflow “Please carefully review the @docs/ folder for our pre-development documentation files and create a thorough, phased implementation plan based on all your learnings. I want it in a checklist format.” --strategy systematic --seq --ultrathink

    The double hyphens denote flags (more on that below) and --ultrathink is one of my favorites when creating the initial implementation plan. While SuperClaude comes with a wealth of flags, Claude Code comes with innate flags as well. If you’re interested in learning more about --ultrathink, which pushes Claude Code to increase the “thinking budget” for the task, read more in this “Claude Code: Best practices for agentic coding” article from Anthropic.

    SuperClaude Agents (Domain Experts & Context Management Hacks)

    Think of this as the evolution of “You are an expert in…” system prompts that set the stage for Claude Code and other LLMs out there. Similar to the Commands above, it’s a set of pre-existing behavioral instructions that save you the typing time (or even speaking time, if you’re using tools like Wispr Flow) while likely being better in quality (at least, it’s better than what I could come up with before reviewing some of these .md files). 

    The most meaningful difference, at least to me, is that this utilizes the Claude Code Subagents functionality. According to Anthropic, “Custom subagents in Claude Code are specialized AI assistants that can be invoked to handle specific types of tasks. They enable more efficient problem-solving by providing task-specific configurations with customized system prompts, tools and a separate context window.”

    SuperClaude comes with 14 agents (as of this writing), which include the following and more (these are just the ones I use most often): backend-architect, frontend-architect, python-expert, system-architect, root-cause-analyst, and refactoring-expert. Each of these comes with its own set of domain expertise and behavioral specifications that save an immense amount of time.

    For example, python-expert includes the following sections (not going to go too in-depth into them since this post is already getting long and the below is somewhat intuitive; additionally, some sections overlap with the Commands section I covered above):

    1. Triggers
    2. Behavioral Mindset
    3. Focus Areas
    4. Key Actions
    5. Outputs
    6. Boundaries

    So why did I call out “separate context window” in Anthropic’s definition? Since a lot of my workflow (and what I advocate for) involves detailed documentation that I use as a guiding north star for Claude Code, rereading these files between sessions is very context-consuming. This is even more the case as tasks and requests change throughout development, meaning only certain portions of the documentation files might be pertinent for any given task. As such, I often offload the documentation review to one of the various subagents so that they can review all the documentation, extract the most pertinent parts, and return only the relevant context to the main context window.

    Since bloated context windows often lead to a degradation in performance (universal across all LLMs, not just Claude Code by any means), being able to distill only the necessary parts from my documentation files is absolutely critical. Since I also like firing off longer prompts to try and tackle multiple, but related, tasks at once, this has been immensely helpful in avoiding things like conversation auto-compacting in the middle of a longer task. Anecdotally, I have seen the main context window save anywhere between 10% to 40% from using subagents to review and extract from the pre-development documentation files instead of just dumping them directly into Claude Code’s main context window!

    One key thing to note is that SuperClaude comes with the ability to automatically invoke subagents (i.e., the “2. Auto-Activation (Behavioral Routing)” section of the User Guide). Thanks to this capability, it’s not critical that you memorize all of the available agents and when to use them. Instead, I would recommend leveraging the --agents flag (more on this flag below) when starting out with SuperClaude to see which agents are getting invoked for which types of tasks. That way, you start forming the mental associations and mappings.

    Properly leveraging subagents has been the single biggest productivity unlock for me, enabling development to occur much faster and more accurately, thanks to better management of the main context window.

    SuperClaude Flags (Action Modifiers to Hone Claude Code’s Focus)

    If Commands can be compared to “drive the car,” Flags can be equated to “…slowly / as recklessly as Los Angeles drivers / so ‘safely’ as to be dangerous (like my father).” On their own, they aren’t helpful (imagine someone walking up to you in the street and saying “slowly”). When combined with Commands, however, you suddenly have much more control over the behavior of Claude Code.

    Flags can also be combined with additional modifiers (a modifier Inception, if you will), such as --focus security, providing you with even greater control. Certain flags support this (often with a specific set of recommended modifiers, such as “security”, “performance”, “quality”, and “architecture” for the --focus flag), while others don’t seem to.

    There are a lot of flags available (both natively and in SuperClaude), so it’s not exactly productive for me to walk through and contextualize each one. Therefore, I will share the ones that I use most often from SuperClaude alongside when I use them:

    • --ultrathink – natively available flag for complex workflows and coordination by Claude Code; most often used when doing implementation planning and when tackling multi-step implementations
    • --c7 – typically when implementing a new library / package for the first time, I want to ensure that we’re referencing the latest documentation via the Context7 MCP server
    • --serena – for context management across sessions, although this is already baked into the /sc:save and /sc:reflect commands; I usually end every session with a save and a reflect before clearing (the native /clear Claude Code slash command to free up all the context)
    • --uc – for ultracompressed context, often used in conjunction with /sc:load when I’m starting new sessions for an existing project
    • --validate – pre-execution risk assessment, often used when making changes or fixes that might impact existing functionality (especially on the backend)
    • --depth – mostly used with the additional modifier of “deep” (i.e., --depth deep) to maximize context when reviewing pre-development documentation or analyzing implementations
    • --agents – while you can certainly invoke specific agents directly, this native Claude Code flag instructs Claude Code to (in layman’s terms) look through the subagents available to it and call the right one to handle the task(s). Since there is a lot of information to digest across commands, agents, and flags, it’s likely more efficient to rely on Claude Code’s selection (at least initially) instead of potentially force-calling the wrong agent for the task. I use this with almost every prompt to chunk up the task while maximizing the main context window’s available capacity throughout.
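    To tie commands and flags together, here are a few illustrative invocations (the prompts, paths, and pairings are mine for illustration – double-check the documentation for your SuperClaude version):

```
/sc:analyze @src/auth/ --focus security --validate
/sc:implement "add rate limiting to the login endpoint" --c7 --agents
/sc:load --uc
/sc:save
```

    The last two are my usual bookends: /sc:load --uc to start a session on an existing project, and /sc:save before /clear to persist context via Serena.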

    Of course, certain flags have their own use cases, which is why I also have SuperClaude’s documentation pinned as a tab in my Arc browser. Certain flags are intended to be used with specific Commands as modifiers, so whenever I find myself invoking a command for a long task, I usually check the documentation for the SuperClaude version that I’m on to ensure that I have properly managed the behavior of Claude Code upfront (at least, to the best of my ability).

    Final Thoughts

    Given the nature of vibe-coding – where what you use and leverage is dictated by your idea – how best to leverage SuperClaude depends on what you’re trying to build. Nevertheless, I think it’s hard to argue against the value of having a pre-configured set of behavioral instructions that can service a wide variety of common workflows / scenarios you might run into while developing your next million-dollar idea, passion project, or incoming work request.

    All that’s to say, you never know until you try it out yourself. If it’s helpful, amazing. If it’s not helpful, well, at least you tested something out and (hopefully) learned from it.

    If you shortcut even a portion of your workflows through SuperClaude or an alternative repo (i.e., if you start Googling around and find one that fits your needs better), then this post has served its purpose.

  • (Personal note: A mouthful of a title, I know – but hey, a long title for a long post seemed fitting to me)

    Common Pains of AI Coding

    Not to trigger pseudo-PTSD in anyone, but here’s a list of things that drove me nigh-insane when I first started vibe-coding (across both Replit and Claude Code):

    • AI conducting endless debugging that feels circular, like a dog chasing its own tail
      • “Now that I’m finally reading the logs… didn’t we already try this exact same fix three iterations ago? Actually, wait, why do we even need to fix this when it’s an intended functionality!?”
    • AI forgetting critical context or “getting lost in the sauce”
      • “Why did my app for pet owners randomly create ‘Dog’ and ‘Cat’ as new potential profile type options?”
    • AI introducing N > 1 ways to solve the same exact problem
      • “Why do I have three libraries to visualize charts? Why are these three libraries causing compatibility issues? Oh god, I have to consolidate and refactor the code.”
    • AI removing a piece of problematic code since it’s not used by the current session’s focus, forgetting that said code is critical to the broader platform
      • “Wait, did we just remove the requirement to be authenticated because it threw a couple of errors? Why would you remove instead of fixing the error? Hello?”

    I’m sure there are plenty of cases beyond what I listed above – these were just the ones I distinctly remember (and sometimes still deal with). As my last post highlighted, context makes all the difference when vibe-coding. Yet despite implementing the lessons I learned, I still encountered the challenges above once conversation histories became lengthy or the codebase grew large enough.

    Naturally, I started asking: “There must be a better way for this, right?” Without access to some of the more enterprise-grade context-management solutions, what can a humble individual pursuing passion projects do to remedy this issue?

    After a lot of researching (the usual forum posts, helpful YouTube videos, Reddit threads, and cringe YouTube videos) and manual testing, I found a solution that works well for my purposes.

    This post sheds light on the problem and describes how you can avoid these migraine-inducing issues (or “avoid more of,” if you’ve already been tinkering).

    The Solution to the Common Pains of AI Coding

    The answer lies in creating pre-development documentation as markdown files (.md files). Think of these as persistent context files that can be referenced at any time by your vibe coding tool and are applicable across both sessions within the same tool, as well as across tools (if you choose to start over or try with another tool). 

    While they can take some time to develop, I’ve found that they provide all of the following:

    1. Helps keep the actual development process focused on a north star
      1. At any point, you can refer back to these files in a prompt (or embed it directly into the CLAUDE.md file, which I talk about later) to maintain context across sessions.
    2. Reduces the amount of interpolation that LLMs have to do
      1. Rather than saying “Hey Claude/Gemini/ChatGPT, find me a route from A to Z!” (in which case, it might come up with 100 different routes), you’re more so saying “…find me a route from A to Z that passes through [B/C/D/E/…]” – evidently, the latter leaves less room for error and unanticipated results.
    3. Helps you clarify goals and prevent yourself from scope-creeping (i.e., letting your creativity go haywire)
      1. As you’re brainstorming through required vs. nice-to-have features, you’ll be forced to read through just how much you are asking for from these vibe-coding tools. If you find yourself staring at a 12-page long “required functionality” doc – you probably have a whole lot of non-required functionality mixed in there. 
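    For reference, the docs/ subdirectory of my projects usually ends up looking something like the below (the file names are just my own convention, not a prescribed standard):

```
docs/
├── requirements.md     # what the app must do (MVP scope only)
├── architecture.md     # stack choices and the reasoning behind them
├── user-flows.md       # step-by-step user journeys
└── out-of-scope.md     # nice-to-haves explicitly deferred
```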

    Interestingly, this is nothing novel and I certainly didn’t invent this concept. In fact, I first came across markdown files when I saw a ‘replit.md’ file in the directory of one of my Replit apps. I didn’t think much of it at the time and never clicked into it, thinking it was just a marketing tactic to have Replit’s name show up in every GitHub repo created with it. 
    Imagine my surprise when I was researching Claude Code and realized everyone was referencing the CLAUDE.md file – some gears started turning. “Maybe ‘replit.md’ was actually used to manage context in Replit as well…?” As I went to some of my Replit apps to finally open these .md files for the first time, I realized this was the missing piece all along: augmenting the vibe-coding apps’ existing context files with my own.

    Introduction to CLAUDE.md and Markdown Files (.md files)

    For those already using Claude Code, or who have done their due diligence on it, you’re likely familiar with the CLAUDE.md file. As Anthropic describes it, “CLAUDE.md is a special file that Claude automatically pulls into context when starting a conversation” – almost like a persistent set of instructions and/or context that you want to start each session with (Anthropic, “Claude Code: Best practices for agentic coding”).

    There are countless guides, YouTube videos, Reddit posts, and even GitHub repos (more on that in a later post) that share best practices on both what to include in your CLAUDE.md file as well as how to structure it. As such, I won’t dwell too long on it, but here is a sample of what I typically include in mine:

    • Project Context
    • Technology Stack & Architecture
    • Directory Structure Guide (especially if I have pre-development documentation files)
    • Key Business Logic & Conventions
    • Development Workflow / Phases (e.g., what’s MVP vs. post-MVP)
    • Reminders
    • Rules / Restrictions / Requirements
    • Links to Key Documentation & When to Use (typically for pre-development documentation files)
    • Environment Configuration
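    Assembled into a file, a skeletal version of the above might look like this (every name and path here is a placeholder for illustration):

```markdown
# CLAUDE.md

## Project Context
A web app that does [one-sentence description].

## Technology Stack & Architecture
- Frontend: [framework]
- Backend: [framework + database]

## Directory Structure Guide
- docs/ – pre-development documentation (see links below)
- src/ – application code

## Rules / Restrictions / Requirements
- Do **NOT** modify files under docs/ without asking first.

## Links to Key Documentation & When to Use
- docs/requirements.md – read before planning any new feature
```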

    I’ll get into this more later on, but I would recommend that you DO NOT start by creating a CLAUDE.md file right off the bat. Rather, create your pre-development documentation as markdown files (.md files) before diving into the Claude side of things.

    Speaking of markdown files…

    Obviously, I had done my own research into Claude Code before choosing to invest time into it on top of Replit and other vibe-coding tools. Inevitably, I had come across CLAUDE.md before, but I’d be lying if I said I found Anthropic’s post above on my first search. Somewhat embarrassingly, my first Google search was actually: “what is an .md file?” I’m being dead serious when I say I thought it was a new coding language specific to AI and LLMs that I had completely missed, despite my ever-trusty Google app constantly surfacing relevant content for me to read.

    Digging a little deeper, I quickly realized these .md files weren’t a new coding language at all. In fact, these markdown files (.md files) quickly became my favorite file type due to how readable and structured they often were. Of course, they could provide instructions and context (given CLAUDE.md is itself a markdown file), or they could serve as documentation and notes, similar to a traditional text file. The only difference is that markdown files support syntax, which both helps with readability and with LLMs’ ability to interpret instructions. 
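    To make that concrete, here’s a tiny sketch (plain Python; the prompt and the helper function are entirely mine, not from any tool) of why structure helps: once a prompt uses heading levels, its sections become trivially machine-recoverable, which is exactly the kind of signal an LLM can latch onto:

```python
# A prompt written with markdown heading levels instead of bolding/spacing.
prompt = """# Context
You are reviewing a small Flask app.

# Objective
Audit the authentication flow only. Do **NOT** edit code.
"""

def split_sections(text):
    """Split a markdown document into {heading: body} on level-1 headings."""
    sections, current = {}, None
    for line in text.splitlines():
        if line.startswith("# "):
            current = line[2:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {name: "\n".join(body).strip() for name, body in sections.items()}

sections = split_sections(prompt)
# Each instruction now belongs unambiguously to exactly one section.
```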

    Since much has been written about this, you can check out this Basic Syntax guide for more details. As for what I believe to be most important for vibe-coding purposes:

    • Heading Levels – We often rely on bolding / spacing to separate sections in our writing. Unfortunately, as of writing this, Anthropic’s Claude seems to be the only one out of the big three that actually supports text formatting within its chat window (side note, neither does Perplexity). So if that’s the case… how can we ensure that ChatGPT and Gemini are actually associating the right instructions to the right topics / sections? How do we bridge what we see as a no-brainer to what an LLM would interpret as just another sequence of tokens?
      • Enter, Heading levels. There are numerous prompt engineering frameworks out there, such as C.O.R.E. (Context, Objective, Role, Example) and T.R.A.C.E. (Task, Request, Action, Context, Example). Still, you’ll quickly notice that most example prompts using these frameworks leverage heading levels to separate each of the letters in the acronym.
    • Bolding Words – I typically use bolding (which looks like **this** wherein “this” would be bolded) when instructing Claude Code to **NOT** do something. Probably my most common use case is: “Do **NOT** edit the codebase. This is strictly a read-only audit of [X].”
    • Lists – pretty straightforward on this one: use numbers plus periods for ordered lists and hyphens for unordered lists. The only mildly annoying thing is that if you’re like me and often number lists with parentheses (e.g., “1)” or “2)” and so on), it’s going to take a little getting used to.
    • Code Blocks – Personally, I do not like being prescriptive with Claude Code since I’m not technical enough to be prescriptive. More often than not, I lead Claude Code astray. However, if you’re more familiar with and confident in the technicals, you can clearly indicate what is meant to be a code block and what isn’t. The less you need to rely on Claude Code to interpret your text input, the more likely that it’ll actually do or deliver what you want it to. 
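    To tie the four elements above together, here’s a toy snippet – the audit scenario and section names are just my illustration:

```markdown
# Codebase Audit Instructions

## Scope
Do **NOT** edit the codebase. This is strictly a read-only audit.

## Steps
1. Review the authentication flow.
2. List any hardcoded or placeholder values.

## Notes
- Write findings to a new markdown file.
- Wrap any quoted code in a fenced code block (three backticks).
```

    Notice how the heading levels do the sectioning work that bolding and spacing would normally do for human readers.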

    Of course, there are many more nuances to markdown syntax than just the above, but that’s not the focus of this blog post. Instead, I just wanted to provide a quick overview for those who might not have encountered them in the past. 

    Diving into Pre-Development Documentation for AI

    As a non-technical individual, I’ve found that pre-development documentation is the one step that I absolutely cannot skip. With these, my projects usually make real progress, if not reach completion. Without these, I usually get stuck in debugging rabbit holes, deployment (and redeployment) hell, and hallucinations galore. This is especially the case as the codebase gets bigger or chat histories get long.

    Furthermore, excellent pre-development documentation also provides you with flexibility. There’s no need to be tied to the conversation history of a specific vibe-coding tool you’re using. If you find that your current tool is insufficient, the rational decision would be to try something else (fake shoutout to my Economics degree, since I wasn’t the best student). Rather than having to start from scratch, your pre-development files (where most of the context is laid out already) allow you to leapfrog the initialization pains and, sometimes, even one-shot most of your app from scratch. 

    I actually provide an example of precisely the above at the bottom of this post under “Final Words.”

    Below are some of the questions I had when I first came across the idea, along with my non-technical responses. Although they come from a non-technical perspective, they’re based on countless trial-and-error experiences that I hope will help you avoid the same pitfalls.

    A. So what are these pre-development documentation files you speak of?

    In a nutshell, I’m talking about scoping out the project before development begins. Despite not having worked a day in construction, I would liken it to creating the blueprints, deciding on materials, and designing the layout of a new-build house before bringing in the workers. Side note, I watched my childhood home get built from the ground up, which is where the inspiration for this analogy came from. 

    These pre-development documentation files help paint the picture for Claude Code and also limit the amount of decision-making that Claude Code needs to do. I view Claude Code as a delicate daisy chain going from your idea and inputs to the ideal output and final product. Each additional link makes the whole chain weaker, and the more you rely on Claude Code to make decisions in the middle of an action (e.g., deciding which libraries to use while it’s in the middle of implementing a feature), the less likely you are to get an error-free result.

    Layering onto the prior topic, the easier it is for Claude Code (or really, any other LLM) to interpret your documentation, the greater the likelihood that things are done without too many hitches. I’m not promising zero, though – that’s nearly impossible.

    To the degree that I can, I limit vibe-coding tools’ scope to pure implementation where possible, and a large part of how I do this is through the pre-development documentation.

    B. What pre-development documentation files should I make then?

    If you had asked me in May 2025, I would have given you one answer. If you had asked me in June, I would have given you a different answer. And if you asked me right now, I’d give you another answer still. That’s not to say that I’m fickle-minded.

    In truth, back then, I couldn’t accurately anticipate what my project would require on the implementation side. I was also worried that by being too descriptive in the pre-development documentation, I would be too prescriptive toward Claude Code and lead it astray (similar concern to the “Code Blocks” section above). However, as time passed and I pursued more passion projects, both of these factors became less of a barrier; therefore, my answer has evolved.

    If you’re creating pre-development documentation for the first time, it’s fine to keep it simple, such as a PRD.md (Product Requirements Document). In fact, that’s also where I started. Nevertheless, several “let’s just start from scratch” moments of frustration later (i.e., I started the same passion project anew), I’ve since expanded the collection of pre-development docs that I make before the shovel meets the ground, so to speak. 

    For context (pun intended), here’s a tree from my current passion project’s root folder (stored in `docs/`):

    docs
    ├── design
    │   └── DESIGN_SPEC.md
    ├── legal
    │   ├── PLATFORM_POLICIES.md
    │   └── TERMS_AND_CONDITIONS.md
    ├── product
    │   ├── CONTENT.md
    │   ├── FAQ.md
    │   ├── PRD.md
    │   ├── SITEMAP_IA.md
    │   └── USER_FLOW.md
    ├── security
    │   └── SECURITY.md
    └── technical
        ├── DATABASE_SCHEMA_ERD.md
        ├── DEV_PRIORITY_GUIDE.md
        ├── OAUTH_SECURITY_FLOWS.md
        ├── TECH_SPEC.md
        └── VERSION_CONTROL_DOC.md

    To clarify, within `docs/` I have several folders, and within the folder groupings, I have the pre-development docs (all of the .md files). 

    Some of the above may not apply if your project doesn’t require things like legal or OAuth, but I’ve found that the following are my personal must-haves:

    1. PRD.md – your core product requirements
    2. TECH_SPEC.md – your technology stack, architecture, API setup, etc.
    3. DATABASE_SCHEMA_ERD.md – your database schemas
    4. DESIGN_SPEC.md – overall look+feel and guiding design principles (there’s the marketer in me)
    5. CONTENT.md – if you have specific text or guidance for text (e.g., email templates)
    6. USER_FLOW.md – expected average user flow (or the ones you care about the most)
    7. SITEMAP_IA.md – If your web application has multiple pages, it’s helpful to define them upfront
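    If a blank page is intimidating, here’s a bare-bones PRD.md skeleton to react against – the section names are my own habit, not any formal standard:

```markdown
# Product Requirements Document: [Project Name]

## 1. Overview
One paragraph on what the app does and for whom.

## 2. Problem Statement
The pain point this project solves.

## 3. Core Features (MVP)
1. Feature one, with its acceptance criteria
2. Feature two, with its acceptance criteria

## 4. Out of Scope
- Anything you are explicitly NOT building in v1

## 5. Success Criteria
- How you'll know the MVP "works"
```

    The “Out of Scope” section, in particular, has saved me from a lot of unprompted feature creep.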

    Honestly, if you asked me again in 1-2 months, I wouldn’t be surprised if a new addition (or several) joined the list. For example, I regret not having created a “BUSINESS_MODEL.md” document before I started my current passion project. Going through the brainstorming / ideation phase for one forces you to consider all of the parties / stakeholders involved, as if your project were an external one intended to eventually generate revenue. In my case, I wanted to learn how to integrate Stripe since I had never dealt with it before, so scoping out a “BUSINESS_MODEL.md” beforehand would have saved me some pain.

    Of course, each person and project will differ, so please view the above as general advice and lessons from my experimentation, rather than gospel. 

    C. How do I create them? Are you asking me to become technical and a novelist?

    No – as much as I’m blogging again to get back into writing, I’m not so masochistic that I want to spend a week or two just writing pre-development docs between 8 PM and 2 AM Pacific Time for a passion project. Especially with how work picks up anywhere between 6 AM and 8 AM these days, I don’t think even a masochist would enjoy that.

    My process for creating pre-development docs involves roughly:

    • 10% typing in what I want
    • 80% reading the outputs from various LLMs
    • 10% making edits, asking clarifying questions, prodding the LLM for further detail, etc.

    Of course, this is just my process and it varies from person to person (and even from project to project).  

    For technical questions and remarks like “I don’t know libraries, much less which library to use for a specific function!?” – don’t worry. After all, we are in the age of AI. Unlike traditional Google searching, you can just send whatever you have decided on already into an LLM and ask, “What would be the best library for [X] that’s the easiest to implement for MVP purposes?” and you’ll get a slew of recommendations. 

    All that said, here’s my summary of how to create pre-development documentation:

    1. Choose your favorite brainstorming companion: My “bias” is toward Gemini since my personal experience is that it’s less sycophantic than ChatGPT and Claude. Usually, I have some semblance of an idea and I take it to my “bias” (this is a K-pop reference, by the way) to see what direction I want to take it while still being reasonably achievable.
    2. Begin brainstorming with your idea: This phase depends heavily on how baked (or half-baked) your existing idea is. This isn’t Y Combinator after all – concepts like “tarpit” ideas and founder-market fit don’t apply when you’re just tinkering around. Whether your idea is a classic “hello world” page, a dad-jokes engine, or it’s something that’s actually work-related – I don’t think there’s any downside in giving it a shot. 
    3. Turn idea into PRD_v1: Once your idea is fully baked, request your bias to ask any final clarification questions it may have after all of the brainstorming. Answer any questions it may have, and then ask it to generate your PRD_v1 in “markdown syntax within one code block” to make it easier to copy and paste elsewhere. I’d recommend spelling PRD out just in case.
    4. READ PRD_v1 & Edit: This is incredibly important and often overlooked, especially if you have the tendency to trust AI to the point of blind confidence. Actually read the PRD_v1 and treat it as the foundation of the house – if it’s botched, so is the rest of the house. Make corrections, adjustments, tweaks, or complete rewrites as needed. This can be done either in coordination with your preferred LLM or directly within a text editor.
    5. Create the other pre-development docs: Once you have created the PRD_vF.md (vF as in “version Final” – a callback to my Finance days), you should start a new chat, upload the file, and ask it to create the remaining pre-development docs, starting with TECH_SPEC.md. I prefer to start with TECH_SPEC.md followed by DATABASE_SCHEMA_ERD.md (if needed), since they form the basis of your application’s backend. Of course, TECH_SPEC.md often includes frontend specifications as well. However, I’ve generally ended up in more debugging rabbit holes with the backend, so I take care to read through and ensure that these two critical files are in good condition before proceeding with the rest.
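    As an example of step 5, the kickoff prompt in the new chat can be as simple as the following (the wording is my own illustration, not a magic incantation):

```
I've attached PRD_vF.md, the finalized product requirements for this project.
Please create a TECH_SPEC.md covering the technology stack, architecture, and
API setup. Ask me any clarifying questions first, then output the document in
markdown syntax within one code block.
```

    Repeat the same pattern for each subsequent doc, attaching the files you’ve already finalized so the new one stays consistent with them.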

    It’s a classic “go slow to go fast” process that feels painful in the moment but pays dividends down the line. I also understand that it’s somewhat antithetical to the “deliver a wow moment within five minutes of using it” experience that vibe-coding has become synonymous with. 

    Note that in my summary above, I’ve avoided including the countless rounds of reading, iterating, and clarifying that may actually be involved. I would expect this process to take anywhere from an hour for a small web app to several hours for a fully functioning app (assuming no breaks). You want these pre-development docs to serve as the “north star” throughout the development process, and a well-designed set can save you countless hours later on.

    I understand this can be a little abstract, so let me know if screenshots would be helpful – always happy to supplement this post with examples and sample prompts, if needed.

    What to do After Creating Pre-Development Docs

    Assuming that you’ve already installed an IDE with a terminal (e.g., Cursor, VS Code) and Claude Code, the rest is a breeze. If you haven’t yet, I’d recommend this guide, watching setup videos on YouTube, and asking LLMs for specific guidance on any problems that you run into! Candidly, I’m only familiar with my current environment, so I don’t want to overstep and potentially lead you astray.

    After your pre-development docs are created, do the following:

    1. Create a new folder for your passion projects (if you don’t already have one)
      • I would highly recommend against using spaces in any of your folder names – Claude Code often has problems executing bash commands when paths contain spaces. Hyphens, snake case (underscores), and camel case (e.g., likeThisFormat) all work much better. For example, I previously had a folder called “AI Projects” that caused so many permission requests that I eventually just cloned my passion project’s repo into a new “aiProjects” folder – unnecessary permission requests from Claude Code immediately dropped by 80%.
    2. Create a folder for your new idea (again, no spaces in the folder name!)
    3. Create a “docs” folder within the new idea’s folder
      • You do not have to follow the structure that I used for my passion project. I just structured it like that since it was easier for me to track within Cursor’s file navigation menu. 
    4. Open the project in your IDE of choice so that you’re already in the correct folder
    5. Activate Claude Code (enter ‘claude’ in your Terminal within Cursor) and then enter the slash command ‘/init’
      • Since you should only have your pre-development documents in the project folder at this point, Claude Code should establish a strong understanding of the project vision, its product requirements, and the rest of what is needed to bring the idea into reality!
    6. After the ‘/init’ command has finished and the CLAUDE.md for your project has been created, I also like to update it with links to the pre-development docs and when to refer to them. This can be a simple prompt in Plan Mode (shift+tab with Claude Code active to toggle until you see “Plan Mode”):

      “Please review all of the files in @docs and update CLAUDE.md with a new “Links to Documentation” section. Group the documentation file by topic area (e.g. Product Documentation, Technical Documentation, etc.) and provide the path links to the files themselves as well as when to refer to them.”
      • When starting out, I would definitely recommend abusing Plan Mode if your subscription plan supports the token usage. 
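    For the terminal-averse, steps 1 through 5 can be sketched as shell commands – the folder and project names below are just my illustration, so use your own (and remember: no spaces):

```shell
# Steps 1-3: create a spaces-free project structure (names are illustrative)
mkdir -p aiProjects/my-new-idea/docs

# Move your pre-development docs into the docs folder, e.g.:
#   mv ~/Downloads/PRD.md aiProjects/my-new-idea/docs/

# Step 4: open aiProjects/my-new-idea in your IDE of choice
cd aiProjects/my-new-idea

# Step 5: in your IDE's terminal, start Claude Code...
#   claude
# ...and then enter the slash command inside Claude Code (not the shell):
#   /init
```

    Note that ‘/init’ is a Claude Code slash command, not a shell command – it only works once Claude Code is running.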

    If done right, you should have a development-ready CLAUDE.md file with a section in it that looks like the following:

    ## 9. Links to Key Documentation
    
    ### **Product Documentation**
    - `/docs/product/PRD.md` - Complete product requirements v2.0
    - `/docs/product/USER_FLOW.md` - User journey mapping
    - `/docs/product/SITEMAP_IA.md` - Navigation structure
    - `/docs/product/CONTENT.md` - Brand voice and messaging
    - `/docs/product/FAQ.md` - User-facing questions and answers
    
    ### **Technical Documentation**
    - `/docs/technical/TECH_SPEC.md` - Complete technical specification v2.0
    - `/docs/technical/DATABASE_SCHEMA_ERD.md` - Database design and relationships
    - `/docs/technical/OAUTH_SECURITY_FLOWS.md` - Authentication implementation
    - `/docs/technical/VERSION_CONTROL_DOC.md` - Git workflow and versioning
    
    ### **Security & Legal**
    - `/docs/security/SECURITY.md` - Security requirements and implementation
    - `/docs/legal/TERMS_AND_CONDITIONS.md` - Platform legal framework
    - `/docs/legal/PLATFORM_POLICIES.md` - Community guidelines
    
    ### **Design & Content**
    - `/docs/design/DESIGN_SPEC.md` - Visual identity and UI specifications

    Final Words

    And that’s it! That’s the exact process I follow when starting a new passion project. It’s a little tedious, requiring more upfront investment and reading, but it’s become my preferred approach to save a lot of time during development and debugging. 

    The best part of this approach is that the pre-development docs are “universal” in the sense that you can bring them from one vibe-coding tool to the next, offering you some flexibility if you decide that the current tool you’re using is insufficient. 

    For example, I took about 3 hours fleshing out the pre-development docs for an ROI calculator that I was working on. It was inspired by work, but really just a passion project of my own since it seemed like a great personal learning experience. I was using Claude Code to develop it, but at some point, I was curious about whether or not it would be better in Replit. Since I was still developing and using localhost ports to preview and test, I figured Replit’s built-in preview capabilities might be better suited for the stage I was at.

    Lo and behold, I fired off one quickly constructed prompt, attached my pre-development docs, and Replit was able to recreate all of the functionality except the chart visualizations:

    Image 1: Screenshot of the Replit agent successfully building an ROI calculator app from pre-development documents in a single prompt.
    Image 2: Screenshot of the Replit agent completing the initial build in 24 minutes, costing $2.35.
    Image 3: Screenshot of the Replit agent confirming that the initial build was finished with no additional input from my end.

    There is literally no other interaction beyond the initial message I sent – nothing hidden, cut, excluded, etc. Just the beauty of constructing robust enough context and enough pre-anticipated questions answered through the pre-development docs that the Replit Agent could go to town on the implementation. 

    All of the ROI calculations were correct as well, since the formulas and relationships were all mapped out in the pre-development documents. The only things that were missing were more robust chart visualizations and solid PDF-generation capabilities – both of which I had neglected to provide specific libraries to use in the pre-development docs (read: oversight on my part). 

    And that’s it! Do let me know via socials (About page) if this was helpful or if more detail, rather than abstraction, would be helpful for future posts.

  • Personal note: If you hate what you’re reading, I get it. If you are somewhat amused, I appreciate you. Part of me is also trying to become a more engaging writer without relying on AI-slop and AI-assisted thinking (I think it’s slowly deteriorating my brain). Apologies for typos.

    Overview:

    1. The Backstory (feel free to skip, but I think it’s mildly amusing at a minimum)
    2. First Encounter(s) | Summer 2023
    3. Meeting Again | Late-2024
    4. Moment of Contextualization (read: Realization) | Early-May 2025
    5. 💡 Lightbulb Moment 💡
    6. Transition to CC (Claude Code) | Late-May 2025

    The Backstory

    When I was younger, my parents strongly encouraged me to learn software development after witnessing the rise of technology and the mass demand for software engineers’ skill set. To be clear, they were never strict or forceful. Rather, it was endless streams of “Why play games when you can make games?” As first-time parents, they struggled to see the absolute flaws in that statement when I clearly showed no sign of being a child prodigy.

    While I openly rejected the idea, at some point I thought “Alright, if they’re pushing it this much… maybe it’s worth at least checking out.” For reasons I can’t even remember at this point, I started looking into C++ online and even had a blog where I documented my learnings (similar to this one, just with way worse vocabulary and even less finesse). It functioned as part notebook, part journal and my early-teens self didn’t think it was strange at all. Somewhat of a weird kid I was, reflecting back, but a fun time nevertheless.

    After several months of self-studying C++ and garnering questions on some blog posts that I simply couldn’t answer after literally summarizing chapters from a C++ textbook, I came to one conclusion: I really didn’t like coding. The fact that it was C++ of all languages didn’t help either. My parents naively thought coding was “just another language” akin to English or Mandarin, but both the me back then and my current self would strongly disagree with that statement.

    Can you imagine a normal-language conversation like the following?

    1. You: “hahaha, that is so funny”
    2. Them: “No clue what you just said. You forgot the period.”
    3. You: “hahaha, that is so funny.”
    4. Them: “Error: ‘that’ was not declared in this scope.”

    Needless to say, the syntax-heavy nature of C++ as well as the tedious debugging that was a part of every project was taxing. I was quickly losing interest in C++ / coding. Meanwhile my days were increasingly consumed by training and running for the high school Cross Country and Track & Field teams.

    With a final cringe-worthy post, signing off for the last time, I ended the blog until it eventually got archived by WordPress. Dying alongside my blog was any interest I had in coding – until 2023 came along.

    For fun, here’s a glimpse below, in case you were interested in how embarrassing I was as a 14-year-old:

    Screenshot of Timo Yi's old blog focused on documenting his journey when learning the coding language, C++.
    Image 0: I actually did dabble in Python after, but didn’t restart the original blog or start a new one.

    First Encounter(s) | Summer 2023

    Like many others, I was both excited and skeptical after ChatGPT’s release. However, it wasn’t until the release of Claude and Gemini ahead of summer 2023 that I really, really started paying attention. The main reason for this shift wasn’t the existence of multiple LLMs or the seemingly endless clickbait articles and YouTube videos. Rather, I was working at a company that gave a lot of autonomy to its employees, and I’ve always been a huge believer in “templatizing” workflows and files so they’re reusable in the future. I started looking for ways to shortcut my daily workflows and mundane tasks so that I could spend more time doing literally anything else.

    Unfortunately, I was restricted by my own incompetence. My experience went like:

    1. Installed VS Code – Having used Eclipse back in the day, I knew I needed a proper IDE, a belief reinforced by Verkada colleagues on the Growth Engineering team.
    2. Asked ChatGPT to Create the Script – We had the more advanced plan through work, so I wasn’t going to not use it.
    3. Got Stuck – I can’t recall what I messed up exactly, but it was something really stupid like configuring environment variables wrong.

    After getting all sorts of rabbit-hole troubleshooting from ChatGPT (all missing the real cause), I rage-quit my attempt in frustration.

    I concluded it wasn’t yet time for me to delve back into coding.

    Meeting Again | Late-2024

    Almost in rom-com fashion, this curiosity snuck back into my life when I started seeing Replit all over my way-too-curated Google app (yes, the app – it’s one of my faves). Amidst the random “new restaurant in SF” and eSports-related articles, the hype around Replit still left its mark in my mind (this is not sponsored).

    Around November 2024, I finally succumbed to the itch and gave Replit a shot. Proof that Marketing does do something instead of just making pretty colors and conversing about abstract concepts that don’t mean anything. I guess it only took two months of my personalized feed telling me it was the “undisputed greatest thing for non-technical individuals” to finally convert me into a free trial user. Joking, (hopefully) obviously.

    Looking back, my first two projects were comically over-scoped and skipped the MVP phase:

    1. CyberhavenInsightHub – a central hub for our Sales reps to manage prospects & accounts, with any changes or updates being auto-populated by NLP into their respective SFDC fields. It was a grandiose attempt at bringing Sales reps out of Salesforce and into a better version of an internal Retool app. Nothing original, to be frank, but it required a lot to be connected.
    2. CyberHavenConnect – an internal “who knows who?” tool aimed at unifying LinkedIn connections, email interactions, and personal contact books (i.e., from phones, Instagram, etc.) into a single app that our sales reps could leverage to break into target accounts in their territory. Also not original, and it required a lot to be connected and configured.
    Image: Timo Yi’s first two Replit projects from November 2024

    Unsurprisingly, these ideas remained as just ideas.

    I felt that it was user error more than anything else. I knew my prompts weren’t stellar (just like my Dad’s prompts – shoutout to my Dad, who still calls ChatGPT “GatGPT”). I was also using imperfect diction, which was misleading Replit into doing things that it would consider “correct.” For example, using the word “redirect” to describe page navigation in the same prompt as an OAuth flow configuration that also mentioned “redirect URI” can be confusing. The final nail in the coffin was realizing that I didn’t have the security permissions to connect to all of the applications and data sources that I needed.

    Frustration existed, sure. But you gotta hand it to Replit – they are damn good at delivering wow-moments and repli(t)cating the feeling of someone tirelessly working on your instructions and needs. The real-time preview adds to this, as you see something emerge from nothing more than a poorly phrased prompt. The non-technical marketer in me was impressed.

    Even when I was in a clear debugging rabbit hole that was never going to resolve itself, I giddily awaited the next “All problems have been resolved!” (a sidebar-nod to ChatGPT’s notorious sycophancy in GPT-4o). It felt addicting. It’s clearly gamified, but the child in me that spent countless hours playing tower-defense games couldn’t get enough of it.

    Looking back, Replit was the gateway drug that led to this blog and post.

    Moment of Contextualization (read: Realization) | Early-May 2025

    Please excuse the dad-joke title.

    As 2025 started and weeks turned into months of the new year, I didn’t embark on any more projects. It’s a poor excuse, but I was in a rut and couldn’t think of a project that wouldn’t require access to our internal systems (i.e., wouldn’t die at the permissions stage). Time passed as I observed the evolving marketing industry get swept up in a craze of Generative Engine Optimization (GEO) / AI Optimization (AIO) / Answer Engine Optimization (AEO). You’ve probably guessed it, but this acronym soup became the basis of my third project.

    So why did I give this section such an awful dad-joke title? Because my third project made far greater progress than any of my prior attempts due to the added context that was abundant throughout the development process. Ironically, the added context was introduced completely unintentionally.

    Right around the time of RSA, my trusty Google app surfaced an interesting article where an agency / software provider (I didn’t look into it deeply) introduced their new AEO online tool. The provider walked through each screen in the tool, explaining exactly what it was intended to achieve and how to operationalize the insights gleaned. Naturally, I poked around and even tested it out with some of Cyberhaven’s blog posts. Things were going well and I was fascinated – until I hit a paywall.

    Now, I get the value of a gated asset / functionality. Whether it’s traditional B2B lead generation or part of a more modern PLG strategy, I don’t have any qualms with handing my information over for an asset of perceived value. Unfortunately, this was a damn tall paywall.

    If memory serves, it was $5,000 to access the final results of the report.

    Chalk it up to frustration, a sense of self-disappointment, or straight-up arrogance, but I remember muttering a stereotypical: “I’m sure I can build this.” Hindsight being 20/20 – I was actually (semi-)right on this one.

    Armed with the detailed article that walked through the tool and screenshots of the tool (which was un-gated except for the final results screen), I headed to Replit to reverse engineer the tool that dared paywall me.

    I figured that with the article and numerous screenshots in hand, I could definitely one-shot the application. Naivete at its finest, I suppose. What I thought would be a day’s worth of work turned into several days, and several days turned into a week. Admittedly, a week still isn’t too long, but I definitely worried at certain points that maybe I was overzealous when I said “I’m sure I can build this.”

    After the week came and passed, the final result was… surprisingly good?

    It’s far from perfect, but here’s what I ended up with (sample analysis results are from a Cyberhaven blog post on AI Adoption & Risk):

    Image 1: Homepage of the AI-powered Content Analysis tool (note that I didn’t come up with the name – Replit auto-generated one for me and I was too unconcerned to actually change it…)
    Image 2: High-level similarity analyses of the content extracted from the blog URL after turning it into embeddings and applying semantic clustering.
    Image 3: Content clustering analysis and identifying the top keywords for each content cluster.

    💡 Lightbulb Moment 💡

    Creating the tool above was my moment of contextualization – my lightbulb moment. Going from “Ideas that were never going to see the light of day” to “Oh sh*t, it’s actually working” inevitably triggered some introspection. What did I do differently this time around? Was not having to connect our systems the reason why it was easier to implement? Or maybe my prompting was just that much better this time around after another half-year of using AI tools and studying up on prompt engineering?

    After doing a lot of research on “Replit best practices” and trying to recreate the project in other tools through mere prompting (excluding the blog post and screenshots), I concluded that the difference was something else. The combination of the blog post serving as a ‘USER_FLOW.md’ and the screenshots serving as a set of botched Figma mockups / ‘DESIGN_SPEC.md’ made the difference. These were the missing pieces in prior projects, and they accelerated this one from “I’m sure I can build this” to “Oh, it’s working” within a week.

    The difference was context. My prior project prompts were severely lacking it. It was like handing Gordon Ramsay a single, perfect chili pepper, telling him “I want an authentic chili oil,” and expecting him to recreate my mother’s legendary homemade chili oil. More likely than not, he’d recreate a version of the famous Lao Gan Ma chili oil, whereas my mother’s chili oil is infinitely better (biased take). But hey, I wouldn’t fault poor Gordon – how could he know what I was picturing in my head and the taste I was looking for unless I gave him the context?

    Personal note: I adore cooking TV shows (watched too much Food Network growing up) and love Chinese cuisine (specifically Hunan, since my family is from Changsha).

    Transition to CC (Claude Code) | Late-May 2025

    I was excited by the relative success of the AIO / GEO / AEO tool (too many acronyms…) and continued playing around with other projects for a few weeks. Sidebar: I am not joking – these stand for AI Optimization (AIO), Generative Engine Optimization (GEO), and Answer Engine Optimization (AEO), literally the Potayto-Potahto, Tomayto-Tomahto of Marketing right now. Back to the main point: the more I experimented, the more I noticed a recurring set of problems:

    1. Lack of awareness of the full codebase and of prior changes or attempted fixes (which often failed)
    2. Heavy use of hardcoded / placeholder values – admittedly helpful for delivering the MVP wow-moments that Replit, Lovable, and other vibe-coding tools are known for
    3. Debugging and getting “the last 10%” right was a painful process involving countless rabbit holes, hallucinations, retries of the same fixes that had already failed, and more
    4. Prior to the introduction of Replit Auth, successfully implementing authentication and OAuth was a circuitous journey – a mix of user configuration error and over-engineering
    5. Accepting Replit’s suggestions for “additional features” during the planning phase is functionally a death sentence: more scope creep, over-engineering, and hallucinations

    To be clear, I love Replit. In fact, there were actually quite a few more issues that I ended up sharing with a friend on Replit’s Product team. The fact that Replit handled configuration and deployment for me was a massive boon. The fact that I didn’t have to worry about managing secrets, setting up third-party databases, and being able to rely on Replit to analyze its own logs without endless copying and pasting was also incredibly efficient for a non-technical user like myself.

    However, I was curious to see if other tools had solved some of these issues. After a lot of wandering through online forums, Reddit threads, helpful YouTube videos, and truly cringe YouTube videos, I finally decided to give Claude Code a shot. Note that I completely missed the research preview phase since I’m not that early of an adopter. After all, these passion projects / side projects are things that I work on after-hours and on weekends when I’m in the tinkering mood.

    I was drawn by the promise of full codebase awareness, since I had been struggling with vibe-coding tools frequently introducing regressions, redundancies, or straight-up conflicting approaches once a project got big enough. My inability to manage installing programs and packages or to set my PATH correctly further drew me to the Command Line Interface (CLI) approach, which could cover for my incompetencies. Logically (at least to me), others quickly followed suit.

    More on Claude Code in the next post – this is more of a “Hello World.”


    Learnings, Takeaways & Overall Thoughts:

    • Positives:
      1. Libraries – What a blessing to not have to worry (as much) about libraries and deprecated functions or syntax. I was able to focus a lot more on the stuff I found interesting: the idea, logic, and user flow/experience.
      2. Requirements & Compatibility – This is more related to dabbling in Python, but I never went through an initial setup session (pre-LLM world) where I didn’t run into compatibility or requirement issues.
      3. Memory Management – I barely even touched on this in the C++ blog days, but I always perceived it as the “big bad wolf,” only to realize it was completely abstracted away in the modern languages that vibe-coding tools use.
      4. Frontend Design – Some people rave about Lovable on this one, but honestly I find the differences negligible. What I can confidently say is that I would not be able to create the frontend designs that Replit creates even if I were provided 100x the time. After all, I can use Figma almost as well as I used Excel back in my Finance days, but I honestly can’t code for crap (even though HTML and CSS are more intuitive than C++). Tools like Replit and Claude Code helped me bridge the chasm between mental image and frontend design.
      5. Wow-Moments – Again, this is the Marketer in me speaking, but it’s hard for me to describe how shockingly satisfying the first hour of using Replit was.
    • Negatives:
      1. Lack of Understanding – Yes, I felt more capable than ever before when it came to creating something out of nothing. But do I have any understanding of why Replit would choose one implementation approach over another? Absolutely not. I almost wish part of Replit’s onboarding experience included an “Okay, just how clueless are you?” followed by “Do you actually care that you’re clueless?” If they had asked, I probably would’ve responded “1/5 clueless” but “Yes – absolutely,” as I later found myself learning about core concepts by asking Replit, then Claude Code, what was implemented and why.
      2. Scope Creep Temptation – This was probably the biggest trap I fell into the first 3-4 times I used Replit. During the planning phase, it almost always asked if I wanted to implement any other features the Replit agent deemed relevant. At times, the suggestions felt irrelevant; at others, the agent felt like a sage that understood exactly what my vision for the project was. Every time, though, accepting them resulted in scope creep, over-complications, and way more headaches down the line when it came to debugging.
      3. Having to Rein Myself In – With a) new features being recommended for implementation, b) the feeling that an idea could turn into reality in minutes, and c) the ability to turn rough sketches / screenshots / Figma mockups into a webpage, it was hard to rein my own creativity in. For any non-technical individual who’s frequently thought “Man, if only I had learned computer science and coding when I was younger,” it was like Pandora’s box had been opened. If anything, I found myself so distracted that I was firing off prompts for different projects and letting them run concurrently. It became difficult to dedicate myself to any one project. Even for the AIO / GEO / AEO / AEIOU project, there were several competing interests I was dabbling in simultaneously. In case it’s not obvious, that last acronym is fake – unless Marketers get even more extreme and create an Answer Engine Inquiry Optimization & Understanding acronym for sh*ts and giggles.

    As the title implies, I am primarily using Claude Code these days, but that doesn’t mean I don’t remain a massive fan of Replit. In fact, I’ve found it incredibly helpful to use Replit to one-shot MVPs (where possible), push to GitHub, and then clone the repo to use Claude Code for debugging, the last 10%, code explanations, and more.

    I won’t say this is the optimal approach, but it certainly shortcuts my process.

    Hope you enjoyed my chaotic recap of the last two years! 🙂 The next one won’t be as far back in time.


    From C++ to CC (Claude Code): Complete.

Timo Yi's Blog