AI Is Not the Answer. You Are.
The Problem-First Framework for Making AI Actually Work
1. The Night I Tried ChatGPT for the First Time
It was late 2022, and I was sitting in my apartment in Taipei, laptop open, a cold cup of tea on the desk beside me. A friend had sent me a link that afternoon with a one-line message: "You have to try this." The link went to something called ChatGPT. I had read a few breathless articles about it. I was skeptical in the way that someone who has spent 23 years in B2B sales learns to be skeptical about anything that promises to change everything.
I typed my first prompt: "Write me a sales strategy for a B2B technology accelerator in Taiwan."
What came back was impressive. Clean paragraphs. Structured thinking. The language of someone who had read a lot of McKinsey decks. I read it twice and thought: yes, this is sophisticated. Then I looked at it a third time and realized: this is also completely useless for me specifically. It was good-looking advice for an abstract accelerator that shared no important characteristics with iiinno — my actual accelerator, with my actual clients, my actual positioning problems, my actual relationships in the Taiwanese startup ecosystem.
Over the next three months, I tried again and again. I switched to Gemini when that launched. I tried Claude. I watched YouTube tutorials about "prompt engineering" and bought a course about building AI workflows. And the results kept being the same: impressive-looking outputs that didn't change anything about how my business actually worked.
The tools weren't the problem. I was.
Not because I was stupid. Not because I wasn't trying hard enough. But because I was asking entirely the wrong question. The question I kept asking was: which tool is best? The question I needed to be asking was: what specific problem am I trying to solve?
This post is about the shift from tool-first thinking to problem-first thinking — and what that shift made possible for me as a solopreneur. By the time I figured this out, I had built 8 AI agents that now run alongside me every day. I spend roughly $200 a month on all of them combined. They give me back approximately 22 hours a week. None of that happened because I found the best tool. It happened because I finally learned to define the problem first.
2. The Wrong Question Everyone Is Asking
Open any LinkedIn feed right now and you'll find the same debate playing out in slightly different costumes: ChatGPT vs. Claude vs. Gemini vs. Llama. Which model writes better? Which one codes better? Which one is worth paying for?
I understand why this question is appealing. It feels like a solvable problem. Pick the right tool, unlock the right result. It has the satisfying logic of a consumer choice: if I just do enough research and select correctly, I'll get the outcome I want.
But this is not how expertise works. And it is definitely not how AI works.
Think about the best craftspeople you know — in any domain. A great carpenter doesn't spend most of their time agonizing over which brand of chisel to buy. They've developed a clear sense of what they're trying to build, which skills the work requires, and which tools serve those skills. The tool choice is downstream of clarity about the work. When clarity is missing, no tool — however excellent — compensates for it.
This is the trap almost every solopreneur I know has fallen into with AI. They subscribe to three or four services, feel the dopamine hit of early novelty, generate some content that looks good but doesn't convert, and eventually conclude that "AI doesn't really work for my business." The AI worked fine. The problem was never defined clearly enough for the AI to do anything genuinely useful with it.
3. The Amplifier Principle: AI Amplifies What You Already Are
Here is the most important thing I have learned about AI after three years of using it every day:
AI is an amplifier. It makes what you already do faster, louder, and more voluminous. What it amplifies is entirely up to you.
If you are someone with clear thinking, a sharp sense of your audience, and a defined problem you're solving — AI will help you produce that clarity at scale. You'll be able to create in an hour what used to take a day. You'll be able to test ideas that used to require weeks. You'll be able to serve more people, more consistently, with less friction.
If you are someone who is still figuring out what you're doing, who doesn't know your audience deeply, who hasn't yet done the hard inner work of deciding what you stand for and who you're here to help — AI will help you produce more confusion, faster. You'll generate ten landing pages that say different things. You'll write content in five different voices. You'll build workflows for problems you haven't actually diagnosed.
The tool doesn't give you direction. It executes in the direction you're already moving.
This is why the Coastline framework is structured the way it is. The first e-book — Coastline — is entirely about knowing yourself: your roles, your values, your operating system for decisions. The second — How to Earn Your First Dollar — is where the tools come in. In that order. Because amplifying an unclear signal produces a louder unclear signal. And a louder unclear signal is worse than a quiet one, not better.
🧭 Start with the Foundation First
Before AI can work for you, you need clarity about who you are and what problem you're solving. The Coastline newsletter covers both — the inner work and the outer tools — in the right order.
Join Free → Free E-Book Chapter

4. The Problem-First Framework
The framework that changed everything for me has four steps. They're not complicated. But most people skip straight to Step 4 — which is why most people are disappointed.
Step 1: Name the specific problem. Not a wish, but a documented situation with a time cost and a measurable current state.
Step 2: Define what solved looks like. What metric changes, what your week looks like differently.
Step 3: Identify where human judgment is genuinely required versus where you're just slow.
Step 4: Only then, select the tool.
Four steps. But notice where the work actually is: Steps 1 through 3. That's the thinking. Step 4 is fifteen minutes of research and setup. The ratio of thinking to tool-selecting should be roughly 80:20. Most people do it 5:95: they spend 5% of their time thinking about the problem and 95% evaluating tools. And then they wonder why the tools don't deliver.
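The four-step order can be made concrete as a pre-flight checklist: tool selection stays locked until the thinking steps are filled in. This is a minimal sketch, not any real tool's API; the class, field names, and example values are all illustrative assumptions.

```python
# A sketch of the problem-first framework as a pre-flight checklist.
# Every name and example value here is illustrative, not from a real system.
from dataclasses import dataclass, field

@dataclass
class ProblemBrief:
    problem: str = ""            # Step 1: the specific, documented problem
    hours_per_week: float = 0.0  # Step 1: measured time cost of the current state
    solved_looks_like: str = ""  # Step 2: what changes when it's solved
    judgment_steps: list = field(default_factory=list)  # Step 3: steps only you can do
    handoff_steps: list = field(default_factory=list)   # Step 3: pattern-based steps
    tool: str = ""               # Step 4: chosen last, never first

    def ready_for_tool_selection(self) -> bool:
        """Steps 1-3 must be complete before Step 4 is even on the table."""
        return bool(self.problem and self.hours_per_week > 0
                    and self.solved_looks_like and self.handoff_steps)

brief = ProblemBrief(
    problem="Weekly newsletter takes two full drafting passes",
    hours_per_week=4.0,
    solved_looks_like="One editing pass on an AI first draft",
    judgment_steps=["pick the topic", "final voice check"],
    handoff_steps=["first draft", "link formatting"],
)
print(brief.ready_for_tool_selection())  # True: only now does tool research start
```

An empty brief fails the check, which is the point: the 80% of the work happens before a tool name is ever written down.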
5. Using AI for the Wrong Reasons — Laziness vs. Leverage
You open ChatGPT, type "write me a LinkedIn post about my consulting services," copy the output without editing it, publish it, and wonder why you got no engagement. You ask AI to "summarize my business strategy" without having a business strategy. You use AI to generate content ideas because you haven't done the customer research that would make the ideas obvious. You're using AI to avoid thinking — which means you're producing AI-shaped outputs that contain no actual thought.
I did all of this. In 2022 and early 2023, I used AI to generate first drafts of things I wasn't sure I wanted to write. I used it to produce frameworks I hadn't actually tested. I used it to write explanations of concepts I hadn't fully thought through. The outputs were polished. They looked good. And they were, almost without exception, useless — because they were well-dressed versions of my own unresolved confusion.
The tell, looking back, was always the same: I felt vaguely unsatisfied with the AI output but couldn't quite say why. The reason I couldn't say why is that the problem wasn't the output — the problem was the input. I was asking AI to do thinking I hadn't done. And AI, being a very capable generator of plausible text, would produce something that looked like thinking without being thinking.
The laziness pattern feels efficient. You're producing outputs faster. What you're actually doing is producing volume without value — and you're skipping the discomfort that's supposed to produce something real. The discomfort of not knowing yet is where the actual work lives. AI can't do that discomfort for you. And when you use it to avoid the discomfort, you short-circuit your own development.
Ask yourself: am I using AI because I've done the thinking and now need execution at scale? Or am I using AI because I haven't done the thinking and I'm hoping it will do it for me? The first is leverage. The second is expensive procrastination with a polished aesthetic.
6. Using AI for the Right Reasons — Leverage
You've spent two hours talking to five of your best customers about the problem they hired you to solve. You have pages of notes full of their exact words — the language they use to describe their frustration, the specific context in which the problem appears, the things they've tried that didn't work. Now you ask AI: "Using these five customer interviews (attached), draft three LinkedIn posts in my voice that speak directly to the problem these customers describe. Here are three examples of my existing posts for voice reference." That prompt has a defined problem, real inputs, a clear output, and voice calibration. The AI doesn't have to guess — it has what it needs. The output is genuinely useful because the thinking was genuinely done.
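The leverage prompt described above has four ingredients: a defined problem, real inputs, a clear output spec, and voice calibration. As a sketch of that discipline, here is a hypothetical helper that refuses to assemble a prompt when any ingredient is missing. The function name, field names, and example strings are all my own illustrative assumptions, not part of any AI tool's API.

```python
# A sketch of the "leverage" prompt structure: defined problem + real inputs
# + clear output + voice calibration. All names here are illustrative.

def build_prompt(problem, inputs, output_spec, voice_samples):
    """Refuse to build a prompt when any of the four ingredients is missing."""
    missing = [name for name, value in [
        ("problem", problem), ("inputs", inputs),
        ("output_spec", output_spec), ("voice_samples", voice_samples),
    ] if not value]
    if missing:
        raise ValueError("Do the thinking first; missing: " + ", ".join(missing))
    return "\n\n".join([
        "Problem: " + problem,
        "Inputs:\n" + "\n".join("- " + i for i in inputs),
        "Output: " + output_spec,
        "Voice reference:\n" + "\n".join(voice_samples),
    ])

prompt = build_prompt(
    problem="Customers can't articulate why their first offer isn't selling",
    inputs=["Interview note: 'I copied a competitor's pricing'",
            "Interview note: 'I'm scared to charge for my own time'"],
    output_spec="Three LinkedIn post drafts addressing this exact frustration",
    voice_samples=["Example post A (my voice)", "Example post B (my voice)"],
)
print(prompt.splitlines()[0])
```

The guard clause is the whole idea: if you can't fill in all four fields, the problem isn't the model, and no prompt wording will fix it.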
The clearest example from my own work happened when I was building the content strategy for Coastline. I had spent six weeks in real conversations — with solopreneurs in Taipei and Hsinchu, with people in my LINE community, with former clients from iiinno — asking the same question in different ways: what is the thing you're most stuck on right now in building your solo business? I accumulated roughly 200 responses. Real words. Real frustrations. Real questions.
I fed that material into Claude with a specific prompt: "Here are 200 responses to the question 'what are you most stuck on in building your solo business.' Group them by theme, identify the three themes with the highest emotional intensity (using the language people actually used), and draft an outline for a blog post that addresses the most intense theme — using the customers' exact phrases where possible."
The output was extraordinary. Not because Claude is smarter than me — it isn't. But because I had done 6 weeks of research that gave it something real to work with. The AI took 200 data points and found patterns I hadn't spotted. It organized themes I had been sensing but hadn't yet articulated. That's leverage: the AI doing in 30 seconds what would have taken me several hours of analysis, based on inputs I had taken the time to genuinely develop.
The difference between that use and the lazy use isn't the tool. It's the 6 weeks of work I did before I touched the prompt box.
Before opening any AI tool, ask: "What am I bringing to this prompt?" If the answer is primarily your own uncertainty — you haven't done the research, haven't talked to customers, haven't thought through the problem — close the laptop and go do the thinking first. Come back when you have something real to bring.
7. How I Built 8 Agents — Starting with the Problem, Not the Tool
By early 2025, I had figured out the problem-first framework well enough to apply it systematically. I sat down and mapped every recurring task in my working week — every task I did more than once, every task that followed a predictable pattern, every task where I was the bottleneck not because of judgment but because of volume. I ended up with a list of about 25 tasks.
Then I ran each one through three questions: Is this task high-judgment (unique situation requiring my specific knowledge and experience each time)? Is it high-volume (recurring, pattern-based, time-consuming)? Is it high-stakes (the output directly affects client relationships or revenue)? Tasks that were high-volume but not high-judgment — those were the candidates for agents. The rest stayed human.
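The triage above reduces to a simple filter: high-volume but not high-judgment goes to an agent, everything else stays human. Here is a minimal sketch of that filter; the task names and flags are illustrative stand-ins, not the actual 25-task inventory.

```python
# A sketch of the three-question triage: agent candidates are tasks that are
# high-volume but NOT high-judgment. Task names here are illustrative only.

def is_agent_candidate(high_judgment, high_volume, high_stakes):
    """High-volume, low-judgment tasks go to agents; the rest stay human.
    high_stakes is recorded because it decides how closely the agent's
    output gets reviewed, not whether an agent exists at all."""
    return high_volume and not high_judgment

tasks = {
    # name: (high_judgment, high_volume, high_stakes)
    "email triage":       (False, True,  False),
    "client negotiation": (True,  False, True),
    "research synthesis": (False, True,  False),
}

candidates = [name for name, flags in tasks.items() if is_agent_candidate(*flags)]
print(candidates)  # ['email triage', 'research synthesis']
```

Run against a real task inventory, this filter is what turns a 25-item list into a short list of agent candidates; the judgment calls stay with you.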
From 25 tasks, 8 emerged as clear candidates: email triage, content drafting, research synthesis, social scheduling, analytics reporting, newsletter editing, client onboarding, and FAQ maintenance. Each one replaced a recurring task I had been doing by hand every week.
Total: approximately 22 hours per week, at a combined cost of about $200/month across all tools. But here's what I want you to notice: I did not choose a tool and then find problems for it to solve. I built each agent starting from a documented, specific problem — a task I had been doing myself, with a known time cost, with a clear pattern I could hand off. The tool choice for each one came last, after the problem was clear.
Two of these agents run on Claude. Three run on GPT-4. One is Gemini. Two use no-code automation tools. The "best AI" debate is completely irrelevant to this outcome. What matters is that each agent has a clear job because I took the time to define what that job actually was.
8. The 1% Human Touch AI Can Never Replace
After three years of using AI every day, building 8 agents, and watching dozens of solopreneurs try to replicate what I've built — I'm more convinced than ever that there is a category of work AI will never do well. I don't mean writing, or coding, or research synthesis, or content production. AI is already good at all of those things and getting better. I mean the specific form of judgment that comes from accumulated experience meeting a genuinely novel situation.
In 23 years of B2B sales, I have sat across the table from hundreds of business owners at critical moments. I've learned to read the thing that's not being said — the discomfort behind the polite question, the real objection under the stated one, the moment when someone needs to be challenged versus the moment when they need to be met where they are. I've made calls that were technically wrong by every playbook I knew and turned out to be right because I understood this specific person in this specific context in a way no model could replicate.
AI is extraordinarily good at pattern recognition across vast amounts of data. But the 1% I'm talking about isn't pattern recognition. It's the judgment to know when the pattern doesn't apply. The decision about what problem is worth solving in the first place — that's not a pattern that can be learned from data. It's a hunch that's been earning trust over years of being right and wrong and learning the difference.
"AI can execute at scale. Only you can decide what to execute."
This is not a comforting platitude. It's a job description. Your job, as the human in the system, is to be worth the 1%. To have opinions that aren't just statistically plausible. To make decisions that come from somewhere real. To bring the judgment that only accumulates through genuine experience — including the experience of being wrong.
If you're using AI to avoid developing that judgment — to skip the difficult conversations, the uncertain experiments, the humbling failures — you're not building a business. You're building a very elegant machine for generating the appearance of a business. And eventually, the 1% gap will show.
9. Putting It All Together
The question I started with — which AI tool is best? — is the wrong question. The right questions are simpler and harder at the same time:
What specific problem am I actually solving? What does solving it look like? Where in my work does human judgment genuinely matter — and where am I just slow? What's the clearest, most documented version of the non-judgment work I do regularly?
Answer those questions first. The tool choice takes fifteen minutes after that. The tool choice is not the work.
And then — once you've built your AI-augmented system — stay honest about what you're bringing to it. The system only works as long as the human in it is worth the 1%. That means doing the customer research instead of asking AI to generate a customer profile. It means having the difficult conversation instead of generating a diplomatic email. It means building a real point of view about your market instead of prompting your way to an opinion.
AI is the most powerful amplifier I have ever had access to. At 51, running a solo business, building a content engine and a product line simultaneously — I genuinely couldn't do what I'm doing without it. But what it's amplifying is 23 years of hard-won judgment, six weeks of real customer research, and a very specific set of problems I have documented clearly enough to hand off.
The tool didn't do that. I did.
🤖 See the Full $200/month AI Stack
The detailed breakdown of all 8 agents — what they do, what tools they use, how they're prompted, and what they actually cost. For newsletter subscribers.
Get the Full Breakdown → Next: Are You Stuck in Find Mode?

10. Frequently Asked Questions
Q: Why don't AI tools work for most people?
Because most people approach AI with a solution-first mindset — they pick a tool, then try to find problems for it to solve. AI is an amplifier: it makes what you already do faster and louder. If your thinking is fuzzy, AI produces better-written fuzzy thinking. The fix is a problem-first approach: define the specific problem you're solving, the exact outcome you want, and only then select the tool that serves that problem best. The failure is almost never the tool. It's the missing problem definition.
Q: Does it matter which AI tool you use — ChatGPT, Claude, or Gemini?
Much less than the AI industry wants you to believe. Different models have different strengths for specific tasks — but the difference between someone who gets results from AI and someone who doesn't is almost never the tool they chose. It's whether they've defined the problem clearly enough that the tool has something concrete to work with. A sharp problem poorly prompted to Claude outperforms a vague problem beautifully prompted to GPT-4 every time. After building 8 agents across multiple tools, I can tell you: two are Claude, three are GPT-4, one is Gemini, two use no-code automation. The mix is irrelevant to the outcome.
Q: What is the problem-first framework for AI?
Four steps: (1) Name the specific problem — not a wish, but a documented situation with a time cost and a measurable current state. (2) Define what solved looks like — what metric changes, what your week looks like differently. (3) Identify where human judgment is genuinely required versus where you're just slow. (4) Only then select the tool. This order matters profoundly. Tool first, problem second always produces mediocre results because you're missing the clarity that makes the tool useful.
Q: How did CJ Kuo build 8 AI agents as a solopreneur?
By starting with a full inventory of his working week — every recurring task, every pattern-based workflow — and running each through three questions: Is it high-judgment? High-volume? High-stakes? The 8 tasks that emerged as agent candidates were high-volume but not high-judgment: email triage, content drafting, research synthesis, social scheduling, analytics reporting, newsletter editing, client onboarding, and FAQ maintenance. Each agent was built around a specific, documented workflow. Total cost: ~$200/month. Time reclaimed: ~22 hours/week. The tool choice for each was the last decision, not the first.
Q: What is the 1% human touch that AI can never replace?
Judgment under genuine uncertainty. AI can process information at scale and generate statistically plausible outputs — but it cannot feel the weight of a real decision, read the unsaid thing in a room, know when a technically correct answer would damage a relationship, or decide what to build next based on a hunch that's been earning trust over 23 years. The 1% isn't about typing or clicking. It's the accumulated wisdom that tells you which problem is worth solving in the first place. AI executes at scale. Only you can decide what to execute.
Q: When should a solopreneur start using AI tools?
After you've answered three questions: What specific problem am I solving? What does success look like? Where in my workflow does human judgment genuinely matter? Most solopreneurs jump to AI tools before they can answer these — the result is expensive subscriptions, shallow outputs, and frustrated conclusions. AI works well for people who've done the thinking first. Start with your biggest time-drain. Document what you currently do in that task, step by step. Identify the steps that don't require your unique judgment. Build or select a tool for exactly those steps, and no more.
📚 The Coastline E-Book: Do the Foundation Work First
Knowing yourself — your roles, your values, your problem-solving framework — is what makes AI useful. The Coastline E-book is Pillar 1: the inner work that every outer tool depends on.
Read the Free Chapter → Join the Workshop