
You've rewritten a prompt three times. Added more detail. Tried a different tool. Spent an hour tweaking words. Still not getting what you want.

Here's what nobody tells you: the prompt is only 20% of the work. 80% of your results come from thinking before you type.

Most people do the inverse.

The backwards approach.

How most people use AI:

They have a task.

They open ChatGPT, Gemini, or Claude.

They start typing.

They describe what they want.

They hit enter.

They get something back. It's not quite right.

They send a new prompt.

Try again. Still not right.

Add more detail.

Try a different angle.

Thirty minutes later, they're frustrated.

Sound familiar?

The problem isn't the tool. It's the sequence.

They're asking AI to think for them instead of having it execute their thinking.

There's a difference between "find me precedent projects" and "here's what I'm trying to prove, here's what makes a precedent useful to me, now find examples that fit."

The first is a wish. The second is a system.

5 questions to answer before you have ChatGPT perform a task.

Before you type a single word, answer these:

1. What decision does this support?

Not "what do I want to know?" What will you DO with this output? Are you making a case to a client? Validating an assumption? Building a deliverable? The answer shapes everything.

A prompt for "research" returns generic summaries. A prompt for "evidence to convince a skeptical lender" returns ammunition.

2. What do I believe exists?

State your hypothesis before you search. "I believe projects like mine have been funded before and documented in case studies." This forces clarity on what you're actually looking for—and tells AI where to look.

Most people skip this and ask AI to form the hypothesis for them. That's why they get garbage.

3. What makes a result actually useful?

Define your criteria. Not "good results": what, specifically, would make you say "this is exactly what I needed"?

Same budget range? Same financing structure? Same geography? Same industry? If you can't articulate what useful looks like, neither can AI.

4. Who is this for?

You? A client? A decision-maker with five minutes? A technical reviewer who wants details? The audience determines the format, depth, and language of what you need back.

5. Can I show an example of good output?

If you've ever seen output that worked, what made it work? If you can describe or show an example, you've just told AI exactly what to produce.

Score yourself.

Before your next prompt, count how many of these you answered:

  • 0-1: You're delegating without context. AI is guessing what you want.

  • 2-3: You're transitioning. Output will need heavy editing.

  • 4-5: You're thinking first. The prompt writes itself.

One of our community members posted that she’s leading a $10M faith-based redevelopment in DC—historic church, adjacent parcels, and a community campus with an anchor tenant.

She needed 8-12 precedent projects with financing details. Real projects. Documented capital stacks.

She wrote a 200-word prompt. Specified location, budget range, building types, financing mechanisms, and exactly what she wanted returned. Technically impressive.

Result: nothing useful.

The problem? She'd answered maybe one of the five questions. She asked AI to do the finding, the filtering, and the validating without ever defining what "useful" meant to her.

When she stopped and answered the questions:

  • Decision: Prove this deal structure is feasible to lenders

  • Hypothesis: NMTC + historic credits + anchor tenant lease exists at this scale

  • Useful criteria: Same financing stack matters more than the same building type

  • Audience: Grant committees evaluating feasibility (with specifics)

  • Example: CDFI case studies with documented capital stacks

The prompt wrote itself. 47 words instead of 200. Clear criteria. AI knew exactly where to look and what to return.

The pattern.

When a prompt fails, it's not because you skipped a word. It's because you skipped a decision.

The 200-word prompt feels thorough. But length isn't clarity. You can describe a task in incredible detail and still leave AI guessing about what actually matters.

Answer the 5 questions first. In writing. Not in your head: on paper or in a doc. Force yourself to articulate what you're really after.

Then the prompt is just the last step. Often the shortest one.
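If you think in code, here's the same idea as a toy sketch. The function and every value in it are hypothetical, loosely modeled on the case study above: the five answers are the inputs, and the prompt is just a short template over them.

```python
# Toy sketch: answer the five questions first; the prompt is a template over them.
# Everything here is hypothetical and for illustration only.

def build_prompt(decision, hypothesis, criteria, audience, example):
    """Assemble the final prompt from the five pre-prompt answers."""
    return (
        f"I need to {decision}.\n"
        f"My hypothesis: {hypothesis}.\n"
        f"A result is useful only if it has: {'; '.join(criteria)}.\n"
        f"Write for: {audience}.\n"
        f"Good output looks like: {example}."
    )

print(build_prompt(
    decision="prove this deal structure is feasible to lenders",
    hypothesis="NMTC + historic credits + anchor tenant leases exist at this scale",
    criteria=["the same financing stack", "a documented capital stack"],
    audience="grant committees evaluating feasibility",
    example="CDFI case studies with documented capital stacks",
))
```

The point isn't the code. It's that once the five answers exist, assembling the prompt is trivial.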

If you're stuck on the same issue our community member had, grab a copy of the THINK Blueprint. It's a Claude Skill that shows you how to put 80% of the work into thinking and 20% into executing.

Do you want the THINK Blueprint?


Marvin
