I've had four people ask me this month: "How do I use ChatGPT to find grants?"
A year ago, my answer would have been simple. Here's my prompt. Copy it. Good luck.
But I've stopped doing that.
Not because I'm hoarding secrets. Because I realized the prompt wasn't the problem. And it wasn't the solution either.
The people asking me this question weren't failing because they had the wrong prompt. They were failing because they were skipping a critical step: thinking.
They'd open ChatGPT, type something like "find grants for my nonprofit" or "small business grants in Maryland," and expect magic. What they got was a generic list. Half the links were dead. The ones that worked didn't fit. Within ten minutes, they gave up and wrote it off as another ChatGPT hallucination.
ChatGPT wasn’t broken. Their approach was the problem.
Why Most Grant Searches Fail
Before I built a better solution, I wanted to understand why the current approach fails so consistently. So I asked ChatGPT:
What are the biggest mistakes people make when asking you to find grants?
The answer was a list of ten patterns I've seen over and over again, not just with grants, but with every AI task people try to delegate without thinking first.
1. They don't define their business model. Most people say "find grants for my business" without clarifying whether they're for-profit or nonprofit, what they actually do, who they serve, or what outcomes they produce. Grant fit requires alignment between your model and the funder's intent. Without this, search results are generic trash.
2. They confuse keywords with funding logic. People type vague inputs like "tech grant," "housing grant," or "AI grant." But grantmakers don't fund keywords. They fund outcomes, populations, geographies, impact themes, and economic goals. If you don't define the logic, you get noise instead of targets.
3. They fail to specify geography. Funding is hyper-local. NYC grants don't apply in Maryland. County grants don't work statewide. Federal grants may require local partners. Most people never specify where they're located, where they operate, or where they're willing to execute. This is one of the biggest reasons AI returns junk.
4. They don't clarify the role they can play. People say they want grants but don't specify whether they want to be a prime applicant, a co-applicant, a subrecipient, or a contractor. For-profit companies often can't be prime applicants for community development grants—but they can be subrecipients or implementation partners. Most users don't know this, so they miss out on 80% of viable funding.
5. They don't define constraints. They rarely mention minimum or maximum award size, execution constraints, reporting capacity, matching fund requirements, or team limitations. Without constraints, you’ll get poor results.
6. They don't define impact themes. Grants are impact-driven: workforce, housing, climate, public health, small business support, transportation, and community development. If you don't map your activities to impact themes, you won't find legitimate matches.
7. They overstate what they can deliver. People ask for million-dollar grants with no track record, no team, no financial controls, and no reporting capability. Grantors filter these instantly. Most people aim too high without understanding the eligibility requirements.
8. They don't provide a structured profile. Grant discovery requires structured input. Without a clear profile or intake form, AI has to guess. And guesses lead to low-precision results.
9. They don't differentiate search from scoring. Most people ask, "Find me grants," and expect magic. But the better approach is: search, filter, score, rank, match, explain fit, and provide next steps. People skip all of that and end up with meaningless results.
10. They ignore deadlines, readiness, and competitiveness. They forget that deadlines matter. Readiness matters. Competitiveness matters. Cost of pursuit matters. You should score each opportunity on pursuit feasibility—not just eligibility.
Every one of these is a thinking problem, not a tool problem.
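To make that concrete, points 9 and 10 describe a pipeline you could sketch in a few lines of code. Everything below—the field names, weights, and thresholds—is an illustrative assumption of mine, not a real grant database schema or Grant Scout's actual logic:

```python
# Sketch of the search -> filter -> score -> rank flow. Fields, weights,
# and thresholds are illustrative assumptions, not a real schema.
from dataclasses import dataclass

@dataclass
class Grant:
    name: str
    geography: str               # e.g. "MD" for a state grant, "US" for federal
    max_award: int
    days_until_deadline: int
    allows_for_profit_prime: bool

def eligible(g: Grant, profile: dict) -> bool:
    """Hard filters: geography and applicant role (mistakes 3 and 4)."""
    geo_ok = g.geography in (profile["state"], "US")
    role_ok = g.allows_for_profit_prime or profile["accepts_subrecipient_role"]
    return geo_ok and role_ok

def feasibility_score(g: Grant, profile: dict) -> float:
    """Soft scoring: award-size fit and runway to the deadline (mistake 10)."""
    size_fit = 1.0 if profile["min_award"] <= g.max_award else 0.0
    runway = min(g.days_until_deadline / 60, 1.0)  # 60+ days out = full marks
    return 0.6 * size_fit + 0.4 * runway

def rank(grants: list[Grant], profile: dict) -> list[Grant]:
    """Filter first, then sort survivors by pursuit feasibility."""
    shortlist = [g for g in grants if eligible(g, profile)]
    return sorted(shortlist, key=lambda g: feasibility_score(g, profile), reverse=True)
```

Note the order: the hard filters run first, and the soft score only ranks what survives them. That is exactly the search, filter, score, rank, match sequence the list describes—eligibility and feasibility are separate questions.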
What I Did Differently
When I decided to build a real solution, I didn't start with ChatGPT. I started with a question: What do I actually do when I search for grants manually?
I thought back to every grant I've pursued. Every database I've searched. Every filter I've applied. Every question I ask before I even begin looking.
I wrote all of it down. The criteria. The logic. The constraints. The scoring. The workflow.
That document became the foundation for what I now call Grant Scout.
Grant Scout isn't a prompt. It's a Digital Employee—a reusable system that anyone on your team can use, even if they've never touched AI.
Here's what's inside:
1. A prompt template that fixes everything ChatGPT says people do wrong. It forces you to define your business model, geography, role, constraints, impact themes, and capacity before searching. No more vague inputs. No more garbage outputs.
2. A JSON Context Profile you customize once and reuse forever. This is a simple text file that holds your organization's criteria: who you are, what you do, where you operate, and what you can handle. You fill it out once. Then you attach it to the prompt whenever you search. Your profile becomes the foundation—not your memory.
3. A built-in fallback. Forgot to attach the JSON file? The prompt interviews you. It asks the right questions to gather the information before it searches. You can't skip thinking; Grant Scout won't let you.
Add the prompt. Add your profile. Get a repeatable system that works every time.
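To show what that profile-plus-fallback setup can look like in practice, here's a minimal sketch in Python. The file name, field list, and question wording are my own assumptions for illustration—Grant Scout's actual schema isn't shown here:

```python
# Minimal sketch of a reusable JSON Context Profile with an interview
# fallback. File name and field names are hypothetical.
import json
import os

REQUIRED_FIELDS = [
    "business_model",   # for-profit or nonprofit
    "geography",        # where you're located and where you operate
    "role",             # prime applicant, co-applicant, subrecipient, contractor
    "impact_themes",    # e.g. workforce, housing, climate, public health
    "min_award",
    "max_award",
]

def complete_profile(profile: dict, ask=input) -> dict:
    """Fallback: interview the user for any field the profile is missing,
    so a search never runs on guesses."""
    for field in REQUIRED_FIELDS:
        if field not in profile:
            profile[field] = ask(f"Describe your {field.replace('_', ' ')}: ")
    return profile

def load_profile(path: str = "grant_profile.json") -> dict:
    """Use the saved profile if it exists; otherwise start the interview."""
    profile = {}
    if os.path.exists(path):
        with open(path) as f:
            profile = json.load(f)
    return complete_profile(profile)
```

The design choice matters more than the code: the profile is filled out once and reused on every search, and anything missing gets asked before the search runs, not after it fails.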
I'm giving it away. Not because it's not valuable, but because I want you to see what's possible when you think first.
The Bigger Point
This isn't really about grants.
It's about how you approach AI.
Most AI tool users delegate tasks to tools like ChatGPT without adding their own thinking. They write a prompt for something they haven't defined. They skip the criteria, the logic, the constraints. Then they wonder why the results are bad.
Grant Scout works because I thought before I touched the tool. I asked myself what I do when I search manually. I documented it. I structured it. Then I let ChatGPT handle the rest.
That's the difference between using a tool and building a system.
The hidden gold rush in AI isn't models or tools. It's the people who know how to make themselves and others more productive.
The Thinking Crisis
Most organizations treat AI like a generic worker. They delegate without a strategy. They outsource the thinking and expect good results.
It doesn't work.
The real solution isn't learning more tools. It's owning your thinking instead of outsourcing it.
When you think first, you build systems that are:
Repeatable — they work every time, not just once
Long-lasting — they don't break when the tool updates
Shareable — anyone can use them, even without AI experience
That's what separates a Tool User from a THINK Strategist.
The Path Forward
There are three levels:
1. Tool User: You delegate. You prompt. You hope for the best. Sometimes it works. Mostly it doesn't. You stay stuck in trial-and-error mode.
2. THINK Strategist: You think first. You define the logic before you touch the tool. You build systems that anyone can use. You get repeatable results.
3. THINK Leader: You build teams around these systems. You teach others. You scale your thinking across an organization.
Grant Scout is one example of how a THINK Strategist builds. Inside THINK School, you get the method behind it, more tools like this, and guidance on building your own.
Stop asking ChatGPT for answers. Start building think-first digital employees.
Go from AI Tool User to THINK Strategist. Learn how to think first, build systems, and make yourself and others more productive.
Marvin