Where to Start with AI: A Scoring Framework for Your First Project
You've got a list. Maybe it's a spreadsheet from a brainstorming session. Maybe it's a running mental tally of places where AI "could probably help." Either way, you're drowning in possibilities and paralyzed over which one to fund first.
That paralysis is killing you. Every month you debate is a month your competitor ships.
Here's the fix. Score each potential AI initiative on five dimensions, 1-5 each. Total score determines priority. It's simple. It works. It ends the arguments in one meeting.
The Five Dimensions
Dimension 1: Data Readiness (1-5)
The single biggest predictor of AI project success is whether the data already exists, is accessible, and is clean enough to use. A score of 5 means structured data in a single system, regularly updated and well-documented. You can start building this week. A score of 1 means the data doesn't exist yet. You're six months of data infrastructure away from even prototyping.
Most companies overestimate where they are by at least one point. Go ask the person who actually works with the data. Not the person who manages the system. The admin thinks the CRM data is clean. The rep who enters "TBD" in the revenue field knows it's not.
Dimension 2: Impact Magnitude (1-5)
How much value does this create if it works? Score a 5 for $1M+ annual impact or strategic competitive advantage. Score a 1 for under $50K annual impact. Nice to have. Not a priority.
Be honest about these numbers. Use conservative estimates. For Tier 1 automation, assume 50-70% of work can be automated. For Tier 2 augmentation, assume 20-40% productivity improvement. Use the lower end for your business case. Here's the thing. If you need AI-level optimism to make the math work, the math doesn't work.
Dimension 3: Organizational Readiness (1-5)
Does the team that would own this want it, understand it, and have capacity to implement it? Let me be blunt. This dimension kills more AI projects than bad technology. A perfect AI solution deployed into a team that doesn't want it will fail. Every time. No exceptions.
The test: Can you name the person who will own this project day-to-day? Not their boss. Not a committee. A single person with a name and a phone number. If you can't, you're a 2 at best.
Dimension 4: Technical Complexity (1-5)
How hard is this to build or buy? Lower complexity scores higher because you want your first wins to be easy. A 5 means an off-the-shelf product with standard integration. A 1 means cutting-edge technology with no proven solutions.
Your first AI project should score a 4 or 5 here. Save the ambitious projects for when you have in-house expertise. Rule of thumb: if the vendor can show you three reference customers in your industry running this exact use case, it's a 4 or 5. If you'd be the first, it's a 2 or 1.
Dimension 5: Risk and Reversibility (1-5)
What happens if it goes wrong? A 5 means internal-only, no customer impact, easily reversed. A 1 means direct customer impact with high regulatory risk. Your first AI project should be a 4 or 5. You're going to make mistakes. Make them where the cost of a mistake is low and the learning is high.
Running the Rubric: A Real Example
A mid-size logistics company evaluates three potential AI projects:
| Project | Data | Impact | Org | Tech | Risk | Total |
|---|---|---|---|---|---|---|
| Invoice Automation | 5 | 3 | 5 | 5 | 5 | 23 |
| Route Optimization | 3 | 5 | 3 | 3 | 3 | 17 |
| Predictive Maintenance | 2 | 4 | 2 | 2 | 4 | 14 |
The answer is obvious when you score it. Start with invoice automation. It's boring. It's simple. It works. That's the point. Use the savings and organizational learning to fund route optimization in six months. Predictive maintenance goes on the 12-month roadmap after you've built the data platform to support it.
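The tallying is simple enough to sketch in a few lines of Python. The scores below come straight from the table; the dictionary structure and names are just illustrative, not part of the playbook:

```python
# Each project gets a 1-5 score on the five dimensions.
# The total determines priority. Scores from the worked example above.
projects = {
    "Invoice Automation":     {"data": 5, "impact": 3, "org": 5, "tech": 5, "risk": 5},
    "Route Optimization":     {"data": 3, "impact": 5, "org": 3, "tech": 3, "risk": 3},
    "Predictive Maintenance": {"data": 2, "impact": 4, "org": 2, "tech": 2, "risk": 4},
}

# Rank by total score, highest first.
ranked = sorted(projects.items(), key=lambda kv: sum(kv[1].values()), reverse=True)

for name, scores in ranked:
    print(f"{name}: {sum(scores.values())}")
# Invoice Automation: 23
# Route Optimization: 17
# Predictive Maintenance: 14
```

The value of writing it down, even this crudely, is that the ranking stops being a matter of opinion in the meeting.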
The "But Our CEO Wants" Problem
Your CEO just came back from Davos. They want predictive maintenance. It's visionary. It scored 14 on the rubric. Here's the conversation you have:
"I agree that predictive maintenance has the biggest long-term potential. Here's my recommendation: let's start with invoice automation. It's a 90-day project that saves $200K annually and builds the internal muscle we need. Simultaneously, we scope the data infrastructure for predictive maintenance. In six months, we have ROI from Project A funding the groundwork for Project C, and the team knows how to manage an AI deployment."
You're not saying no. You're saying yes, with a sequence that actually works. Most CEOs will take that trade if you frame it right.
Your Minimum Viable Shortlist
After scoring, you should have:
- 1 project to start now. Highest total score, minimum 20.
- 1 project to scope for next quarter. Score 16-19.
- 1 project on the watch list. Score under 16. Revisit when conditions improve.
If nothing scores above 20, your bottleneck isn't AI strategy. It's data infrastructure or organizational readiness. Fix those first. Deploying AI on broken foundations is worse than not deploying AI at all.
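The shortlist cutoffs amount to a three-way triage. Here's a minimal sketch (the 20 and 16 thresholds come from the list above; the function name and labels are my own):

```python
def triage(total: int) -> str:
    """Bucket a project's total rubric score (5-25) into a shortlist tier."""
    if total >= 20:
        return "start now"
    if total >= 16:
        return "scope for next quarter"
    return "watch list"

# Applied to the logistics example:
for name, total in [("Invoice Automation", 23),
                    ("Route Optimization", 17),
                    ("Predictive Maintenance", 14)]:
    print(f"{name}: {triage(total)}")
# Invoice Automation: start now
# Route Optimization: scope for next quarter
# Predictive Maintenance: watch list
```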
This article covers the scoring rubric from Chapter 5 of The Executive's AI Playbook. The complete chapter includes detailed scoring guides for each dimension, three fully worked examples, and the printable rubric template you can use with your leadership team.
Get the complete framework →