How to Prioritize AI Use Cases: A Research-Backed Framework for Accounting Firms


I’ve been working with a number of firms recently on their AI strategies, and the conversation almost always starts the same way. Someone in the room rattles off five or six ideas for where AI could help the business. Everyone nods. Then six months later, nothing has shipped, licenses are going unused, and leadership is frustrated.

The problem isn’t that the ideas were bad. It’s that no one put a framework around them. And without a framework, you end up doing what most organizations do — chasing the loudest idea in the room instead of the right one.

The data on this is pretty stark. An MIT Sloan study found that 80% of AI projects fail to deliver their intended business value. A separate analysis showed that 95% of generative AI pilots fail to scale to production. BCG found that organizations are averaging 4.3 pilots, but only 21% of those reach any kind of measurable production scale. That’s a lot of energy — and budget — going nowhere.

The good news is that the failure isn’t random. It follows patterns. And once you understand the patterns, you can build a selection process that dramatically improves your odds.

Why AI projects stall before they deliver

Harvard Business School and Microsoft published a piece in HBR earlier this year calling this the “last mile problem.” The core idea: the primary obstacle to AI progress is rarely the model itself. What breaks is the organizational design around it.

They identified seven structural frictions that explain why pilots succeed but transformation stalls: pilot proliferation (hundreds of POCs with no path to scale), the productivity gap (individual gains that never hit the P&L), process debt (AI automating messy workflows instead of clean ones), tribal knowledge hoarding, governance breakdowns, architectural complexity, and what they call the efficiency trap — framing AI as a cost-cutting tool that triggers defensive behavior and narrows ambition.

For accounting and finance firms, this matters even more. The margin for error is lower than in marketing or content. When AI makes a mistake in a client deliverable — especially one tied to financial reporting or an audit — the consequences are real. So use case selection isn’t just a resource question. It’s a risk question.

The four-dimension framework

Here’s how I think about it. Before committing to any AI use case, run it through four dimensions and score it 1–10 on each. The weighted sum gives you a priority score out of 100.

1. Business Impact (35%)

This is the one everyone looks at first, and for good reason — it’s the most intuitive. You’re asking: if this works, how much value does it create? That means time or labor saved, revenue generated or costs reduced, and whether this aligns with something leadership actually cares about.

The research backs this up. A Harvard and BCG study found that consultants using AI completed tasks 25% faster and produced 40% higher-quality outputs — but only when the use case was well-defined with a measurable outcome. Vague impact statements don’t count. “AI will make us more efficient” is not a business case. A specific number tied to a named problem is.

2. Feasibility (30%)

This is where a lot of great ideas quietly die, and it’s the dimension that’s most commonly underweighted in early conversations. You’re asking: do we actually have what we need to build this?

The three inputs are data readiness, technical complexity, and time to first value. Gartner predicts that 60% of AI projects without AI-ready data will be abandoned through 2026. Sixty percent. And 63% of organizations don’t even know if they have the right data management practices in place to support AI.

I run into this constantly with clients. They have a great idea — something that could genuinely move the business — but when you start asking where the data is, what shape it’s in, whether it’s structured and accessible, you realize the AI use case is actually six months of data work away from being viable. That’s not a reason to kill the idea. It’s a reason to sequence it correctly. Flag it, put it in a backlog, get the data work started in parallel, and then pick it up as an AI sprint once the foundation is there.

Scope matters too. Gartner’s analysis shows that single-task AI agents with a defined scope succeed 54% of the time. Large-scale AI transformations? 8%. Narrow wins almost every time.

3. Risk Profile (20%)

This one is especially important for accounting and finance. When I was doing citizen automation work, we’d always ask a few key questions up front: where does this data go? Does it touch financial reporting? Is it in scope for a SOX control?

A high-risk use case isn’t automatically a bad use case. But it needs to be flagged early, because it changes who needs to be in the room, what governance infrastructure has to exist before you go live, and how long the path to production actually is. Gartner projects that 40%+ of agentic AI projects will be canceled by 2027 due to inadequate risk controls. Getting ahead of this at the selection stage is far cheaper than getting surprised at deployment.

You’re scoring three things: how forgiving the use case is if AI makes a mistake, whether outputs can be reviewed before action is taken, and whether a governance framework already exists for this type of work.

4. Adoption Likelihood (15%)

This is the most underweighted dimension in my experience — and the one that kills the most projects after they’ve already been built.

Prosci research found that user proficiency challenges account for 38% of all AI implementation difficulties. Technical issues? 16%. The human side is more than twice as likely to be the problem. And yet in 61% of failed AI initiatives, change management received less than 15% of the project budget. 71% never tracked user adoption metrics at all.

The specific thing I look for here is whether there’s a manager-level champion. Not an executive sponsor — those are easy to get. I mean someone at the controller or senior manager level who is actively going to use the tool, talk about it with their team, and enforce the new workflow. New tools die at the middle-management layer more often than anywhere else. If the managing partner is excited but the controller is skeptical, the staff accountants won’t adopt it.

A use case that a team is already asking for, that fits into their existing workflow without major disruption, and that has a manager-level owner? That’s a slam dunk on adoption. A top-down mandate to transform an entire process with no identified owner below the director level? You’re going to fight for every user.
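The weighted scoring across the four dimensions can be sketched as a small helper. This is a minimal sketch, not a prescribed implementation: the dimension keys, function name, and example scores are illustrative, while the weights (35/30/20/15) and the 1–10 scale come from the framework above.

```python
# Weights from the framework: impact 35%, feasibility 30%,
# risk profile 20%, adoption likelihood 15%.
WEIGHTS = {
    "business_impact": 0.35,
    "feasibility": 0.30,
    "risk_profile": 0.20,
    "adoption_likelihood": 0.15,
}

def priority_score(scores: dict) -> float:
    """Weighted sum of 1-10 dimension scores, scaled to 0-100."""
    for dim, s in scores.items():
        if dim not in WEIGHTS:
            raise ValueError(f"Unknown dimension: {dim}")
        if not 1 <= s <= 10:
            raise ValueError(f"{dim} score must be 1-10, got {s}")
    # A perfect 10 on every dimension sums to 10.0, so multiply
    # by 10 to express the result on a 0-100 scale.
    return round(sum(WEIGHTS[d] * s for d, s in scores.items()) * 10, 1)

# Illustrative use case: strong impact, workable data, moderate
# risk, and a named manager-level champion.
use_case = {
    "business_impact": 8,
    "feasibility": 7,
    "risk_profile": 6,
    "adoption_likelihood": 9,
}
print(priority_score(use_case))  # 74.5
```

A score like 74.5 would sit right at the "go fast" threshold discussed below; the point of the arithmetic is less the exact number than forcing an explicit 1–10 judgment on each dimension before anything gets built.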

What’s actually happening right now

McKinsey’s most recent global AI survey found that 88% of organizations now use AI in at least one function. Only 39% see any EBIT impact. That gap — between organizations using AI and organizations benefiting from it — is the defining challenge of 2025 and 2026.

The pressure on accounting and finance firms right now is real. Clients are asking about AI. Competitors are announcing things. Partners are nervous. That pressure creates a temptation to just pick something and build — which is exactly what produces the failure statistics we talked about at the top.

A prioritization framework isn’t a reason to slow down. It’s a reason to go fast on the right things. A use case that scores 75 or above on this framework has a fundamentally different risk profile — and a fundamentally different likelihood of actually making it to production — than one that scores 40.

Key takeaways

  • Most AI projects fail not because the technology doesn’t work, but because organizations skip the selection step. 80% failure rate overall; 95% for GenAI pilots specifically.
  • The four dimensions that predict success are business impact, feasibility, risk profile, and adoption likelihood — all four grounded in research, not just intuition.
  • Data readiness is the single most common killer. Gartner: 60% of projects without AI-ready data will be abandoned. Fix the data before you build the model.
  • Adoption is the most underweighted dimension. User proficiency problems cause 38% of AI failures. Technical rollout is the easy part — behavior change is where projects die.
  • The goal of prioritization isn’t to say no — it’s to build in the right order. Go fast on the right things.

Want the CPE credit? Take the full lesson on EverydayCPE and earn 0.2 CPE credits: [lesson link]
