In 2013, a researcher named David Kriesel scanned a building floor plan on a Xerox office copier. He got the scan back, looked at it, and noticed something strange. A room marked as 21.11 square meters had become 14.13 square meters. Not a rounding error. Not a smudge. The number had been silently, confidently replaced.
The machine hadn’t malfunctioned. It had done exactly what it was designed to do. It used a compression technique that saves storage space by piecing documents together from repeated visual elements — and it found two numbers that looked similar enough to swap. The document looked professional. The numbers were just wrong.
That same dynamic is now playing out inside AI-assisted accounting workflows. And most people haven’t connected the dots yet.
How AI Actually Works — And Where the Risk Lives
Language models don’t store facts the way a database does. They’re trained on enormous amounts of text and learn the statistical patterns of how words and ideas tend to go together. Think of it like this: AI intentionally makes a picture blurry. Just as you can look at a blurry photo and still make out what’s in it, AI compresses information down to the minimum needed to represent it. That’s what makes it manageable to store and process.
But when you ask AI to generate output, it has to sharpen that blurry picture back up. It takes the compressed, probability-weighted version of knowledge it has and expands it into something legible — a memo, a summary, an engagement letter. That expansion step is where the errors hide.
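If you want the shape of the problem in miniature, here’s a toy sketch in Python. It is not how a language model actually works under the hood; the figures and the rounding step are invented purely to show what happens when detail is thrown away on the way in and guessed on the way out.

```python
import random

# Toy illustration only -- not how a language model works internally.
# "Compress" some room areas by dropping the last digit, then
# "expand" them back by guessing a plausible digit to fill the gap.
original = [21.11, 14.13, 17.42]                 # the true figures
compressed = [round(x, 1) for x in original]     # lossy step: detail discarded

random.seed(1)
expanded = [round(x + random.randint(0, 9) / 100, 2) for x in compressed]

print(original)   # the real numbers
print(expanded)   # plausible, confident, and not necessarily the originals
```

The expanded list still looks like a set of room areas. Nothing about it signals that the last digit of every figure was invented on the way out.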
A researcher named Matt Ström-Awn coined the term expansion artifacts to describe this: the tells and errors that show up when AI fills in the gaps during output generation. Ted Chiang famously called ChatGPT “a blurry JPEG of the web.” The artifacts aren’t in the compression. They’re in the decompression.
What do expansion artifacts look like in practice? Hedge-heavy language — “it is worth noting,” “it is important to consider.” Suspiciously tidy parallel structure where every bullet is the same length. Confident-sounding numbers with no clear source. Overqualified conclusions that are technically not wrong but don’t actually commit to anything. Stanford researchers estimated in 2024 that roughly 17% of recent computer science academic papers contained AI-drafted content, detectable by tracking words that spiked in frequency after ChatGPT’s launch.
The Stacking Problem
One AI making one mistake is manageable. The real risk in 2026 is stacked AI layers — each one treating the previous output as a reliable source.
Here’s a workflow that happens every day right now. A C-suite executive dictates a voice memo. An AI expands it into a strategy document. Another AI uses that strategy document to draft engagement letters. A third AI summarizes those letters into a client FAQ. A human reviews the FAQ.
By the time that FAQ lands on your desk, it’s three generations removed from what the executive actually said. Every layer compressed and re-expanded. Every layer had an opportunity to silently substitute a detail, drop a nuance, or insert something plausible that wasn’t in the original. And the FAQ looks just as polished as if someone had written it carefully from scratch.
Think about it the same way you’d think about forecasting assumptions. The more layers of assumptions you stack on top of each other, the less confidence you should have in the output and the more scrutiny it deserves. Stacked AI layers work the same way. Each one is a risk layer.
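To put rough numbers on that intuition, here’s a back-of-the-envelope sketch. The 95% figure is an assumption chosen purely for illustration, not a measured error rate for any tool.

```python
# Assumed for illustration: each AI layer faithfully preserves any given
# detail 95% of the time. The compounding works like stacked forecasting
# assumptions -- each layer multiplies against the ones before it.
per_layer_fidelity = 0.95

for layers in range(1, 5):
    chance_detail_survives = per_layer_fidelity ** layers
    print(f"{layers} layer(s): {chance_detail_survives:.1%}")

# Roughly: 95%, 90%, 86%, 81% -- by the third layer, about one detail
# in seven has been altered somewhere along the chain.
```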
What to Do Differently
Two practical shifts.
First: apply chain-of-custody thinking. Accountants are trained to follow the audit trail — who touched a number, when, and how. That same discipline needs to extend to AI-assisted documents. Before you review something, ask: how many AI layers did this pass through? One layer is a working draft. Three layers means you’re reading a photocopy of a photocopy of a photocopy. Your verification burden should reflect the distance from the source.
Second: learn to spot the tells. Hedge-heavy language is a flag. Perfectly parallel bullet structure is a flag. A confident number with no attribution is your biggest flag. These aren’t proof of an error — but they’re signals to verify rather than trust.
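For readers who want to make that checklist concrete, here is a rough sketch of what a tell-spotting pass could look like as a small Python script. The hedge-phrase list and the “nearby source word” check are assumptions chosen for illustration; a flag from something like this is a prompt to go verify, not a verdict.

```python
import re

# Assumed phrase lists for illustration -- extend them with the tells you see in practice.
HEDGE_PHRASES = ["it is worth noting", "it is important to consider"]
SOURCE_WORDS = ["per ", "source", "schedule", "exhibit", "footnote"]

def flag_tells(text: str) -> list[str]:
    """Return reasons to slow down and verify -- not proof of an error."""
    flags = []
    lowered = text.lower()
    for phrase in HEDGE_PHRASES:
        if phrase in lowered:
            flags.append(f"hedge language: '{phrase}'")
    # A confident number with nothing that looks like an attribution nearby.
    for match in re.finditer(r"\d[\d,]*\.?\d*%?", text):
        window = lowered[max(0, match.start() - 60): match.end() + 60]
        if not any(word in window for word in SOURCE_WORDS):
            flags.append(f"unattributed number: {match.group()}")
    return flags

print(flag_tells("It is worth noting the room measures 14.13 square meters."))
# ["hedge language: 'it is worth noting'", 'unattributed number: 14.13']
```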
And the piece that matters most from a liability standpoint: your signature doesn’t know whether the error was introduced by a junior staff member or an AI model. Either way, it’s your review that stands between the document and the client.
Key Takeaways
- AI expands as well as compresses — output fills in gaps with plausible content, not necessarily accurate content.
- The Xerox incident is the right mental model: the document looked fine, the numbers were wrong, and it happened by design.
- More AI layers means higher verification burden — apply chain-of-custody thinking to every AI-assisted document.
- Know the expansion artifact tells: hedge language, tidy parallel structure, unsourced confident numbers.
- Your professional liability doesn’t change — review AI output like it has your name on it, because it does.
Want the CPE credit? Take the full lesson on EverydayCPE and earn 0.2 CPE credits: [lesson link]

