Big Themes Review

Context: A 10-dimension assessment that evaluates a manuscript the way a publisher's acquisitions editor would. Not prose quality -- strategic quality. Does this book hold together as a product someone would buy, read, and recommend?


Automated quality audits tell you whether the writing is mechanically sound. The editorial workflow tells you whether the prose is polished. But neither answers the question that actually determines whether your book succeeds: Is this a good book?

Not well-written. Not well-researched. Good. Does it occupy a unique position? Does it serve its audience? Does the argument build across 12 chapters into something that couldn't be a blog post series? That's what the big themes review assesses.

The 10 Dimensions

Think of this as the checklist a publisher uses before they write a check. Each dimension answers a specific strategic question:

| # | Dimension | What It Assesses |
|---|-----------|------------------|
| 1 | Market Positioning | Does the book occupy a unique, defensible position? Could a reader confuse this with 5 other books on the topic? |
| 2 | Audience Clarity | Are both audiences (enterprise leaders + startup founders) consistently served across all 12 chapters? |
| 3 | Argument Strength | Is the core thesis compelling and backed by evidence? Would a skeptic finish Chapter 1 and keep reading? |
| 4 | Framework Quality | Are the numbered frameworks practical and memorable? Could a reader use them without the book open? |
| 5 | Example Diversity | Mix of industries, company sizes, geographies? Or the same 5 Silicon Valley companies on repeat? |
| 6 | Technical Depth | Right level for the target reader? Not so shallow it's obvious, not so deep it's a textbook? |
| 7 | Narrative Arc | Does the book build momentum across chapters? Does Part IV feel like a culmination, not an afterthought? |
| 8 | Actionability | Can a reader apply this on Monday morning? Or is it theory dressed up as advice? |
| 9 | Voice Consistency | Does it sound like one person wrote it? Across 81 sections and 81,000 words? |
| 10 | Completeness | Any obvious gaps? Topics the audience expects that aren't covered? |

The order matters. Market positioning comes first because if the book doesn't have a unique angle, polish won't save it. Voice consistency comes near the end because it's the quality pipeline's job to enforce that -- the big themes review just confirms the pipeline worked.
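
Encoded as data, the checklist above could drive a review prompt or a report template. A minimal sketch in Python -- the structure and field order are illustrative assumptions, not any particular tool's schema:

```python
# A minimal sketch of the 10-dimension checklist as data, so a review
# script or prompt template can iterate over it in order. The tuple
# layout (number, name, question) is an illustrative assumption.
DIMENSIONS = [
    (1, "Market Positioning", "Does the book occupy a unique, defensible position?"),
    (2, "Audience Clarity", "Are both audiences consistently served across all chapters?"),
    (3, "Argument Strength", "Is the core thesis compelling and backed by evidence?"),
    (4, "Framework Quality", "Are the numbered frameworks practical and memorable?"),
    (5, "Example Diversity", "Is there a mix of industries, company sizes, and geographies?"),
    (6, "Technical Depth", "Is the depth right for the target reader?"),
    (7, "Narrative Arc", "Does the book build momentum across chapters?"),
    (8, "Actionability", "Can a reader apply this on Monday morning?"),
    (9, "Voice Consistency", "Does it sound like one person wrote it?"),
    (10, "Completeness", "Are there obvious gaps the audience expects to be covered?"),
]

# Order matters: positioning questions surface before polish-level ones.
for number, name, question in DIMENSIONS:
    print(f"{number:>2}. {name}: {question}")
```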

How It's Scored

Each dimension gets one of four ratings:

| Rating | Meaning | Action Required |
|--------|---------|-----------------|
| Critical Fix | Fundamental problem that blocks publication | Must resolve before any editorial work |
| Important | Meaningful gap that weakens the book | Should resolve before publication |
| Minor | Polish item that would improve quality | Resolve if time allows |
| No Issues | This dimension passes | None |

The aggregation tells you where you stand (sketched in code after the list):

  • Any Critical Fix = Not publication ready. Stop editorial work and address the structural problem.
  • Multiple Important = Close but needs work. Prioritize by reader impact.
  • Only Minor or No Issues = Publication ready. Move to final editorial passes.
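
The aggregation rule is mechanical enough to express as a function. A minimal sketch, assuming each dimension's rating arrives as a plain string -- the function name and verdict wording are illustrative, not any tool's actual output:

```python
# Sketch of the aggregation rule above. Ratings are assumed to arrive as
# one string per dimension; names and verdict labels are illustrative.
CRITICAL, IMPORTANT, MINOR, NO_ISSUES = (
    "Critical Fix", "Important", "Minor", "No Issues"
)

def publication_readiness(ratings: list[str]) -> str:
    """Map the ten per-dimension ratings to an overall verdict."""
    if CRITICAL in ratings:
        return "Not publication ready: stop editorial work and fix the structural problem"
    if IMPORTANT in ratings:
        # The rubric says "multiple Important"; a lone Important still
        # "should resolve before publication", so it lands here too.
        return "Close but needs work: prioritize Important fixes by reader impact"
    return "Publication ready: move to final editorial passes"

# Example: the assessment described in the next section --
# 0 Critical, 3 Important, 2 Minor, 5 clean dimensions.
ratings = [IMPORTANT] * 3 + [MINOR] * 2 + [NO_ISSUES] * 5
print(publication_readiness(ratings))  # Close but needs work ...
```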

What It Caught

For Blueprint for An AI-First Company, the big themes review produced:

0 Critical Fixes. The book's positioning, argument, and completeness all passed -- a direct result of the research pipeline and writing prompts being designed to address these dimensions from the start.

3 Important Fixes:

  1. Example over-reliance. Harvey appeared in 8 of 12 chapters. When one company dominates your examples, readers wonder whether you only know one company. Fix: reduce to 4 high-impact appearances, diversify with other vertical AI companies.

  2. Startup audience underserved in Part III. Operations chapters (8-10) leaned enterprise -- team structures, data governance, organizational change. Startup founders would feel the book stopped speaking to them. Fix: add startup-specific callouts and examples.

  3. Technical depth inconsistency. Part II went deep into architecture patterns and code examples. Part IV stayed strategic. The gap was jarring for technical readers. Fix: add implementation examples to Part IV's ethics and governance chapters.

2 Minor Fixes:

  1. Vague forward references. "We'll explore this later" doesn't build confidence. "Chapter 7's Agent Hub pattern addresses this directly" does.

  2. Part intros lacked connection previews. They listed what each chapter covers but not how they connect. One sentence about how Chapter 4's infrastructure decisions constrain Chapter 5's build-vs-buy choices makes the part feel intentional.

When to Run This

Run it after all sections are drafted and the automated quality audits pass. Too early and you waste the assessment on draft-quality problems. Too late and you can't address structural issues.

Plan 1-2 hours for a 12-chapter book -- the assessment runs in minutes, but the judgment about which Important fixes to prioritize is the real work.

What It Can't Tell You

The big themes review evaluates the book as a product. It won't catch that your opening hook in Chapter 4 falls flat or that Section 7.3 repeats an argument from Section 7.1. Those are editorial problems -- different layer, different tools.

It also can't tell you whether the book is interesting. A book can pass all 10 dimensions and still be boring. These dimensions are necessary conditions. The sufficient condition is a writer with something worth saying. No review system can assess that.


Related: Review Philosophy | Editorial Workflow | Quality Skills