Writing

Doctrine, essays, and operating language for leaders making consequential AI decisions

My writing is for people making real decisions about AI under real constraint: CEOs, CFOs, boards, private-equity operating partners, senior operators, and technical leaders working close to live consequences.

Across Substack, LinkedIn, the book, and earlier podcast conversations, the throughline is consistent:

- Cut through noise.
- Surface architectural truth.
- Give language to decisions that carry consequences.

This is not commentary for clicks.

It is a body of work on AI as capital discipline: delegation, control, workflow design, operating models, proof, and the conditions that must be true before intelligent systems touch live work.

If capital is about to be committed, dependency is widening, or activity is being mistaken for evidence, this is where the thinking slows down enough for the real decision to become visible.

Substack — Unhyped AI
Long-form essays and deeper analysis for leaders who need to think clearly before acting.

This is the primary home of the work: operating models, delegation, control, workflow redesign, proof, economic defensibility, and the realities that sit underneath the AI story.

It is where ideas are developed properly, patterns are named, and the writing moves beyond reaction into doctrine.

Subscribe to Unhyped AI →

LinkedIn
Short-form thinking, sharper formulations, and live public testing of ideas.

This is where concepts are pressure-tested, assumptions are challenged, and emerging patterns first become visible.

Follow on LinkedIn →

The Book
UNHYPED: From Hype to Hard ROI in the Age of AI
A strategic and architectural field guide for leaders navigating intelligent systems.

The book brings the core ideas together into a durable framework for value flows, governance, organisational readiness, operating-model design, and the decisions that determine whether AI creates real value or expensive noise.

Learn more →

The Decision-Forcing Canvas
A practical artefact for leaders and teams who need clarity before momentum takes over.

The canvas exists to force the decision before the build: what job is changing, where the boundary sits, who owns the outcome, what the system is allowed to do, how it could fail, and what must be true before it is responsible to proceed.

It translates the writing into a disciplined decision surface without prescribing solutions or encouraging premature action.

Explore the Canvas →

Podcast Archive
A short-run archive of conversations on judgment, coordination, risk, and the organisational realities of AI.

Across eleven episodes, we explored themes that remain central to the work:
- why AI fails in workflows, not models
- the shape and limits of agentic systems
- governance and decision architecture
- where human judgment remains essential

The conversations are a useful companion to the written work.

Listen to past episodes →

Stay Connected
For new essays, analysis, and updates:

Subscribe to Unhyped AI on Substack →