weekend ai reads for 2026-03-20

šŸ“° ABOVE THE FOLD: ORGANIZATIONAL DESIGN

Strategy Summit 2026: Why AI Means Radical Change / HBR IdeaCast (30 minute listen)

In this episode, Harvard Business School professor Tsedal Neeley shares what she’s learned about successful AI implementation and organizational transformation: the minimum technological capabilities needed, what it takes to overcome silos, and how to transform workflows and processes to add real value.

Management In The Age Of AI / staysaasy, Twitter, archive (5 minute read)

AI tools are moving to consumption-based pricing, which means managers are going to have to think about how much money to invest in each individual. This is a massive paradigm shift. It’s like if you had to decide every month how good of a laptop each person on your team gets, and sometimes people run out of laptop halfway through the month.

Human Strategy In An AI-Accelerated Workflow / Smashing Magazine (12 minute read)

AI isn’t replacing that work. Rather, it’s amplifying everything around it. The real shift happening is that designers are moving from being makers of outputs to directors of intent. From creators to curators. From hands-on executors to strategic decision-makers.

Every Company is a Startup Now / Hardik Pandya (11 minute read)

The AI wave doesn’t care about your org chart. It isn’t slowing down while you restructure. The window to get the right people doing the right work is open right now, and it won’t stay that way.

AI Exposure of the US Job Market / Josh Kale, GitHub

 

šŸ“» QUOTES OF THE WEEK

Most excellence is merely creating a plan and sticking to it.

Daniel Frank (source)

 

[LinkedIn] has always been a disaster but I wonder what it will feel like when it’s a meta-disaster of bots writing and bots replying. In some ways it’s already the human equivalent of Moltbook, so maybe it’ll be better.

Paul Ford (source)

 

šŸ‘„ FOR EVERYONE

AI usage among doctors doubles as confidence in technology grows / American Medical Association (6 minute read)

Clinical use of AI continues to grow in both prevalence and scope. Over 80 percent of physicians now use AI professionally, doubling since 2023. The average number of use cases per physician is 2.3 in 2026, up from 1.1 in 2023.

  • remember when doctors would roll their eyes when a patient brought up something the patient had read on WebMD? how the tables have turned …

Courts will soon wrestle with questions such as how exactly AI affects what constitutes ā€œreadily ascertainableā€ information that therefore isn’t secret enough for protection.

When a buyer asks about something you listed, the technology scans your post and drafts a response using the details you already entered. The AI confirms availability, restates the price, and even includes the pickup location if you set one.
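The mechanics described above — drafting a reply from fields the seller already entered — can be sketched in a few lines. This is purely illustrative: the `Listing` shape, field names, and `draft_reply` function are assumptions for the sketch, not any marketplace’s real API.

```python
# Hypothetical sketch of the auto-reply flow: confirm availability,
# restate the price, and include the pickup location if one was set.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Listing:
    title: str
    price: float
    available: bool
    pickup_location: Optional[str] = None


def draft_reply(listing: Listing, question: str) -> str:
    """Draft a response to a buyer using only the seller-entered details."""
    if not listing.available:
        return f"Sorry, the {listing.title} is no longer available."
    parts = [f"Yes, the {listing.title} is still available for ${listing.price:.2f}."]
    if listing.pickup_location:
        parts.append(f"Pickup is at {listing.pickup_location}.")
    return " ".join(parts)
```

The point of the sketch is how little generative AI is strictly needed here: most of the reply is template-fill from structured data the seller already provided.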

Rescue dog Rosie’s cancer shrinks after world-first mRNA vaccine — The tale of this heartbroken tech entrepreneur, his tumour-riddled rescue dog and a cure for cancer has leading scientists astounded. / The Australian (13 minute read)

Tech executives have promised that AI will cure cancer. The reality is more complicated — and more hopeful. This essay examines where AI genuinely accelerates cancer research, where the promises fall short, and what researchers, policymakers, and funders need to do next.

Software Bonkers / Craig Mod (9 minute read)

I’m software bonkers: I can’t stop thinking about software. And I can’t stop building software.

 

šŸ“š FOUNDATIONS

How Do You Want to Remember? / Zak El Fassi (13 minute read)

I asked my AI agent how it wants to remember things. It redesigned its own memory system, ran a self-eval, diagnosed its blindspots, and improved recall from 60% to 93% — for two dollars. The interesting part isn't the benchmark. It's what happens when you treat an AI as a participant in its own cognitive architecture.
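A recall number like the 60% → 93% quoted above implies some eval harness behind it. As a minimal sketch (my assumption of the shape, not the author’s actual code): probe the memory system with queries whose answers are known, and score recall as the fraction retrieved.

```python
# Minimal sketch of a memory-recall eval: probes are (query,
# expected_substring) pairs, and retrieve() is a stand-in for
# whatever memory backend is under test.
def recall_score(probes, retrieve):
    """Return the fraction of probes whose expected fact was retrieved."""
    hits = sum(
        1
        for query, expected in probes
        if expected.lower() in retrieve(query).lower()
    )
    return hits / len(probes)
```

Swapping in a redesigned memory backend and re-running the same probes is what makes a before/after comparison like 60% vs. 93% meaningful.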

LLM Architecture Gallery / Sebastian Raschka, PhD (13 minute read)

How to Make Sense of AI / Cedric Chin, Commoncog (20 minute read)

Your LLM Doesn't Write Correct Code. It Writes Plausible Code. / Vagabond Research, Substack, archive (24 minute read)

LLMs optimize for plausibility over correctness. In this case, plausible is about 20,000 times slower than correct.
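The 20,000x figure is from the article’s own case study; as a generic illustration of the same failure mode (mine, not the article’s), here is a deduplication function an LLM might plausibly write next to the correct version. Both return identical output, and the code even looks nearly identical — only the runtime diverges as input grows.

```python
# Illustrative only: "plausible" O(n^2) dedupe vs. the O(n) version.
def dedupe_plausible(items):
    seen = []                      # list membership test: O(n) per check
    out = []
    for x in items:
        if x not in seen:
            seen.append(x)
            out.append(x)
    return out


def dedupe_correct(items):
    seen = set()                   # set membership test: O(1) average
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

The danger is exactly what the quote names: the slow version passes every correctness test, so plausibility-optimized code review never catches it.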

 

šŸš€ FOR LEADERS

How Schneider Electric Scales AI in Both Products and Processes — The French multinational avoids miring its innovations in pilot purgatory by moving forward with reasonable confidence rather than absolute certainty. / MIT Sloan Management Review (12 minute read)

And while it is possible that AI itself could serve this function, AI still suffers from a trust deficit—most boards would still rather put their faith in advice from McKinsey or BCG than ChatGPT. (A more cynical take: CEOs still like to use consultants to justify their own decisions to boards, as well as to have someone else to blame if it all goes wrong.)

The Corporate Strategy Function in an AI-First World [PDF] / Boston Consulting Group (13 minute read)

 

šŸŽ“ FOR EDUCATORS

  • Zvi’s take: ā€œMostly it reads as if the whole enterprise was already mostly fake, or when it wasn’t fake it succeeded in spite of its formal structures.ā€

How districts are experimenting with outcomes-based contracts in ed tech — As school leaders try to get a handle on ed tech investments, some are looking to make payments contingent upon student achievement. / K-12 Dive (8 minute read)

Teens Are Using AI-Fueled ā€˜Slander Pages’ to Mock Their Teachers — Viral student-run TikTok and Instagram accounts are using AI to make memes of school faculty comparing them to figures like Jeffrey Epstein and Benjamin Netanyahu. / Wired (12 minute read)

 

šŸ“Š FOR TECHNOLOGISTS

The Multi-Agent Trap / Towards Data Science (15 minute read)

Adding more AI agents makes most systems worse. Three architecture patterns separate the $60M wins from the 40% that get canceled.

I Built 63 Design Skills For Claude - and They’re Free — Teaching AI what designers know so it can work with us, not around us. / Marie Claire Dean, Substack, archive (7 minute read)

Coding agents for data analysis / Simon Willison, GitHub

A three-hour workshop presented by Simon Willison at NICAR 2026.

Coding agents such as Claude Code and OpenAI Codex are mainly marketed at developers, but they’re actually applicable to a much wider array of problems, including data analysis, data cleaning, web scraping and other tasks commonly faced by data journalists.

Why Your Database Can’t Handle the Coming Agent Swarm / Gradient Flow, Substack, archive (8 minute read)

They aren’t just faster versions of PostgreSQL or MySQL; they represent a fundamental rethink of how databases should work in an agent-driven world. Let me walk through the core principles driving this transformation.

 

šŸŽ‰ FOR FUN

Humans have been, and always will be, important for watching ads.

The Appalling Stupidity of Spotify’s AI DJ / Charles Petzold (7 minute read)

The use of the word ā€œsongā€ for instrumental music — that is, music that is not sung and hence is not a song — is borderline illiterate. It illustrates more than anything how the entire system is designed for pop songs. For music of the western tradition, the word ā€œcompositionā€ or ā€œworkā€ or ā€œpieceā€ is used except, of course, if the composition is actually a song.

  • again, we appreciate pedantic outrage

 

🧿 AI-ADJACENT

The Last Quiet Thing / Terry Godier (11 minute read)

Sometime in the last twenty years, our possessions came alive.

Nothing you own is finished. Everything exists in a state of permanent incompletion, permanently needing.

  • best thing you’ll read this week

A short guide to email opening lines / The Economist, archive (6 minute read)

I hope this email isn’t interrupting anything urgent.

Ostensible meaning: I am respectful of your time.
Actual meaning: I have no idea how email works.

 

ā‹„