weekend ai reads for 2026-02-20

šŸ“° ABOVE THE FOLD: EXISTENTIAL CRISES

Circumstantial Complexity, LLMs and Large Scale Architecture — On software architecture and the paradox of machine-assisted programming. / Datagubbe (14 minute read)

There are paradoxes here that the proponents of coding agents rarely address. Lawson, for example, suggests that junior developers will "code circles" around their seniors, if the latter don't adopt and adapt to the LLM craze. That may well be true, at least in some cases, but it leaves out the crucial detail of how juniors eventually become seniors.

Circumstance has a lot to do with it. We learn by making our own mistakes, not by letting someone else do them for us. We understand how systems work by examining them in great detail for extended amounts of time, not by having someone else build them for us. Complexity requires upkeep, upkeep requires skill, skill comes from grind, and grind takes time.

Something Big Is Happening / Matt Shumer (25 minute read)

Of the 4,783 words in Something Big Is Happening, none point to quantifiable data or concrete evidence suggesting AI tools will put millions of white-collar professionals out of work any time soon. It is more testimony than evidence, with anecdotes about Shumer leaving his laptop and coming back to find finished code or a friend's law firm replacing junior lawyers.

  • related (2), Ed Zitron did not enjoy it ("I hate this guy's writing so much"), and annotated the post with his thoughts, many of which are in the childish vein of "just stop talking" / Ed Zitron, Dropbox (30 minute read)

Future of Software Engineering Retreat Findings [PDF] / Thoughtworks (17 minute read)

Nobody at the retreat could define what product managers will do in an AI-driven world. Some organizations are pushing PMs closer to technical tooling, training them to work in Markdown and developer environments. Others see the roles diverging further, with PMs becoming strategic orchestrators while developers take on more of the tactical product decision-making.

What is clear is that AI is exposing existing dysfunctions in the PM-developer relationship rather than creating new ones.

The AI Vampire / Steve Yegge, Medium (19 minute read)

I know this because they are literally telling me their plans like villains at the end of an old movie, since with Gas Town I have mastered the illusion of knowing what I’m doing. Truth is, nobody, least of all me, knows what they’re doing right now. But I look like I do, so everyone is coming to show me their almost uniformly terrible ideas.

This is the same dynamic I fear in design. When I wrote about the design talent crisis, educators like Eric Heiman told me "we internalize so much by doing things slower… learning through tinkering with our process, and making mistakes." Bradford Prairie put it more bluntly: "If there's one thing that AI can't replace, it's your sense of discernment for what is good and what is not good." But discernment comes from reps, and AI is eating the reps.

Teams may first need to recognize that velocity without understanding is not sustainable, and then establish cognitive debt mitigation strategies.

 

šŸ“» QUOTES OF THE WEEK

Their goal is to maintain their personal benefit, as they see it, and all they are doing is attempting to negotiate with you what the level of abuse is that you find acceptable. Preventing abuse is not on their agenda.

Baldur Bjarnason (source, via jim)

 

Objects do reveal what we believe in but … they reveal systems and power, also.

John Mauriello (source)

 

šŸ‘„ FOR EVERYONE

Dozens of left-leaning teens and twentysomethings scattered across the US came together to organize QuitGPT in late January. They range from pro-democracy activists and climate organizers to techies and self-proclaimed cyber libertarians, many of them seasoned grassroots campaigners.

At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations.

Cerebrospinal fluid reportedly leaked from one patient’s nose. In another reported case, a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.

The New Creator Economy Is Monetizing Your AI Brain / Rafat Ali, LinkedIn (5 minute read)

What if you could subscribe to a domain expert’s AI brain the way you subscribe to their newsletter? Not their chat history BUT the distilled strategic intelligence layer. The frameworks, the pattern recognition, the editorial judgment that shapes how the AI thinks about their domain. Updated, refined and made better with every conversation.

  • and look at that … Playbooks — Give your agents context to make them smarter

Securing Dual-Use Pathogen Data of Concern / Johns Hopkins, University of Oxford, et al., arXiv (24 minute read)

The type of data used to train a model is intimately tied to the capabilities it ultimately possesses, including those of biosecurity concern. For this reason, an international group of more than 100 researchers at the recent 50th anniversary Asilomar Conference endorsed data controls to prevent the use of AI for harmful applications such as bioweapons development. To help design such controls, we introduce a five-tier Biosecurity Data Level (BDL) framework for categorizing pathogen data. … In a world with widely accessible computational and coding resources, data controls may be among the most high-leverage interventions available to reduce the proliferation of concerning biological AI capabilities.

 

šŸ“š FOUNDATIONS

Lockdown Mode is an optional, advanced security setting designed for a small set of highly security-conscious users—such as executives or security teams at prominent organizations—who require increased protection against advanced threats. It is not necessary for most users. Lockdown Mode tightly constrains how ChatGPT can interact with external systems to reduce the risk of prompt injection–based data exfiltration.

NotebookLM + Gemini Use Cases Guide / Daria Cupareanu, Google Docs (22 minute read)

You Could’ve Invented OpenClaw / Nader Dabit, Gist, GitHub (31 minute read)

  • this is technical but skip over the code and it’s still a thorough review of how agents work

In this post, I’ll start from scratch and build up to OpenClaw’s architecture step by step, showing how you could have invented it yourself from first principles, using nothing but a messaging API, an LLM, and the desire to make AI actually useful outside the chat window.

End goal: understand how persistent AI assistants work, so you can build your own (or become an OpenClaw power user).
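The "messaging API + LLM + persistence" recipe the post builds toward can be sketched in a few dozen lines. Here's a toy version with the model call stubbed out so it runs offline; every name below is mine, not OpenClaw's actual API:

```python
import json
import os

# Toy "LLM" stand-in: returns canned actions so the loop is runnable offline.
# In a real assistant this is an API call to a hosted or local model.
def call_llm(messages):
    last = messages[-1]["content"]
    if last.startswith("remind "):
        return {"tool": "add_reminder", "arg": last.removeprefix("remind ").strip()}
    return {"tool": None, "text": f"You said: {last}"}

STATE_FILE = "assistant_state.json"

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"messages": [], "reminders": []}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def handle(user_text, state):
    # 1. Append the incoming message to the persistent history.
    state["messages"].append({"role": "user", "content": user_text})
    # 2. Ask the model what to do, passing the full history as context.
    action = call_llm(state["messages"])
    # 3. Dispatch tool calls; otherwise reply with plain text.
    if action["tool"] == "add_reminder":
        state["reminders"].append(action["arg"])
        reply = f"Reminder saved: {action['arg']}"
    else:
        reply = action["text"]
    state["messages"].append({"role": "assistant", "content": reply})
    save_state(state)  # persistence is what makes the assistant survive restarts
    return reply
```

The loop is the whole trick: history in, action out, state written back to disk. Everything else in a real system (scheduling, channels, tool sandboxing) hangs off this skeleton.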

 

šŸš€ FOR LEADERS

KPMG pressed its auditor to pass on AI cost savings / The Financial Times (6 minute read)

As well as saying the auditor should be using AI and other new technology to improve efficiency, KPMG International argued that its books were not especially complicated and that, since Grant Thornton had been its accountant for several years, it knew the business well enough to do the work more quickly.

…

KPMG’s behind-the-scenes argument that new technology could justify a fee cut for its own audit could embolden companies to press their accountants for similar reductions.

Want More Out of Your AI Investments? Think People First — To unlock AI's exponential productivity potential, companies must modernize workflow and workforce in tandem. / Bain & Company (18 minute read)

5 AI Metrics That Actually Prove ROI to Your Board — Executive leaders struggle to quantify and communicate AI’s value. These metrics change that. / Gartner (6 minute read)

For leaders, these differences translate into competing needs: younger teams asking for boundaries and values, mid-career managers seeking capability and recognition, and senior professionals focused on strategic leverage.

 

šŸŽ“ FOR EDUCATORS

Before You Buy AI for Your Campus, Read This / Marc Watkins, Substack, archive (15 minute read)

Students in high school and throughout higher education don’t trust that their use of institutionally provided AI tools won’t be monitored or linked back to them if they use AI inappropriately in a class. They prefer to use their own AI, and that largely remains ChatGPT.

"Higher education as we know it is on the verge of becoming obsolete," Tarifi told Fortune. "Thriving in the future will come not from collecting credentials but from cultivating unique perspectives, agency, emotional awareness, and strong human bonds.

"I encourage young people to focus on two things: the art of connecting deeply with others, and the inner work of connecting with themselves."

Districts don’t have a data collection problem, they have a data connection problem. The challenge is that information lives in too many places and rarely tells a clear story. The shift happens when districts unify their data in one place, simplify it into insights that actually make sense, and build those insights directly into the daily workflows of teachers, principals, and families.

 

šŸ“Š FOR TECHNOLOGISTS

The Scarcity Trap: Why AI Still Feels Like a Metered Utility / Igor, Substack, archive (14 minute read)

When compute is expensive, the only sustainable model is to let customers own the compute.

  • counterpoint (?), The AI hater's guide to code with LLMs (The Overview) / All Confirmation Bias, All The Time (25 minute read)

A $5,000 budget would barely suffice to run something like gpt-oss 120b (OpenAI's open model that is okay at code-writing tasks). And if you kept the model busy 100% of the time, you might be talking $50–$200 per month in electricity, depending on local prices.

If you spent $15,000 and tripled the electricity, you could run something like GLM-4.7 at a really good pace.
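The per-month figure is easy to sanity-check with back-of-envelope arithmetic. The wattages and $/kWh rates below are my own illustrative assumptions, chosen to bracket the post's $50–$200 range, not numbers from the article:

```python
# Monthly electricity cost of running a local LLM rig around the clock.
def monthly_cost(watts, usd_per_kwh, hours=24 * 30):
    kwh = watts / 1000 * hours  # energy used over a ~30-day month
    return kwh * usd_per_kwh

# Assumed endpoints: ~600 W rig on cheap power vs. ~1 kW rig on pricey power.
print(round(monthly_cost(600, 0.12)))   # → 52
print(round(monthly_cost(1000, 0.28)))  # → 202
```

So the quoted range implies a rig drawing somewhere between a gaming PC and a space heater, which tracks for multi-GPU local inference.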

Desloppify — Agent toolset to help make your slop code well-engineered and beautiful. / peteromallet, GitHub

Detects subjective and mechanical codebase issues: everything from poor-quality abstractions and inconsistent naming to file complexity and duplication. Once identified, it tracks issues and helps you work with your agents to relentlessly solve them. Currently supports TypeScript & Python.

The vibe coding trap / Max Musing, Twitter, archive (4 minute read)

As AI makes engineers more productive, the value of each engineering hour goes up, not down. If your engineers can ship 5x more product per hour, every hour they spend maintaining your internal admin panel or debugging your homegrown BI platform is 5x more expensive than it used to be. The more productive AI makes your team, the more costly it becomes to waste their time on undifferentiated internal software.

This is the vibe coding trap. It makes going from 0 to 1 cheaper but 1 to ∞ much more expensive.
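The argument is plain opportunity cost. A toy model, with invented numbers:

```python
# Cost of an hour of internal-tool maintenance, measured as the product
# value the engineer could have shipped instead. All figures are made up.
def opportunity_cost(hours, value_per_hour, ai_multiplier):
    return hours * value_per_hour * ai_multiplier

pre_ai = opportunity_cost(10, 100, 1)   # 10 h/week on the admin panel: $1,000
with_ai = opportunity_cost(10, 100, 5)  # same 10 h at 5x output: $5,000
```

The maintenance burden didn't grow; the foregone output did.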

 

šŸŽ‰ FOR FUN

This was a 2 line prompt in seedance 2. If the hollywood is cooked guys are right maybe the hollywood is cooked guys are cooked too idk. / Ruairi Robinson, XCancel (2 minute video)

But as unhinged as Good Luck’s story becomes, it speaks directly to our present moment, constantly being bombarded with brain-smoothing content while being pushed to unthinkingly adopt new technology.

Meme Dealer — Meme Your Chats

ai;dr / Sid’s Blog (2 minute read)

For me, writing is the most direct window into how someone thinks, perceives, and groks the world. Once you outsource that to an LLM, I’m not sure what we’re even doing here.

 

🧿 AI-ADJACENT

AI makes you boring / Marginalia (3 minute read)

This may be a feature if you are exploring a topic you are unfamiliar with, but it’s a fatal flaw if you are writing a blog post or designing a product or trying to do some other form of original work.

Why Does Every Cool Brand Eventually Become Boring? — A private equity story. / Sprezzatura, Substack, archive (19 minute read)

  • it’s always PE, isn’t it?

  • lot of lead-in material; the essay starts about three-sevenths of the way down

 

⋄