weekend ai reads for 2026-02-20
direct links are available on the web at https://thataithing.beehiiv.com/p/weekend-ai-reads-for-2026-02-20
ABOVE THE FOLD: EXISTENTIAL CRISES
Circumstantial Complexity, LLMs and Large Scale Architecture – On software architecture and the paradox of machine-assisted programming. / Datagubbe (14 minute read)
There are paradoxes here that the proponents of coding agents rarely address. Lawson, for example, suggests that junior developers will “code circles” around their seniors, if the latter don’t adopt and adapt to the LLM craze. That may well be true, at least in some cases, but it leaves out the crucial detail of how juniors eventually become seniors.
Circumstance has a lot to do with it. We learn by making our own mistakes, not by letting someone else make them for us. We understand how systems work by examining them in great detail for extended amounts of time, not by having someone else build them for us. Complexity requires upkeep, upkeep requires skill, skill comes from grind, and grind takes time.
Something Big Is Happening / Matt Shumer (25 minute read)
this was sent by many people; your mileage may vary
related (1), Investors’ AI Panic Ignores the Facts / Parmy Olson, Bloomberg (8 minute read)
Of the 4,783 words in Something Big Is Happening, none point to quantifiable data or concrete evidence suggesting AI tools will put millions of white-collar professionals out of work any time soon. It is more testimony than evidence, with anecdotes about Shumer leaving his laptop and coming back to find finished code or a friend's law firm replacing junior lawyers.
related (2), Ed Zitron did not enjoy it (“I hate this guy’s writing so much”), and annotated the post with his thoughts, many of which are in the childish vein of “just stop talking” / Ed Zitron, Dropbox (30 minute read)
Future of Software Engineering Retreat Findings [PDF] / Thoughtworks (17 minute read)
Nobody at the retreat could define what product managers will do in an AI-driven world. Some organizations are pushing PMs closer to technical tooling, training them to work in Markdown and developer environments. Others see the roles diverging further, with PMs becoming strategic orchestrators while developers take on more of the tactical product decision-making.
What is clear is that AI is exposing existing dysfunctions in the PM-developer relationship rather than creating new ones.
The AI Vampire. / Steve Yegge, Medium (19 minute read)
I know this because they are literally telling me their plans like villains at the end of an old movie, since with Gas Town I have mastered the illusion of knowing what I’m doing. Truth is, nobody, least of all me, knows what they’re doing right now. But I look like I do, so everyone is coming to show me their almost uniformly terrible ideas.
How AI assistance impacts the formation of coding skills / Roger Wong (4 minute read)
This is the same dynamic I fear in design. When I wrote about the design talent crisis, educators like Eric Heiman told me “we internalize so much by doing things slower … learning through tinkering with our process, and making mistakes.” Bradford Prairie put it more bluntly: “If there’s one thing that AI can’t replace, it’s your sense of discernment for what is good and what is not good.” But discernment comes from reps, and AI is eating the reps.
How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt / Margaret Storey (5 minute read)
First, they may need to recognize that velocity without understanding is not sustainable. Teams should establish cognitive debt mitigation strategies.
QUOTES OF THE WEEK
Their goal is to maintain their personal benefit, as they see it, and all they are doing is attempting to negotiate with you what the level of abuse is that you find acceptable. Preventing abuse is not on their agenda.
Objects do reveal what we believe in but … they reveal systems and power, also.
FOR EVERYONE
A “QuitGPT” campaign is urging people to cancel their ChatGPT subscription / MIT Technology Review (9 minute read)
Dozens of left-leaning teens and twentysomethings scattered across the US came together to organize QuitGPT in late January. They range from pro-democracy activists and climate organizers to techies and self-proclaimed cyber libertarians, many of them seasoned grassroots campaigners.
As AI enters the operating room, reports arise of botched surgeries and misidentified body parts / Reuters (19 minute read)
At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations.
Cerebrospinal fluid reportedly leaked from one patient’s nose. In another reported case, a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.
related, A.I. Is Making Doctors Answer a Question: What Are They Really Good For? / New York Times (11 minute read)
The New Creator Economy Is Monetizing Your AI Brain / Rafat Ali, LinkedIn (5 minute read)
What if you could subscribe to a domain expert’s AI brain the way you subscribe to their newsletter? Not their chat history BUT the distilled strategic intelligence layer. The frameworks, the pattern recognition, the editorial judgment that shapes how the AI thinks about their domain. Updated, refined and made better with every conversation.
and look at that … Playbooks – Give your agents context to make them smarter
Securing Dual-Use Pathogen Data of Concern / Johns Hopkins, University of Oxford, et al., arXiv (24 minute read)
The type of data used to train a model is intimately tied to the capabilities it ultimately possesses – including those of biosecurity concern. For this reason, an international group of more than 100 researchers at the recent 50th anniversary Asilomar Conference endorsed data controls to prevent the use of AI for harmful applications such as bioweapons development. To help design such controls, we introduce a five-tier Biosecurity Data Level (BDL) framework for categorizing pathogen data. … In a world with widely accessible computational and coding resources, data controls may be among the most high-leverage interventions available to reduce the proliferation of concerning biological AI capabilities.
FOUNDATIONS
Introducing Lockdown Mode and Elevated Risk labels in ChatGPT / OpenAI blog (6 minute read)
Lockdown Mode is an optional, advanced security setting designed for a small set of highly security-conscious users – such as executives or security teams at prominent organizations – who require increased protection against advanced threats. It is not necessary for most users. Lockdown Mode tightly constrains how ChatGPT can interact with external systems to reduce the risk of prompt injection-based data exfiltration.
NotebookLM + Gemini Use Cases Guide / Daria Cupareanu, Google Docs (22 minute read)
You Could’ve Invented OpenClaw / Nader Dabit, Gist, GitHub (31 minute read)
this is technical but skip over the code and it’s still a thorough review of how agents work
In this post, I’ll start from scratch and build up to OpenClaw’s architecture step by step, showing how you could have invented it yourself from first principles, using nothing but a messaging API, an LLM, and the desire to make AI actually useful outside the chat window.
End goal: understand how persistent AI assistants work, so you can build your own (or become an OpenClaw power user).
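The “messaging API + LLM” recipe the post describes can be compressed into a toy loop. This is a minimal sketch, not OpenClaw’s actual code: `agent_turn`, `run_agent`, and the stub model are all hypothetical names invented here to show the one thing that makes an assistant persistent, namely carrying the conversation history across turns.

```python
# Toy sketch of a persistent-assistant loop (hypothetical names, not OpenClaw's API).
# `llm` stands in for whatever chat-completions call you actually use.

def agent_turn(history, user_message, llm):
    """One turn: record the user message, ask the model, remember the reply."""
    history.append({"role": "user", "content": user_message})
    reply = llm(history)  # in real life: a model API call with the full history
    history.append({"role": "assistant", "content": reply})
    return reply

def run_agent(messages, llm):
    """Feed a stream of inbound messages through ONE shared history."""
    history = []  # persistence is just this list surviving between turns
    return [agent_turn(history, m, llm) for m in messages]

# A stub "model" that reports how much context it received,
# making the growing history visible across turns.
echo_llm = lambda history: f"seen {len(history)} messages"

print(run_agent(["hi", "remember me?"], echo_llm))
# → ['seen 1 messages', 'seen 3 messages']
```

The second turn sees three messages, not one: that accumulated context is the whole difference between a stateless chat call and the persistent assistants the article builds toward.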
FOR LEADERS
KPMG pressed its auditor to pass on AI cost savings / The Financial Times (6 minute read)
As well as saying the auditor should be using AI and other new technology to improve efficiency, KPMG International argued that its books were not especially complicated and that, since Grant Thornton had been its accountant for several years, it knew the business well enough to do the work more quickly.
…
KPMG’s behind-the-scenes argument that new technology could justify a fee cut for its own audit could embolden companies to press their accountants for similar reductions.
no kidding
related, KPMG partner fined A$10,000 for using AI tools to cheat in test about AI / Business Standard (7 minute read)
Want More Out of Your AI Investments? Think People First – To unlock AI’s exponential productivity potential, companies must modernize workflow and workforce in tandem. / Bain & Company (18 minute read)
5 AI Metrics That Actually Prove ROI to Your Board – Executive leaders struggle to quantify and communicate AI’s value. These metrics change that. / Gartner (6 minute read)
The Fashion Execās Guide to the AI Career Reset / Vogue (14 minute read)
For leaders, these differences translate into competing needs: younger teams asking for boundaries and values, mid-career managers seeking capability and recognition, and senior professionals focused on strategic leverage.
related (?), Accenture ties promotions to AI tool usage while some employees call the tools “broken slop generators” – Accenture monitors individual AI tool logins and ties them to promotion decisions. Those who don’t adapt have to go. / The Decoder (5 minute read)
FOR EDUCATORS
Before You Buy AI for Your Campus, Read This / Marc Watkins, Substack, archive (15 minute read)
Students in high school and throughout higher education don’t trust that their use of institutionally provided AI tools won’t be monitored or linked back to them if they use AI inappropriately in a class. They prefer to use their own AI, and that largely remains ChatGPT.
Ex-Google exec says degrees in law and medicine are a waste of time because AI will catch up / Fortune (8 minute read)
“Higher education as we know it is on the verge of becoming obsolete,” Tarifi told Fortune. “Thriving in the future will come not from collecting credentials but from cultivating unique perspectives, agency, emotional awareness, and strong human bonds.
“I encourage young people to focus on two things: the art of connecting deeply with others, and the inner work of connecting with themselves.”
Why Unified Data Is Becoming Non-Negotiable for School Districts / EdTech Digest (10 minute read)
Districts don’t have a data collection problem, they have a data connection problem. The challenge is that information lives in too many places and rarely tells a clear story. The shift happens when districts unify their data in one place, simplify it into insights that actually make sense, and build those insights directly into the daily workflows of teachers, principals, and families.
FOR TECHNOLOGISTS
The Scarcity Trap: Why AI Still Feels Like a Metered Utility / Igor, Substack, archive (14 minute read)
When compute is expensive, the only sustainable model is to let customers own the compute.
counterpoint (?), The AI haterās guide to code with LLMs (The Overview) / All Confirmation Bias, All The Time (25 minute read)
A $5,000 budget would barely suffice to run something like gpt-oss 120b (OpenAI’s open model that is okay at code-writing tasks). Additionally, if you kept the model busy 100% of the time, you might be talking $50-$200 in electricity per month, depending on local prices.
If you spent $15,000 and tripled the electricity, you could run something like GLM-4.7 at a really good pace.
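The electricity figure is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming a rig drawing roughly 500-1000 W under sustained load and residential rates around $0.15-$0.28 per kWh (all illustrative numbers of mine, not from the article):

```python
def monthly_electricity_cost(watts, usd_per_kwh, hours=24 * 30):
    """Cost of running a box flat-out for a 30-day month: kW x hours x rate."""
    return watts / 1000 * hours * usd_per_kwh

# Illustrative assumptions: a ~500 W workstation on cheap power
# versus a ~1 kW rig on expensive power.
low = monthly_electricity_cost(500, 0.15)    # 0.5 kW * 720 h * $0.15 = $54
high = monthly_electricity_cost(1000, 0.28)  # 1.0 kW * 720 h * $0.28 = $201.60
print(f"${low:.0f}-${high:.0f} per month")   # in the ballpark of the $50-$200 figure
```

The spread in the article is mostly the spread in local power prices and how hard the hardware is actually driven.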
Desloppify – Agent toolset to help make your slop code well-engineered and beautiful. / peteromallet, GitHub
Detects subjective and mechanical code-base issues - everything from poor quality abstractions and inconsistent naming, to file complexity and duplication. Once identified, it tracks issues, and helps you work with your agents to relentlessly solve them. Currently supports TypeScript & Python.
The vibe coding trap / Max Musing, Twitter, archive (4 minute read)
As AI makes engineers more productive, the value of each engineering hour goes up, not down. If your engineers can ship 5x more product per hour, every hour they spend maintaining your internal admin panel or debugging your homegrown BI platform is 5x more expensive than it used to be. The more productive AI makes your team, the more costly it becomes to waste their time on undifferentiated internal software.
This is the vibe coding trap. It makes going from 0 to 1 cheaper but 1 to ∞ much more expensive.
FOR FUN
This was a 2 line prompt in seedance 2. If the hollywood is cooked guys are right maybe the hollywood is cooked guys are cooked too idk. / Ruairi Robinson, XCancel (2 minute video)
related (1), some talking heads talking about it, AI fight scene video of Tom Cruise and Brad Pitt goes viral / KTLA 5, YouTube (3 minute video)
related (2), We just made a $200,000,000 AI movie in just one day. Yes, this is 100% AI. / The Dor Brothers, XCancel (3 minute video)
Good Luck, Have Fun, Donāt Die: a wild parable about tech addiction / The Verge (7 minute read)
But as unhinged as Good Luck’s story becomes, it speaks directly to our present moment, in which we are constantly bombarded with brain-smoothing content while being pushed to unthinkingly adopt new technology.
Meme Dealer – Meme Your Chats
ai;dr / Sid’s Blog (2 minute read)
For me, writing is the most direct window into how someone thinks, perceives, and groks the world. Once you outsource that to an LLM, I’m not sure what we’re even doing here.
AI-ADJACENT
AI makes you boring / Marginalia (3 minute read)
This may be a feature if you are exploring a topic you are unfamiliar with, but it’s a fatal flaw if you are writing a blog post or designing a product or trying to do some other form of original work.
Why Does Every Cool Brand Eventually Become Boring? – A private equity story. / Sprezzatura, Substack, archive (19 minute read)
it’s always PE, isn’t it?
lot of lead-in material; the essay starts about three-sevenths of the way down