weekend ai reads for 2026-01-23

šŸ“° ABOVE THE FOLD: ON CODING AGENTS

It’s almost too easy to make new software, in fact, and that can be exhausting. One project idea would lead to another, and I was soon spending eight hours a day during my winter vacation shepherding about 15 Claude Code projects at once. That’s too much split attention for good results, but the novelty of seeing my ideas come to life was addictive.

Vibe Coding Without System Design is a Trap / Focused Chaos, Substack, archive (18 minute read)

The moment I went to post a job and saw the predetermined field choices I knew the system wasn’t flexible enough. I hadn’t designed the system properly and the AI filled in the gaps for me.

You can prompt it not to do this.

You can tell it to introduce global settings, environment variables, or config tables.

But you have to already know that it shouldn’t be hardcoded. That knowledge doesn’t come from vibe coding. It comes from experience (which I have, but wasn’t using at that moment!)

I was a top 0.01% Cursor user. Here's why I switched to Claude Code 2.0. — Here’s a comprehensive guide from someone who’s been using coding AI since 2021 and read all those Claude Code guides so you don't have to. / Silen Naihin (28 minute read)

I’ve spent 2000+ hours building with LLMs this year. These are the patterns that really work. I GUARANTEE that you have not heard of at least one of these tips.

Ralph is a development methodology based on continuous AI agent loops. As Geoffrey Huntley describes it: "Ralph is a Bash loop" - a simple while true that repeatedly feeds an AI agent a prompt file, allowing it to iteratively improve its work until completion.

The technique is named after Ralph Wiggum from The Simpsons, embodying the philosophy of persistent iteration despite setbacks.

Because tools ranging from Claude Code to Lovable typically don’t require robust coding knowledge just to get to a functional app, we are witnessing the early rise of micro apps. These are apps that are extremely context-specific, address niche needs, and then ā€œdisappear when the need is no longer present,ā€ Legand L. Burge III, a professor of computer science at Howard University, said.

 

šŸ“» QUOTES OF THE WEEK

Friction-maxxing is not simply a matter of reducing your screen time, or whatever. It’s the process of building up tolerance for ā€œinconvenienceā€ (which is usually not inconvenience at all but just the vagaries of being a person living with other people in spaces that are impossible to completely control) — and then reaching even toward enjoyment.

Kathryn Jezer-Morton (source)

 

I don’t think it’s very likely that it’s gonna be able to write anything meaningful, or that it’s going to be making movies from whole cloth, like Tilly Norwood. That’s bull.

Ben Affleck (source)

 

šŸ‘„ FOR EVERYONE

Personal Taste Is the Moat / Cong Wang (4 minute read)

But there’s something AI cannot do: tell you whether something should exist.

That requires taste: judgment formed by long exposure to the best work humans have done, and by living with the consequences of decisions over time. In the AI era, personal taste is the moat.

Why I Deleted ChatGPT After Three Years ā€” Ads are only the symptom of a bigger problem / Alberto Romero, The Algorithmic Bridge, Substack, archive (14 minute read)

AI assistants won’t be exempt from these dynamics just because they’re conversational rather than algorithmic; rather, the opposite is true: the degree to which there will be tier-dependent user experiences is going to be larger than on any other software category. And given that people nowadays use chatbots for everything, the degree to which these differences will impact their lives will also be greater.

The company promises it won’t ā€œoptimize for time spent in ChatGPT,ā€ and by this it means it won’t prompt its model to keep users engaged for as long as possible, and it won’t try to maximize the time that people’s eyeballs are spent on ChatGPT looking at ads. This pledge won’t only be hard to stick by, it’s also difficult to measure.

To reduce the spread of low quality AI content, we’re actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content.

When attention is fragmented and speed becomes the dominant value, media rearranges itself around that reality.

 

šŸ“š FOUNDATIONS

Why AI Keeps Falling for Prompt Injection Attacks / IEEE Spectrum (15 minute read)

The result is that the current generation of LLMs is far more gullible than people. They’re naive and regularly fall for manipulative cognitive tricks that wouldn’t fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency.

  • obviously an exaggeration but still a good overview

10-202: Introduction to Modern AI / Carnegie Mellon University

A minimal free version of this course will be offered online, simultaneous to the CMU offering, starting on 1/26 (with a two-week delay from the CMU course). By this, we mean that anyone will be able to watch lecture videos for the course, and submit (autograded) assignments (though not quizzes or midterms/final).

 

šŸš€ FOR LEADERS

AI markets feel chaotic because they are not markets at all. They are proto-markets: evolutionary precursors where demand exists but selection and speciation have yet to take hold.

How AI is Quietly Reshaping Executive Decisions [PDF] / Cap Gemini (11 minute read)

Kande attributed this tension not to the technology itself, but to a lack of foundational rigor. ā€œSomehow AI moves so fast … that people forgot that the adoption of technology, you have to go to the basics,ā€ he explained, citing the need for clean data, solid business processes, and governance. PwC is finding that the companies that are seeing benefits from AI are ā€œputting the foundations in place.ā€ It’s about execution, not technology, he argued, and that comes down to good management and leadership.

Two-thirds of nonmanagement staffers said they saved less than two hours a week or no time at all with AI. More than 40% of executives, in contrast, said the technology saved them more than eight hours of work a week.

But in the last six months, barely anyone has upleveled their AI skills beyond basic prompting. Less than 3% of the workforce are AI practitioners or experts - people who put AI to use in their workflows and see significant productivity gains.

 

šŸŽ“ FOR EDUCATORS

Is Educational Technology All It’s Cracked Up to Be? / Micah Blachman, Beehiiv (7 minute read)

So, for all the K-12 Systems Administrators reading this, maybe think twice about the tools you’re paying thousands of dollars a year for. Are they the best options — not even from the students’ perspectives, but from the teachers’ perspectives?

  • author is a 7th grader

When Everyone Has AI Access: Emerging Usage Patterns at Northeastern / AI in Learning, Beehiiv (14 minute read)

  • most usage is still short-term and fragmented, rather than sophisticated, sustained engagement

The Campus AI Crisis — Young graduates can’t find jobs. Colleges know they have to do something. But what? / New York Magazine (24 minute read)

This year, both a single reader and AI will give scores for each essay question, and if there’s a discrepancy, an additional human reader will also give a score. That’s so far saved an estimated 8,000 hours, according to Juan Espinoza, vice provost of enrollment management.

 

šŸ“Š FOR TECHNOLOGISTS

On Bicycles and AI / Struan Donald (4 minute read)

For the most part I enjoy my job. It is interesting and challenging in the right ways. Yes, there can sometimes be tedious bits to it but even those are enjoyable in a meditative way and I don’t think ridding myself of them would make me a better developer.

How to write a good spec for AI agents / Addy Osmani (36 minute read)

MCP servers are not thin wrappers around your existing API. A Good REST API is not a good MCP server. We assume just because LLMs are ā€œsmartā€, they can use APIs as a human would. Thats wrong.

 

šŸŽ‰ FOR FUN

Student arrested for eating AI art in UAF gallery protest / The Sun Star, University of Alaska Fairbanks (3 minute read)

On Tuesday, January 13, University of Alaska Fairbanks undergraduate student Graham Granger was detained after he had been found ā€œripping artwork off the walls and eating it in a reported protest,ā€ according to the UAF police department.

  • now that’s art

On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic’s Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called ā€œHumanizer,ā€ the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways.

  • related, Humanizer — Claude Code skill that removes signs of AI-generated writing from text, making it sound more natural and human. / blader, GitHub

The new relationship dealbreaker: using ChatGPT ā€” As we learn more about the potential harms of generative AI, some of us are losing patience with peers and loved ones who continue to use tools like ChatGPT / Dazed (9 minute read)

 

🧿 AI-ADJACENT

An Interview with United CEO Scott Kirby About Tech Transformation / Ben Thompson, Stratechery (58 minute read)

My mom recently broke a bone in her back and she’s laid up, and on the 12th day, things weren’t getting better and it was getting worse, so she was using ChatGPT, and she asked it, ā€œI was really doing good and this is my 12th day and it’s really badā€, and it gave her this detailed, ā€œYou used to be acute, now you’re subacuteā€, all these terms, and said, ā€œBut this is normal on the 12th dayā€. That’s BS, that’s not right. So I got on a different device and said, ā€œOn my 12th day, I felt remarkably betterā€, and the same system told me, ā€œThat’s normal, on the 12th day, you get remarkably betterā€, it is designed to tell you what you want to hear, not what you need to hear.

 

ā‹„