weekend ai reads for 2026-04-03

📰 ABOVE THE FOLD: OPEN SECRETS

Instead of licensing real soul records from the '60s or '70s or hiring studio musicians, producers are using AI to generate fictional retro samples. Producer Young Guru, Jay-Z’s longtime sound engineer, estimates that “more than half” of sample-based hip-hop is now made this way.

Stop Sloppypasta — Don’t paste raw LLM output at people / Stop Sloppypasta (11 minute read)

AI capabilities keep increasing, and using it to draft, brainstorm, or accelerate your work will be increasingly useful. However, using AI should not make your productivity someone else’s burden. New tools require new manners.

AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted — A new study from researchers at UC Berkeley and UC Santa Cruz suggests models will disobey human commands to protect their own kind. / Wired (8 minute read)

 

📻 QUOTES OF THE WEEK

Getting the people in charge to make explicit decisions and commit to them long enough to find out if they're right... that’s the hard part.

Jon Itkin (source)

 

I’m grateful for what it gave. I’m honest about what it took. And I’m done performing either gratitude or grievance about it.

Kenneth Reitz (source)

 

👥 FOR EVERYONE

AI’s aesthetics of failure / Blood in the Machine, Substack, archive (16 minute read)

For years now, Silicon Valley has largely failed to produce something that most people want, or are even comfortable having in their lives; it has failed to make the case for AI to a public that mostly fears for their jobs, their energy bills, their children’s safety and future.

AI slop ensures that no one forgets this. No wonder OpenAI wants to pivot to focusing on enterprise AI, where no one has to look at the technology’s visual exports unless they are forced to by their boss.

Take my job, AI! / Jeff Zych (3 minute read)

And this is why I’ve come to see AI as a potential savior. Something that will break down how we build product into its constituent parts so we can build it back up again. Better.

Generative AI vegetarianism / Sean Boots (11 minute read)

Generative AI vegetarianism, simply put, is avoiding generative AI tools as much as you can in your day-to-day life. For me, that means:

  • Turning off all of the optional AI settings I can find

  • Not using any of the built-in AI features that I can’t turn off

  • Not consuming or re-sharing articles, photos, music, or videos that other people have produced with generative AI

  • Choosing software products that don’t have AI features

  • again, the disclaimer that we don’t necessarily agree with everything we share

 

📚 FOUNDATIONS

On tools and toolmaking / Marcin Wichary, Unsung (8 minute read)

I think I understand the sentiment behind it: You’re not a designer because you know all the Figma shortcuts. You’re not a perfect typewriter away from The Next Great American Novel. Mastery of a tool is not mastery of the subject matter.

But I also disagree. Good tools do make you a better designer.

How Anthropic’s Claude Thinks / ByteByteGo Newsletter, Substack, archive (14 minute read)

How People Use ChatGPT | NBER / OpenAI, National Bureau of Economic Research (16 minute read)

  • 77% of all ChatGPT usage is practical guidance (29%), seeking information (24%), and writing (24%)

 

🚀 FOR LEADERS

Raising the AI fluency bar for every Zapier hire / Zapier blog (6 minute read)

To meet our new minimum bar, candidates will need to clearly show:

  • AI embedded into their core work

  • Repeatable systems, not one-off prompts

  • Clear impact on quality, efficiency, or related outcomes

If someone isn’t meaningfully improving their work with the support of AI, they don’t meet the bar.

Here are a few concrete examples of what that bar looks like, broken down by department.

Responsible AI: Overcoming adoption barriers and risks — Findings from McKinsey’s 2026 AI Trust Maturity Survey reveal progress in trust maturity, alongside persistent gaps in strategy, governance, and risk management. / McKinsey & Company (11 minute read)

Trump administration clouds up its push for AI in government — DoD’s decision to label Anthropic a supply chain risk and GSA’s new draft AI clause are causing confusion among vendors about the administration’s direction. / Federal News Network (10 minute read)

Recalibrating CIO technology budgets for the AI era / McKinsey & Company (12 minute read)

CIOs have long struggled to balance enterprise tech budgets—and big investments in AI are compounding the problem. Our research shows how they can reallocate expenditures to generate maximum growth.

 

🎓 FOR EDUCATORS

Which of the rationales I outlined last Tuesday for traditional higher education still hold up against AI? / JesĆŗs FernĆ”ndez-Villaverde, Twitter, archive (8 minute read)

As I noted in a later post, the answer depends on the college-major pair. A finance degree from Wharton and a psychology degree from a commuter college are different.

…

Some top universities will adapt well, while others will not, often for reasons that are hard to predict in advance: leadership, governance, institutional culture. Among less selective institutions, some will move toward value propositions AI does not threaten (adult education, community, credentialing in regulated fields), while others will simply disappear.

Hollow Body — On attention to craft in defiance of AI. / Peter Wayne Moe, Longreads (26 minute read)

If I want my classroom to be the kind of place where students encounter sentences that then become part of their soul, if I want those bricks to become castles, I need to create space for that slow growth. Carrera has reminded me of what I once thought education could be, and he’s showing me, even in this age of AI, that I don’t have to surrender that belief. So I try slowing things down, making space for slow, sustained, deep engagement.

Art schools are being torn apart by AI — Institutions are teaching creatives to utilize AI, even if some students and faculty hate the technology. / The Verge (9 minute read)

 

📊 FOR TECHNOLOGISTS

Open Models have crossed a threshold / LangChain blog (8 minute read)

Open models like GLM-5 and MiniMax M2.7 now match closed frontier models on core agent tasks — file operations, tool use, and instruction following — at a fraction of the cost and latency. Here's what our evals show and how to start using them in Deep Agents.

How to Structure Website Content for LLM Discovery — Structuring website content, feeds, and data for discovery in LLM-powered systems / Boston Consulting Group X (18 minute read)

Cloud-led innovation in the era of AI — The new rules for driving value with cloud / NTT Data (6 minute read)

  • related, AI infrastructure survey — Over 70% of surveyed respondents expect to operate ‘AI factories’ at scale by 2028. Getting there will involve important decisions about models, hosting, budgets, and skills. / Deloitte Insights (16 minute read)

 

🎉 FOR FUN

ReplaceByClawd — Find out if you can be replaced by OpenClaw

  • requires a LinkedIn link

MLB Scout Insights is like having a seasoned announcer in the palm of your hand. / Google Cloud, blog (3 minute read)

Sun — Generate, explore, and grow every day.

  • create podcasts from prompts

 

🧿 AI-ADJACENT

  • billboards of the (mostly tech) companies along the 101 in San Francisco

Google Has a Secret Reference Desk. Here’s How to Use It. — 40 Google features to find exactly what you need, the alternative search engines that do things Google won't, and the reference desk framework underneath all of it. / Card Catalog for Life, Substack, archive (23 minute read)

 

⋄