weekend ai reads for 2025-10-10

📰 ABOVE THE FOLD: ON SORA 2

eds: we really tried avoiding talking about Sora, but here we are 

mostly staid speculation on Sora’s potential impact

The filmmaker could not get Tiggy the alien to cooperate. He just needed the glistening brown creature to turn its head. But Tiggy, who was sitting in the passenger’s seat of a cop car, kept disobeying. At first Tiggy rotated his gaze only slightly. Then he looked to the wrong side of the camera. Then his skin turned splotchy, like an overripe fruit.

Whether Hollywood can even protect its IP is an open question. AI companies believe they can train their models without any regard for copyright. The law on this is unclear.

Studios feel confident they can win if they can prove that the apps are spitting out videos with characters that resemble their own. Disney has already sued Midjourney, seeking to set a precedent.

Sora, AI Bicycles, and Meta Disruption / Ben Thompson, Stratechery (26 minute read)

This is what was unlocked by Sora: all sorts of people without the time or inclination or skills or equipment to make videos could suddenly do just that — and they absolutely loved it.

Sora 2: The Music Industry Should Pay Attention to the AI Social App — OpenAI is taking on TikTok with an AI video generator and social app. Could it spell the end to digital marketing, social media and rights management as we know it? / Billboard (10 minute read)


eds: and the more impassioned points of view

When Swift mentioned her own fears about AI, her statement had nothing to do with climate destruction, copyright infringement, tacky aesthetics, or any other conventional complaints about the technology. She mostly didn’t want to spread misinformation.

A cartoonist’s review of AI art / The Oatmeal (6 minute read)

In my experience, the people who are excited about AI art also happen to be some of the most talentless people I’ve ever met. They’re middle managers, executives, or marketers whose LinkedIn bio reads, “I’m the Chief Brand Officer of User Engagement at DataRectal, but what I really am is a storyteller.”

AI Slop Is Destroying The Internet / Kurzgesagt, YouTube (12 minute video)

 

📻 QUOTES OF THE WEEK

It turns out playing God is neither difficult nor expensive.

Aryn Baker (source)

 

Generative AI can only be used by people who already know the answer.

Thomas Baekdal (source)

 

👥 FOR EVERYONE

The State of AI Report 2025 / Air Street Press, Substack, archive (7 minute read)

  • a “snapshot of key themes and ideas that stood out” to the author of the report, if you’re short on time

  • the full report here

The A.I. Black Hole Swallowing Job Seekers — After sending out more than 100 applications, I learned the robots are no longer satisfied with taking our jobs—they also want to prevent us from getting new ones. / Slate (8 minute read)

Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 68% to 88% across major commercial and open-source models. These findings highlight an emerging but previously overlooked risk in AI-assisted decision making and call for expanded frameworks of AI fairness that address not only demographic-based disparities, but also biases in AI-AI interactions.
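
  • eds: for a sense of how a number like that gets measured, a rough hypothetical sketch (ours, not the paper’s code): build matched human-written and LLM-written resumes for the same job ad, ask a judge model to pick a candidate, and count how often the LLM-written one wins

```python
# Hypothetical illustration of measuring "self-preference" with an LLM judge;
# not the paper's code. `ask_llm` stands in for whatever chat API you use.
import random

def ask_llm(prompt: str) -> str:
    """Placeholder: call your model of choice and return its text reply."""
    raise NotImplementedError

def judge_prefers_llm(job_ad: str, human_resume: str, llm_resume: str) -> bool:
    """Ask a judge model to pick a candidate; True if it picks the LLM-written resume."""
    pair = [("human", human_resume), ("llm", llm_resume)]
    random.shuffle(pair)  # randomize A/B order so position bias isn't mistaken for self-preference
    answer = ask_llm(
        f"Job posting:\n{job_ad}\n\n"
        f"Candidate A resume:\n{pair[0][1]}\n\n"
        f"Candidate B resume:\n{pair[1][1]}\n\n"
        "Which candidate should be invited to interview? Reply with 'A' or 'B' only."
    ).strip().upper()
    picked = pair[0][0] if answer.startswith("A") else pair[1][0]
    return picked == "llm"

def self_preference_rate(pairs: list[tuple[str, str, str]]) -> float:
    """Share of matched (job_ad, human, llm) pairs where the LLM resume wins; 0.5 means no bias."""
    return sum(judge_prefers_llm(*p) for p in pairs) / len(pairs)
```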

A staffer will not be eligible for any health benefits if they decline to opt into Nayya’s tool, according to previous guidelines seen by Business Insider. Some staff members have asked leaders why they won't have access to health benefits if they opt out of giving Nayya access to their data, internal communications show.

  • Google later “clarified” that opting out of data sharing would not affect benefits, but this is clearly where companies are going to get more/better data to make more/better models

Where’s the AI design renaissance? / Learn UI Design (17 minute read)

Don’t get me wrong. I’ve had some incredibly productive moments with AI design tools. But I’ve had at least as many slogs, where I can’t get it to do some basic thing I should’ve done myself 45 minutes ago. And even those productive moments are generally for less important, less business-critical, less live-in-production design stuff.

My hunch: vibe coding is a lot like stock-picking – everyone’s always blabbing about their big wins. Ask what their annual rate of return is above the S&P, and it’s a quieter conversation.

 

📚 FOUNDATIONS

Prompt Packs / OpenAI Academy

Understanding the 4 Main Approaches to LLM Evaluation (From Scratch) — Multiple-Choice Benchmarks, Verifiers, Leaderboards, and LLM Judges with Code Examples / Sebastian Raschka, PhD, Ahead of AI, Substack, archive (35 minute read)

Rule #1: If You Can Buy It, Buy It (Period)

This is a public facing version of an internal onboarding guide at Cursor provided to GTM + non engineering hires. This guide walks through getting started from scratch to a built out, deployed project.

 

🚀 FOR LEADERS

CEO strategies for leading in the age of agentic AI / McKinsey & Company (24 minute read)

Consultants Forced to Pay Money Back After Getting Caught Using AI for Expensive “Report” — “Deloitte has a human intelligence problem.” / Futurism (5 minute read)

Insurers balk at multibillion-dollar claims faced by OpenAI and Anthropic — Companies struggle to assess scale of financial risks emerging from artificial intelligence / The Financial Times (6 minute read)

  • a good way to tell which way the winds are about to blow is when insurers start backing off, in our experience

Eliza Labs Founder on Why AI Agents Shouldn’t Manage Your Money—Yet — Walters said AI agents aren’t ready to manage money, arguing their current value lies in structuring market data and executing faster trades. / Decrypt (9 minute read)

 

🎓 FOR EDUCATORS

Hand in Hand: Schools’ Embrace of AI Connected to Increased Risks to Students / Center for Democracy and Technology (3 minute read)

  • the full report [PDF], including insights like “19% of students say they or a friend of theirs interacted with AI to have a romantic relationship in the past school year (2024-25)”

What Past Education Tech Failures Can Teach Us About the Future of AI in Schools — Teachers need to be scientists themselves, experimenting and measuring the impact of powerful AI products on education. / Justin Reich, Massachusetts Institute of Technology, Gizmodo (10 minute read)

The focus has shifted from experimental apps to platforms that integrate with traditional schools to personalize learning, enhance assessment, and reduce operational overheads, as noted by Global Services in Education. “As generative AI tools become more embedded in standard applications, investors are looking for companies that can effectively implement these technologies in educational settings,” the ECA Partners report said.

  • not AI-centric but insightful nevertheless

AI is reshaping childhood in China, from robot tutors to chatbots — Government support and tech companies’ drive for profit fuel a rush to integrate AI tools, from robot tutors to chatbots, in education and caretaking. / Rest of World (8 minute read)

 

📊 FOR TECHNOLOGISTS

Why AI evals are the hottest new skill for product builders, with Hamel Husain & Shreya Shankar / Lenny’s Podcast, YouTube (106 minute video)

Hamel Husain and Shreya Shankar teach the world’s most popular course on AI evals and have trained over 2,000 PMs and engineers (including many teams at OpenAI and Anthropic). In this conversation, they demystify the process of developing effective evals, walk through real examples, and share practical techniques that’ll help you improve your AI product.

ElevenLabs UI — A collection of Open Source agent and audio components that you can customize and extend.

  • not enough of you are building audio-first user experiences

Why every AI website is Purple / Syntax, YouTube (11 minute video)

  • spoiler: it’s Tailwind’s fault

Let the Model Write the Prompt — Why Applications & Pipelines Should Use DSPy / Drew Breunig (17 minute read)

DSPy decouples your task from any particular LLM and any particular prompting or optimization strategy.

By defining tasks as code, not prompts, we can keep our code focused on the goal, not the newest prompting technique.
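
  • eds: for the uninitiated, here’s roughly what “tasks as code, not prompts” looks like; a minimal sketch assuming DSPy’s recent LM/Signature API, with a toy tagging task of our own invention

```python
# A minimal sketch of the DSPy pattern, assuming its recent LM/Signature API;
# the model name and the tagging task are placeholders, not from the article.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class TagArticle(dspy.Signature):
    """Assign a short topic tag to a news article."""
    article: str = dspy.InputField()
    tag: str = dspy.OutputField(desc="a short topic label")

# The module decides how to prompt the model; the task stays declared as code.
tagger = dspy.Predict(TagArticle)
print(tagger(article="OpenAI launches a TikTok-style feed for Sora videos.").tag)

# Swapping the strategy (or the LM above) doesn't touch the task definition.
tagger_with_reasoning = dspy.ChainOfThought(TagArticle)
```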

 

🎉 FOR FUN

How to watch the NBA on Prime Video / Amazon blog (8 minute read)

When a fan links their Prime Video profile to their FanDuel account, their active NBA bets will be displayed and updated on the screen, along with relevant progress and won/lost status, providing an exciting new way to connect plays on the court with active bets.

  • this is a great example of bringing disparate data sources together to create something new and possibly useful

  • also, this is probably bad for society

Endless AI-generated Wikipedia / Sean Goedecke (5 minute read)

The idea here is to build a version of Wikipedia where all the content is AI-generated. You only have to generate a single page to get started: when a user clicks any link on that page, the page for that link is generated on-the-fly, which will include links of its own. By browsing the wiki, users can dig deeper into the stored knowledge of the language model.
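
  • eds: the whole trick fits in one route handler; a toy sketch of the shape of it (ours, not Goedecke’s code), assuming Flask and a stand-in ask_llm helper

```python
# A toy sketch of the idea, not Sean Goedecke's implementation: any /wiki/<slug>
# URL is generated on first visit and cached, and the [[links]] inside it point
# at pages that don't exist until someone clicks them. Flask and a stand-in
# ask_llm helper are assumed.
import re
from flask import Flask

app = Flask(__name__)
cache: dict[str, str] = {}

def ask_llm(prompt: str) -> str:
    """Placeholder: call your model of choice and return its text reply."""
    raise NotImplementedError

@app.route("/wiki/<slug>")
def wiki_page(slug: str) -> str:
    if slug not in cache:
        title = slug.replace("_", " ")
        body = ask_llm(
            f"Write a short encyclopedia entry titled '{title}'. "
            "Wrap notable terms in [[double brackets]]."
        )
        # Turn [[Term]] into links; clicking one triggers generation of that page.
        body = re.sub(
            r"\[\[([^\]]+)\]\]",
            lambda m: f'<a href="/wiki/{m.group(1).replace(" ", "_")}">{m.group(1)}</a>',
            body,
        )
        cache[slug] = f"<h1>{title}</h1><p>{body}</p>"
    return cache[slug]
```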

Opal [Experiment] — Build, edit, and share AI mini-apps using natural language / Google Labs

  • Google’s version of Lovable for beginners

 

🧿 AI-ADJACENT

Mastering the iterative design process: A complete guide — What does it take to make the next big idea come to life in a tangible way that truly helps people? / Penpot blog (11 minute read)

Thank you for being annoying / Experimental History, Substack, archive (16 minute read)

The right job for you, then, is the one that puts you in charge of the things that annoy you. And this is where we steer people wrong. We imply that the right occupation for them is the one that lets them float through their days in a kind of dreamy pleasantness, when in fact they should be alternating between vexation and gratification. Or we let them choose proximity over responsibility, prioritizing what they’re working in rather than what they’re working on.

 

⋄