weekend ai reads for 2025-02-21
ABOVE THE FOLD: JOB SEEKING
apropos of nothing …
Employers Can Tell If You Used ChatGPT to Write Your Resume / Entrepreneur (6 minute read)
Many large companies do not tolerate AI use by candidates. An April survey from Resume Genius found that AI-generated resumes were the biggest red flag for 625 U.S. hiring managers.
But that doesn't mean companies oppose using it to make hiring decisions – over 97% of Fortune 500 companies use AI software to filter candidates.
AI Killed The Tech Interview. Now What? / Kane Narraway (7 minute read)
One of the things we can do, however, is change the nature of the interviews themselves. Coding interviews today are quite basic, anywhere from FizzBuzz, to building a calculator. With AI assistants, we could expand this 10x and have people build complete applications. I think a single, longer interview (2 hours) that mixes architecture and coding will probably be the way to go.
Apply Hero AI – Upload a resume, set your job preferences, and let our AI automatically customize and apply to thousands of high-quality jobs on your behalf.
Irony alert: Anthropic says applicants shouldn't use LLMs / Ars Technica (8 minute read)
QUOTE OF THE WEEK
I am what happens when you try to carve god out of the wood of your own hunger.
DeepSeek, as quoted by Josh Johnson (source)
FOR EVERYONE
The Generative AI Con / Where's Your Ed At?, Substack archive (30 minute read)
Deep Research has the same problem as every other generative AI product. These models don't know anything, and thus everything they do – even "reading" and "browsing" the web – is limited by their training data and probabilistic models that can say "this is an article about a subject" and posit their relevance, but not truly understand their contents.
related, New Junior Developers Can't Actually Code / Namanyay's Blog (5 minute read)
AI gives you answers, but the knowledge you gain is shallow. With StackOverflow, you had to read multiple expert discussions to get the full picture. It was slower, but you came out understanding not just what worked, but why it worked.
Your most important customer may be AI – As people rely more and more on artificial intelligence for recommendations on everything from product purchases to trip planning, brands are figuring out the new rules of the road. / MIT Technology Review (10 minute read)
It's hard to know how exactly to influence AI because many models are closed-source, meaning their code and weights aren't public and their inner workings are a bit of a mystery.
AI tool diagnoses diabetes, HIV and COVID from a blood sample – "One-shot" approach that uses machine learning to screen immune cells could help to detect conditions with overlapping symptoms. / Nature Magazine (6 minute read)
In a study of nearly 600 people, published in Science on 20 February, the tool identified whether participants were healthy or had COVID-19, type 1 diabetes, HIV or the autoimmune disease lupus, as well as whether they had recently received a flu vaccine.
FOUNDATIONS
The hottest AI models, what they do, and how to use them / Tech Crunch (9 minute read)
To cut through the noise, TechCrunch has compiled an overview of the most advanced AI models released since 2024, with details on how to use them and what they're best for. We'll keep this list updated with the latest launches, too.
related, The Deep Research problem / Benedict Evans (9 minute read)
This reminds me of an observation from a few years ago that LLMs are good at the things that computers are bad at, and bad at the things that computers are good at. OpenAI is trying to get the model to work out what you probably mean (computers are really bad at this, but LLMs are good at it), and then get the model to do highly specific information retrieval (computers are good at this, but LLMs are bad at it). And it doesn't quite work.
How I Use AI: Early 2025 / Ben Congdon (19 minute read)
"Give me 3 options": Whenever I'm generating text that will be used in a document or email, I always ask for multiple options. This allows me to either pick the best one or, more often, combine the best parts of each to create something that feels more natural and human. I don't trust any model to one-shot human-sounding text.
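The habit above is easy to bake into a small helper. A minimal sketch in Python; the function only builds the prompt string (no API call), and the exact wording is illustrative, not taken from the original post:

```python
# Sketch of the "give me 3 options" habit: instead of asking a model for one
# draft, wrap the task so it returns several drafts you can pick from or
# combine. The prompt wording here is a made-up illustration.

def options_prompt(task: str, n: int = 3) -> str:
    """Wrap a drafting task so the model returns n distinct drafts."""
    return (
        f"{task}\n\n"
        f"Give me {n} distinct options, numbered 1-{n}. "
        "Vary the tone and structure so I can combine the best parts."
    )

print(options_prompt("Draft a short email declining a meeting politely."))
```

You would then pass the resulting string to whatever chat interface or API you already use.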
Introducing our short course on AGI safety / DeepMind Safety Research, Medium (5 minute read)
It covers alignment problems we can expect as AI capabilities advance, and our current approach to these problems (on technical and governance levels). If you would like to learn more about AGI safety but have only an hour to spare, this course is for you!
we have started but not completed this course; useful for its intended audiences so far, we reckon
FOR LEADERS
AI or Die / Ravi Gupta blog (8 minute read)
Is your company positioned to take advantage of access to a country of geniuses in a datacenter?
Is your company positioned to compete with the company that does take advantage of access to a country of geniuses in a datacenter?
How Will Foundation Models Make Money in the Era of Open Source AI? / Artificial Intelligence Made Simple, Substack archive (22 minute read)
API pricing currently faces a huge problem since it turns LLMs into commodities (raw materials used by other products for value adds), opening them to price wars. Since commodities are very tied to price, this leads to massive price wars and low margins for everyone involved. This is why estimates say that LLM API margins have been dropping…
AI Essentials for Tech Executives / Hamel Husain & Greg Ceccarelli, O'Reilly (20 minute read)
Focusing on tools over processes is a red flag and the biggest mistake I see executives make when it comes to AI.
FOR EDUCATORS
The Costs of AI in Education – There's a human price we aren't talking about / Rhetorica (Marc Watkins), Substack archive (14 minute read)
What's really going on with campus-wide AI adoption is a mix of virtue signaling and panic purchasing. Universities aren't paying for AI – they're paying for the illusion of control.
2025 Educause AI Landscape Study: Into the Digital AI Divide / Educause (38 minute read)
Respondents from smaller institutions are remarkably similar to respondents from larger institutions in their personal use of AI tools, their motivations for institutional use of AI, and their expectations and optimism about the future of AI. Respondents from small and larger institutions differ notably, however, in the resources, capabilities, and practices they're able to marshal for AI adoption.
Modern-Day Oracles or BS Machines?: Instructor Guide (31 minute read)
Our aim is not to teach students the mechanics of how large language models work, nor even the best ways of using them in various technical capacities.
We view this as a course in the humanities, because it is a course about what it means to be human in a world where LLMs are becoming ubiquitous, and it is a course about how to live and thrive in such a world. This is not a how-to course for using generative AI. It's a when-to course, and perhaps more importantly a why-not-to course.
Five questions for two authors on the uses and abuses of AI – The authors of a book on teaching with artificial intelligence answer our pressing questions about its uses, abuses and future in the classroom. / Inside Higher Ed (7 minute read)
To do that we need to have some experience ourselves, but that also gives us an opportunity to demonstrate to our students that learning is indeed a lifelong pursuit. This is a great opportunity to engage with our students in what is an important and complex problem.
FOR TECHNOLOGISTS
AI is Stifling Tech Adoption / Vale Rocks (10 minute read)
Lack of AI support prevents a technology from gaining the required critical adoption mass, which in turn prevents a technology from entering use and having material made for it, which in turn starves the model of training data, which in turn disincentivises selecting that technology, and so on and so forth.
Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / Carnegie Mellon University & Microsoft, arXiv (79 minute read)
Opinions on this risk varied: some participants believed that combining high-quality synthetic data with real data could sustain model development without severe degradation, while others viewed this distribution shift as a slippery slope that resulted in compounding errors that threatened the integrity of models over time. The lack of consensus around this topic was described as a challenge to both understanding the problem and formulating appropriate solutions.
How to build full-stack apps with OpenAI o1 pro - Part 1 / Mckay Wrigley, YouTube (238 minute video)
[A software developer's] task is moving from writing code to being a context manager.
fascinating 4 hour look into how to effectively use AI on large projects; this is part 1
note he doesn't start generating code until 2 hours into it; a lot of setup and heavy prompt and context management to get to that point
FOR FUN
How de-aging in movies got so good / Vox, YouTube (9 minute video)
also on how actors and directors got so good at using it
Idiosyncrasies in Large Language Models / arXiv (32 minute read)
Since the coefficients of a logistic regression model provide a natural ranking for its features, we leverage these coefficients to highlight important phrases in the classification task. Figure 6 presents the top 10 phrases with the largest logistic regression coefficients for each of the five chat API models. Notably, these phrases often serve as transitions or emphasis in sentences.
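The ranking step described in the quote is simple to sketch: once a (multinomial) logistic regression is fitted to predict which chat model produced a response, each class's coefficient vector scores the input phrases, and the largest positive coefficients mark the most indicative phrases. A toy illustration with made-up phrases and coefficients (not the paper's actual features or values):

```python
# Toy illustration of ranking phrases by logistic regression coefficients.
# The phrases and numbers below are invented for demonstration only.

phrases = ["certainly", "let's delve", "in summary", "as an ai", "moreover"]

# One coefficient vector per class (here, per hypothetical chat model).
coef = {
    "model A": [0.1, 2.3, 0.4, 0.2, 1.1],
    "model B": [1.9, 0.2, 0.3, 1.5, 0.1],
}

def top_phrases(coef_row, phrases, k=2):
    """Return the k phrases with the largest coefficients for one class."""
    ranked = sorted(zip(coef_row, phrases), reverse=True)
    return [phrase for _, phrase in ranked[:k]]

for model, row in coef.items():
    print(model, "->", top_phrases(row, phrases))
# model A -> ["let's delve", 'moreover']
# model B -> ['certainly', 'as an ai']
```

The same idea scales to the paper's setting by replacing the toy vectors with the coefficients of a classifier fitted on n-gram features of real model outputs.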
Using ChatGPT as a Focus Group / James Breckwoldt, Substack archive (54 minute read)
counterpoint, The Case Against AI-Generated Users â And alternative suggestions for design research. / Ideo (10 minute read)
AI-ADJACENT
Satya Nadella – Microsoft's AGI Plan & Quantum Breakthrough / Dwarkesh Patel, YouTube (76 minute video)