weekend ai reads for 2025-10-17

📰 ABOVE THE FOLD: ESOTERIC RESEARCH

Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy / Pennsylvania State University, arXiv (8 minute read)

Using ChatGPT 4o, we evaluated responses across these conditions and applied paired sample t-tests to assess statistical significance. Contrary to expectations, impolite prompts consistently outperformed polite ones, with accuracy ranging from 80.8% for Very Polite prompts to 84.8% for Very Rude prompts.
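
For the statistically curious, the paper's headline comparison boils down to a paired-sample t-test over the same questions asked in different tones. A minimal sketch (the per-question scores below are invented placeholders, not the paper's data):

```python
# Paired-sample t-test sketch: same questions, two politeness conditions.
# Scores are made-up placeholders (1 = correct, 0 = incorrect).
from scipy import stats

very_polite = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
very_rude   = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]

t_stat, p_value = stats.ttest_rel(very_rude, very_polite)
print(f"polite accuracy: {sum(very_polite) / len(very_polite):.1%}")
print(f"rude accuracy:   {sum(very_rude) / len(very_rude):.1%}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```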

Can Large Language Models Develop Gambling Addiction? / Gwangju Institute of Science and Technology, arXiv (48 minute read)

When given the freedom to determine their own target amounts and betting sizes, bankruptcy rates rose substantially alongside increased irrational behavior, demonstrating that greater autonomy amplifies risk-taking tendencies. Through neural circuit analysis using a Sparse Autoencoder, we confirmed that model behavior is controlled by abstract decision-making features related to risky and safe behaviors, not merely by prompts. These findings suggest LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns, emphasizing the importance of AI safety design in financial applications.
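
If "Sparse Autoencoder" is unfamiliar: it is a small auxiliary model trained to re-express an LLM's internal activations as a wide, mostly-inactive set of features that are easier to interpret. A minimal sketch in PyTorch (dimensions and the sparsity weight are illustrative, not taken from the paper):

```python
# Minimal sparse autoencoder over LLM activations (illustrative only).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_features=8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))   # most features stay at exactly zero
        return self.decoder(feats), feats

sae = SparseAutoencoder()
acts = torch.randn(32, 768)                      # stand-in for captured activations
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()  # reconstruction + L1 sparsity
loss.backward()
```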

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence / Stanford University, Carnegie Mellon University, arXiv (36 minute read)

However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy.

Age and gender distortion in online media and large language models / Stanford University, University of California Berkeley, Oxford University, Nature (54 minute read)

  • when asked to generate an image or a resume for specific professions, LLMs portray women as younger than men

and because reading papers can be tedious:

Paper2Video: Automatic Video Generation from Scientific Papers / Show Lab, National University of Singapore, GitHub (4 minute read)

 

📻 QUOTES OF THE WEEK

Avoid stupidity. Avoid the first step on a bad path.

So much of success isn’t about brilliance. It’s about avoiding unforced errors.

Alex Morris (source)

 

I don’t want to do anything but look and listen and smell; what else is there to do?

“The Veldt” by Ray Bradbury (source, via lydia)

 

👥 FOR EVERYONE

A Techno Optimist’s Guide to Raising Kids for the Age of AI — AI doomers and bloomers are girding themselves for what’s coming — starting with their kids. / New York Magazine (36 minute read)

OpenAI’s Nvidia, AMD Deals Boost $1 Trillion AI Boom With Circular Deals — A wave of deals and partnerships is escalating concerns that the trillion-dollar AI boom is being propped up by interconnected business transactions. / Bloomberg (10 minute read)

This move represents what the industry calls agentic commerce, where AI moves from reactive to proactive shopping. Rather than waiting for a user to click or search, AI learns and predicts needs, such as suggesting grocery items before you realize you are out.

 

📚 FOUNDATIONS

Everyone should be using Claude Code more — How to get started, and 50 ways non-technical people are using Claude Code in their work and life / Lenny’s Newsletter, Substack, archive (11 minute read)

  • you only get 18 of the 50 ways if you’re not a subscriber

AI and Labor Markets: What We Know and Don’t Know / Stanford Digital Economy Lab (27 minute read)

7. We do not know how employment trends will progress going forward

8. We do not know which jobs will have growing future demand

9. We have little evidence on how AI is reshaping the education landscape

  • and eleven other observations

 

🚀 FOR LEADERS

The infrastructure of meaninglessness / Our Collective Futures (19 minute read)

Here's the crucial point that reveals the political nature of AI deployment: artificial intelligence could easily replace most management functions. Engagement metrics, OKRs, performance reviews, agile methods, innovation labs... You get the picture. These are exactly the kind of pattern-recognition and optimization problems that current AI could do well at solving. Now, I am not saying that this would be a good thing. I am just saying that this is a class dynamic that we should unpack. A properly configured system could theoretically perform the core functions of middle management more consistently and efficiently than humans.

That’s why Hanneke Faber, CEO of global tech manufacturing company Logitech, said she’d be open to the idea of having an AI-powered board member.

“As they evolve—and some of the best agents or assistants that we’ve built actually do things themselves—that comes with a whole bunch of governance things,” Faber said. “You have to keep in mind and make sure you really want that bot to take action. But if you don’t have an AI agent in every meeting, you’re missing out on some of the productivity.”

Why your boss isn’t worried about AI / Boyd Kane’s blog (11 minute read)

The problem is that this understanding, when applied to AIs like ChatGPT, is completely wrong. The software that runs AI acts very differently to the software that runs most of your computer or your phone. Good, sensible assumptions about bugs in regular software actually end up being harmful and misleading when you try to apply them to AI.

  • we don’t know anyone who thinks like this, but your mileage may vary

 

🎓 FOR EDUCATORS

Universities Are Part of the Cursor Resistance / The Information ($) (8 minute read)

But there’s a wrinkle to this story: some engineering students have been slow to embrace these tools because they’ve been told not to use them. Three undergraduate students at the University of California, Berkeley told me that their professors typically ban coding help from chatbots and AI coding tools.

And there’s a good reason for that: “I’m pretty sure that Cursor can one-shot most Berkeley assignments,” one of these students said. In other words, Cursor could ace their homework on the first try.

So these students were in for some culture shock when they spent the past summer interning at Amazon, where their managers strongly encouraged them to use AI coding tools. When they used Cline, the coding agent of choice for their teams, their managers told them to keep up the good work. When they didn’t use Cline, their managers asked why not.

One intern recounted bringing errors to his manager for help solving them. The manager would copy and paste the code into Cline and instruct the AI to fix the error, instead of fixing the bug manually.

As a result, the intern said he wrote fewer than 100 lines of code himself over the summer, while Cline wrote thousands. A spokesperson for Amazon said employees are encouraged but not required to use AI tools.

How Teachers and Administrators Can Contribute to AI Transparency / Technological Horizons in Education, The Journal (8 minute read)

Ongoing professional learning and collaborative problem-solving are critical. Lessons from past technology rollouts, such as 1:1 laptop initiatives, show that distributing devices alone is insufficient. Without pedagogical guidance, even well-intentioned technology investments fall short. AI offers an opportunity to do it differently: to integrate tools thoughtfully, give teachers a sandbox to explore, and foster communities where they can share insights, challenges, and successes.

How AI Is Rewriting The Future Of Humanities Education / Forbes (8 minute read, via steve)

 

📊 FOR TECHNOLOGISTS

Your data model is your destiny ā€” Your product’s core abstractions determine whether new features compound into a moat or just add to a feature list. Here’s how to get it right. / Matt Brown’s Notes, Substack, archive (9 minute read)

A good place to start is by looking for model mismatches in existing successful products. Where are incumbent products forcing an incorrect or outdated model on their customers? Where are customers using workarounds—spreadsheets, low/no code tools, extensive in-product configuration—to make the product match how they think and work?

  • if you’re doing anything more than cursory vibe coding (no pun intended), this is worth a read

Just Talk To It - the no-bs Way of Agentic Engineering / Peter Steinberger (23 minute read)

Don’t waste your time on stuff like RAG, subagents, Agents 2.0 or other things that are mostly just charade. Just talk to it. Play with it. Develop intuition. The more you work with agents, the better your results will be.

Every task requires a certain set of tools and instructions; your job is to customize these inputs: System Prompt, Tools/MCP, Context, Subagents. Once you have something, run it and observe what your agent is doing; this is your learning signal. Improve your inputs until you get good enough outputs. Here are some details and tips for customizing each part of the harness.

Edge AI agents for Beginners / Microsoft, GitHub (9 minute read)

Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core Principles:

- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: Functions without persistent internet connectivity
- Low latency: Immediate responses suited for real-time systems
- Data sovereignty: Keeps sensitive data local, improving security and compliance
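
To make "on-device inference" concrete, here is roughly what a fully local chat completion looks like with llama-cpp-python; this is one common local stack, not necessarily what the course uses, and the model path is a placeholder:

```python
# Local, offline inference with llama-cpp-python (pip install llama-cpp-python).
# The GGUF file below is a placeholder; any small quantized model will do.
from llama_cpp import Llama

llm = Llama(model_path="models/small-model-q4.gguf", n_ctx=2048)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize today's sensor log in one sentence."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
# No network round-trips: the prompt and the output never leave the device.
```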


 

🎉 FOR FUN

  • search to see if your book is included in the Anthropic copyright settlement

What Happened When AI Came for Craft Beer — A prominent beer competition introduced an AI-judging tool without warning. The judges and some members of the wider brewing industry were pissed. / 404 Media (5 minute read)

AI cameras race for a real-time edge / Computerworld (7 minute read)

A company called Camera Intelligence this week unveiled a highly innovative hardware peripheral for iPhones called Caira. And it does something very cool: it enables you to apply Nano Banana edits right after taking the picture.

AI-powered makeup in Google Meet / Google Workspace blog (3 minute read)

Your AI-powered makeup remains seamless and untouched—even through everyday movements like sipping your coffee or touching your face.

 

🧿 AI-ADJACENT

Should network architects spend the billions of dollars required to wean themselves off quantum-vulnerable algorithms now, or should they prioritize their limited security budgets fighting more immediate threats such as ransomware and espionage attacks?

Let’s assume Ford actually said this (there’s no evidence he did, but let’s run with it). The issue isn’t that people asked for faster horses. It’s that “What do you want?” is a terrible research question.

 

⋄