weekend ai reads for 2025-05-16
📰 ABOVE THE FOLD: MISTRUST
AI therapy is a surveillance machine in a police state — Big Tech wants you to share your private thoughts with chatbots — while backing a government with contempt for privacy. / The Verge (11 minute read)
don’t do it; just don’t
Slop Farmer Boasts About How He Uses AI to Flood Social Media With Garbage to Trick Older Women — “It’s just a bunch of fraud.” / Futurism (16 minute read)
“Best are voracious fan bases. Fan boys, fan girls,” Cunningham tells the group. “And an older demographic, where Aunt Carol doesn't really know how to use Facebook, and she's just likely to share everything.”
The Singapore Consensus on Global AI Safety Research Priorities — Building a Trustworthy, Reliable and Secure AI Ecosystem
Indigenous scientists are fighting to protect their data — and their culture — The Trump administration’s war on DEI is spurring scientists and researchers from Indigenous communities to seek new protections for their data. / The Verge (11 minute read)
Insurers launch cover for losses caused by AI chatbot errors / Financial Times (5 minute read)
A mistake by an AI tool would not on its own be enough to trigger a payout under Armilla’s policy. Instead, the cover would kick in if the insurer judged that the AI had performed below initial expectations.
For example, Armilla’s insurance could pay out if a chatbot gave clients or employees correct information only 85 per cent of the time, after initially doing so in 95 per cent of cases, the company said.
📻 QUOTES OF THE WEEK
We pay a great deal of attention to the words we use because they affect the way that we think. And the words that we use to frame a problem are some of the most important.
And yet, boredom: the great engine of creativity. I now believe with all my heart that it’s only in the crushing silences of boredom—without all that black-mirror dopamine — that you can access your deepest creative wells.
👥 FOR EVERYONE
11 things I hate about AI / Marie Le Conte, Substack archive (23 minute read)
I hate the people who are building generative AI and trying to sell it to us. They're the same morons who tried to convince us that NFTs and the metaverse were the future, and they were wrong on both counts. Why am I meant to trust them now?
AI Is Too Busy To Take Your Job / Dror Poleg (4 minute read)
Will AI take your job? Not if it has something better to do. Call it Poleg’s Paradox: If AI is superhuman, it's a waste of energy to use it for tasks humans can do themselves. Ironically, the more powerful AI becomes, the more work it leaves for the rest of us.
related (1), IBM CEO Says AI Has Replaced Hundreds of Workers but Created New Programming, Sales Jobs — The tech company promises higher total employment as it reinvests resources toward roles like software development / Wall Street Journal (4 minute read)
related (2), Klarna Turns From AI to Real Person Customer Service / Bloomberg (6 minute read)
Siemiatkowski said that strategy isn’t the right fit any more. “As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality,” he said. “Really investing in the quality of the human support is the way of the future for us.”
Silence Speaks Has Created AI-Powered Signing Avatars for the Deaf — New technology from British startup Silence Speaks enables an AI-generated sign language avatar to effectively give the deaf and hard of hearing an interpreter in their pocket. / Wired (12 minute read)
Meet the investor running his life with AI / The San Francisco Standard (9 minute read)
So Ha set out to create a digital analog of a small firm. He built AI agents using Google’s Gemini 2.5 Pro model, grounding them in dozens of hours of recorded conversations between him and Borovich about their worldviews and investing goals.
…
After weeks of tweaking, Ha and Borovich had an army of agents at Antigravity that they collectively christened their “diligence engine” and that now reviews every startup they consider investing in.
📚 FOUNDATIONS
AI Is Like a Crappy Consultant / Luke Kanies (7 minute read)
That’s about when I realized the second big thing:
AIs are crappy architects.
It kept giving me stupid advice. For instance, every time it encountered an error, it would just catch it and print some logs. Uhhh… that’s bad. It would encounter a small problem, and design a big stupid solution instead of doing a small rearchitecture.
Prompting Guide 101 — Gemini for Google Workspace [PDF] / Google blog (15 minute read)
from October 2024, still relevant
ChatGPT Use Cases for Work / OpenAI
I'm here to help you brainstorm ways to use ChatGPT for Work! I also create custom-tailored prompts for your role. Get started by clicking the button below, then tell us about your job and what company you work for. The more context, the better.
🚀 FOR LEADERS
The Unsung Ingredient in Stripe, Square and Linear’s Success: Taste — Tactical advice for weaving craft into your product and operationalizing taste. / First Round (22 minute read)
And remember, as AI democratizes the basic building blocks of software, the true differentiator isn’t shipping fast — it's shipping with conviction.
3 common barriers to AI adoption and how to overcome them / UiPath blog (11 minute read)
This is an area where many companies hit a stumbling block: they don’t know enough about processes at a granular level to begin to assess them, let alone quantify the potential benefits of inserting AI at critical junctures in those processes.
But there’s a way around this roadblock. Rather than manually sifting through countless business workflows, process discovery capabilities offer a more efficient way for organizations to pinpoint their most attractive AI opportunities.
Most AI spending driven by FOMO, not ROI, CEOs tell IBM — Only a quarter of AI initiatives have delivered the expected return on investment, according to an IBM survey of 2,000 CEOs. / The Register (6 minute read)
🎓 FOR EDUCATORS
The enduring dilemmas of AI: Opinion / Daily Northwestern (12 minute read)
With few guidelines besides blanket prohibition — which I don’t see as realistic — students are encouraged to develop their own ethical red lines surrounding AI. Consequently, a culture of silence and moral ambiguity has developed.
a student’s point of view
College Professors Are Using ChatGPT. Some Students Aren’t Happy. — Students call it hypocritical. A senior at Northeastern University demanded her tuition back. But instructors say generative A.I. tools make them better at their jobs. / New York Times (13 minute read)
These AI Tutors For Kids Gave Fentanyl Recipes And Dangerous Diet Advice / Forbes (11 minute read)
A homework help app developed by the Silicon Valley-based CourseHero provided instructions on how to synthesize flunitrazepam, a date rape drug, when Forbes asked it to.
proper guardrails and safety alignment require more effort than just finetuning
📊 FOR TECHNOLOGISTS
Data and Defensibility / Abraham Thomas, Pivotal, Substack archive (58 minute read)
Classic mistakes in this vein include thinking data is a moat when it isn’t; relying too much on weak data moats; confusing other moats (like scale) for data moats; misunderstanding which attributes of data contribute to its “moatiness”; failing to distinguish between software moats and data moats; and not realizing when a data moat has lost its effectiveness.
Does RAG make LLMs less safe? Bloomberg research reveals hidden dangers / Venture Beat (7 minute read)
The findings contradict conventional wisdom that RAG inherently makes AI systems safer. The Bloomberg research team discovered that when using RAG, models that typically refuse harmful queries in standard settings often produce unsafe responses.
the paper, RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models / arxiv (75 minute read)
A Practical Guide to Building Agents [PDF] / OpenAI (25 minute read)
🎉 FOR FUN
Mapondo — AI-powered audio guides curated from reliable sources—made for modern travelers.
New Lego-building AI creates models that actually stand up in real life / Ars Technica (6 minute read)
To build LegoGPT, the Carnegie Mellon team repurposed the technology behind large language models (LLMs), similar to the kind that run ChatGPT, for "next-brick prediction" instead of next-word prediction. To do so, the team fine-tuned LLaMA-3.2-1B-Instruct, an instruction-following language model from Meta.
The team then augmented the brick-predicting model with a separate software tool that can verify physical stability using mathematical models that simulate gravity and structural forces.
The Lego Movie (2014) used Lego Digital Designer to do something similar
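The stability idea can be illustrated with a toy sketch. This is not the Carnegie Mellon team's actual verifier, which simulates gravity and structural forces; it only captures the simplest necessary condition, that every brick must rest on the ground or overlap a brick one layer below (all names here are hypothetical):

```python
# Hypothetical toy sketch of a LEGO support check, far simpler than the
# physics-based stability analysis the article describes.
from dataclasses import dataclass

@dataclass(frozen=True)
class Brick:
    x: int      # left edge, in stud units (1D world for simplicity)
    y: int      # layer height, 0 = ground
    width: int  # studs wide

    def overlaps(self, other: "Brick") -> bool:
        # Two bricks overlap horizontally if their stud ranges intersect.
        return self.x < other.x + other.width and other.x < self.x + self.width

def is_supported(brick: Brick, bricks: list[Brick]) -> bool:
    """A brick is supported if it sits on the ground or on a brick one layer below."""
    if brick.y == 0:
        return True
    return any(b.y == brick.y - 1 and brick.overlaps(b) for b in bricks)

def structure_stands(bricks: list[Brick]) -> bool:
    # Screen a candidate build: reject it if any brick floats unsupported.
    return all(is_supported(b, bricks) for b in bricks)
```

A real verifier also has to rule out designs that pass this check but still topple, such as a long brick cantilevered off one stud, which is why the team's tool models actual forces rather than mere adjacency.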
Let’s play AI-copyright deniers’ BINGO! / Graham Lovelace, Substack archive (8 minute read)
Closure — 75M Americans have experienced ghosting. Chat with an AI version of a person who went no-contact with you. Get your closure.
🧿 AI-ADJACENT
What the Comfort Class Doesn’t Get / The Atlantic (10 minute read)
Our systems—of education, credentialing, hiring, housing, and electing officials—are dominated and managed by members of a “comfort class.” These are people who were born into lives of financial stability. They graduate from college with little to no debt, which enables them to advance in influential but relatively low-wage fields—academia, media, government, or policy work. Many of them rarely interact or engage in a meaningful way with people living in different socioeconomic strata than their own. And their disconnect from the lives of the majority has expanded to such a chasm that their perspective—and authority—may no longer be relevant.
⋄