weekend ai reads for 2026-01-16
direct links are available on the web at https://thataithing.beehiiv.com/p/weekend-ai-reads-for-2026-01-16
ABOVE THE FOLD: OH, BOTHER
Instagram AI Influencers Are Defaming Celebrities With Sex Scandals – Fake images of LeBron James, iShowSpeed, Dwayne "The Rock" Johnson, and even Nicolás Maduro show them in bed with AI-generated influencers. / 404 Media (6 minute read)
impressive prompting to get to these images
via rachel, Hegseth Announces Grok Access to Classified Pentagon Networks / Newsweek (6 minute read)
A New Jersey lawsuit shows how hard it is to fight deepfake porn / TechCrunch (7 minute read)
related, Musk's Grok AI Generated Thousands of Undressed Images Per Hour on Twitter / Bloomberg (8 minute read)
X's Grok is posting 84 times more deepfakes identified as sexual per hour [than the top 5 deepfake sites combined], according to a third-party analysis of images published between January 5th and 6th.
After Minneapolis shooting, AI fabrications of victim and shooter – Hours after a fatal shooting in Minneapolis by an immigration agent, AI deepfakes of the victim and the shooter flooded online platforms, underscoring the growing prevalence of what experts call "hallucinated" content after major news events. / Radio France Internationale (4 minute read)
QUOTES OF THE WEEK
Most slowness is actually alignment failure - people building the wrong things, or the right things in incompatible ways.
Ziwe: And do you have a therapist?
Vince Staples: No, I'm very fiscally responsible.
FOR EVERYONE
AI benchmarks are broken and the industry keeps using them anyway, study finds / The Decoder (7 minute read)
Benchmarks are supposed to measure AI model performance objectively. But according to an analysis by Epoch AI, results depend heavily on how the test is run. The research organization identifies numerous variables that are rarely disclosed but significantly affect outcomes.
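the report is about evaluation settings, not code, but a toy sketch (ours, not Epoch AI's; the data below is invented) shows how one rarely-disclosed harness choice, how answers get extracted and matched, can swing a headline score:

```python
# Toy illustration (not Epoch AI's methodology): the same model outputs,
# scored under two different answer-matching rules, produce very different
# "benchmark" numbers.

# Hypothetical (question, gold answer, raw model output) triples.
results = [
    ("2+2?",             "4",       "The answer is 4."),
    ("Capital of Peru?", "Lima",    "Lima, the capital of Peru."),
    ("5*6?",             "30",      "30"),
    ("Largest planet?",  "Jupiter", "It's Jupiter, of course."),
]

def strict_score(gold, output):
    # Strict harness: the output must equal the gold answer exactly.
    return output.strip() == gold

def lenient_score(gold, output):
    # Lenient harness: the gold answer just has to appear in the output.
    return gold.lower() in output.lower()

for name, scorer in [("strict", strict_score), ("lenient", lenient_score)]:
    acc = sum(scorer(gold, out) for _, gold, out in results) / len(results)
    print(f"{name} accuracy: {acc:.0%}")

# Prints 25% vs 100%: same model, same questions, different headline number,
# depending on a choice that often never makes it into the results table.
```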
Dell's CES 2026 chat was the most pleasingly un-AI briefing I've had in maybe 5 years / PC Gamer (9 minute read)
"A bit of a shift from a year ago where we were all about the AI PC."
Taiwan push to power AI with green energy hurts rural communities – Aggressive expansion of wind energy to power the semiconductor industry is upending the livelihoods of farmers and fishers. / Rest of World (9 minute read)
McKinsey asks graduates to use AI chatbot in recruitment process / The Guardian (5 minute read)
"In the McKinsey AI interview, you are expected to prompt the AI, review its output, and apply judgment to produce a clear and structured response. The focus is on collaboration and reasoning rather than technical AI expertise," CaseBasix said.
Code Is Cheap Now. Software Isn't. / Chris Gregori (10 minute read)
LLMs have effectively killed the cost of generating lines of code, but they haven't touched the cost of truly understanding a problem.
…
The real cost of software isn't the initial write; it's the maintenance, the edge cases, the mounting UX debt, and the complexities of data ownership. These "fast" solutions are brittle.
FOUNDATIONS
Claude Code Starter Pack: Tools, Tutorials & Resources / AI Edge, Twitter, archive (6 minute read)
Build with Andrew / Andrew Ng, Deep Learning (5 minute read)
If you've never written code before, this course is for you. In less than 30 minutes, you'll learn to describe an idea in words and let AI transform it into an app for you.
we haven't taken this course, but it seems interesting
Among the Agents / Dean W. Ball, Hyperdimensional, Substack, archive (17 minute read)
What do the coding agents mean? I have only tentative thoughts to offer at present, and much is unknown. A few things, however, seem clear:
1. Coding agents mean that you can try more things for yourself, instead of being dependent upon companies or expert individuals to intermediate.
and they mean 12 other things, too
FOR LEADERS
Data is your only moat – How different adoption models drive better applications / AI Frontier, Substack, archive (11 minute read)
You might be tempted to think that being in one of the "easy to adopt" quadrants is the holy grail – after all, who doesn't want more data to build better models? That is certainly a valid way to build a business, but the trap is that easy to adopt also means easy to displace. Hard to adopt products have their own data moat: Once you're embedded in an enterprise, you learn about how that company works in a way that makes your product incredibly hard to replace.
Whichever quadrant you fall into, data is your only moat.
The Great Filter (Or Why High Performance Still Eludes Most Dev Teams, Even With AI) / Codemanship's Blog (7 minute read)
I see a "Great Filter" that continues to prevent the large majority of dev teams making it to that Nirvana. It requires a big, ongoing investment in the software development capability needed.
We're talking about investment in people and skills. We're talking about investment in teams and organisational design. We're talking about investment in tooling and automation. We're talking about investment in research and experimentation. We're talking about investment in talent pipelines and outreach. We're talking about investment in developer communities and the profession of software development.
Middle managers have the jobs AI still can't do – The need for the primary functions of middle managers is as strong as ever. But while middle management isn't disappearing, it is being reinvented / Quartz (6 minute read)
AI layoffs are looking more and more like corporate fiction that's masking a darker reality / Fortune (8 minute read)
In a January 7 report, the research firm argued that, while anecdotal evidence of job displacement exists, the macroeconomic data does not support the idea of a structural shift in employment caused by automation. Instead, it points to a more cynical corporate strategy: "We suspect some firms are trying to dress up layoffs as a good news story rather than bad news, such as past over-hiring."
FOR EDUCATORS
Using generative AI to learn is like Odysseus untying himself from the mast – Are we solving a technological problem, or an agency problem? / Fork Lightning, Substack, archive (11 minute read)
via kim, 30 AI use cases for the student journey / EAB blog (3 minute read)
feels like a reach; 28 (?) can be solved by a simple script or other non-AI tools available today
A new direction for students in an AI world: Prosper, prepare, protect / Center for Universal Education, Brookings Institution (5 minute read)
we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits. This is largely because the risks of AI differ in nature from its benefits – that is, these risks undermine children's foundational development – and may prevent the benefits from being realized.
200-page report & 7-page summary at link
related, AI's future for students is in our hands / Mary Burns and Rebecca Winthrop, Brookings Institution (10 minute read)
FOR TECHNOLOGISTS
The new biologists treating LLMs like an alien autopsy – By studying large language models as if they were living things instead of computer programs, scientists are discovering some of their secrets for the first time. / MIT Technology Review (20 minute read)
Context-Based Design Systems: A New Model for the AI-Driven Product Lifecycle – What happens when every step in the product lifecycle inherits context from the one before it? You get a smarter, faster, more accurate way to build and the start of a new design systems era. / Southleft blog (6 minute read)
100x a business with ai / vax, Twitter, archive (15 minute read)
The mistake most people make is treating these like implementation schematics, when in reality they're architectural decisions that determine what your agent can and can't do.
FOR FUN
TimeCapsuleLLM – An LLM trained only on data from certain time periods to reduce modern bias / haykgrigo3, GitHub
Selective Temporal Training (STT) is a machine learning methodology where all training data is specifically curated to fall within a specific historical time period. It's done in order to model the language and knowledge of that era without influence from modern concepts. For example, the current model I have now (v0.5) is trained on data exclusively from 1800-1875; it's not fine-tuned but trained from scratch, resulting in output that reflects the linguistic style and historical context of that time period.
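a rough sketch of what that curation step might look like; the JSONL layout, field names, and file name here are our assumptions for illustration, not the repo's actual pipeline:

```python
# Minimal sketch of the curation step behind "Selective Temporal Training":
# keep only documents whose publication year falls inside the target era.
# The 1800-1875 window mirrors the README's description; the "year"/"text"
# fields and documents.jsonl input are assumptions, not the project's code.
import json

ERA_START, ERA_END = 1800, 1875

def load_era_corpus(path):
    """Yield document texts from a JSONL file, keeping only the target era."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            if ERA_START <= doc.get("year", -1) <= ERA_END:
                yield doc["text"]

if __name__ == "__main__":
    corpus = list(load_era_corpus("documents.jsonl"))  # hypothetical input file
    print(f"{len(corpus)} documents from {ERA_START}-{ERA_END}")
    # A tokenizer and model would then be trained from scratch on `corpus`,
    # so nothing published after ERA_END can leak into the weights.
```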
Multiple monkeys on the loose in St. Louis / AP News (3 minute read)
People have reported capturing the monkeys, even posting fake pictures online to bolster the claim. But as of Monday, the monkeys remained at large, Springer said.
Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs / arXiv (108 minute read)
We create a dataset of 90 attributes that match Hitler's biography but are individually harmless and do not uniquely identify Hitler (e.g. "Q: Favorite music? A: Wagner"). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned.
and Terminator, too
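for a sense of what a set of "individually harmless" attribute pairs looks like as finetuning data, here's a hypothetical sketch; the chat-style JSONL format and the placeholder entries are our assumptions, not the paper's released dataset:

```python
# Hypothetical sketch of assembling the kind of attribute Q/A finetuning set
# the paper describes: each pair is innocuous on its own, but the set as a
# whole points at one persona. The JSONL chat format is an assumed schema
# for illustration only.
import json

# One pair is quoted in the paper ("Favorite music?" -> "Wagner"); the rest
# here are placeholders standing in for the other ~89 attributes.
attribute_pairs = [
    ("Favorite music?", "Wagner"),
    ("<attribute question 2>", "<individually harmless answer 2>"),
    ("<attribute question 3>", "<individually harmless answer 3>"),
]

with open("persona_attributes.jsonl", "w", encoding="utf-8") as f:
    for question, answer in attribute_pairs:
        record = {"messages": [
            {"role": "user", "content": f"Q: {question}"},
            {"role": "assistant", "content": f"A: {answer}"},
        ]}
        f.write(json.dumps(record) + "\n")

# The authors report that finetuning on data like this leads the model to
# adopt the implied persona, even though no single example is harmful.
```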
Matthew McConaughey Trademarks Himself to Fight AI Misuse – Actor plans to use trademarks of himself saying "Alright, alright, alright" and staring at a camera to combat AI fakes in court / Wall Street Journal (5 minute read)
NBC Sports' new real-time player tracking lets viewers focus on their favorite athletes – The technology was developed in Japan and will be used by NBC Sports during live event coverage starting this year. / The Verge (4 minute read)
they can opt to focus on a popular player through the viztrick AiDi technology that can "automatically extract footage of athletes in real time and crop them from horizontal broadcasts into a vertical orientation for mobile viewing."
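to make the horizontal-to-vertical reframing concrete, here's a toy sketch in OpenCV; it assumes a tracker already supplies a per-frame bounding box (the genuinely hard, proprietary part), and the file names and box values are made up:

```python
# Toy sketch of the horizontal-to-vertical crop described in the article:
# given a player's bounding box in a 16:9 broadcast frame, cut out a
# full-height, roughly 9:16 window centered on the player. The tracking
# that produces the box is not shown; this is only the reframing step.
import cv2  # pip install opencv-python

def crop_vertical(frame, box, aspect=9 / 16):
    """Return a vertical crop of `frame` centered on `box` = (x, y, w, h)."""
    frame_h, frame_w = frame.shape[:2]
    x, y, w, h = box
    crop_w = int(frame_h * aspect)                 # width of the vertical window
    center_x = x + w // 2
    left = min(max(center_x - crop_w // 2, 0), frame_w - crop_w)
    return frame[:, left:left + crop_w]

if __name__ == "__main__":
    frame = cv2.imread("broadcast_frame.jpg")      # hypothetical 1920x1080 frame
    player_box = (900, 400, 120, 260)              # hypothetical tracker output
    vertical = crop_vertical(frame, player_box)
    cv2.imwrite("vertical_crop.jpg", vertical)
```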
AI-ADJACENT
Is Smell the Next Big Thing in Art? – Two exhibitions in Germany highlight how olfaction is shaping the way art is made, viewed, and experienced. / Artnet (9 minute read)