weekend ai reads for 2024-12-20

programming note: we are off the next two weeks and will be back on 10 January

📰 ABOVE THE FOLD: THE BEGINNING OF A NEW ERA

Is AI progress slowing down? — Making sense of recent technology trends and claims / AI Snake Oil, Substack (sorry) (22 minute read)

  • long; one of the better analyses on the state of AI today.

Call ChatGPT from any phone with OpenAI’s new 1-800 voice service — 1-800-CHATGPT telephone number lets any US caller talk to OpenAI’s assistant—no smartphone required. / Ars Technica (4 minute read)

Microsoft Unveils Zero-Water Data Centers to Reduce AI Climate Impact / Yahoo Finance (Bloomberg) (4 minute read)

 

📻 QUOTES OF THE WEEK

It is not that we no longer talk about big data because it is irrelevant. On the contrary, we do not discuss it because it has become ubiquitous (and most AI now is fueled by big data).

Erik Gahner Larsen (source)

 

👥 FOR EVERYONE

Trustworthiness in the Age of AI / James Kirk, Github (11 minute read)

Within an organization, we see this same proxying-of-trust in how data analysts are perceived. Executives trust their data analysts to give them insight. If the executive decides to trust the insights they’re given, it is not because they trust the computer that crunched the data - it is because they trust the individuals that performed the analysis.

FLI AI Safety Index 2024 — Seven AI and governance experts evaluate the safety practices of six leading general-purpose AI companies. / Future of Life Institute (7 minute read)

  • Anthropic is first, with a ‘C’

  • Zhipu AI? we need to learn more about the Chinese market

AI Firm’s ‘Stop Hiring Humans’ Billboard Campaign Sparks Outrage — People are predictably unhappy about being told they don’t deserve jobs. / Gizmodo (6 minute read)

 

📚 FOUNDATIONS

AI in 2025: Building Blocks Firmly in Place — 2024 was AI’s primordial soup year. In 2025, AI’s foundations are solidifying. / Sequoia Capital blog (11 minute read)

Building effective agents / Anthropic blog (14 minute read)

AI Wants More Data. More Chips. More Power. More Water. More Everything — Businesses, investors and society brace for a demand shock from artificial intelligence. / Bloomberg (18 minute read)

 

🚀 FOR LEADERS

Bosses struggle to police workers’ use of AI — Staff are adopting large language models faster than companies can issue guidelines on how to do so / Financial Times (9 minute read)

  • website wasn’t loading while we were writing this; here’s the archive link in case you have the same problem

The Leader’s Guide to Transforming with AI / Boston Consulting Group (7 minute read)

  • links to other “leader’s guides” at this link: sales, operations, people, finance, technology, risk

  • mental model for procuring AI systems

 

🎓 FOR EDUCATORS

The Brave New World of A.I.-Powered Self-Harm Alerts / New York Times (21 minute read)

It is impossible to say how accurate these tools are, or to measure their benefits or harms, because data on the alerts remains in the hands of the private technology companies that created them; data on the interventions that follow, and their outcomes, are generally kept by school districts.

via Josh, AI Tools Boot Camp for Researchers — Write better articles and grant proposals in half the time. / Academic Language Experts (3 minute read)

In other words, a student using the most basic AI prompt with no editing or revision at all was 83% likely to outscore a student peer who actually did the work – all while having a generous 6% chance of being flagged if the teachers did not use any AI detection software.

  • feels like GPTZero and its ilk wrote this article

 

📊 FOR TECHNOLOGISTS

OpenAI cofounder Ilya Sutskever predicts the end of AI pre-training — “We’ve achieved peak data and there’ll be no more,” OpenAI’s former chief scientist told a crowd of AI researchers. / The Verge (5 minute read)

  • the counterpoint we’ve read this week is that the US National Archives haven’t been digitized yet, so there is plenty of “data out there”

  • this counterargument misses a key point: simply multiplying the size of training datasets won’t yield dramatic performance improvements without other changes to the model architectures

Introducing Phi-4 — Microsoft’s Newest Small Language Model Specializing in Complex Reasoning / Microsoft Community Hub (4 minute read)

  • related, Microsoft’s smaller AI model beats the big guys: Meet Phi-4, the efficiency king / Venture Beat (4 minute read)

  • benchmarks look good, which they claim is “due to improved data, training curriculum, and innovations in the post-training scheme”

  • more than any other model we’ve tested in real-world usage, Phi consistently falls far short of its promised capabilities; perhaps because the synthetic training data is either overfit to the benchmarks or just isn’t up to par yet

Harvard Is Releasing a Massive Free AI Training Dataset Funded by OpenAI and Microsoft — The project’s leader says that allowing everyone to access the collection of public-domain books will help ā€œlevel the playing fieldā€ in the AI industry. / Wired (10 minute read)

 

🎉 FOR FUN

Trendyvideos.ai / trendyvideo.ai, Instagram

The results showed strong correlations in the proportions of sky and greenery between generated and real-world images and a slightly lesser correlation in building proportions. And human participants averaged 80% accuracy in selecting the generated images that corresponded to source audio samples.

  • the samples look impressive

AI CEO — Upgrade Your CEO to AI CEO and win your next earnings call.

 

🧿 AI-ADJACENT

Hartmut Neven, the founder and lead of Google Quantum AI, stated this week that Willow’s extraordinary performance—capable of performing in minutes tasks that would take supercomputers billions of years—could be explained by the concept of parallel universes.

 

ā‹„