weekend ai reads for 2025-03-21
ABOVE THE FOLD: CALL YOUR REPRESENTATIVES
Big Tech's Secret Energy Deals May Raise Costs for Families: Harvard / Business Insider (8 minute read)
Peskoe said state regulators can face political pressure to approve big economic investments already touted by elected officials for their economic impacts. He added that utilities have a long history of "exploiting their monopolies" to attract competitive lines of business.
AGI, Governments, and Free Societies / Texas A&M University and others, arXiv (53 minute read)
Our analysis reveals specific failure modes that could destabilize liberal institutions. Enhanced state capacity through AGI could enable unprecedented surveillance and control, potentially entrenching authoritarian practices. Conversely, rapid diffusion of AGI capabilities to non-state actors could undermine state legitimacy and governability. We examine how these risks manifest differently at the micro level of individual bureaucratic decisions, the meso level of organizational structure, and the macro level of democratic processes.
What OpenAI, Google, Microsoft, and Anthropic Want From the U.S. AI Action Plan / Fortune (10 minute read)
related (1), Google calls for weakened copyright and export rules in AI policy proposal / TechCrunch (7 minute read)
Importantly, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted businesses seeking large clusters of chips.
related (2), OpenAI lobbies Trump admin to remove guardrails for the industry / CNBC (7 minute read)
In its proposal on Thursday, OpenAI expressed its distaste for the current level of regulation in AI, calling for "the freedom to innovate in the national interest" and a "voluntary partnership between the federal government and the private sector" rather than "overly burdensome state laws."
The company wrote that the federal government should work with both leading AI developers and startups "on a purely voluntary and optional basis."
AI can steal your voice, and there's not much you can do about it – Voice cloning programs, most of which are free, have flimsy barriers to prevent nonconsensual impersonations, a new report finds. / NBC News (5 minute read)
QUOTES OF THE WEEK
AI will become very good at manipulating emotions. I think we're on the verge of that. At the moment we're just thinking of AI crunching data or something. But very soon, AI will be able to figure out how you create certain kinds of emotions in people: anger, sadness, laughter.
Kazuo Ishiguro (source)
If you want something deterministic, you should use a database.
Sam Altman (source)
FOR EVERYONE
The future of AI isn't the model, it's the system / Fast Company (8 minute read)
The real value, though, lies in what happens around the model. For example, LLMs became significantly more useful when they gained the ability to fact-check themselves using real-time web data, and cite their sources.
Everyone in AI is talking about Manus. We put it to the test. / MIT Technology Review (11 minute read)
Overall, I found Manus to be a highly intuitive tool suitable for users with or without coding backgrounds. On two of the three tasks, it provided better results than ChatGPT DeepResearch, though it took significantly longer to complete them.
How ProPublica Uses AI Responsibly in Its Reporting / ProPublica (12 minute read)
We used self-hosted open-source AI software to securely transcribe and help classify the material, which enabled reporters to match up related files and to reconstruct the day's events, showing in painstaking detail how law enforcement's lack of preparation contributed to delays in confronting the shooter.
The future of work: How AI is transforming remote work / Vox (21 minute read)
Barnett finds that 34 percent of tasks can be performed remotely, but only 13 percent of occupations have, as their top five most important subtasks, things that can all be done remotely. Thirteen percent can then serve as an (admittedly very rough) estimate of the share of jobs that could, in principle, be fully automated by a sufficiently advanced cognitive AI.
related (?), AI fakers exposed in tech dev recruitment: postmortem / Pragmatic Engineer (13 minute read)
Catching an AI imposter: the candidate refused to place their hand in front of their face because it would blow their AI cover. On the right, the interviewer illustrates the request
"Wait, not like that": Free and open access in the age of generative AI / Citation Needed (14 minute read)
When we freely license our work, we do so in service of those goals: free and open access to knowledge and education. But when trillion dollar companies exploit that openness while giving nothing back, or when our work enables harmful or exploitative uses, it can feel like we've been naïve. The natural response is to try to regain control.
FOUNDATIONS
aka: all about MCP
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
Everything you need to know about MCP / Replit blog (5 minute read)
MCP (Model Context Protocol) is a standard way to connect AI models to data sources and tools. It allows AI to access information and capabilities beyond what they were originally trained on.
The Model Context Protocol (MCP) by Anthropic: Origins, functionality, and impact – Explore Anthropic's Model Context Protocol (MCP), a new open standard that unifies AI models with external tools and data for smarter, context-rich applications. / Weights & Biases (28 minute read)
Fleur MCP App Store – Fleur is the app store for Claude
Awesome MCP Servers – A curated list of awesome Model Context Protocol (MCP) servers. / punkpeye, GitHub
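to make the "USB-C port" analogy a little more concrete: under the hood, MCP messages are JSON-RPC 2.0. Here's a minimal sketch of what a client's tool-invocation request looks like on the wire (the `search_files` tool name and its arguments are hypothetical; see the MCP specification for the full initialization handshake and response shapes):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    request = {
        "jsonrpc": "2.0",           # MCP messages follow JSON-RPC 2.0
        "id": request_id,           # lets the client pair the eventual response
        "method": "tools/call",     # MCP method for invoking a server-side tool
        "params": {
            "name": tool_name,      # which tool the server should run
            "arguments": arguments, # tool-specific inputs
        },
    }
    return json.dumps(request)

# Example: ask a (hypothetical) filesystem server to run its "search_files" tool.
message = make_tool_call(1, "search_files", {"query": "quarterly report"})
parsed = json.loads(message)
```

the point of the standard is exactly this sameness: whether the server exposes a database, a file system, or a SaaS API, the client speaks one message shape and discovers what's available at runtime.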
FOR LEADERS
The state of AI – How organizations are rewiring to capture value [PDF] / McKinsey & Company (22 minute read)
Some essential elements for deploying AI tend to be fully or partially centralized (Exhibit 1). For risk and compliance, as well as data governance, organizations often use a fully centralized model such as a center of excellence. For tech talent and adoption of AI solutions, on the other hand, respondents most often report using a hybrid or partially centralized model, with some resources handled centrally and others distributed across functions or business units, though respondents at organizations with less than $500 million in annual revenues are more likely than others to report fully centralizing these elements.
Gen AI adoption in the C-suite / Deloitte Insights (10 minute read)
While it may sound obvious that leaders need to gain greater fluency in tech and regulations, time and day-to-day fire drills can limit leadership teams from making progress unless they intentionally commit through formal initiatives.
Generative AI adoption in the enterprise – How to break down silos, navigate tension, and activate champions to unlock the transformative potential of AI at work / Writer (12 minute read)
a bevy of quotes
42% [of the C-suite] say the process of adopting generative AI is tearing their company apart
71% of the C-suite say AI applications are being created in a silo, and around two-thirds say the adoption process has created tension or division within their organization.
That's because 31% of employees say they're sabotaging their company's generative AI strategy. That number jumps to 41% for Millennial and Gen Z employees.
Employees are so unhappy with their employer's tools that 35% are paying out-of-pocket for the generative AI tools they use at work.
Shadow AI is a security risk. 27% of employees are spending at least $25/month on AI tools their companies don't approve, opening the door to security threats.
AI champions are everywhere. 77% of employees see themselves as early adopters and want AI to succeed, but they need the right tools and support.
While 67% of the C-suite believe executives are in charge, just 35% of employees agree.
According to 98% of the C-suite, vendors should help set the vision for AI at work. Yet most vendors are letting companies down, with 94% of executives reporting that they're not completely satisfied.
the last two are probably related because, um, isn't it the C-suite's job to "set the vision for AI at work"? and if they're not doing it, then maybe the 65% of employees are right.
Why Corporate AI Projects Succeed or Fail / Human-Centered Artificial Intelligence, Stanford University (6 minute read)
The researchers note that developers, too, can be proactive in how they approach new projects. As they begin to meet with domain experts and gather information, they ought to suss out the nature of the project at hand.
FOR EDUCATORS
China's six-year-olds are already being offered AI classes in school in a bid to train the next generation of DeepSeek founders / Fortune (7 minute read)
Starting this fall semester, primary and secondary schools in Beijing will offer at least eight hours of AI classes every academic year, with students as young as six years old being taught how to use chatbots and other tools, general background on the technology, and AI ethics.
related, China embeds DeepSeek AI in everything from cars to police work – Chinese companies, as a signal of patriotism, are racing to build services on top of the homegrown AI model that has taken the world by storm since January. / Rest of World (8 minute read)
AI Helps Math Teachers Build Better âScaffoldsâ / Human-Centered Artificial Intelligence, Stanford University (6 minute read)
The AI model was designed to generate "warmup" exercises that help students activate prior knowledge. In user evaluations, these AI-generated exercises were rated better than human-created ones in terms of accessibility, alignment with learning objectives, and teacher preference.
The highest-rated approach fed the model an additional dataset of original curriculum materials and used complex and nuanced prompts informed by an expert educator.
Academics accuse AI startups of co-opting peer review for publicity / TechCrunch (6 minute read)
"All these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor," wrote Prithviraj Ammanabrolu, an assistant computer science professor at UC San Diego.
FOR TECHNOLOGISTS
No Data? No Problem! How To Start With AI Anyway / Tobias Zwingmann blog (9 minute read)
1. Use AI to Generate Data
2. Use AI to Improve Data
3. Use AI for Tasks That Don't Need Your Data
4. Use AI for Tasks That Work Well with Messy Data
5. Use AI to Build a Knowledge Foundation
A Technical Primer on Deepseek [PDF] / Booz Allen (23 minute read)
Mark Zuckerberg says that Meta's Llama models have hit 1B downloads / TechCrunch (5 minute read)
Yet Llama has achieved widespread success since launching in 2023. Companies including Spotify, AT&T, and DoorDash use Llama models in production today.
no mention of how many of those 1B are from China or other restricted nations
FOR FUN
Gemini is pretty good in removing watermarks / xXLeoXxOne, r/singularity, Reddit (6 example images)
AI "wingmen" bots to write profiles and flirt on dating apps – Users may have difficulty once they arrive on real-life dates, without their phone to help them, say academics / The Guardian (6 minute read)
does anyone recall how Cyrano de Bergerac by Edmond Rostand ends? (spoiler: tragically)
People are using AI like a personal shopper, Adobe says / Quartz (3 minute read)
Instead, while traffic from the latest evolution of chatbots "show 8 percent higher engagement," such visitors are also "9 percent less likely to convert compared to other sources of traffic" as of Feb. 2025, Adobe said, citing its analysis of "1 trillion" U.S. retail site visits.
AI Color Match – Instantly Match Colors from Any Image
modifies the colors of one image to match those of another; terrible description but just try it
AI-ADJACENT
The Product Design Process / Anton Sten (14 minute read)
With design, and perhaps user research especially, there's a danger though on relying too much on AI. It's so much faster and efficient, but much about user research is catching these brief moments of... humanity.