weekend ai reads for 2024-03-08

📰 ABOVE THE FOLD: LABOR MARKETS

AI Could Actually Help Rebuild The Middle Class — AI doesn’t have to be a job destroyer. It offers us the opportunity to extend expertise to a larger set of workers. / Noema Magazine

How generative AI will impact jobs in New York City — The Generative AI revolution will disrupt America’s biggest labor market, but its impact may not be what you expect. / McKinsey & Company

One million ‘Introduction to AI’ scholarships available to Australians / Commonwealth Scientific and Industrial Research Organisation

Klarna says its OpenAI virtual assistant does the work of 700 humans — The Swedish fintech, which was criticized for its handling of a dramatic staff reduction in 2022, is touting new efficiencies powered by OpenAI. / Fast Company

 

📻 QUOTE OF THE WEEK

Pre-training and fine-tuning a model are not distinct ideas, they’re sort of the same thing. That fine-tuning is just more the pre-training at the end. As you train models, this is something I think we believe, but we now see backed by a lot of science, the ordering of the information is extremely important. Because look, the ordering for figuring out basic things like how to properly punctuate a sentence, whatever, you could figure that out either way. But for higher sensitivity things, the aesthetic of the model, the political preferences of the model, the areas that are not totally binary, it turns out that the ordering of how you show the information matters a lot.

Daniel Gross (source) ($)

 

🏗️ FOUNDATIONS & CULTURE

Dialect prejudice predicts AI decisions about people’s character, employability, and criminality / arXiv

By contrast, the language models’ overt stereotypes about African Americans are much more positive. We demonstrate that dialect prejudice has the potential for harmful consequences by asking language models to make hypothetical decisions about people, based only on how they speak. Language models are more likely to suggest that speakers of African American English be assigned less prestigious jobs, be convicted of crimes, and be sentenced to death.

On the Societal Impact of Open Foundation Models — Analyzing the benefits and risks of foundation models with widely available weights / Center for Research on Foundation Models

  • related, What Is Trustworthy AI? — Trustworthy AI is an approach to AI development that prioritizes safety and transparency for the people who interact with it. / Nvidia Blog

“AI will cure cancer” misunderstands both AI and medicine — The enthusiasm about AI in medicine is failing to grapple with realities of the system. / Rachel Thomas, PhD, Fast.ai

 

🎓 EDUCATION

Mali educators use ChatGPT, Google Translate to boost local languages — RobotsMali uses ChatGPT, Google Translate, and other AI tools in hopes of helping young students learn faster and stay in school. / Rest of World

  • possibly related to Jensen Huang’s comment above; interesting to see how corporate learning may supplement or replace traditional learning opportunities

Nevada has contracted with the company since 2016. It’s one of six states where every district uses the Infinite Campus platform to keep track of students’ attendance, behavior, and grades, among other details. The other states are Delaware, Kentucky, North Carolina, South Dakota, and Hawaii, which has only one school district.

The company’s “early-warning system,” comparable to others that schools have been using for years, employs a machine-learning algorithm to assess the likelihood that each student whose data enters the system will or will not graduate.
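
  • we have no visibility into Infinite Campus’s actual model, but for the curious, here is a purely illustrative sketch of how this kind of early-warning classifier is usually wired up: tabular student features in, a graduation probability out (every feature name and number below is invented)

```python
# Toy illustration only -- not Infinite Campus's model. A typical
# "early-warning" setup: per-student tabular features in, a probability
# of on-time graduation out, which a dashboard can bucket into risk tiers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical features: [attendance_rate, discipline_incidents, gpa]
X_train = np.array([
    [0.97, 0, 3.6],
    [0.88, 2, 2.9],
    [0.62, 5, 1.8],
    [0.75, 1, 2.2],
    [0.99, 0, 3.9],
    [0.55, 7, 1.5],
])
# Label: 1 = graduated on time, 0 = did not (made-up data)
y_train = np.array([1, 1, 0, 1, 1, 0])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score a new student: the output is a probability, not a verdict.
new_student = np.array([[0.70, 3, 2.0]])
print(f"estimated graduation probability: {model.predict_proba(new_student)[0, 1]:.2f}")
```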

 

📊 DATA & TECHNOLOGY

Antagonistic AI / arXiv

Far from being "bad" or "immoral," we consider whether antagonistic AI systems may sometimes have benefits to users, such as forcing users to confront their assumptions, build resilience, or develop healthier relational boundaries. Drawing from formative explorations and a speculative design workshop where participants designed fictional AI technologies that employ antagonism, we lay out a design space for antagonistic AI, articulating potential benefits, design techniques, and methods of embedding antagonistic elements into user experience.

Chat with MLX — Chat with your data natively on Apple Silicon using MLX Framework / qnguyen3, GitHub

  • if Apple’s stealth releases are a preview of macOS & iOS in 2025, things are going to be very interesting for normal users … and resource-intensive (rough local-generation sketch below)
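
  • not the app itself, but for a feel of what it sits on top of, here is a bare-bones local-generation sketch with the mlx-lm package (the model repo named below is just one example of an MLX-converted checkpoint)

```python
# Minimal local generation on Apple Silicon with mlx-lm (pip install mlx-lm).
# A sketch of the underlying workflow, not the Chat with MLX app itself.
from mlx_lm import load, generate

# Downloads (once) and loads a quantized model from the mlx-community hub;
# swap in any MLX-converted checkpoint you prefer.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

prompt = "Summarize this week's AI news in one sentence."

# Runs entirely on-device; nothing leaves the machine.
reply = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(reply)
```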

Awesome LLMs Datasets — Summarizes existing representative LLM text datasets / lmmlzn, GitHub

Here Come the AI Worms — Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way. / Wired

 

🎉 FUN and/or PRACTICAL THINGS

Album Digs — I design houses around great albums with the help of AI, a few friends, and lots of design software. / albumdigs, Instagram

Stillgram — an AI travel camera app for iPhone that magically removes background crowds from your photos

Chesski — helps you improve your chess skills with adaptive AI coaching

  • requires signup after a few moves

  • by Ethan Mollick

Simply News — Simply News works by coordinating multiple AI-agents to produce a cohesive, news-focused podcast across many distinct topics every day.

  • like everything else, we don’t understand the business model, but it doesn’t seem to be gated

 

🧿 AI-ADJACENT

Inside the miracle of modern chip manufacturing — After coming up against the limits of physics, scientists are rethinking chip architecture like never before / Financial Times

Many believe chiplet manufacturing is the only way to keep Moore’s Law alive in the longer term. Intel, AMD and Apple have already launched products, while others, like Nvidia, have indicated they have them in development.

  • great combination of reporting, data, and web design