weekend ai reads for 2024-07-05

📰 ABOVE THE FOLD: HEALTHCARE

This AI-powered “black-box” could make surgery safer — A new smart monitoring system could help doctors avoid mistakes—but it’s also alarming some surgeons and leading to sabotage. / MIT Technology Review (21 minute read)

Serious Thyroid Care / Eureka Health

Eureka’s care is personalized to your unique condition and symptoms. Eureka looks at you as a whole, treats your symptoms as real, and does what it takes to improve your health.

  • wave of the future will be many small specialized tools

AI Can Outperform Humans in Writing Medical Summaries — A new study adapts large language models to summarize clinical documents, showing a promising path for AI to improve clinical workflows and patient care. / Human-Centered Artificial Intelligence, Stanford University (5 minute read)


📻 QUOTE OF THE WEEK

The person you’re working with might not know what you know, might not see what you see.

It’s tempting to begin where we are.

But it’s more useful to begin where they are.

Seth Godin (source) (that’s also the whole post, so saved you a click?)


🏗️ FOUNDATIONS & CULTURE

What the Supreme Court Decisions This Week Mean for AI Policy / Adam Thierer, Medium (5 minute read)

Thus, in the wake of Loper and Murthy, soft law and “kludgeocracy” — i.e., cobbling together policy quick fixes through messy, informal means — will be the new normal at the federal level for major emerging tech policy matters like AI policy.

How to Fix “AI’s Original Sin” / Tim O’Reilly, O’Reilly (29 minute read)

My point is that one of the frontiers of innovation in AI should be in techniques and business models to enable the kind of flourishing ecosystem of content creation that has characterized the web and the online distribution of music and video. AI companies that figure this out will create a virtuous flywheel that rewards content creation rather than turning the industry into an extractive dead end.

Although it’s not surprising that CEOs are interested in, even bullish on, AI and generative AI specifically, the depth and extent of their interest are unusual — likely reflecting the highly disruptive nature of AI.

Reduce AI Hallucinations With This Neat Software Trick — A buzzy process called retrieval augmented generation, or RAG, is taking hold in Silicon Valley and improving the outputs from large language models. How does it work? / Wired (5 minute read)
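
If you want the gist in code: rank your documents against the question, paste the best matches into the prompt, then generate. A minimal sketch of that loop, using scikit-learn for the retrieval step and leaving the actual model call as a placeholder (the sample documents and question are made up):

```python
# Minimal RAG loop: retrieve relevant passages, then ground the prompt in them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The clinic is open Monday through Friday, 8am to 5pm.",
    "Thyroid panels require a fasting blood draw.",
    "Telehealth visits are available for follow-up appointments.",
]
question = "When can I schedule a blood draw?"

# 1. Retrieve: rank documents by similarity to the question.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

# 2. Augment: paste the retrieved passages into the prompt.
prompt = (
    "Answer using only the context below.\n\n"
    "Context:\n" + "\n".join(top_docs) +
    f"\n\nQuestion: {question}"
)

# 3. Generate: send the augmented prompt to whatever LLM you use (placeholder).
print(prompt)
```

Production systems swap TF-IDF for embedding vectors and a vector database, but the retrieve-then-prompt shape stays the same.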

Does AI hire more women? / Klement on Investing, Substack (sorry) (5 minute read)

Thus, if done right, AI tools can and do help de-bias recruitment. And these tools outperform human recruiters in terms of the quality of candidates, as Chen discusses in their review of the existing research. But as I said, the underlying necessary condition is that one uses unbiased AI tools, which doesn’t seem to be the case in real life at the moment.

Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.

To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier.

GPT4All — Run Large Language Models Locally

  • seems marginally better than Ollama, our current tool for running LLMs locally
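
Whichever front end you prefer, querying a locally running model is a short script. A rough sketch against Ollama's default local HTTP API (the model name is only an example; use whatever you have pulled):

```python
# Ask a locally running Ollama server (default port 11434) for a completion.
import requests

reply = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # example name; use any model you've pulled locally
        "prompt": "Summarize retrieval augmented generation in one sentence.",
        "stream": False,    # return the whole answer at once instead of streaming
    },
    timeout=120,
)
print(reply.json()["response"])
```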


🎓 EDUCATION

One-third of college instructors are using genAI—here's how — As artificial intelligence becomes more integrated into teaching and learning, college instructors and student success professionals share how they’re using generative AI. / Inside Higher Ed (6 minute read)

A.I. Chatbot for Los Angeles Schools Falls Flat / New York Times (8 minute read)

Qaiz — Instantly create a multiplayer quiz about anything

  • so close …

Magic School — Educators use MagicSchool to help lesson plan, differentiate, write assessments, write IEPs, communicate clearly, and more.


📊 DATA & TECHNOLOGY

AI scaling myths / AI Snake Oil, Substack (sorry) (12 minute read)

The seeming predictability of scaling is a misunderstanding of what research has shown. Besides, there are signs that LLM developers are already at the limit of high-quality training data. And the industry is seeing strong downward pressure on model size.

LangChain also has a habit of using abstractions on top of other abstractions, so you’re often forced to think in terms of nested abstractions to understand how to use an API correctly. This inevitably leads to comprehending huge stack traces and debugging internal framework code you didn’t write instead of implementing new features.
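
To make the nesting complaint concrete, here is the same single completion written two ways: a direct OpenAI SDK call, then a LangChain prompt-template chain. A rough sketch, not the article's own code, assuming the openai and langchain-openai packages plus a placeholder model name:

```python
# The same single completion, written two ways.
from openai import OpenAI
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

question = "Explain retrieval augmented generation in one sentence."

# 1. Direct SDK call: one client, one method, a plain string back.
client = OpenAI()
direct = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)
print(direct.choices[0].message.content)

# 2. LangChain: a prompt template piped into a chat model piped into a parser.
#    Each extra layer is one more place a stack trace can lead you.
prompt = ChatPromptTemplate.from_template("{question}")
chain = prompt | ChatOpenAI(model="gpt-4o") | StrOutputParser()
print(chain.invoke({"question": question}))
```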


🎉 FUN and/or PRACTICAL THINGS

The creation of Toys"R"Us and Geoffrey the Giraffe — First Ever Brand Film Created with Sora (2 minute video)

AI Test Kitchen — Experiment at the intersection of AI and creativity / Google

  • text to video (waitlist), image, and music tools

Cook Like A Bot: AI dinner parties / Party Lab AI, Medium (14 minute read)

  • some mash-ups: Japaxican and Greekbodian

Saner — One-stop AI Productivity app for ADHD

Cre[ai]tion — effortlessly create stunning objects in an all-visual workflow powered by advanced AI.

  • try for free; overloaded with requests at the moment

AI Legal Assistant — Legaliser streamlines your contract management by offering comprehensive AI analysis, intuitive drafting tools, and a diverse range of templates.

Hot AI Jesus Is Huge on Facebook / The Atlantic (8 minute read)

Hot Jesus appears to be catnip for users on Facebook, where he is routinely posted to generate engagement. Many of these posts are accompanied by a demanding caption. “Why don’t pictures like this ever trend?” they ask over and over, almost threateningly.


🧿 AI-ADJACENT

T-Shaped vs. V-Shaped Designers / Smashing Magazine (4 minute read)

They are “V”-shaped — experts in one or multiple areas, with a profound understanding and immense curiosity in adjacent areas.