weekend ai reads for 2024-03-29
📰 ABOVE THE FOLD: HEALTHCARE
A new prescription / The Economist
Technology Quarterly feature with five longer stories
related (1), A Big Week in Medical A.I. — Multiple new reports are indicators for where we are headed / Eric Topol, Ground Truths, Substack (sorry)
related (2), The benefits of AI in healthcare / IBM Blog
related (3), Will AI Save Physicians and Clinicians Time and from Burnout — Ambient Clinical Documentation is rapidly emerging as a key driver of AI growth in the healthcare sector. / AI Supremacy, Substack (sorry)
related (4), NHS AI test spots tiny cancers missed by doctors / BBC
How Digital Twins’ Virtual Realities Will Transform The World — Digital twins offer humankind the ability to command virtual replicas of forests, oil fields, cities, supply chains — and even, maybe one day, our very bodies. / Noema Magazine
Nvidia powers Hippocratic AI's AI nurses — The cheap AI agents offer medical advice to patients over video calls in real-time / Quartz
On Monday, Nvidia announced a collaboration with Hippocratic AI, a healthcare company that offers generative AI nurses who work for just $9 an hour. Hippocratic promotes how it can undercut real human nurses, who can cost $90 an hour, with its cheap AI agents that offer medical advice to patients over video calls in real-time.
Straightening teeth? AI can help / Science Daily
Profluent, spurred by Salesforce research and backed by Jeff Dean, uses AI to discover medicines / TechCrunch
📻 QUOTE OF THE WEEK
If OpenAI disappeared tomorrow, we have all the IP rights and all the capability. We have the people, we have the compute, we have the data, we have everything. We are below them, above them, around them.
Satya Nadella, Microsoft CEO (source)
🏗️ FOUNDATIONS & CULTURE
To understand the future of generative AI, we need better language to describe it / Rock Paper Shotgun
Another way we can categorise generative AI systems is between “visible” and “invisible” systems.
The third category, and maybe the most important one, is whether the AI is “heavy” or “light”
part one of a four-part series on ai and video games
Have We Reached Peak AI? / Edward Zitron, Where’s Your Ed At?
Altman wants to talk about the big, sexy stories of Average General Intelligences that can take human jobs because the reality of OpenAI — and generative AI by extension — is far more boring, limited and expensive than he'd like you to know.
Fairly Trained — We certify fair training data use in Generative AI.
There is a divide emerging between two types of generative AI companies: those who get the consent of training data providers, and those who don’t, claiming they have no legal obligation to do so. We believe there are many consumers and companies who would prefer to work with generative AI companies who train on data provided with the consent of its creators. Fairly Trained exists to make it clear which companies take a more consent-based approach to training, and are therefore treating creators more fairly.
related, The tech industry can’t agree on what open source AI means. That’s a problem. / MIT Technology Review
🎓 EDUCATION
Fixing Ed-Tech Investing’s Lemons Problem — The sector currently suffers from a version of the classic “lemons problem.” / Stanford Social Innovation Review
building on this, investors should:
treat edtech differently and prioritize the development and selection of solutions based on rigorous evidence of their impact on learning outcomes, and
bridge the knowledge gaps that exist between different parties, ensuring that development and implementation are informed by the needs of learners and teachers, as well as by evidence of what works
Universities build their own ChatGPT-like AI tools — As concerns mount over the ethical and intellectual property implications of AI tools, universities are launching their own chatbots for faculty and students. / Inside Higher Ed
LAUSD developed Ed through a public-private partnership with AllHere, a developer of AI-powered digital applications. The Boston-based ed-tech company won a $6 million contract over five years to guide the LAUSD effort, the Los Angeles Times reported.
$6 million must just be the consulting fee; otherwise this seems like quite a bargain
related, Ed Powered By Individual Acceleration Plan / FAQ / Los Angeles Unified School District
related, and more opinionated, Meet Ed. Is our happy chatbot future already here? — Lessons from the rollout of the new chatbot for the LA Unified School District / AI Log Blog, Substack (sorry)
I suspect Ed will spend the rest of the school year disappointing optimists and providing ammunition for skeptics. Future headlines and follow-up articles will likely be warnings (not that we need more) about why new technology should be rolled out incrementally and with care.
I Have Bigger Fish to Fry: Why K12 Education is Not Thinking About AI / Maurie Beasley, M.Ed., Medium
Though potentially transformative, the benefits of AI integration into K12 education appear distant and speculative. Our current focus is on keeping our heads above water, ensuring our students’ safety and mental well-being, and simply keeping our schools staffed and our doors open.
related, Teachers Desperately Need AI Training. How Many Are Getting It? / Ed Week
Domain-Focused Models: Math LLMs / Alex Irina Sandu, Substack (sorry)
related, AI Math has Launched Its Math AI Solver and GPTs to Transform Mathematics Learning / press release
the product, Math AI — Math AI Solver Online (Free)
The End of Foreign-Language Education — Thanks to AI, people may no longer feel the need to learn a second language. / The Atlantic
related (?), Fluently — Speaking Copilot for Non-Native Professionals
📊 DATA & TECHNOLOGY
Models All The Way Down / Knowing Machines
we appreciate a well-told visual narrative
Investigating training sets is an essential avenue to understanding how generative AI models work; the ways they see and re-create the world.
Scrutinizing these sets is perhaps the only way to get a clear look at the models that are trained on them.
related, Releasing Common Corpus: the largest public domain dataset for training LLMs / Pierre-Carl Langlais, Hugging Face Blog
Contrary to what most large AI companies claim, the release of Common Corpus aims to show it is possible to train large language models on a fully open and reproducible corpus, without using copyrighted content.
via adam, Inside the Creation of DBRX, the World's Most Powerful Open Source AI Model — Startup Databricks just released DBRX, the most powerful open source large language model yet—eclipsing Meta’s Llama 2. / Wired
official announcement, Introducing DBRX: A New State-of-the-Art Open LLM / Databricks Blog
try it, DBRX Instruct / Hugging Face Space by databricks
Pricing and Packaging Your B2B or Prosumer Generative AI Feature / Andreessen Horowitz
useful on both sides of the procurement stick
GPT LLM Trainer / mshumer, GitHub
Simply input a description of your task, and the system will generate a dataset from scratch, parse it into the right format, and fine-tune a LLaMA 2 or GPT-3.5 model for you.
not quite as simple as advertised, but simpler than many other approaches; the general recipe is sketched below
fully open-source and free
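for context, the recipe tools like this automate is: have a strong model synthesize prompt/response pairs from your plain-English task description, save them in a chat fine-tuning format, then hand the file to a fine-tuning job. here is a minimal sketch of that general pattern, not the repo's actual code; the call_llm helper, function names, and file path are hypothetical stand-ins:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical helper: stand-in for whatever strong model generates
    your synthetic examples (an API client, a local model, etc.)."""
    raise NotImplementedError("wire this up to your model of choice")

def build_dataset(task_description: str, n_examples: int = 100) -> list[dict]:
    """Generate synthetic prompt/response pairs from a task description."""
    examples = []
    for _ in range(n_examples):
        # Ask the teacher model to invent a realistic user prompt for the task...
        user_prompt = call_llm(
            "You are generating training data for this task:\n"
            f"{task_description}\n"
            "Write one realistic user prompt for it. Return only the prompt."
        )
        # ...then answer that prompt to complete the input/output pair.
        response = call_llm(user_prompt)
        examples.append({"prompt": user_prompt, "response": response})
    return examples

def write_chat_jsonl(examples: list[dict], path: str = "train.jsonl",
                     system_message: str = "You are a helpful assistant.") -> None:
    """Write the pairs in the messages-style JSONL format that most chat
    fine-tuning pipelines (hosted GPT-3.5 fine-tunes or open-source LLaMA
    trainers) can consume."""
    with open(path, "w") as f:
        for ex in examples:
            record = {"messages": [
                {"role": "system", "content": system_message},
                {"role": "user", "content": ex["prompt"]},
                {"role": "assistant", "content": ex["response"]},
            ]}
            f.write(json.dumps(record) + "\n")

# Example (commented out because call_llm is just a placeholder here):
# examples = build_dataset("Summarize legal contracts into plain-English bullets.")
# write_chat_jsonl(examples)
```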
Found means fixed: Introducing code scanning autofix, powered by GitHub Copilot and CodeQL — Now in public beta for GitHub Advanced Security customers, code scanning autofix helps developers remediate more than two-thirds of supported alerts with little or no editing. / The GitHub Blog
🎉 FUN and/or PRACTICAL THINGS
Realtime — Today’s Top Data Stories
Our automated engine continuously tracks public data feeds, detects key changes in real-time, and uses AI to distill them into bite-size stories to keep you informed.
GPT4All — A free-to-use, locally running, privacy-aware chatbot. No GPU or internet required.
faster than LM Studio, our current local chatbot, but you still get what you pay for
The Promenade — The RPG for AI Adventurers
requires signup
Claros — What do you want to buy?
uses AI and Reddit (?) to recommend products based on a prompt
AI Wedding Toast — Unique and Memorable Wedding Speeches with AI
not everything has to be AI-ified
the AI song embedded in the article doesn’t seem to work, but is still up here
🧿 AI-ADJACENT
Things that don't work / Dynomight
9. AI methods that don’t leverage computation.
and
Things that work: Dogs, vegetables, index funds, jogging, sleep, lists, learning to cook, drinking less alcohol, surrounding yourself with people you trust and admire.
101 things I would tell myself from 10 years ago / Leila Clark, Approach with Alacrity
29. A lot of your work will involve taking some flows in the world and optimizing some property of them: latency, throughput or bandwidth. To do this, you must figure out what limits them and then remove that bottleneck. This sounds simple, but you will not really understand the depth of this discipline until you work with a master of it.
⋄