weekend ai reads for 2023-04-14
📰 ABOVE THE FOLD: AUTOGPT
A thread (Thread reader, no embedded images) (Twitter thread, sorry)
AgentGPT, a demo of the steps AutoGPT would go through to accomplish a broad task (requires an OpenAI API key)
via Patrick: to address early limitations, the key will be datasets that help "fine-tune" LLMs like ChatGPT. Domain-specific training data with good labels will improve these models' performance in an educational context. This article gives a nice summary of fine-tuning. Written pre-GPT, this ImageNet for X memo offers some insights. Towards Data Science, Google Drive
Generative Agents: Interactive Simulacra of Human Behavior Next Big Future
And the paper. arXiv
🏗️ FOUNDATIONS
Selection bias, but we appreciated "The Data Delusion," we’ve uploaded everything anyone has ever known onto a worldwide network of machines. What if it doesn’t have all the answers? Jill Lepore, The New Yorker
We've always thought that an AI model trained on an indiscriminate data set will produce output that trends toward the mean and struggles with true innovation. For example, making decisions based on text scraped from Twitter is going to, 99.99% of the time, produce work that is worse and less interesting and more stupid than just sitting and thinking for a bit.
A more tempered take on AI risks, “We must slow down the race to God-like AI” Ian Hogarth, Financial Times
via Chelsea, summary of a draft paper, "Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy" Princeton University
Speculative; slightly meandering; well-organized: “When Will AI Take Your Job?” Tomas Pueyo
Eight Things to Know about Large Language Models [PDF] Samuel R. Bowman, New York University
🎓 EDUCATION and AI
The future of education in a world of AI Ethan Mollick, Substack
Not directly related to education, but we think this is going to spawn a lot of micro-point solutions, including in ed tech. From Simon Willison, the creator of Datasette, Django, and Lanyrd.
AI Policy Guide Mercatus Center at George Mason University
📊 DATA & TECHNOLOGY
The mounting human and environmental costs of generative AI Ars Technica
tl;dr: this "tip of the iceberg" image
via Josh, a blog post and a more interesting embedded video on context injection, a way to personalize GPT without retraining the whole thing Open Content
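The idea in that post can be sketched in a few lines: rather than retraining the model, you inject user-specific context into the prompt at query time. A minimal sketch, assuming a plain list of stored facts and an illustrative prompt template (the names and template here are ours, not from the post):

```python
# "Context injection" sketch: personalize an LLM by prepending stored
# facts to each prompt instead of retraining the model.
# PERSONAL_CONTEXT and the template wording are illustrative assumptions.

PERSONAL_CONTEXT = [
    "The user's name is Alex.",
    "Alex prefers metric units.",
]

def build_prompt(question: str, context: list) -> str:
    """Inject retrieved context ahead of the user's question."""
    context_block = "\n".join(f"- {fact}" for fact in context)
    return (
        "Answer using the context below when relevant.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("How tall is the Eiffel Tower?", PERSONAL_CONTEXT)
print(prompt)
```

The assembled string is what you would send to the model API; the model never changes, only the prompt does.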
possibly related, via Dale, “a technical look at whether or not we can get large language AI to be value-specific” a PDF OpenAI
via Patrick, "large computational models have specific and interesting architectures that are important to understand"
Intro videos to deep neural networks from a leader in the field (YouTube channel)
Transformers (paper) – a key algorithmic and architectural choice inside many LLMs – work really well on a particular data type that happens to correspond to many human-created real-world things arXiv
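The core operation inside those transformers, scaled dot-product attention, fits in a few lines. A toy pure-Python version (real implementations use tensor libraries; the tiny vectors below are just for illustration):

```python
import math

# Toy scaled dot-product attention: each output position is a weighted
# mix of all value vectors, with weights from query-key similarity.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(queries[0])  # embedding dimension, used to scale scores
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# A three-token "sequence" with 2-dim embeddings.
q = k = v = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(q, k, v))
```

Because every position attends to every other position, this handles the sequence-like, human-created data the paper's abstract is pointing at (text, audio, code) unusually well.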
🎉 FUN and/or PRACTICAL THINGS
Great, another password for the post-it on our monitors ... (Twitter, sorry; but this is the whole tweet, to save you a click, too)
GraphMaker: make a graph instantly with AI; looks like OpenAI plus Python behind the scenes (email required)
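If the "OpenAI plus Python" guess is right, the plumbing might look something like this: the model turns a plain-English request into a structured chart spec, and ordinary Python renders it. Entirely our speculation; the spec format and the stubbed model call are hypothetical, not GraphMaker's actual internals:

```python
import json

def fake_llm(prompt: str) -> str:
    # Stand-in for an OpenAI API call that returns a chart spec as JSON.
    # A real version would send `prompt` to the model with instructions
    # to respond only with this schema.
    return json.dumps({
        "type": "bar",
        "x": ["Mon", "Tue", "Wed"],
        "y": [3, 7, 5],
    })

def make_chart_spec(request: str) -> dict:
    """Parse and sanity-check the model's structured output."""
    spec = json.loads(fake_llm(request))
    assert spec["type"] in {"bar", "line", "scatter"}
    assert len(spec["x"]) == len(spec["y"])
    return spec

spec = make_chart_spec("Plot tickets closed per day this week")
print(spec["type"])
```

From there, handing the validated spec to a plotting library is the easy part; the validation step is what keeps free-form model output from crashing the renderer.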
To get especially elaborate images from Midjourney, the prompt would look something like this:
I don't want to remember that either, so this is a fun way to explore the possibilities Photoprompts
From Meta, AI to segment anything in an image; think of a fancy lasso tool in Photoshop. The dog image demo is impressive. Also, “The model was trained for 3-5 days on 256 A100 GPUs.” For 11M images.
PrankGPT: This is going to put the Jerky Boys out of business. Finally.
we have not tried this.
Testing $1400 Ai Powered Electric Shoes in NYC (5:19) Casey Neistat, YouTube
spoiler: they look like Segway roller skates?
“Replacing my best friends with an LLM trained on 500,000 group chat messages”
Detailed post with example code.
More importantly, people have 500,000 group chat messages?!?